rdkafka 0.15.1 → 0.16.0.beta1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 8636c80e1798cf24b34cf25a20ca24f35e2951fb179843a3a85a94fe0274ca76
-   data.tar.gz: f115aa7fff4961d42280a7ad6fd78fed40568b936139d588ae2362a2f0f45c25
+   metadata.gz: 3135d4f2663517517330d165948e9761ffc0ecd20942f911a6ee9541c437ee7e
+   data.tar.gz: 9a489c2400c4054e9cec0d6c8d24f75bce407d370f8c393b7b31dcd0dbf7c361
  SHA512:
-   metadata.gz: e5b5368a732e42b1c57aff93a7172c95b0bad93ac646dcba495c0509c5b6c29cf8753601d1f04f028c98579846e5db7e7119ae9a6a4cca2f41441316b60b5c9c
-   data.tar.gz: 8596d6944d5151df3ad875d93dbd7cf2aee00c1ded85e053e96880b8c5420ca6ba18a72f57cc867a7bfedf38060be3c9ea4334d79d7a21b2503a078bce1d266a
+   metadata.gz: be8eba2aec012af189d893ebf6ddcd4e4dec117aecd7f36afa76bc6ee4c2a3fa93f4ac4160efa16e8c91efd7011ee41cda00a5e2146e18044a166035850cd490
+   data.tar.gz: d1761a6ab9c7d9ee539d79679dc730cc392fc2d7369d3f78b9c8d8f1c800855ab15ae7c472259b43e95139aa30d4777026c0529671fd85f98f3e5fceeaf53e62
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/workflows/ci.yml CHANGED
@@ -25,9 +25,7 @@ jobs:
          - '3.3'
          - '3.2'
          - '3.1'
-         - '3.1.0'
          - '3.0'
-         - '3.0.0'
          - '2.7'
        include:
          - ruby: '3.3'
@@ -37,9 +35,9 @@ jobs:
      - name: Install package dependencies
        run: "[ -e $APT_DEPS ] || sudo apt-get install -y --no-install-recommends $APT_DEPS"

-     - name: Start Kafka with docker-compose
+     - name: Start Kafka with docker compose
        run: |
-         docker-compose up -d || (sleep 5 && docker-compose up -d)
+         docker compose up -d || (sleep 5 && docker compose up -d)

      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
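Worth noting for anyone mirroring this in their own pipelines: `docker-compose` (with the hyphen) is the retired standalone Compose V1 binary, while Compose V2 ships as a Docker CLI plugin invoked as `docker compose`, which is what this workflow change accounts for.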
data/.gitignore CHANGED
@@ -10,3 +10,5 @@ ext/librdkafka.*
  doc
  coverage
  vendor
+ .idea/
+ out/
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.3.0
+ 3.3.1
data/CHANGELOG.md CHANGED
@@ -1,5 +1,15 @@
  # Rdkafka Changelog

+ ## 0.16.0 (Unreleased)
+ - **[Feature]** OAuthBearer token refresh callback (bruce-szalwinski-he)
+ - [Enhancement] Replace the time-poll-based wait engine with an event-based one to improve response times on blocking operations and waits (nijikon + mensfeld)
+ - [Enhancement] Allow usage of librdkafka's second regex engine by setting `RDKAFKA_DISABLE_REGEX_EXT` during build (mensfeld)
+ - [Enhancement] Name the polling thread `rdkafka.native_kafka#<name>` (nijikon)
+ - [Change] Allow deferring of native Kafka thread operations, with manual start, for consumer, producer and admin.
+ - [Change] The `wait_timeout` argument of the `AbstractHandle#wait` method is deprecated and will be removed in future versions without replacement. We no longer rely on its value (nijikon)
+ - [Fix] Background logger stops working after forking, causing memory leaks (mensfeld)
+ - [Fix] Fix bogus case/when syntax. Levels 1, 2, and 6 previously defaulted to UNKNOWN (jjowdy)
+
  ## 0.15.1 (2024-01-30)
  - [Enhancement] Provide support for Nix OS (alexandriainfantino)
  - [Enhancement] Replace `rd_kafka_offset_store` with `rd_kafka_offsets_store` (mensfeld)
@@ -20,7 +30,7 @@
  - **[Feature]** Add `Admin#delete_group` utility (piotaixr)
  - **[Feature]** Add Create and Delete ACL Feature To Admin Functions (vgnanasekaran)
  - **[Feature]** Support `#assignment_lost?` on a consumer to check for involuntary assignment revocation (mensfeld)
- - [Enhancement] Expose alternative way of managing consumer events via a separate queue (mensfeld) 
+ - [Enhancement] Expose alternative way of managing consumer events via a separate queue (mensfeld)
  - [Enhancement] **Bump** librdkafka to 2.3.0 (mensfeld)
  - [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote clusters operations (mensfeld)
  - [Change] Use `SecureRandom.uuid` instead of `random` for test consumer groups (mensfeld)
data/README.md CHANGED
@@ -18,7 +18,7 @@ become EOL.

  `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.

- The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
+ The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.

  ## Table of content

@@ -30,6 +30,7 @@ The most important pieces of a Kafka client are implemented, and we aim to provi
  - [Higher Level Libraries](#higher-level-libraries)
    * [Message Processing Frameworks](#message-processing-frameworks)
    * [Message Publishing Libraries](#message-publishing-libraries)
+ - [Forking](#forking)
  - [Development](#development)
  - [Example](#example)
  - [Versions](#versions)
@@ -47,12 +48,13 @@ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications,

  ## Installation

- This gem downloads and compiles librdkafka when it is installed. If you
- have any problems installing the gem, please open an issue.
+ When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue.

  ## Usage

- See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
+ Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples.
+
+ Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries, also maintained by us, built on top of rdkafka-ruby.

  ### Consuming Messages

@@ -74,7 +76,7 @@ end

  ### Producing Messages

- Produce a number of messages, put the delivery handles in an array, and
+ Produce several messages, put the delivery handles in an array, and
  wait for them before exiting. This way the messages will be batched and
  efficiently sent to Kafka.

@@ -95,13 +97,11 @@ end
  delivery_handles.each(&:wait)
  ```

- Note that creating a producer consumes some resources that will not be
- released until it `#close` is explicitly called, so be sure to call
- `Config#producer` only as necessary.
+ Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.

  ## Higher Level Libraries

- Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages.
+ Currently, there are two actively developed frameworks based on `rdkafka-ruby` that provide a higher-level API for working with Kafka messages, and one library for publishing messages.

  ### Message Processing Frameworks

@@ -112,6 +112,16 @@ Currently, there are two actively developed frameworks based on rdkafka-ruby, th

  * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.

+ ## Forking
+
+ When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory state. This limitation requires careful handling, especially in Ruby applications that rely on forking.
+
+ To address this, it's highly recommended to:
+
+ - Never initialize any `rdkafka-ruby` producers or consumers before forking, to avoid state corruption.
+ - Before forking, always close any producers or consumers you have opened (see the sketch after this section).
+ - Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies.
+
  ## Development

  Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
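To make the forking guidance above concrete, here is a minimal sketch of the recommended pattern. The broker address and topic name are illustrative; the point is that every librdkafka client is either created after the fork or closed before it:

```ruby
require "rdkafka"

config = { "bootstrap.servers" => "localhost:9092" } # illustrative

# Safe: no librdkafka client exists yet when we fork
pid = fork do
  # The child creates its own producer after the fork
  producer = Rdkafka::Config.new(config).producer
  producer.produce(topic: "events", payload: "from child #{Process.pid}").wait
  producer.close
end
Process.wait(pid)

# If the parent needs a client of its own, close it before any subsequent fork
producer = Rdkafka::Config.new(config).producer
producer.produce(topic: "events", payload: "from parent").wait
producer.close # releases librdkafka resources before forking again
```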
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
services:
  kafka:
    container_name: kafka
-   image: confluentinc/cp-kafka:7.5.3
+   image: confluentinc/cp-kafka:7.6.1

    ports:
      - 9092:9092
data/ext/Rakefile CHANGED
@@ -27,6 +27,14 @@ task :default => :clean do
      :sha256 => Rdkafka::LIBRDKAFKA_SOURCE_SHA256
    }
    recipe.configure_options = ["--host=#{recipe.host}"]
+
+   # Disable using libc regex engine in favor of the embedded one
+   # The default regex engine of librdkafka does not always work exactly as most of the users
+   # would expect, hence this flag allows for changing it to the other one
+   if ENV.key?('RDKAFKA_DISABLE_REGEX_EXT')
+     recipe.configure_options << '--disable-regex-ext'
+   end
+
    recipe.cook
    # Move dynamic library we're interested in
    if recipe.host.include?('darwin')
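Assuming the variable is visible to the build step, installing with e.g. `RDKAFKA_DISABLE_REGEX_EXT=1 gem install rdkafka` would then compile librdkafka with its embedded regex engine instead of the libc one.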
data/lib/rdkafka/abstract_handle.rb CHANGED
@@ -14,6 +14,13 @@ module Rdkafka

    # Registry for registering all the handles.
    REGISTRY = {}
+   # Effectively an infinite wait deadline (10^10 seconds is roughly 317 years)
+   MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
+   # Deprecation message for wait_timeout argument in wait method
+   WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
+     "We don't rely on its value anymore. Please refactor your code to remove references to it."
+
+   private_constant :MAX_WAIT_TIMEOUT_FOREVER

    class << self
      # Adds handle to the register
@@ -32,6 +39,12 @@ module Rdkafka
      end
    end

+   def initialize
+     @mutex = Thread::Mutex.new
+     @resource = Thread::ConditionVariable.new
+
+     super
+   end

    # Whether the handle is still pending.
    #
@@ -45,37 +58,48 @@
    # on the operation. In this case it is possible to call wait again.
    #
    # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
-   #   If this is nil it does not time out.
-   # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the
-   #   operation has completed
+   #   If this is nil we will wait forever
+   # @param wait_timeout [nil] deprecated
    # @param raise_response_error [Boolean] should we raise error when waiting finishes
    #
    # @return [Object] Operation-specific result
    #
    # @raise [RdkafkaError] When the operation failed
    # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
-   def wait(max_wait_timeout: 60, wait_timeout: 0.1, raise_response_error: true)
-     timeout = if max_wait_timeout
-       monotonic_now + max_wait_timeout
-     else
-       nil
-     end
-     loop do
-       if pending?
-         if timeout && timeout <= monotonic_now
-           raise WaitTimeoutError.new(
-             "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
-           )
+   def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
+     Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
+
+     timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
+
+     @mutex.synchronize do
+       loop do
+         if pending?
+           to_wait = (timeout - monotonic_now)
+
+           if to_wait.positive?
+             @resource.wait(@mutex, to_wait)
+           else
+             raise WaitTimeoutError.new(
+               "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
+             )
+           end
+         elsif self[:response] != 0 && raise_response_error
+           raise_error
+         else
+           return create_result
          end
-         sleep wait_timeout
-       elsif self[:response] != 0 && raise_response_error
-         raise_error
-       else
-         return create_result
        end
      end
    end

+   # Unlock the resources
+   def unlock
+     @mutex.synchronize do
+       self[:pending] = false
+       @resource.broadcast
+     end
+   end
+
    # @return [String] the name of the operation (e.g. "delivery")
    def operation_name
      raise "Must be implemented by subclass!"
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
data/lib/rdkafka/admin.rb CHANGED
@@ -2,6 +2,8 @@

module Rdkafka
  class Admin
+   include Helpers::OAuth
+
    # @private
    def initialize(native_kafka)
      @native_kafka = native_kafka
@@ -10,6 +12,19 @@ module Rdkafka
      ObjectSpace.define_finalizer(self, native_kafka.finalizer)
    end

+   # Starts the native Kafka polling thread and kicks off the init polling
+   # @note Not needed to run unless explicit start was disabled
+   def start
+     @native_kafka.start
+   end
+
+   # @return [String] admin name
+   def name
+     @name ||= @native_kafka.with_inner do |inner|
+       ::Rdkafka::Bindings.rd_kafka_name(inner)
+     end
+   end
+
    def finalizer
      ->(_) { close }
    end
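A sketch of the deferred-start flow that `#start` enables, per the changelog's "manual start" entry. The `native_kafka_auto_start: false` keyword is an assumption about this release's builder API; only `#start` itself appears in the diff:

```ruby
require "rdkafka"

config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092") # illustrative

# Assumed option name: build the admin client without starting its polling thread
admin = config.admin(native_kafka_auto_start: false)

# ... fork, daemonize, or finish other process setup here ...

admin.start # explicitly start the native Kafka polling thread
admin.create_topic("example-topic", 3, 1).wait
admin.close
```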
data/lib/rdkafka/bindings.rb CHANGED
@@ -17,6 +17,7 @@ module Rdkafka

  RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS = -175
  RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS = -174
+ RD_KAFKA_RESP_ERR__STATE = -172
  RD_KAFKA_RESP_ERR__NOENT = -156
  RD_KAFKA_RESP_ERR_NO_ERROR = 0

@@ -111,7 +112,10 @@
  callback :error_cb, [:pointer, :int, :string, :pointer], :void
  attach_function :rd_kafka_conf_set_error_cb, [:pointer, :error_cb], :void
  attach_function :rd_kafka_rebalance_protocol, [:pointer], :string
-
+ callback :oauthbearer_token_refresh_cb, [:pointer, :string, :pointer], :void
+ attach_function :rd_kafka_conf_set_oauthbearer_token_refresh_cb, [:pointer, :oauthbearer_token_refresh_cb], :void
+ attach_function :rd_kafka_oauthbearer_set_token, [:pointer, :string, :int64, :pointer, :pointer, :int, :pointer, :int], :int
+ attach_function :rd_kafka_oauthbearer_set_token_failure, [:pointer, :string], :int
  # Log queue
  attach_function :rd_kafka_set_log_queue, [:pointer, :pointer], :void
  attach_function :rd_kafka_queue_get_main, [:pointer], :pointer
@@ -120,19 +124,21 @@
    :void, [:pointer, :int, :string, :string]
  ) do |_client_ptr, level, _level_string, line|
    severity = case level
-              when 0 || 1 || 2
+              when 0, 1, 2
                 Logger::FATAL
               when 3
                 Logger::ERROR
               when 4
                 Logger::WARN
-              when 5 || 6
+              when 5, 6
                 Logger::INFO
               when 7
                 Logger::DEBUG
               else
                 Logger::UNKNOWN
               end
+
+   Rdkafka::Config.ensure_log_thread
    Rdkafka::Config.log_queue << [severity, "rdkafka: #{line}"]
  end

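The case/when fix above is easy to misread, so a quick illustration: in Ruby, `when 0 || 1 || 2` first evaluates the expression `0 || 1 || 2`, which is simply `0`, so only level 0 ever matched and levels 1, 2, and 6 fell through to `Logger::UNKNOWN`. Comma-separated values compare each candidate with `===`:

```ruby
level = 1

case level
when 0 || 1 || 2 then "fatal"   # `0 || 1 || 2` evaluates to 0, so 1 doesn't match
else "unknown"
end
# => "unknown"

case level
when 0, 1, 2 then "fatal"       # each of 0, 1, 2 is tried separately
else "unknown"
end
# => "fatal"
```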
@@ -159,6 +165,32 @@
    end
  end

+ # The OAuth callback is currently global and contextless.
+ # This means it will be invoked for all instances, and the callback must be able to determine to which instance it belongs.
+ # The instance name is provided in the callback, allowing it to reference the correct instance.
+ #
+ # An example of how to use the instance name in the callback is given below.
+ # The `refresh_token` method is configured as the `oauthbearer_token_refresh_callback`.
+ # `instances` is a map of client names to client instances, maintained by the user.
+ #
+ # ```
+ # def refresh_token(config, client_name)
+ #   client = instances[client_name]
+ #   client.oauthbearer_set_token(
+ #     token: 'new-token-value',
+ #     lifetime_ms: token-lifetime-ms,
+ #     principal_name: 'principal-name'
+ #   )
+ # end
+ # ```
+ OAuthbearerTokenRefreshCallback = FFI::Function.new(
+   :void, [:pointer, :string, :pointer]
+ ) do |client_ptr, config, _opaque|
+   if Rdkafka::Config.oauthbearer_token_refresh_callback
+     Rdkafka::Config.oauthbearer_token_refresh_callback.call(config, Rdkafka::Bindings.rd_kafka_name(client_ptr))
+   end
+ end
+
  # Handle

  enum :kafka_type, [
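A sketch of wiring the new callback up from the application side. The class-level reader `Rdkafka::Config.oauthbearer_token_refresh_callback` is referenced in the FFI callback above, so a matching setter is assumed; the keyword arguments of `#oauthbearer_set_token` come from the doc comment, while the token values, lifetime, and the `instances` registry are illustrative:

```ruby
require "rdkafka"

instances = {} # user-maintained map of client name => client instance

Rdkafka::Config.oauthbearer_token_refresh_callback = lambda do |_config, client_name|
  client = instances[client_name]

  # Fetch a fresh token from your identity provider here (values illustrative)
  client.oauthbearer_set_token(
    token: "new-token-value",
    lifetime_ms: (Time.now.to_i + 300) * 1000,
    principal_name: "kafka-user"
  )
end

producer = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "security.protocol" => "sasl_ssl",
  "sasl.mechanisms" => "OAUTHBEARER"
).producer

instances[producer.name] = producer
```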
data/lib/rdkafka/callbacks.rb CHANGED
@@ -156,7 +156,8 @@ module Rdkafka
        create_topic_handle[:response] = create_topic_results[0].result_error
        create_topic_handle[:error_string] = create_topic_results[0].error_string
        create_topic_handle[:result_name] = create_topic_results[0].result_name
-       create_topic_handle[:pending] = false
+
+       create_topic_handle.unlock
      end
    end

@@ -173,7 +174,8 @@ module Rdkafka
        delete_group_handle[:response] = delete_group_results[0].result_error
        delete_group_handle[:error_string] = delete_group_results[0].error_string
        delete_group_handle[:result_name] = delete_group_results[0].result_name
-       delete_group_handle[:pending] = false
+
+       delete_group_handle.unlock
      end
    end

@@ -190,7 +192,8 @@ module Rdkafka
        delete_topic_handle[:response] = delete_topic_results[0].result_error
        delete_topic_handle[:error_string] = delete_topic_results[0].error_string
        delete_topic_handle[:result_name] = delete_topic_results[0].result_name
-       delete_topic_handle[:pending] = false
+
+       delete_topic_handle.unlock
      end
    end

@@ -207,7 +210,8 @@ module Rdkafka
        create_partitions_handle[:response] = create_partitions_results[0].result_error
        create_partitions_handle[:error_string] = create_partitions_results[0].error_string
        create_partitions_handle[:result_name] = create_partitions_results[0].result_name
-       create_partitions_handle[:pending] = false
+
+       create_partitions_handle.unlock
      end
    end

@@ -223,7 +227,8 @@ module Rdkafka
      if create_acl_handle = Rdkafka::Admin::CreateAclHandle.remove(create_acl_handle_ptr.address)
        create_acl_handle[:response] = create_acl_results[0].result_error
        create_acl_handle[:response_string] = create_acl_results[0].error_string
-       create_acl_handle[:pending] = false
+
+       create_acl_handle.unlock
      end
    end

@@ -239,11 +244,13 @@ module Rdkafka
      if delete_acl_handle = Rdkafka::Admin::DeleteAclHandle.remove(delete_acl_handle_ptr.address)
        delete_acl_handle[:response] = delete_acl_results[0].result_error
        delete_acl_handle[:response_string] = delete_acl_results[0].error_string
-       delete_acl_handle[:pending] = false
+
        if delete_acl_results[0].result_error == 0
          delete_acl_handle[:matching_acls] = delete_acl_results[0].matching_acls
          delete_acl_handle[:matching_acls_count] = delete_acl_results[0].matching_acls_count
        end
+
+       delete_acl_handle.unlock
      end
    end

@@ -254,17 +261,18 @@ module Rdkafka
      if describe_acl_handle = Rdkafka::Admin::DescribeAclHandle.remove(describe_acl_handle_ptr.address)
        describe_acl_handle[:response] = describe_acl.result_error
        describe_acl_handle[:response_string] = describe_acl.error_string
-       describe_acl_handle[:pending] = false
+
        if describe_acl.result_error == 0
          describe_acl_handle[:acls] = describe_acl.matching_acls
          describe_acl_handle[:acls_count] = describe_acl.matching_acls_count
        end
+
+       describe_acl_handle.unlock
      end
    end
  end

  # FFI Function used for Message Delivery callbacks
-
  DeliveryCallbackFunction = FFI::Function.new(
    :void, [:pointer, :pointer, :pointer]
  ) do |client_ptr, message_ptr, opaque_ptr|
@@ -284,7 +292,6 @@ module Rdkafka
      delivery_handle[:partition] = message[:partition]
      delivery_handle[:offset] = message[:offset]
      delivery_handle[:topic_name] = FFI::MemoryPointer.from_string(topic_name)
-     delivery_handle[:pending] = false

      # Call delivery callback on opaque
      if opaque = Rdkafka::Config.opaques[opaque_ptr.to_i]
@@ -299,9 +306,10 @@ module Rdkafka
          delivery_handle
        )
      end
+
+     delivery_handle.unlock
    end
  end
end
-
end
end