rdkafka 0.15.2 → 0.16.0.beta1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: b6e51c150d2ea3c66fd124d14e603c6eeabed5d5fd5431ba94a1a324e4fa5387
-   data.tar.gz: 577a899836dfb3b2709caa067658b501604b03200f5c8a458eb5c3c4256fd290
+   metadata.gz: 3135d4f2663517517330d165948e9761ffc0ecd20942f911a6ee9541c437ee7e
+   data.tar.gz: 9a489c2400c4054e9cec0d6c8d24f75bce407d370f8c393b7b31dcd0dbf7c361
  SHA512:
-   metadata.gz: 6597d141706506c9f097500a9c0ca5c7888330e2a86399058664f4147ae72a52b5045765bd773c251bac4266b223e5025924d20cf0cb0616b0cd001b9b0198cb
-   data.tar.gz: cabf6f8020daedef78dbafa42d3b651d46b0515b4529d0288dd865e23ab58231776b22f1042c0ceccdd7e34a8a11124ce3e3affc8e8ffaec9038e1d9d3dcc88d
+   metadata.gz: be8eba2aec012af189d893ebf6ddcd4e4dec117aecd7f36afa76bc6ee4c2a3fa93f4ac4160efa16e8c91efd7011ee41cda00a5e2146e18044a166035850cd490
+   data.tar.gz: d1761a6ab9c7d9ee539d79679dc730cc392fc2d7369d3f78b9c8d8f1c800855ab15ae7c472259b43e95139aa30d4777026c0529671fd85f98f3e5fceeaf53e62
checksums.yaml.gz.sig CHANGED
Binary file
@@ -25,9 +25,7 @@ jobs:
        - '3.3'
        - '3.2'
        - '3.1'
-       - '3.1.0'
        - '3.0'
-       - '3.0.0'
        - '2.7'
      include:
        - ruby: '3.3'
@@ -37,9 +35,9 @@ jobs:
      - name: Install package dependencies
        run: "[ -e $APT_DEPS ] || sudo apt-get install -y --no-install-recommends $APT_DEPS"

-     - name: Start Kafka with docker-compose
+     - name: Start Kafka with docker compose
        run: |
-         docker-compose up -d || (sleep 5 && docker-compose up -d)
+         docker compose up -d || (sleep 5 && docker compose up -d)

      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
data/.gitignore CHANGED
@@ -10,3 +10,5 @@ ext/librdkafka.*
  doc
  coverage
  vendor
+ .idea/
+ out/
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.3.0
+ 3.3.1
data/CHANGELOG.md CHANGED
@@ -1,7 +1,14 @@
  # Rdkafka Changelog

- ## 0.15.2 (2024-07-10)
- - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+ ## 0.16.0 (Unreleased)
+ - **[Feature]** Oauthbearer token refresh callback (bruce-szalwinski-he)
+ - [Enhancement] Replace the time-poll-based wait engine with an event-based one to improve response times on blocking operations and wait (nijikon + mensfeld)
+ - [Enhancement] Allow for usage of the second regex engine of librdkafka by setting `RDKAFKA_DISABLE_REGEX_EXT` during build (mensfeld)
+ - [Enhancement] Name the polling thread as `rdkafka.native_kafka#<name>` (nijikon)
+ - [Change] Allow for deferring native Kafka thread operations and starting them manually for consumer, producer and admin.
+ - [Change] The `wait_timeout` argument in the `AbstractHandle.wait` method is deprecated and will be removed in future versions without replacement. We don't rely on its value anymore (nijikon)
+ - [Fix] Background logger stops working after forking, causing memory leaks (mensfeld)
+ - [Fix] Fix bogus case/when syntax. Levels 1, 2, and 6 previously defaulted to UNKNOWN (jjowdy)

  ## 0.15.1 (2024-01-30)
  - [Enhancement] Provide support for Nix OS (alexandriainfantino)
@@ -23,7 +30,7 @@
  - **[Feature]** Add `Admin#delete_group` utility (piotaixr)
  - **[Feature]** Add Create and Delete ACL Feature To Admin Functions (vgnanasekaran)
  - **[Feature]** Support `#assignment_lost?` on a consumer to check for involuntary assignment revocation (mensfeld)
- - [Enhancement] Expose alternative way of managing consumer events via a separate queue (mensfeld)
+ - [Enhancement] Expose alternative way of managing consumer events via a separate queue (mensfeld)
  - [Enhancement] **Bump** librdkafka to 2.3.0 (mensfeld)
  - [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote clusters operations (mensfeld)
  - [Change] Use `SecureRandom.uuid` instead of `random` for test consumer groups (mensfeld)
data/README.md CHANGED
@@ -18,7 +18,7 @@ become EOL.

  `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.

- The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
+ The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.

  ## Table of content

@@ -30,6 +30,7 @@ The most important pieces of a Kafka client are implemented, and we aim to provi
  - [Higher Level Libraries](#higher-level-libraries)
    * [Message Processing Frameworks](#message-processing-frameworks)
    * [Message Publishing Libraries](#message-publishing-libraries)
+ - [Forking](#forking)
  - [Development](#development)
  - [Example](#example)
  - [Versions](#versions)
@@ -47,12 +48,13 @@ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications,

  ## Installation

- This gem downloads and compiles librdkafka when it is installed. If you
- If you have any problems installing the gem, please open an issue.
+ When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue.
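
For reference, installation is the usual Bundler step; the version constraint below is illustrative, not part of the diff:

```ruby
# Gemfile — the constraint is illustrative; pick the release you need
source "https://rubygems.org"

gem "rdkafka", "~> 0.16"
```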

  ## Usage

- See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
+ Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples.
+
+ Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries we also maintain, built on top of rdkafka-ruby.

  ### Consuming Messages

@@ -74,7 +76,7 @@ end

  ### Producing Messages

- Produce a number of messages, put the delivery handles in an array, and
+ Produce several messages, put the delivery handles in an array, and
  wait for them before exiting. This way the messages will be batched and
  efficiently sent to Kafka.

@@ -95,13 +97,11 @@ end
  delivery_handles.each(&:wait)
  ```

- Note that creating a producer consumes some resources that will not be
- released until it `#close` is explicitly called, so be sure to call
- `Config#producer` only as necessary.
+ Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.

  ## Higher Level Libraries

- Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages.
+ Currently, there are two actively developed frameworks based on `rdkafka-ruby` that provide a higher-level API for working with Kafka messages, and one library for publishing messages.

  ### Message Processing Frameworks

@@ -112,6 +112,16 @@ Currently, there are two actively developed frameworks based on rdkafka-ruby, th

  * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.

+ ## Forking
+
+ When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory states. This limitation requires careful handling, especially in Ruby applications that rely on forking.
+
+ To address this, it's highly recommended to:
+
+ - Never initialize any `rdkafka-ruby` producers or consumers before forking to avoid state corruption.
+ - Before forking, always close any producers or consumers you have opened; a minimal sketch follows this list.
+ - Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies.
+
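To make the guidance above concrete, here is a minimal sketch (ours, not part of the diff) that creates the client only inside the forked child; the broker address and topic are placeholders:

```ruby
require "rdkafka"

# Config only holds settings; no librdkafka state exists yet, so it is
# safe to build it in the parent before forking.
config = Rdkafka::Config.new(:"bootstrap.servers" => "localhost:9092")

pid = fork do
  # The producer (and its native threads) is created after the fork,
  # entirely inside the child process.
  producer = config.producer
  producer.produce(topic: "events", payload: "hello from the child").wait
  producer.close
end

Process.wait(pid)
```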
  ## Development

  Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
    kafka:
      container_name: kafka
-     image: confluentinc/cp-kafka:7.5.3
+     image: confluentinc/cp-kafka:7.6.1

      ports:
        - 9092:9092
data/ext/Rakefile CHANGED
@@ -22,10 +22,8 @@ task :default => :clean do
    ENV["LDFLAGS"] = "-L#{homebrew_prefix}/lib" unless ENV["LDFLAGS"]
  end

- releases = File.expand_path(File.join(File.dirname(__FILE__), '../dist'))
-
  recipe.files << {
-   :url => "file://#{releases}/librdkafka_#{Rdkafka::LIBRDKAFKA_VERSION}.tar.gz",
+   :url => "https://codeload.github.com/edenhill/librdkafka/tar.gz/v#{Rdkafka::LIBRDKAFKA_VERSION}",
    :sha256 => Rdkafka::LIBRDKAFKA_SOURCE_SHA256
  }
  recipe.configure_options = ["--host=#{recipe.host}"]
@@ -14,6 +14,13 @@ module Rdkafka

    # Registry for registering all the handles.
    REGISTRY = {}
+   # Effectively infinite wait timeout of 10 billion seconds (roughly 317 years)
+   MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
+   # Deprecation message for wait_timeout argument in wait method
+   WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
+     "We don't rely on its value anymore. Please refactor your code to remove references to it."
+
+   private_constant :MAX_WAIT_TIMEOUT_FOREVER

    class << self
      # Adds handle to the register
@@ -32,6 +39,12 @@ module Rdkafka
      end
    end

+   def initialize
+     @mutex = Thread::Mutex.new
+     @resource = Thread::ConditionVariable.new
+
+     super
+   end
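As a stand-alone illustration (not from the gem) of the pattern these two objects enable: one thread blocks on the condition variable while another wakes it, which is exactly what the new `#wait`/`#unlock` pair does:

```ruby
mutex = Thread::Mutex.new
resource = Thread::ConditionVariable.new
done = false

waiter = Thread.new do
  mutex.synchronize do
    # Sleeps without polling; re-checks the predicate on every wakeup
    # and skips waiting entirely if the work already finished.
    resource.wait(mutex, 5) until done
  end
  puts "result is ready"
end

mutex.synchronize do
  done = true
  resource.broadcast # wake all waiters immediately
end

waiter.join
```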

    # Whether the handle is still pending.
    #
@@ -45,37 +58,48 @@ module Rdkafka
    # on the operation. In this case it is possible to call wait again.
    #
    # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
-   #   If this is nil it does not time out.
-   # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the
-   #   operation has completed
+   #   If this is nil we will wait forever
+   # @param wait_timeout [nil] deprecated
    # @param raise_response_error [Boolean] should we raise error when waiting finishes
    #
    # @return [Object] Operation-specific result
    #
    # @raise [RdkafkaError] When the operation failed
    # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
-   def wait(max_wait_timeout: 60, wait_timeout: 0.1, raise_response_error: true)
-     timeout = if max_wait_timeout
-       monotonic_now + max_wait_timeout
-     else
-       nil
-     end
-     loop do
-       if pending?
-         if timeout && timeout <= monotonic_now
-           raise WaitTimeoutError.new(
-             "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
-           )
+   def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
+     Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
+
+     timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
+
+     @mutex.synchronize do
+       loop do
+         if pending?
+           to_wait = (timeout - monotonic_now)
+
+           if to_wait.positive?
+             @resource.wait(@mutex, to_wait)
+           else
+             raise WaitTimeoutError.new(
+               "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
+             )
+           end
+         elsif self[:response] != 0 && raise_response_error
+           raise_error
+         else
+           return create_result
          end
-         sleep wait_timeout
-       elsif self[:response] != 0 && raise_response_error
-         raise_error
-       else
-         return create_result
        end
      end
    end

+   # Unlock the resources
+   def unlock
+     @mutex.synchronize do
+       self[:pending] = false
+       @resource.broadcast
+     end
+   end
+
    # @return [String] the name of the operation (e.g. "delivery")
    def operation_name
      raise "Must be implemented by subclass!"
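For orientation, a small usage sketch (assuming a `producer` built via `Rdkafka::Config#producer`): the call now blocks on the condition variable until the delivery callback invokes `#unlock`, instead of polling with `sleep`:

```ruby
handle = producer.produce(topic: "events", payload: "ping")

begin
  # Returns the operation-specific result, here a delivery report.
  report = handle.wait(max_wait_timeout: 10)
  puts "delivered to partition #{report.partition}, offset #{report.offset}"
rescue Rdkafka::AbstractHandle::WaitTimeoutError
  # The handle is still pending; it is safe to call #wait again later.
  puts "delivery still pending"
end
```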
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
        @error_string = error_string.read_string
      end
      if result_name != FFI::Pointer::NULL
-       @result_name = @result_name = result_name.read_string
+       @result_name = result_name.read_string
      end
    end
  end
data/lib/rdkafka/admin.rb CHANGED
@@ -2,6 +2,8 @@

  module Rdkafka
    class Admin
+     include Helpers::OAuth
+
      # @private
      def initialize(native_kafka)
        @native_kafka = native_kafka
@@ -10,6 +12,19 @@ module Rdkafka
        ObjectSpace.define_finalizer(self, native_kafka.finalizer)
      end

+     # Starts the native Kafka polling thread and kicks off the init polling
+     # @note Not needed to run unless explicit start was disabled
+     def start
+       @native_kafka.start
+     end
+
+     # @return [String] admin name
+     def name
+       @name ||= @native_kafka.with_inner do |inner|
+         ::Rdkafka::Bindings.rd_kafka_name(inner)
+       end
+     end
+
      def finalizer
        ->(_) { close }
      end
@@ -17,6 +17,7 @@ module Rdkafka

    RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS = -175
    RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS = -174
+   RD_KAFKA_RESP_ERR__STATE = -172
    RD_KAFKA_RESP_ERR__NOENT = -156
    RD_KAFKA_RESP_ERR_NO_ERROR = 0

@@ -111,7 +112,10 @@ module Rdkafka
    callback :error_cb, [:pointer, :int, :string, :pointer], :void
    attach_function :rd_kafka_conf_set_error_cb, [:pointer, :error_cb], :void
    attach_function :rd_kafka_rebalance_protocol, [:pointer], :string
-
+   callback :oauthbearer_token_refresh_cb, [:pointer, :string, :pointer], :void
+   attach_function :rd_kafka_conf_set_oauthbearer_token_refresh_cb, [:pointer, :oauthbearer_token_refresh_cb], :void
+   attach_function :rd_kafka_oauthbearer_set_token, [:pointer, :string, :int64, :pointer, :pointer, :int, :pointer, :int], :int
+   attach_function :rd_kafka_oauthbearer_set_token_failure, [:pointer, :string], :int
    # Log queue
    attach_function :rd_kafka_set_log_queue, [:pointer, :pointer], :void
    attach_function :rd_kafka_queue_get_main, [:pointer], :pointer
@@ -120,19 +124,21 @@ module Rdkafka
      :void, [:pointer, :int, :string, :string]
    ) do |_client_ptr, level, _level_string, line|
      severity = case level
-     when 0 || 1 || 2
+     when 0, 1, 2
        Logger::FATAL
      when 3
        Logger::ERROR
      when 4
        Logger::WARN
-     when 5 || 6
+     when 5, 6
        Logger::INFO
      when 7
        Logger::DEBUG
      else
        Logger::UNKNOWN
      end
+
+     Rdkafka::Config.ensure_log_thread
      Rdkafka::Config.log_queue << [severity, "rdkafka: #{line}"]
    end

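A quick aside on why the old form was bogus: in Ruby, `0 || 1 || 2` is an expression that short-circuits to `0` (every integer, including `0`, is truthy), so `when 0 || 1 || 2` matched only level 0. A self-contained check of ours:

```ruby
[0, 1, 2, 5, 6, 7].each do |level|
  old_style = case level
              when 0 || 1 || 2 then :fatal # evaluates as `when 0`
              when 5 || 6      then :info  # evaluates as `when 5`
              when 7           then :debug
              else :unknown
              end
  puts "level #{level} => #{old_style}"
end
# Levels 1, 2 and 6 print :unknown, matching the changelog entry.
```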
@@ -159,6 +165,32 @@ module Rdkafka
      end
    end

+   # The OAuth callback is currently global and contextless.
+   # This means that the callback will be called for all instances, and the callback must be able to determine to which instance it is associated.
+   # The instance name will be provided in the callback, allowing the callback to reference the correct instance.
+   #
+   # An example of how to use the instance name in the callback is given below.
+   # The `refresh_token` is configured as the `oauthbearer_token_refresh_callback`.
+   # `instances` is a map of client names to client instances, maintained by the user.
+   #
+   # ```
+   # def refresh_token(config, client_name)
+   #   client = instances[client_name]
+   #   client.oauthbearer_set_token(
+   #     token: 'new-token-value',
+   #     lifetime_ms: token_lifetime_ms,
+   #     principal_name: 'principal-name'
+   #   )
+   # end
+   # ```
+   OAuthbearerTokenRefreshCallback = FFI::Function.new(
+     :void, [:pointer, :string, :pointer]
+   ) do |client_ptr, config, _opaque|
+     if Rdkafka::Config.oauthbearer_token_refresh_callback
+       Rdkafka::Config.oauthbearer_token_refresh_callback.call(config, Rdkafka::Bindings.rd_kafka_name(client_ptr))
+     end
+   end
+
    # Handle

    enum :kafka_type, [
@@ -156,7 +156,8 @@ module Rdkafka
          create_topic_handle[:response] = create_topic_results[0].result_error
          create_topic_handle[:error_string] = create_topic_results[0].error_string
          create_topic_handle[:result_name] = create_topic_results[0].result_name
-         create_topic_handle[:pending] = false
+
+         create_topic_handle.unlock
        end
      end

@@ -173,7 +174,8 @@ module Rdkafka
          delete_group_handle[:response] = delete_group_results[0].result_error
          delete_group_handle[:error_string] = delete_group_results[0].error_string
          delete_group_handle[:result_name] = delete_group_results[0].result_name
-         delete_group_handle[:pending] = false
+
+         delete_group_handle.unlock
        end
      end

@@ -190,7 +192,8 @@ module Rdkafka
          delete_topic_handle[:response] = delete_topic_results[0].result_error
          delete_topic_handle[:error_string] = delete_topic_results[0].error_string
          delete_topic_handle[:result_name] = delete_topic_results[0].result_name
-         delete_topic_handle[:pending] = false
+
+         delete_topic_handle.unlock
        end
      end

@@ -207,7 +210,8 @@ module Rdkafka
          create_partitions_handle[:response] = create_partitions_results[0].result_error
          create_partitions_handle[:error_string] = create_partitions_results[0].error_string
          create_partitions_handle[:result_name] = create_partitions_results[0].result_name
-         create_partitions_handle[:pending] = false
+
+         create_partitions_handle.unlock
        end
      end

@@ -223,7 +227,8 @@ module Rdkafka
        if create_acl_handle = Rdkafka::Admin::CreateAclHandle.remove(create_acl_handle_ptr.address)
          create_acl_handle[:response] = create_acl_results[0].result_error
          create_acl_handle[:response_string] = create_acl_results[0].error_string
-         create_acl_handle[:pending] = false
+
+         create_acl_handle.unlock
        end
      end

@@ -239,11 +244,13 @@ module Rdkafka
        if delete_acl_handle = Rdkafka::Admin::DeleteAclHandle.remove(delete_acl_handle_ptr.address)
          delete_acl_handle[:response] = delete_acl_results[0].result_error
          delete_acl_handle[:response_string] = delete_acl_results[0].error_string
-         delete_acl_handle[:pending] = false
+
          if delete_acl_results[0].result_error == 0
            delete_acl_handle[:matching_acls] = delete_acl_results[0].matching_acls
            delete_acl_handle[:matching_acls_count] = delete_acl_results[0].matching_acls_count
          end
+
+         delete_acl_handle.unlock
        end
      end

@@ -254,17 +261,18 @@ module Rdkafka
        if describe_acl_handle = Rdkafka::Admin::DescribeAclHandle.remove(describe_acl_handle_ptr.address)
          describe_acl_handle[:response] = describe_acl.result_error
          describe_acl_handle[:response_string] = describe_acl.error_string
-         describe_acl_handle[:pending] = false
+
          if describe_acl.result_error == 0
            describe_acl_handle[:acls] = describe_acl.matching_acls
            describe_acl_handle[:acls_count] = describe_acl.matching_acls_count
          end
+
+         describe_acl_handle.unlock
        end
      end
    end

    # FFI Function used for Message Delivery callbacks
-
    DeliveryCallbackFunction = FFI::Function.new(
      :void, [:pointer, :pointer, :pointer]
    ) do |client_ptr, message_ptr, opaque_ptr|
@@ -284,7 +292,6 @@ module Rdkafka
          delivery_handle[:partition] = message[:partition]
          delivery_handle[:offset] = message[:offset]
          delivery_handle[:topic_name] = FFI::MemoryPointer.from_string(topic_name)
-         delivery_handle[:pending] = false

          # Call delivery callback on opaque
          if opaque = Rdkafka::Config.opaques[opaque_ptr.to_i]
@@ -299,9 +306,10 @@ module Rdkafka
              delivery_handle
            )
          end
+
+         delivery_handle.unlock
        end
      end
    end
-
    end
  end
@@ -15,13 +15,13 @@ module Rdkafka
    @@opaques = ObjectSpace::WeakMap.new
    # @private
    @@log_queue = Queue.new
-
-   Thread.start do
-     loop do
-       severity, msg = @@log_queue.pop
-       @@logger.add(severity, msg)
-     end
-   end
+   # We memoize the thread on the first log flush
+   # This also allows us to restart the logger thread on forks
+   @@log_thread = nil
+   # @private
+   @@log_mutex = Mutex.new
+   # @private
+   @@oauthbearer_token_refresh_callback = nil

    # Returns the current logger, by default this is a logger to stdout.
    #
@@ -30,6 +30,24 @@ module Rdkafka
      @@logger
    end

+   # Makes sure that there is a thread for consuming logs
+   # We do not spawn the thread immediately; to support forking we need to check that it is still operating
+   def self.ensure_log_thread
+     return if @@log_thread && @@log_thread.alive?
+
+     @@log_mutex.synchronize do
+       # Restart if dead (fork, crash)
+       @@log_thread = nil if @@log_thread && !@@log_thread.alive?
+
+       @@log_thread ||= Thread.start do
+         loop do
+           severity, msg = @@log_queue.pop
+           @@logger.add(severity, msg)
+         end
+       end
+     end
+   end
+
    # Returns a queue whose contents will be passed to the configured logger. Each entry
    # should follow the format [Logger::Severity, String]. The benefit over calling the
    # logger directly is that this is safe to use from trap contexts.
@@ -87,6 +105,24 @@ module Rdkafka
      @@error_callback
    end

+   # Sets the SASL/OAUTHBEARER token refresh callback.
+   # This callback will be triggered when it is time to refresh the client's OAUTHBEARER token
+   #
+   # @param callback [Proc, #call] The callback
+   #
+   # @return [nil]
+   def self.oauthbearer_token_refresh_callback=(callback)
+     raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) || callback == nil
+     @@oauthbearer_token_refresh_callback = callback
+   end
+
+   # Returns the current oauthbearer_token_refresh_callback callback, by default this is nil.
+   #
+   # @return [Proc, nil]
+   def self.oauthbearer_token_refresh_callback
+     @@oauthbearer_token_refresh_callback
+   end
+
    # @private
    def self.opaques
      @@opaques
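A hedged wiring sketch for the pieces above; the `fetch_token` helper and the `INSTANCES` registry are hypothetical, while `oauthbearer_token_refresh_callback=`, `#name`, and `oauthbearer_set_token` come from this diff:

```ruby
INSTANCES = {} # client name => client instance, maintained by the application

Rdkafka::Config.oauthbearer_token_refresh_callback = lambda do |_config, client_name|
  client = INSTANCES[client_name]

  # fetch_token is a hypothetical helper returning token data from your IdP.
  token = fetch_token

  client.oauthbearer_set_token(
    token: token.fetch(:value),
    lifetime_ms: token.fetch(:lifetime_ms),
    principal_name: token.fetch(:principal_name)
  )
end

admin = Rdkafka::Config.new(
  :"bootstrap.servers" => "localhost:9092",
  :"security.protocol" => "sasl_ssl",
  :"sasl.mechanisms" => "OAUTHBEARER"
).admin

# Register under the librdkafka client name so the callback can find it.
INSTANCES[admin.name] = admin
```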
@@ -159,11 +195,13 @@ module Rdkafka

    # Creates a consumer with this configuration.
    #
+   # @param native_kafka_auto_start [Boolean] should the native kafka operations be started
+   #   automatically. Defaults to true. Set to false only when doing complex initialization.
    # @return [Consumer] The created consumer
    #
    # @raise [ConfigError] When the configuration contains invalid options
    # @raise [ClientCreationError] When the native client cannot be created
-   def consumer
+   def consumer(native_kafka_auto_start: true)
      opaque = Opaque.new
      config = native_config(opaque)

@@ -183,18 +221,21 @@ module Rdkafka
        Rdkafka::NativeKafka.new(
          kafka,
          run_polling_thread: false,
-         opaque: opaque
+         opaque: opaque,
+         auto_start: native_kafka_auto_start
        )
      )
    end

    # Create a producer with this configuration.
    #
+   # @param native_kafka_auto_start [Boolean] should the native kafka operations be started
+   #   automatically. Defaults to true. Set to false only when doing complex initialization.
    # @return [Producer] The created producer
    #
    # @raise [ConfigError] When the configuration contains invalid options
    # @raise [ClientCreationError] When the native client cannot be created
-   def producer
+   def producer(native_kafka_auto_start: true)
      # Create opaque
      opaque = Opaque.new
      # Create Kafka config
@@ -203,11 +244,15 @@ module Rdkafka
      Rdkafka::Bindings.rd_kafka_conf_set_dr_msg_cb(config, Rdkafka::Callbacks::DeliveryCallbackFunction)
      # Return producer with Kafka client
      partitioner_name = self[:partitioner] || self["partitioner"]
+
+     kafka = native_kafka(config, :rd_kafka_producer)
+
      Rdkafka::Producer.new(
        Rdkafka::NativeKafka.new(
-         native_kafka(config, :rd_kafka_producer),
+         kafka,
          run_polling_thread: true,
-         opaque: opaque
+         opaque: opaque,
+         auto_start: native_kafka_auto_start
        ),
        partitioner_name
      ).tap do |producer|
@@ -217,19 +262,25 @@ module Rdkafka

    # Creates an admin instance with this configuration.
    #
+   # @param native_kafka_auto_start [Boolean] should the native kafka operations be started
+   #   automatically. Defaults to true. Set to false only when doing complex initialization.
    # @return [Admin] The created admin instance
    #
    # @raise [ConfigError] When the configuration contains invalid options
    # @raise [ClientCreationError] When the native client cannot be created
-   def admin
+   def admin(native_kafka_auto_start: true)
      opaque = Opaque.new
      config = native_config(opaque)
      Rdkafka::Bindings.rd_kafka_conf_set_background_event_cb(config, Rdkafka::Callbacks::BackgroundEventCallbackFunction)
+
+     kafka = native_kafka(config, :rd_kafka_producer)
+
      Rdkafka::Admin.new(
        Rdkafka::NativeKafka.new(
-         native_kafka(config, :rd_kafka_producer),
+         kafka,
          run_polling_thread: true,
-         opaque: opaque
+         opaque: opaque,
+         auto_start: native_kafka_auto_start
        )
      )
    end
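A short sketch of the new deferred-start flow introduced above (the broker address is a placeholder); `Admin#start` is the explicit kick-off added in this release:

```ruby
config = Rdkafka::Config.new(:"bootstrap.servers" => "localhost:9092")

# Build the client without spawning its native polling thread yet.
admin = config.admin(native_kafka_auto_start: false)

# ... any complex process-level initialization happens here ...

admin.start # starts the native Kafka polling thread and init polling
admin.close
```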
@@ -283,6 +334,9 @@ module Rdkafka

      # Set error callback
      Rdkafka::Bindings.rd_kafka_conf_set_error_cb(config, Rdkafka::Bindings::ErrorCallback)
+
+     # Set oauth callback
+     Rdkafka::Bindings.rd_kafka_conf_set_oauthbearer_token_refresh_cb(config, Rdkafka::Bindings::OAuthbearerTokenRefreshCallback)
    end
  end