karafka-rdkafka 0.14.11 → 0.15.0.alpha1
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data/CHANGELOG.md +4 -2
- data/README.md +19 -9
- data/docker-compose.yml +1 -1
- data/ext/Rakefile +1 -11
- data/lib/rdkafka/abstract_handle.rb +44 -20
- data/lib/rdkafka/admin/config_binding_result.rb +30 -0
- data/lib/rdkafka/admin/config_resource_binding_result.rb +18 -0
- data/lib/rdkafka/admin/create_topic_report.rb +1 -1
- data/lib/rdkafka/admin/delete_groups_report.rb +1 -1
- data/lib/rdkafka/admin/delete_topic_report.rb +1 -1
- data/lib/rdkafka/admin/describe_acl_report.rb +1 -0
- data/lib/rdkafka/admin/describe_configs_handle.rb +33 -0
- data/lib/rdkafka/admin/describe_configs_report.rb +48 -0
- data/lib/rdkafka/admin/incremental_alter_configs_handle.rb +33 -0
- data/lib/rdkafka/admin/incremental_alter_configs_report.rb +48 -0
- data/lib/rdkafka/admin.rb +159 -0
- data/lib/rdkafka/bindings.rb +42 -0
- data/lib/rdkafka/callbacks.rb +103 -19
- data/lib/rdkafka/version.rb +1 -1
- data/lib/rdkafka.rb +6 -0
- data/spec/rdkafka/abstract_handle_spec.rb +34 -21
- data/spec/rdkafka/admin_spec.rb +275 -3
- data/spec/rdkafka/consumer_spec.rb +1 -1
- data/spec/spec_helper.rb +1 -1
- data.tar.gz.sig +0 -0
- metadata +8 -3
- metadata.gz.sig +0 -0
- data/dist/librdkafka_2.3.0.tar.gz +0 -0
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2cd7ecb658a7aedb5953f4d39c8941a282d6dc7001c7600129d86ebb05a550ce
+  data.tar.gz: 1dc2ebcb1deebe94197f216f0da3d78e2ed5d55de9b1be8b0654a1042e2d4289
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a300e2dcdf5aac16b59b0289a71fec440747e9238c5a801572c7d1ce6a433843ab47e76915a239f860a24cd832323dab952e09b62c7184462c806183958b2a85
+  data.tar.gz: c1a3b1d23522f2941b9d39d3b5955f40d3205a3820188c7cba02f7825b17bf534963fec68956271908394e443f7b8a84edcdf972021a74f3ed4bcf93d76b1b6d
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,7 +1,9 @@
 # Rdkafka Changelog

-## 0.
-- [
+## 0.15.0 (Unreleased)
+- [Feature] Support incremental config describe + alter API.
+- [Enhancement] Replace the time-poll-based wait engine with an event-based one to improve response times on blocking operations and wait (nijikon + mensfeld)
+- [Change] The `wait_timeout` argument in the `AbstractHandle.wait` method is deprecated and will be removed in future versions without replacement. We don't rely on its value anymore (nijikon)

 ## 0.14.10 (2024-02-08)
 - [Fix] Background logger stops working after forking causing memory leaks (mensfeld).
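To illustrate the `wait_timeout` deprecation above, here is a minimal sketch (not part of the diff; the broker address and topic are placeholders):

```ruby
require "rdkafka"

producer = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").producer
handle   = producer.produce(topic: "example_topic", payload: "example payload")

handle.wait(max_wait_timeout: 60, wait_timeout: 0.1) # now emits a deprecation warning
handle.wait(max_wait_timeout: 60)                    # preferred form

producer.close
```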
data/README.md CHANGED
@@ -18,7 +18,7 @@ become EOL.

 `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.

-The most
+The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.

 ## Table of content

@@ -30,6 +30,7 @@ The most important pieces of a Kafka client are implemented, and we aim to provi
 - [Higher Level Libraries](#higher-level-libraries)
   * [Message Processing Frameworks](#message-processing-frameworks)
   * [Message Publishing Libraries](#message-publishing-libraries)
+- [Forking](#forking)
 - [Development](#development)
 - [Example](#example)
 - [Versions](#versions)
@@ -47,12 +48,13 @@ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications,

 ## Installation

-
-If you have any problems installing the gem, please open an issue.
+When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue.

 ## Usage

-
+Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples.
+
+Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries also maintained by us based on rdkafka-ruby.

 ### Consuming Messages

@@ -74,7 +76,7 @@ end

 ### Producing Messages

-Produce
+Produce several messages, put the delivery handles in an array, and
 wait for them before exiting. This way the messages will be batched and
 efficiently sent to Kafka.

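The README's full producing example is not shown in this diff; a representative sketch of the pattern it describes (collecting delivery handles and waiting on them at the end) might look like this, with the broker and topic as placeholders:

```ruby
require "rdkafka"

producer = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").producer

delivery_handles = 10.times.map do |i|
  producer.produce(topic: "ruby-test-topic", payload: "Payload #{i}")
end

# Block until every message has been acknowledged by the brokers
delivery_handles.each(&:wait)
producer.close
```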
@@ -95,13 +97,11 @@ end
 delivery_handles.each(&:wait)
 ```

-Note that creating a producer consumes some resources that will not be
-released until it `#close` is explicitly called, so be sure to call
-`Config#producer` only as necessary.
+Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.

 ## Higher Level Libraries

-Currently, there are two actively developed frameworks based on rdkafka-ruby
+Currently, there are two actively developed frameworks based on `rdkafka-ruby` that provide a higher-level API for working with Kafka messages, plus one library for publishing messages.

 ### Message Processing Frameworks

@@ -112,6 +112,16 @@ Currently, there are two actively developed frameworks based on rdkafka-ruby, th

 * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.

+## Forking
+
+When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory states. This limitation requires careful handling, especially in Ruby applications that rely on forking.
+
+To address this, it's highly recommended to:
+
+- Never initialize any `rdkafka-ruby` producers or consumers before forking, to avoid state corruption.
+- Before forking, always close any producers or consumers you have opened.
+- Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies.
+
 ## Development

 Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
data/docker-compose.yml CHANGED
data/ext/Rakefile CHANGED
@@ -22,21 +22,11 @@ task :default => :clean do
     ENV["LDFLAGS"] = "-L#{homebrew_prefix}/lib" unless ENV["LDFLAGS"]
   end

-  releases = File.expand_path(File.join(File.dirname(__FILE__), '../dist'))
-
   recipe.files << {
-    :url => "
+    :url => "https://codeload.github.com/edenhill/librdkafka/tar.gz/v#{Rdkafka::LIBRDKAFKA_VERSION}",
     :sha256 => Rdkafka::LIBRDKAFKA_SOURCE_SHA256
   }
   recipe.configure_options = ["--host=#{recipe.host}"]
-
-  # Disable using libc regex engine in favor of the embedded one
-  # The default regex engine of librdkafka does not always work exactly as most of the users
-  # would expect, hence this flag allows for changing it to the other one
-  if ENV.key?('RDKAFKA_DISABLE_REGEX_EXT')
-    recipe.configure_options << '--disable-regex-ext'
-  end
-
   recipe.cook
   # Move dynamic library we're interested in
   if recipe.host.include?('darwin')
data/lib/rdkafka/abstract_handle.rb CHANGED
@@ -14,6 +14,13 @@ module Rdkafka

     # Registry for registering all the handles.
     REGISTRY = {}
+    # Deadline far enough in the future (10_000_000_000 seconds, roughly 317 years) to
+    # effectively mean waiting forever
+    MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
+    # Deprecation message for the wait_timeout argument in the wait method
+    WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
+      "We don't rely on its value anymore. Please refactor your code to remove references to it."
+
+    private_constant :MAX_WAIT_TIMEOUT_FOREVER

     class << self
       # Adds handle to the register
@@ -32,6 +39,12 @@ module Rdkafka
       end
     end

+    def initialize
+      @mutex = Thread::Mutex.new
+      @resource = Thread::ConditionVariable.new
+
+      super
+    end

     # Whether the handle is still pending.
     #
@@ -45,37 +58,48 @@ module Rdkafka
     # on the operation. In this case it is possible to call wait again.
     #
     # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
-    #   If this is nil
-    # @param wait_timeout [
-    #   operation has completed
+    #   If this is nil we will wait forever
+    # @param wait_timeout [nil] deprecated
     # @param raise_response_error [Boolean] should we raise error when waiting finishes
     #
     # @return [Object] Operation-specific result
     #
     # @raise [RdkafkaError] When the operation failed
     # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
-    def wait(max_wait_timeout: 60, wait_timeout:
-
-
-
-
-
-
-
-
-
-
+    def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
+      Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
+
+      timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
+
+      @mutex.synchronize do
+        loop do
+          if pending?
+            to_wait = (timeout - monotonic_now)
+
+            if to_wait.positive?
+              @resource.wait(@mutex, to_wait)
+            else
+              raise WaitTimeoutError.new(
+                "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
+              )
+            end
+          elsif self[:response] != 0 && raise_response_error
+            raise_error
+          else
+            return create_result
           end
-          sleep wait_timeout
-        elsif self[:response] != 0 && raise_response_error
-          raise_error
-        else
-          return create_result
         end
       end
     end

+    # Unlock the resources
+    def unlock
+      @mutex.synchronize do
+        self[:pending] = false
+        @resource.broadcast
+      end
+    end
+
     # @return [String] the name of the operation (e.g. "delivery")
     def operation_name
       raise "Must be implemented by subclass!"
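For readers unfamiliar with the pattern, here is a standalone sketch (not part of the diff) of the event-based wait that replaces the previous `sleep`-based polling: a waiter blocks on a condition variable with a deadline, and the completing side wakes all waiters via `broadcast`, mirroring `#wait` and `#unlock` above. The `done` flag stands in for the handle's `:pending` field.

```ruby
mutex    = Thread::Mutex.new
resource = Thread::ConditionVariable.new
done     = false

waiter = Thread.new do
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 5

  mutex.synchronize do
    until done
      remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
      raise "timed out" unless remaining.positive?

      # Releases the mutex while sleeping, re-acquires it on wakeup
      resource.wait(mutex, remaining)
    end
  end
end

# Simulates the background callback completing the operation
Thread.new do
  sleep 0.1

  mutex.synchronize do
    done = true
    resource.broadcast # wake every waiting thread immediately
  end
end

waiter.join
```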
data/lib/rdkafka/admin/config_binding_result.rb ADDED
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    # A single config binding result that represents its values extracted from C
+    class ConfigBindingResult
+      attr_reader :name, :value, :read_only, :default, :sensitive, :synonym, :synonyms
+
+      # @param config_ptr [FFI::Pointer] config pointer
+      def initialize(config_ptr)
+        @name = Bindings.rd_kafka_ConfigEntry_name(config_ptr)
+        @value = Bindings.rd_kafka_ConfigEntry_value(config_ptr)
+        @read_only = Bindings.rd_kafka_ConfigEntry_is_read_only(config_ptr)
+        @default = Bindings.rd_kafka_ConfigEntry_is_default(config_ptr)
+        @sensitive = Bindings.rd_kafka_ConfigEntry_is_sensitive(config_ptr)
+        @synonym = Bindings.rd_kafka_ConfigEntry_is_synonym(config_ptr)
+        @synonyms = []
+
+        # The code below builds up the config synonyms using the same config binding
+        pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+        synonym_ptr = Bindings.rd_kafka_ConfigEntry_synonyms(config_ptr, pointer_to_size_t)
+        synonyms_ptr = synonym_ptr.read_array_of_pointer(pointer_to_size_t.read_int)
+
+        (1..pointer_to_size_t.read_int).map do |ar|
+          @synonyms << self.class.new(synonyms_ptr[ar - 1])
+        end
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/config_resource_binding_result.rb ADDED
@@ -0,0 +1,18 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    # A simple binding that represents the requested config resource
+    class ConfigResourceBindingResult
+      attr_reader :name, :type, :configs, :configs_count
+
+      def initialize(config_resource_ptr)
+        ffi_binding = Bindings::ConfigResource.new(config_resource_ptr)
+
+        @name = ffi_binding[:name]
+        @type = ffi_binding[:type]
+        @configs = []
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/describe_configs_handle.rb ADDED
@@ -0,0 +1,33 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DescribeConfigsHandle < AbstractHandle
+      layout :pending, :bool,
+             :response, :int,
+             :response_string, :pointer,
+             :config_entries, :pointer,
+             :entry_count, :int
+
+      # @return [String] the name of the operation.
+      def operation_name
+        "describe configs"
+      end
+
+      # @return [DescribeConfigsReport] instance with the described resources and their configs.
+      def create_result
+        DescribeConfigsReport.new(
+          config_entries: self[:config_entries],
+          entry_count: self[:entry_count]
+        )
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: self[:response_string].read_string
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/describe_configs_report.rb ADDED
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DescribeConfigsReport
+      attr_reader :resources
+
+      def initialize(config_entries:, entry_count:)
+        @resources = []
+
+        return if config_entries == FFI::Pointer::NULL
+
+        config_entries
+          .read_array_of_pointer(entry_count)
+          .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+          .each do |config_resource_result_ptr|
+            config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+            pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+            configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+              config_resource_result_ptr,
+              pointer_to_size_t
+            )
+
+            configs_ptr
+              .read_array_of_pointer(pointer_to_size_t.read_int)
+              .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+              .each { |config_binding| config_resource_result.configs << config_binding }
+
+            @resources << config_resource_result
+          end
+      ensure
+        return if config_entries == FFI::Pointer::NULL
+
+        Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+      end
+
+      private
+
+      def validate!(config_resource_result_ptr)
+        RdkafkaError.validate!(
+          Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+          Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/incremental_alter_configs_handle.rb ADDED
@@ -0,0 +1,33 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class IncrementalAlterConfigsHandle < AbstractHandle
+      layout :pending, :bool,
+             :response, :int,
+             :response_string, :pointer,
+             :config_entries, :pointer,
+             :entry_count, :int
+
+      # @return [String] the name of the operation.
+      def operation_name
+        "incremental alter configs"
+      end
+
+      # @return [IncrementalAlterConfigsReport] instance with the altered resources.
+      def create_result
+        IncrementalAlterConfigsReport.new(
+          config_entries: self[:config_entries],
+          entry_count: self[:entry_count]
+        )
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: self[:response_string].read_string
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/incremental_alter_configs_report.rb ADDED
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class IncrementalAlterConfigsReport
+      attr_reader :resources
+
+      def initialize(config_entries:, entry_count:)
+        @resources = []
+
+        return if config_entries == FFI::Pointer::NULL
+
+        config_entries
+          .read_array_of_pointer(entry_count)
+          .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+          .each do |config_resource_result_ptr|
+            config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+            pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+            configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+              config_resource_result_ptr,
+              pointer_to_size_t
+            )
+
+            configs_ptr
+              .read_array_of_pointer(pointer_to_size_t.read_int)
+              .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+              .each { |config_binding| config_resource_result.configs << config_binding }
+
+            @resources << config_resource_result
+          end
+      ensure
+        return if config_entries == FFI::Pointer::NULL
+
+        Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+      end
+
+      private
+
+      def validate!(config_resource_result_ptr)
+        RdkafkaError.validate!(
+          Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+          Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin.rb CHANGED
@@ -605,6 +605,165 @@ module Rdkafka
       describe_acl_handle
     end

+    # Describe configs
+    #
+    # @param resources [Array<Hash>] Array where elements are hashes with two keys:
+    #   - `:resource_type` - numerical resource type based on Kafka API
+    #   - `:resource_name` - string with resource name
+    # @return [DescribeConfigsHandle] Describe config handle that can be used to wait for the
+    #   result of fetching resources with their appropriate configs
+    #
+    # @raise [RdkafkaError]
+    #
+    # @note Several resources can be requested in one go, but only one broker at a time
+    def describe_configs(resources)
+      closed_admin_check(__method__)
+
+      handle = DescribeConfigsHandle.new
+      handle[:pending] = true
+      handle[:response] = -1
+
+      queue_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+      end
+
+      if queue_ptr.null?
+        raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+      end
+
+      admin_options_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+          inner,
+          Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS
+        )
+      end
+
+      DescribeConfigsHandle.register(handle)
+      Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+      pointer_array = resources.map do |resource_details|
+        Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+          resource_details.fetch(:resource_type),
+          FFI::MemoryPointer.from_string(
+            resource_details.fetch(:resource_name)
+          )
+        )
+      end
+
+      configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+      configs_array_ptr.write_array_of_pointer(pointer_array)
+
+      begin
+        @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_DescribeConfigs(
+            inner,
+            configs_array_ptr,
+            pointer_array.size,
+            admin_options_ptr,
+            queue_ptr
+          )
+        end
+      rescue Exception
+        DescribeConfigsHandle.remove(handle.to_ptr.address)
+
+        raise
+      ensure
+        Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+          configs_array_ptr,
+          pointer_array.size
+        ) if configs_array_ptr
+      end
+
+      handle
+    end
+
+    # Alters in an incremental way all the configs provided for given resources
+    #
+    # @param resources_with_configs [Array<Hash>] resources with the configs key that contains
+    #   name, value and the proper op_type to perform on this value.
+    #
+    # @return [IncrementalAlterConfigsHandle] Incremental alter configs handle that can be used
+    #   to wait for the result of altering resources with their appropriate configs
+    #
+    # @raise [RdkafkaError]
+    #
+    # @note Several resources can be requested in one go, but only one broker at a time
+    # @note The results won't contain altered values but only the altered resources
+    def incremental_alter_configs(resources_with_configs)
+      closed_admin_check(__method__)
+
+      handle = IncrementalAlterConfigsHandle.new
+      handle[:pending] = true
+      handle[:response] = -1
+
+      queue_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+      end
+
+      if queue_ptr.null?
+        raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+      end
+
+      admin_options_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+          inner,
+          Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS
+        )
+      end
+
+      IncrementalAlterConfigsHandle.register(handle)
+      Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+      # Here we build each resource together with its incremental configs
+      pointer_array = resources_with_configs.map do |resource_details|
+        # First build the appropriate resource representation
+        resource_ptr = Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+          resource_details.fetch(:resource_type),
+          FFI::MemoryPointer.from_string(
+            resource_details.fetch(:resource_name)
+          )
+        )
+
+        resource_details.fetch(:configs).each do |config|
+          Bindings.rd_kafka_ConfigResource_add_incremental_config(
+            resource_ptr,
+            config.fetch(:name),
+            config.fetch(:op_type),
+            config.fetch(:value)
+          )
+        end
+
+        resource_ptr
+      end
+
+      configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+      configs_array_ptr.write_array_of_pointer(pointer_array)
+
+      begin
+        @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_IncrementalAlterConfigs(
+            inner,
+            configs_array_ptr,
+            pointer_array.size,
+            admin_options_ptr,
+            queue_ptr
+          )
+        end
+      rescue Exception
+        IncrementalAlterConfigsHandle.remove(handle.to_ptr.address)
+
+        raise
+      ensure
+        Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+          configs_array_ptr,
+          pointer_array.size
+        ) if configs_array_ptr
+      end
+
+      handle
+    end
+
     private

     def closed_admin_check(method)
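A usage sketch for the two new admin APIs (not part of the diff; the broker, topic, config name, and numeric constants are assumptions — in librdkafka's enums, resource type `2` corresponds to topics and op type `0` to a "set" operation):

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Fetch the current configs of a topic
report = admin.describe_configs(
  [{ resource_type: 2, resource_name: "example_topic" }]
).wait(max_wait_timeout: 15)

report.resources.each do |resource|
  resource.configs.each { |config| puts "#{config.name} = #{config.value}" }
end

# Incrementally set a single config value on the same topic
admin.incremental_alter_configs(
  [{
    resource_type: 2,
    resource_name: "example_topic",
    configs: [{ name: "delete.retention.ms", value: "8640000", op_type: 0 }]
  }]
).wait(max_wait_timeout: 15)

admin.close
```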