karafka-rdkafka 0.14.10 → 0.15.0.alpha2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 92d61e2b492453bf19ead6abf1c9377d5222aeba36c19c25caff5641a2c8fb1b
- data.tar.gz: 4170931c8ced8d09813d22359c36d36da09b3f39d2429daad6a342dd3df2982c
+ metadata.gz: 4f13043e49d80f647a9012f54ec3f54f77c93c275f0a16dbc689b9695a9592ab
+ data.tar.gz: f1d1d8dbba3389280fa19035b9507e46949b4a34a855147dc68642a12f20a045
  SHA512:
- metadata.gz: ceae2da64aad6589779b60160fc28b5db6bc305c289f76bbf7f0aa034afdf88277b3aa75f4c14ec8b441250e81bca7bf7afededea22c2eb0568753cf6968c2b3
- data.tar.gz: 61a3c8a40bca6d598782b093f6a2575fcdbe247d6a31c624f95e236015d713e3d2ace38e2fae43b622b9e4c6b37be57ef0f2d508a0a57310d3eeb79404146436
+ metadata.gz: '09c0e2f0171c07b9d70d75d72878b6a8dd039ab2880be4ddf142eb5542637016bc55da799e9ac79f9ecff9adcd7fc63abe17145bb15bda1ab212220b7aeea8c5'
+ data.tar.gz: 2eb1fccc6380ca4841f65a6cd319bcffdf848767e4f791c0230d934ba9588735abd8e7d042f6159226c837d7f501c71cce556d683f5b14045610d4aa987c2b02
checksums.yaml.gz.sig CHANGED
Binary file
data/.gitignore CHANGED
@@ -10,3 +10,5 @@ ext/librdkafka.*
  doc
  coverage
  vendor
+ .idea/
+ out/
data/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # Rdkafka Changelog
 
+ ## 0.15.0 (Unreleased)
+ - **[Feature]** Oauthbearer token refresh callback (bruce-szalwinski-he)
+ - **[Feature]** Support incremental config describe + alter API (mensfeld)
+ - [Enhancement] Replace the time-poll based wait engine with an event-based one to improve response times on blocking operations and waits (nijikon + mensfeld)
+ - [Change] The `wait_timeout` argument in the `AbstractHandle.wait` method is deprecated and will be removed in future versions without replacement. We don't rely on its value anymore (nijikon)
+
  ## 0.14.10 (2024-02-08)
  - [Fix] Background logger stops working after forking causing memory leaks (mensfeld).
 
data/README.md CHANGED
@@ -18,7 +18,7 @@ become EOL.
 
  `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.
 
- The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
+ The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
 
  ## Table of content
 
@@ -30,6 +30,7 @@ The most important pieces of a Kafka client are implemented, and we aim to provi
  - [Higher Level Libraries](#higher-level-libraries)
    * [Message Processing Frameworks](#message-processing-frameworks)
    * [Message Publishing Libraries](#message-publishing-libraries)
+ - [Forking](#forking)
  - [Development](#development)
  - [Example](#example)
  - [Versions](#versions)
@@ -47,12 +48,13 @@ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications,
 
  ## Installation
 
- This gem downloads and compiles librdkafka when it is installed. If you
- If you have any problems installing the gem, please open an issue.
+ When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue.
 
  ## Usage
 
- See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
+ Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples.
+
+ Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries also maintained by us, based on rdkafka-ruby.
 
  ### Consuming Messages
 
@@ -74,7 +76,7 @@ end
 
  ### Producing Messages
 
- Produce a number of messages, put the delivery handles in an array, and
+ Produce several messages, put the delivery handles in an array, and
  wait for them before exiting. This way the messages will be batched and
  efficiently sent to Kafka.
 
@@ -95,13 +97,11 @@ end
  delivery_handles.each(&:wait)
  ```
 
- Note that creating a producer consumes some resources that will not be
- released until it `#close` is explicitly called, so be sure to call
- `Config#producer` only as necessary.
+ Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.
 
  ## Higher Level Libraries
 
- Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages.
+ Currently, there are two actively developed frameworks based on `rdkafka-ruby` that provide a higher-level API for working with Kafka messages, and one library for publishing messages.
 
  ### Message Processing Frameworks
 
@@ -112,6 +112,16 @@ Currently, there are two actively developed frameworks based on rdkafka-ruby, th
 
  * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.
 
+ ## Forking
+
+ When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory states. This limitation requires careful handling, especially in Ruby applications that rely on forking.
+
+ To address this, it's highly recommended to:
+
+ - Never initialize any `rdkafka-ruby` producers or consumers before forking, to avoid state corruption.
+ - Before forking, always close any producers or consumers you have opened.
+ - Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies.
+
  ## Development
 
  Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
    kafka:
      container_name: kafka
-     image: confluentinc/cp-kafka:7.5.3
+     image: confluentinc/cp-kafka:7.6.0
 
      ports:
        - 9092:9092
@@ -14,6 +14,13 @@ module Rdkafka
 
    # Registry for registering all the handles.
    REGISTRY = {}
+   # Effectively infinite wait timeout (10 billion seconds, i.e. hundreds of years)
+   MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
+   # Deprecation message for the wait_timeout argument in the wait method
+   WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
+     "We don't rely on its value anymore. Please refactor your code to remove references to it."
+
+   private_constant :MAX_WAIT_TIMEOUT_FOREVER
 
    class << self
      # Adds handle to the register
@@ -32,6 +39,12 @@ module Rdkafka
        end
      end
 
+   def initialize
+     @mutex = Thread::Mutex.new
+     @resource = Thread::ConditionVariable.new
+
+     super
+   end
 
    # Whether the handle is still pending.
    #
@@ -45,37 +58,48 @@ module Rdkafka
    # on the operation. In this case it is possible to call wait again.
    #
    # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
-   #   If this is nil it does not time out.
-   # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the
-   #   operation has completed
+   #   If this is nil we will wait forever
+   # @param wait_timeout [nil] deprecated
    # @param raise_response_error [Boolean] should we raise error when waiting finishes
    #
    # @return [Object] Operation-specific result
    #
    # @raise [RdkafkaError] When the operation failed
    # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
-   def wait(max_wait_timeout: 60, wait_timeout: 0.1, raise_response_error: true)
-     timeout = if max_wait_timeout
-       monotonic_now + max_wait_timeout
-     else
-       nil
-     end
-     loop do
-       if pending?
-         if timeout && timeout <= monotonic_now
-           raise WaitTimeoutError.new(
-             "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
-           )
+   def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
+     Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
+
+     timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
+
+     @mutex.synchronize do
+       loop do
+         if pending?
+           to_wait = (timeout - monotonic_now)
+
+           if to_wait.positive?
+             @resource.wait(@mutex, to_wait)
+           else
+             raise WaitTimeoutError.new(
+               "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
+             )
+           end
+         elsif self[:response] != 0 && raise_response_error
+           raise_error
+         else
+           return create_result
          end
-         sleep wait_timeout
-       elsif self[:response] != 0 && raise_response_error
-         raise_error
-       else
-         return create_result
        end
      end
    end
 
+   # Unlock the resources
+   def unlock
+     @mutex.synchronize do
+       self[:pending] = false
+       @resource.broadcast
+     end
+   end
+
    # @return [String] the name of the operation (e.g. "delivery")
    def operation_name
      raise "Must be implemented by subclass!"
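To make the change concrete, here is a small standalone sketch (illustrative only, not the gem's internals verbatim) of the event-based pattern the rewritten `wait` uses: the waiter blocks on a condition variable with a deadline instead of polling with `sleep`, and whoever completes the operation flips the pending flag and broadcasts, waking the waiter immediately:

```ruby
mutex = Thread::Mutex.new
resource = Thread::ConditionVariable.new
pending = true

waiter = Thread.new do
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 60

  mutex.synchronize do
    while pending
      to_wait = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
      raise "timed out after 60 seconds" unless to_wait.positive?

      # Sleeps until #broadcast fires or the deadline passes; no fixed poll interval
      resource.wait(mutex, to_wait)
    end
  end

  puts "operation completed"
end

sleep 0.1 # simulate the asynchronous operation finishing elsewhere

# The equivalent of AbstractHandle#unlock: mark done and wake all waiters
mutex.synchronize do
  pending = false
  resource.broadcast
end

waiter.join
```

This is why passing `wait_timeout` is now pointless: wakeups are driven by the broadcast, not by a recheck interval, so the argument only triggers the deprecation warning shown above.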
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     # A single config binding result that represents its values extracted from C
+     class ConfigBindingResult
+       attr_reader :name, :value, :read_only, :default, :sensitive, :synonym, :synonyms
+
+       # @param config_ptr [FFI::Pointer] config pointer
+       def initialize(config_ptr)
+         @name = Bindings.rd_kafka_ConfigEntry_name(config_ptr)
+         @value = Bindings.rd_kafka_ConfigEntry_value(config_ptr)
+         @read_only = Bindings.rd_kafka_ConfigEntry_is_read_only(config_ptr)
+         @default = Bindings.rd_kafka_ConfigEntry_is_default(config_ptr)
+         @sensitive = Bindings.rd_kafka_ConfigEntry_is_sensitive(config_ptr)
+         @synonym = Bindings.rd_kafka_ConfigEntry_is_synonym(config_ptr)
+         @synonyms = []
+
+         # The code below builds up the config synonyms using the same config binding
+         pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+         synonym_ptr = Bindings.rd_kafka_ConfigEntry_synonyms(config_ptr, pointer_to_size_t)
+         synonyms_ptr = synonym_ptr.read_array_of_pointer(pointer_to_size_t.read_int)
+
+         (1..pointer_to_size_t.read_int).map do |ar|
+           @synonyms << self.class.new(synonyms_ptr[ar - 1])
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,18 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     # A simple binding that represents the requested config resource
+     class ConfigResourceBindingResult
+       attr_reader :name, :type, :configs, :configs_count
+
+       def initialize(config_resource_ptr)
+         ffi_binding = Bindings::ConfigResource.new(config_resource_ptr)
+
+         @name = ffi_binding[:name]
+         @type = ffi_binding[:type]
+         @configs = []
+       end
+     end
+   end
+ end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
@@ -10,6 +10,7 @@ module Rdkafka
 
    def initialize(acls:, acls_count:)
      @acls=[]
+
      if acls != FFI::Pointer::NULL
        acl_binding_result_pointers = acls.read_array_of_pointer(acls_count)
        (1..acls_count).map do |acl_index|
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DescribeConfigsHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer,
+              :config_entries, :pointer,
+              :entry_count, :int
+
+       # @return [String] the name of the operation.
+       def operation_name
+         "describe configs"
+       end
+
+       # @return [DescribeConfigsReport] instance with an array of config resources that match the request.
+       def create_result
+         DescribeConfigsReport.new(
+           config_entries: self[:config_entries],
+           entry_count: self[:entry_count]
+         )
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DescribeConfigsReport
+       attr_reader :resources
+
+       def initialize(config_entries:, entry_count:)
+         @resources=[]
+
+         return if config_entries == FFI::Pointer::NULL
+
+         config_entries
+           .read_array_of_pointer(entry_count)
+           .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+           .each do |config_resource_result_ptr|
+             config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+             pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+             configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+               config_resource_result_ptr,
+               pointer_to_size_t
+             )
+
+             configs_ptr
+               .read_array_of_pointer(pointer_to_size_t.read_int)
+               .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+               .each { |config_binding| config_resource_result.configs << config_binding }
+
+             @resources << config_resource_result
+           end
+       ensure
+         return if config_entries == FFI::Pointer::NULL
+
+         Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+       end
+
+       private
+
+       def validate!(config_resource_result_ptr)
+         RdkafkaError.validate!(
+           Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+           Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+         )
+       end
+     end
+   end
+ end
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class IncrementalAlterConfigsHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer,
+              :config_entries, :pointer,
+              :entry_count, :int
+
+       # @return [String] the name of the operation.
+       def operation_name
+         "incremental alter configs"
+       end
+
+       # @return [IncrementalAlterConfigsReport] instance with an array of the altered config resources.
+       def create_result
+         IncrementalAlterConfigsReport.new(
+           config_entries: self[:config_entries],
+           entry_count: self[:entry_count]
+         )
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class IncrementalAlterConfigsReport
+       attr_reader :resources
+
+       def initialize(config_entries:, entry_count:)
+         @resources=[]
+
+         return if config_entries == FFI::Pointer::NULL
+
+         config_entries
+           .read_array_of_pointer(entry_count)
+           .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+           .each do |config_resource_result_ptr|
+             config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+             pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+             configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+               config_resource_result_ptr,
+               pointer_to_size_t
+             )
+
+             configs_ptr
+               .read_array_of_pointer(pointer_to_size_t.read_int)
+               .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+               .each { |config_binding| config_resource_result.configs << config_binding }
+
+             @resources << config_resource_result
+           end
+       ensure
+         return if config_entries == FFI::Pointer::NULL
+
+         Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+       end
+
+       private
+
+       def validate!(config_resource_result_ptr)
+         RdkafkaError.validate!(
+           Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+           Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin.rb CHANGED
@@ -2,6 +2,8 @@
 
  module Rdkafka
    class Admin
+     include Helpers::OAuth
+
      # @private
      def initialize(native_kafka)
        @native_kafka = native_kafka
@@ -605,6 +607,165 @@ module Rdkafka
      describe_acl_handle
    end
 
+   # Describe configs
+   #
+   # @param resources [Array<Hash>] Array where elements are hashes with two keys:
+   #   - `:resource_type` - numerical resource type based on Kafka API
+   #   - `:resource_name` - string with resource name
+   # @return [DescribeConfigsHandle] Describe config handle that can be used to wait for the
+   #   result of fetching resources with their appropriate configs
+   #
+   # @raise [RdkafkaError]
+   #
+   # @note Several resources can be requested in one go, but only one broker at a time
+   def describe_configs(resources)
+     closed_admin_check(__method__)
+
+     handle = DescribeConfigsHandle.new
+     handle[:pending] = true
+     handle[:response] = -1
+
+     queue_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+     end
+
+     if queue_ptr.null?
+       raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+     end
+
+     admin_options_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+         inner,
+         Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS
+       )
+     end
+
+     DescribeConfigsHandle.register(handle)
+     Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+     pointer_array = resources.map do |resource_details|
+       Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+         resource_details.fetch(:resource_type),
+         FFI::MemoryPointer.from_string(
+           resource_details.fetch(:resource_name)
+         )
+       )
+     end
+
+     configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+     configs_array_ptr.write_array_of_pointer(pointer_array)
+
+     begin
+       @native_kafka.with_inner do |inner|
+         Rdkafka::Bindings.rd_kafka_DescribeConfigs(
+           inner,
+           configs_array_ptr,
+           pointer_array.size,
+           admin_options_ptr,
+           queue_ptr
+         )
+       end
+     rescue Exception
+       DescribeConfigsHandle.remove(handle.to_ptr.address)
+
+       raise
+     ensure
+       Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+         configs_array_ptr,
+         pointer_array.size
+       ) if configs_array_ptr
+     end
+
+     handle
+   end
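A usage sketch for the method above (hedged: the broker address and topic are placeholders, and `resource_type: 2` assumes librdkafka's `RD_KAFKA_RESOURCE_TOPIC` value):

```ruby
admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

# 2 corresponds to librdkafka's RD_KAFKA_RESOURCE_TOPIC resource type
handle = admin.describe_configs(
  [{ resource_type: 2, resource_name: "example-topic" }]
)

report = handle.wait(max_wait_timeout: 60)

report.resources.each do |resource|
  resource.configs.each do |config|
    # Sensitive entries come back masked, so guard the printout
    puts "#{config.name} = #{config.sensitive ? '[SENSITIVE]' : config.value}"
  end
end

admin.close
```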
+
+   # Alters in an incremental way all the configs provided for given resources
+   #
+   # @param resources_with_configs [Array<Hash>] resources with the configs key that contains
+   #   name, value and the proper op_type to perform on this value.
+   #
+   # @return [IncrementalAlterConfigsHandle] Incremental alter configs handle that can be used to
+   #   wait for the result of altering resources with their appropriate configs
+   #
+   # @raise [RdkafkaError]
+   #
+   # @note Several resources can be requested in one go, but only one broker at a time
+   # @note The results won't contain altered values but only the altered resources
+   def incremental_alter_configs(resources_with_configs)
+     closed_admin_check(__method__)
+
+     handle = IncrementalAlterConfigsHandle.new
+     handle[:pending] = true
+     handle[:response] = -1
+
+     queue_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+     end
+
+     if queue_ptr.null?
+       raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+     end
+
+     admin_options_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+         inner,
+         Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS
+       )
+     end
+
+     IncrementalAlterConfigsHandle.register(handle)
+     Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+     # Build the config resources properly here
+     pointer_array = resources_with_configs.map do |resource_details|
+       # First build the appropriate resource representation
+       resource_ptr = Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+         resource_details.fetch(:resource_type),
+         FFI::MemoryPointer.from_string(
+           resource_details.fetch(:resource_name)
+         )
+       )
+
+       resource_details.fetch(:configs).each do |config|
+         Bindings.rd_kafka_ConfigResource_add_incremental_config(
+           resource_ptr,
+           config.fetch(:name),
+           config.fetch(:op_type),
+           config.fetch(:value)
+         )
+       end
+
+       resource_ptr
+     end
+
+     configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+     configs_array_ptr.write_array_of_pointer(pointer_array)
+
+     begin
+       @native_kafka.with_inner do |inner|
+         Rdkafka::Bindings.rd_kafka_IncrementalAlterConfigs(
+           inner,
+           configs_array_ptr,
+           pointer_array.size,
+           admin_options_ptr,
+           queue_ptr
+         )
+       end
+     rescue Exception
+       IncrementalAlterConfigsHandle.remove(handle.to_ptr.address)
+
+       raise
+     ensure
+       Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+         configs_array_ptr,
+         pointer_array.size
+       ) if configs_array_ptr
+     end
+
+     handle
+   end
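And a matching sketch for the incremental alter API (again hedged: the topic and config entry are placeholders, and `op_type: 0` assumes librdkafka's `RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET`):

```ruby
admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

handle = admin.incremental_alter_configs(
  [
    {
      resource_type: 2, # librdkafka's RD_KAFKA_RESOURCE_TOPIC
      resource_name: "example-topic",
      configs: [
        {
          name: "delete.retention.ms",
          value: "50000",
          op_type: 0 # SET operation (RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET)
        }
      ]
    }
  ]
)

# Per the note above, the report lists the altered resources, not the new values
handle.wait(max_wait_timeout: 60)

admin.close
```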
+
 
    private
    def closed_admin_check(method)