karafka-rdkafka 0.14.10 → 0.15.0.alpha1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 92d61e2b492453bf19ead6abf1c9377d5222aeba36c19c25caff5641a2c8fb1b
- data.tar.gz: 4170931c8ced8d09813d22359c36d36da09b3f39d2429daad6a342dd3df2982c
+ metadata.gz: 2cd7ecb658a7aedb5953f4d39c8941a282d6dc7001c7600129d86ebb05a550ce
+ data.tar.gz: 1dc2ebcb1deebe94197f216f0da3d78e2ed5d55de9b1be8b0654a1042e2d4289
  SHA512:
- metadata.gz: ceae2da64aad6589779b60160fc28b5db6bc305c289f76bbf7f0aa034afdf88277b3aa75f4c14ec8b441250e81bca7bf7afededea22c2eb0568753cf6968c2b3
- data.tar.gz: 61a3c8a40bca6d598782b093f6a2575fcdbe247d6a31c624f95e236015d713e3d2ace38e2fae43b622b9e4c6b37be57ef0f2d508a0a57310d3eeb79404146436
+ metadata.gz: a300e2dcdf5aac16b59b0289a71fec440747e9238c5a801572c7d1ce6a433843ab47e76915a239f860a24cd832323dab952e09b62c7184462c806183958b2a85
+ data.tar.gz: c1a3b1d23522f2941b9d39d3b5955f40d3205a3820188c7cba02f7825b17bf534963fec68956271908394e443f7b8a84edcdf972021a74f3ed4bcf93d76b1b6d
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,10 @@
  # Rdkafka Changelog
 
+ ## 0.15.0 (Unreleased)
+ - [Feature] Support incremental config describe + alter API.
+ - [Enhancement] Replace the time-poll-based wait engine with an event-based one to improve response times on blocking operations and waits (nijikon + mensfeld)
+ - [Change] The `wait_timeout` argument in `AbstractHandle.wait` method is deprecated and will be removed in future versions without replacement. We don't rely on its value anymore (nijikon)
+
  ## 0.14.10 (2024-02-08)
  - [Fix] Background logger stops working after forking causing memory leaks (mensfeld).
 
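For context, a minimal sketch of what the deprecation above means for callers; `handle` stands in for any `AbstractHandle` instance (for example a delivery or admin operation handle):

```ruby
# Before (0.14.x): wait_timeout controlled how often the handle re-checked
# whether the operation had completed
handle.wait(max_wait_timeout: 60, wait_timeout: 0.1)

# After (0.15.0): passing wait_timeout only triggers a deprecation warning;
# the event-based engine wakes the waiter as soon as the operation completes
handle.wait(max_wait_timeout: 60)
```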
data/README.md CHANGED
@@ -18,7 +18,7 @@ become EOL.
 
  `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.
 
- The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
+ The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
 
  ## Table of content
 
@@ -30,6 +30,7 @@ The most important pieces of a Kafka client are implemented, and we aim to provi
  - [Higher Level Libraries](#higher-level-libraries)
    * [Message Processing Frameworks](#message-processing-frameworks)
    * [Message Publishing Libraries](#message-publishing-libraries)
+ - [Forking](#forking)
  - [Development](#development)
  - [Example](#example)
  - [Versions](#versions)
@@ -47,12 +48,13 @@ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications,
 
  ## Installation
 
- This gem downloads and compiles librdkafka when it is installed. If you
- If you have any problems installing the gem, please open an issue.
+ When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue.
 
  ## Usage
 
- See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
+ Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples.
+
+ Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries, also maintained by us, built on top of rdkafka-ruby.
 
  ### Consuming Messages
 
@@ -74,7 +76,7 @@ end
 
  ### Producing Messages
 
- Produce a number of messages, put the delivery handles in an array, and
+ Produce several messages, put the delivery handles in an array, and
  wait for them before exiting. This way the messages will be batched and
  efficiently sent to Kafka.
 
@@ -95,13 +97,11 @@ end
  delivery_handles.each(&:wait)
  ```
 
- Note that creating a producer consumes some resources that will not be
- released until it `#close` is explicitly called, so be sure to call
- `Config#producer` only as necessary.
+ Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.
 
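A minimal sketch of that lifecycle, with illustrative broker and topic names:

```ruby
config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")

# Create the producer once and reuse it for all deliveries
producer = config.producer

handle = producer.produce(topic: "events", payload: "hello")
handle.wait

# Release the native resources once the producer is no longer needed
producer.close
```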
  ## Higher Level Libraries
 
- Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages.
+ Currently, there are two actively developed frameworks based on `rdkafka-ruby` that provide a higher-level API for working with Kafka messages, and one library for publishing messages.
 
  ### Message Processing Frameworks
 
@@ -112,6 +112,16 @@ Currently, there are two actively developed frameworks based on rdkafka-ruby, th
 
  * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.
 
+ ## Forking
+
+ When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory states. This limitation requires careful handling, especially in Ruby applications that rely on forking.
+
+ To address this, it's highly recommended to:
+
+ - Never initialize any `rdkafka-ruby` producers or consumers before forking to avoid state corruption.
+ - Before forking, always close any producers or consumers you have opened (see the sketch below).
+ - Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies.
+
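A minimal sketch of these recommendations in a pre-fork setup; the broker address and topic names are illustrative:

```ruby
config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")

# Parent: close every client before forking so the child does not inherit
# librdkafka threads, sockets, or other native state
producer = config.producer
producer.produce(topic: "boot-events", payload: "starting").wait
producer.close

fork do
  # Child: build fresh clients after the fork
  child_producer = config.producer
  child_producer.produce(topic: "worker-events", payload: "forked").wait
  child_producer.close
end

Process.wait
```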
  ## Development
 
  Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
    kafka:
      container_name: kafka
-     image: confluentinc/cp-kafka:7.5.3
+     image: confluentinc/cp-kafka:7.6.0
 
      ports:
        - 9092:9092
data/lib/rdkafka/abstract_handle.rb CHANGED
@@ -14,6 +14,13 @@ module Rdkafka
 
    # Registry for registering all the handles.
    REGISTRY = {}
+   # Max wait timeout used when `max_wait_timeout` is nil: 10 billion seconds, roughly 317 years, i.e. effectively forever
+   MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
+   # Deprecation message for wait_timeout argument in wait method
+   WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
+     "We don't rely on its value anymore. Please refactor your code to remove references to it."
+
+   private_constant :MAX_WAIT_TIMEOUT_FOREVER
 
    class << self
      # Adds handle to the register
@@ -32,6 +39,12 @@ module Rdkafka
      end
    end
 
+   def initialize
+     @mutex = Thread::Mutex.new
+     @resource = Thread::ConditionVariable.new
+
+     super
+   end
 
    # Whether the handle is still pending.
    #
@@ -45,37 +58,48 @@ module Rdkafka
    # on the operation. In this case it is possible to call wait again.
    #
    # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
-   #   If this is nil it does not time out.
-   # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the
-   #   operation has completed
+   #   If this is nil we will wait forever
+   # @param wait_timeout [nil] deprecated
    # @param raise_response_error [Boolean] should we raise error when waiting finishes
    #
    # @return [Object] Operation-specific result
    #
    # @raise [RdkafkaError] When the operation failed
    # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
-   def wait(max_wait_timeout: 60, wait_timeout: 0.1, raise_response_error: true)
-     timeout = if max_wait_timeout
-       monotonic_now + max_wait_timeout
-     else
-       nil
-     end
-     loop do
-       if pending?
-         if timeout && timeout <= monotonic_now
-           raise WaitTimeoutError.new(
-             "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
-           )
+   def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
+     Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
+
+     timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
+
+     @mutex.synchronize do
+       loop do
+         if pending?
+           to_wait = (timeout - monotonic_now)
+
+           if to_wait.positive?
+             @resource.wait(@mutex, to_wait)
+           else
+             raise WaitTimeoutError.new(
+               "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds"
+             )
+           end
+         elsif self[:response] != 0 && raise_response_error
+           raise_error
+         else
+           return create_result
          end
-         sleep wait_timeout
-       elsif self[:response] != 0 && raise_response_error
-         raise_error
-       else
-         return create_result
        end
      end
    end
 
+   # Unlock the resources
+   def unlock
+     @mutex.synchronize do
+       self[:pending] = false
+       @resource.broadcast
+     end
+   end
+
    # @return [String] the name of the operation (e.g. "delivery")
    def operation_name
      raise "Must be implemented by subclass!"
data/lib/rdkafka/admin/config_binding_result.rb ADDED
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     # A single config binding result that represents its values extracted from C
+     class ConfigBindingResult
+       attr_reader :name, :value, :read_only, :default, :sensitive, :synonym, :synonyms
+
+       # @param config_ptr [FFI::Pointer] config pointer
+       def initialize(config_ptr)
+         @name = Bindings.rd_kafka_ConfigEntry_name(config_ptr)
+         @value = Bindings.rd_kafka_ConfigEntry_value(config_ptr)
+         @read_only = Bindings.rd_kafka_ConfigEntry_is_read_only(config_ptr)
+         @default = Bindings.rd_kafka_ConfigEntry_is_default(config_ptr)
+         @sensitive = Bindings.rd_kafka_ConfigEntry_is_sensitive(config_ptr)
+         @synonym = Bindings.rd_kafka_ConfigEntry_is_synonym(config_ptr)
+         @synonyms = []
+
+         # The code below builds up the config synonyms using the same config binding
+         pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+         synonym_ptr = Bindings.rd_kafka_ConfigEntry_synonyms(config_ptr, pointer_to_size_t)
+         synonyms_ptr = synonym_ptr.read_array_of_pointer(pointer_to_size_t.read_int)
+
+         (1..pointer_to_size_t.read_int).map do |ar|
+           @synonyms << self.class.new(synonyms_ptr[ar - 1])
+         end
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/config_resource_binding_result.rb ADDED
@@ -0,0 +1,18 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     # A simple binding that represents the requested config resource
+     class ConfigResourceBindingResult
+       attr_reader :name, :type, :configs, :configs_count
+
+       def initialize(config_resource_ptr)
+         ffi_binding = Bindings::ConfigResource.new(config_resource_ptr)
+
+         @name = ffi_binding[:name]
+         @type = ffi_binding[:type]
+         @configs = []
+       end
+     end
+   end
+ end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
  end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
  end
@@ -16,7 +16,7 @@ module Rdkafka
      @error_string = error_string.read_string
    end
    if result_name != FFI::Pointer::NULL
-     @result_name = @result_name = result_name.read_string
+     @result_name = result_name.read_string
    end
  end
  end
@@ -10,6 +10,7 @@ module Rdkafka
 
  def initialize(acls:, acls_count:)
    @acls=[]
+
    if acls != FFI::Pointer::NULL
      acl_binding_result_pointers = acls.read_array_of_pointer(acls_count)
      (1..acls_count).map do |acl_index|
data/lib/rdkafka/admin/describe_configs_handle.rb ADDED
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DescribeConfigsHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer,
+              :config_entries, :pointer,
+              :entry_count, :int
+
+       # @return [String] the name of the operation.
+       def operation_name
+         "describe configs"
+       end
+
+       # @return [DescribeConfigsReport] instance with the described resources and their configs.
+       def create_result
+         DescribeConfigsReport.new(
+           config_entries: self[:config_entries],
+           entry_count: self[:entry_count]
+         )
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/describe_configs_report.rb ADDED
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DescribeConfigsReport
+       attr_reader :resources
+
+       def initialize(config_entries:, entry_count:)
+         @resources=[]
+
+         return if config_entries == FFI::Pointer::NULL
+
+         config_entries
+           .read_array_of_pointer(entry_count)
+           .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+           .each do |config_resource_result_ptr|
+             config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+             pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+             configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+               config_resource_result_ptr,
+               pointer_to_size_t
+             )
+
+             configs_ptr
+               .read_array_of_pointer(pointer_to_size_t.read_int)
+               .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+               .each { |config_binding| config_resource_result.configs << config_binding }
+
+             @resources << config_resource_result
+           end
+       ensure
+         return if config_entries == FFI::Pointer::NULL
+
+         Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+       end
+
+       private
+
+       def validate!(config_resource_result_ptr)
+         RdkafkaError.validate!(
+           Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+           Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+         )
+       end
+     end
+   end
+ end
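A hedged sketch of traversing such a report once an operation resolves; the attribute names follow the binding classes added above, and the output format is illustrative:

```ruby
# report is a DescribeConfigsReport
report.resources.each do |resource|
  puts "#{resource.name} (type: #{resource.type})"

  resource.configs.each do |config|
    # The is_* flags come straight from C as ints (1 true, 0 false)
    marker = config.default.positive? ? " (default)" : ""
    puts "  #{config.name}=#{config.value}#{marker}"

    # Each config entry also carries its synonyms as nested bindings
    config.synonyms.each { |synonym| puts "    synonym: #{synonym.name}" }
  end
end
```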
data/lib/rdkafka/admin/incremental_alter_configs_handle.rb ADDED
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class IncrementalAlterConfigsHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer,
+              :config_entries, :pointer,
+              :entry_count, :int
+
+       # @return [String] the name of the operation.
+       def operation_name
+         "incremental alter configs"
+       end
+
+       # @return [IncrementalAlterConfigsReport] instance with the altered resources.
+       def create_result
+         IncrementalAlterConfigsReport.new(
+           config_entries: self[:config_entries],
+           entry_count: self[:entry_count]
+         )
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/incremental_alter_configs_report.rb ADDED
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class IncrementalAlterConfigsReport
+       attr_reader :resources
+
+       def initialize(config_entries:, entry_count:)
+         @resources=[]
+
+         return if config_entries == FFI::Pointer::NULL
+
+         config_entries
+           .read_array_of_pointer(entry_count)
+           .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) }
+           .each do |config_resource_result_ptr|
+             config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr)
+
+             pointer_to_size_t = FFI::MemoryPointer.new(:int32)
+             configs_ptr = Bindings.rd_kafka_ConfigResource_configs(
+               config_resource_result_ptr,
+               pointer_to_size_t
+             )
+
+             configs_ptr
+               .read_array_of_pointer(pointer_to_size_t.read_int)
+               .map { |config_ptr| ConfigBindingResult.new(config_ptr) }
+               .each { |config_binding| config_resource_result.configs << config_binding }
+
+             @resources << config_resource_result
+           end
+       ensure
+         return if config_entries == FFI::Pointer::NULL
+
+         Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count)
+       end
+
+       private
+
+       def validate!(config_resource_result_ptr)
+         RdkafkaError.validate!(
+           Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr),
+           Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr)
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin.rb CHANGED
@@ -605,6 +605,165 @@ module Rdkafka
      describe_acl_handle
    end
 
+   # Describe configs
+   #
+   # @param resources [Array<Hash>] Array where elements are hashes with two keys:
+   #   - `:resource_type` - numerical resource type based on Kafka API
+   #   - `:resource_name` - string with resource name
+   # @return [DescribeConfigsHandle] Describe config handle that can be used to wait for the
+   #   result of fetching resources with their appropriate configs
+   #
+   # @raise [RdkafkaError]
+   #
+   # @note Several resources can be requested at one go, but only one broker at a time
+   def describe_configs(resources)
+     closed_admin_check(__method__)
+
+     handle = DescribeConfigsHandle.new
+     handle[:pending] = true
+     handle[:response] = -1
+
+     queue_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+     end
+
+     if queue_ptr.null?
+       raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+     end
+
+     admin_options_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+         inner,
+         Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS
+       )
+     end
+
+     DescribeConfigsHandle.register(handle)
+     Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+     pointer_array = resources.map do |resource_details|
+       Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+         resource_details.fetch(:resource_type),
+         FFI::MemoryPointer.from_string(
+           resource_details.fetch(:resource_name)
+         )
+       )
+     end
+
+     configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+     configs_array_ptr.write_array_of_pointer(pointer_array)
+
+     begin
+       @native_kafka.with_inner do |inner|
+         Rdkafka::Bindings.rd_kafka_DescribeConfigs(
+           inner,
+           configs_array_ptr,
+           pointer_array.size,
+           admin_options_ptr,
+           queue_ptr
+         )
+       end
+     rescue Exception
+       DescribeConfigsHandle.remove(handle.to_ptr.address)
+
+       raise
+     ensure
+       Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+         configs_array_ptr,
+         pointer_array.size
+       ) if configs_array_ptr
+     end
+
+     handle
+   end
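A hedged usage sketch for the method above; resource type `2` is the topic resource type in the Kafka Admin API, and the broker address and topic name are illustrative:

```ruby
admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

handle = admin.describe_configs(
  [{ resource_type: 2, resource_name: "example-topic" }]
)

# Resolves into a DescribeConfigsReport (see the report class above)
report = handle.wait(max_wait_timeout: 60)

admin.close
```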
+
+   # Incrementally alters all the configs provided for the given resources
+   #
+   # @param resources_with_configs [Array<Hash>] resources with the configs key that contains
+   #   name, value and the proper op_type to perform on this value.
+   #
+   # @return [IncrementalAlterConfigsHandle] Incremental alter configs handle that can be used to
+   #   wait for the result of altering resources with their appropriate configs
+   #
+   # @raise [RdkafkaError]
+   #
+   # @note Several resources can be requested at one go, but only one broker at a time
+   # @note The results won't contain altered values but only the altered resources
+   def incremental_alter_configs(resources_with_configs)
+     closed_admin_check(__method__)
+
+     handle = IncrementalAlterConfigsHandle.new
+     handle[:pending] = true
+     handle[:response] = -1
+
+     queue_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+     end
+
+     if queue_ptr.null?
+       raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+     end
+
+     admin_options_ptr = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_AdminOptions_new(
+         inner,
+         Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS
+       )
+     end
+
+     IncrementalAlterConfigsHandle.register(handle)
+     Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr)
+
+     # Build the config resources properly here
+     pointer_array = resources_with_configs.map do |resource_details|
+       # First build the appropriate resource representation
+       resource_ptr = Rdkafka::Bindings.rd_kafka_ConfigResource_new(
+         resource_details.fetch(:resource_type),
+         FFI::MemoryPointer.from_string(
+           resource_details.fetch(:resource_name)
+         )
+       )
+
+       resource_details.fetch(:configs).each do |config|
+         Bindings.rd_kafka_ConfigResource_add_incremental_config(
+           resource_ptr,
+           config.fetch(:name),
+           config.fetch(:op_type),
+           config.fetch(:value)
+         )
+       end
+
+       resource_ptr
+     end
+
+     configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size)
+     configs_array_ptr.write_array_of_pointer(pointer_array)
+
+     begin
+       @native_kafka.with_inner do |inner|
+         Rdkafka::Bindings.rd_kafka_IncrementalAlterConfigs(
+           inner,
+           configs_array_ptr,
+           pointer_array.size,
+           admin_options_ptr,
+           queue_ptr
+         )
+       end
+     rescue Exception
+       IncrementalAlterConfigsHandle.remove(handle.to_ptr.address)
+
+       raise
+     ensure
+       Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array(
+         configs_array_ptr,
+         pointer_array.size
+       ) if configs_array_ptr
+     end
+
+     handle
+   end
+
    private
 
    def closed_admin_check(method)
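A matching hedged sketch for the incremental alter flow, reusing the `admin` client from the previous example and the op-type constants added to the bindings (next hunk); the config name and value are illustrative:

```ruby
handle = admin.incremental_alter_configs(
  [
    {
      resource_type: 2, # topic
      resource_name: "example-topic",
      configs: [
        {
          name: "delete.retention.ms",
          value: "50000",
          op_type: Rdkafka::Bindings::RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET
        }
      ]
    }
  ]
)

# Per the note above, the report lists the altered resources, not the values
handle.wait(max_wait_timeout: 60)
```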
data/lib/rdkafka/bindings.rb CHANGED
@@ -97,6 +97,48 @@ module Rdkafka
  attach_function :rd_kafka_topic_partition_list_destroy, [:pointer], :void
  attach_function :rd_kafka_topic_partition_list_copy, [:pointer], :pointer
 
+ # Configs management
+ #
+ # Structs for management of configurations
+ # Each configuration is attached to a resource and one resource can have many configuration
+ # details. Each resource will also have separate error results if obtaining configuration
+ # was not possible for any reason
+ class ConfigResource < FFI::Struct
+   layout :type, :int,
+          :name, :string
+ end
+
+ attach_function :rd_kafka_DescribeConfigs, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true
+ attach_function :rd_kafka_ConfigResource_new, [:int32, :pointer], :pointer
+ attach_function :rd_kafka_ConfigResource_destroy_array, [:pointer, :int32], :void
+ attach_function :rd_kafka_event_DescribeConfigs_result, [:pointer], :pointer
+ attach_function :rd_kafka_DescribeConfigs_result_resources, [:pointer, :pointer], :pointer
+ attach_function :rd_kafka_ConfigResource_configs, [:pointer, :pointer], :pointer
+ attach_function :rd_kafka_ConfigEntry_name, [:pointer], :string
+ attach_function :rd_kafka_ConfigEntry_value, [:pointer], :string
+ attach_function :rd_kafka_ConfigEntry_is_read_only, [:pointer], :int
+ attach_function :rd_kafka_ConfigEntry_is_default, [:pointer], :int
+ attach_function :rd_kafka_ConfigEntry_is_sensitive, [:pointer], :int
+ attach_function :rd_kafka_ConfigEntry_is_synonym, [:pointer], :int
+ attach_function :rd_kafka_ConfigEntry_synonyms, [:pointer, :pointer], :pointer
+ attach_function :rd_kafka_ConfigResource_error, [:pointer], :int
+ attach_function :rd_kafka_ConfigResource_error_string, [:pointer], :string
+ attach_function :rd_kafka_IncrementalAlterConfigs, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true
+ attach_function :rd_kafka_IncrementalAlterConfigs_result_resources, [:pointer, :pointer], :pointer
+ attach_function :rd_kafka_ConfigResource_add_incremental_config, [:pointer, :string, :int32, :string], :pointer
+ attach_function :rd_kafka_event_IncrementalAlterConfigs_result, [:pointer], :pointer
+
+ RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS = 5
+ RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT = 104
+
+ RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS = 16
+ RD_KAFKA_EVENT_INCREMENTALALTERCONFIGS_RESULT = 131072
+
+ RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET = 0
+ RD_KAFKA_ALTER_CONFIG_OP_TYPE_DELETE = 1
+ RD_KAFKA_ALTER_CONFIG_OP_TYPE_APPEND = 2
+ RD_KAFKA_ALTER_CONFIG_OP_TYPE_SUBTRACT = 3
+
  # Errors
 
  attach_function :rd_kafka_err2name, [:int], :string