karafka-rdkafka 0.14.0 → 0.14.1
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data/.github/FUNDING.yml +1 -0
- data/CHANGELOG.md +104 -96
- data/README.md +32 -22
- data/docker-compose.yml +2 -0
- data/lib/rdkafka/admin/acl_binding_result.rb +37 -0
- data/lib/rdkafka/admin/create_acl_handle.rb +28 -0
- data/lib/rdkafka/admin/create_acl_report.rb +24 -0
- data/lib/rdkafka/admin/delete_acl_handle.rb +30 -0
- data/lib/rdkafka/admin/delete_acl_report.rb +23 -0
- data/lib/rdkafka/admin/delete_groups_handle.rb +28 -0
- data/lib/rdkafka/admin/delete_groups_report.rb +24 -0
- data/lib/rdkafka/admin/describe_acl_handle.rb +30 -0
- data/lib/rdkafka/admin/describe_acl_report.rb +23 -0
- data/lib/rdkafka/admin.rb +377 -0
- data/lib/rdkafka/bindings.rb +108 -0
- data/lib/rdkafka/callbacks.rb +167 -0
- data/lib/rdkafka/consumer/headers.rb +1 -1
- data/lib/rdkafka/consumer/topic_partition_list.rb +8 -7
- data/lib/rdkafka/consumer.rb +18 -18
- data/lib/rdkafka/error.rb +13 -2
- data/lib/rdkafka/producer.rb +2 -2
- data/lib/rdkafka/version.rb +1 -1
- data/lib/rdkafka.rb +9 -0
- data/spec/rdkafka/admin/create_acl_handle_spec.rb +56 -0
- data/spec/rdkafka/admin/create_acl_report_spec.rb +18 -0
- data/spec/rdkafka/admin/delete_acl_handle_spec.rb +85 -0
- data/spec/rdkafka/admin/delete_acl_report_spec.rb +71 -0
- data/spec/rdkafka/admin/describe_acl_handle_spec.rb +85 -0
- data/spec/rdkafka/admin/describe_acl_report_spec.rb +72 -0
- data/spec/rdkafka/admin_spec.rb +204 -0
- data/spec/rdkafka/consumer_spec.rb +9 -0
- data/spec/rdkafka/producer_spec.rb +0 -35
- data/spec/spec_helper.rb +1 -1
- data.tar.gz.sig +0 -0
- metadata +24 -2
- metadata.gz.sig +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 31a04189b142cd826aece2945d0a5415996bc72d4d90b1e843dd7db3378f05b3
+  data.tar.gz: 2ff92cdb856c24e3997e0e3c04b5c96fd9a807965581cec91beb3902b072ef88
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: '09f1db3f1be3ce2f665318ba60356d82ebf00b633470fb0ab249e38110f605d9e4a4f5e296c6bb4a29972ad577d2f71f3e5318c26f699fac71b481e2eb622c6a'
+  data.tar.gz: 6185cdb77c7d75f69052b5df9d5adf8942c28e0b7f842c617fd7f265549e6fa0fcc3cbaa24aeed0c4ea511a24ede8dcbf3a50e0d81ae1289eed627c143764407
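For readers who want to confirm a downloaded artifact against these published values, a minimal verification sketch using Ruby's standard library; the file path is illustrative (a `.gem` package contains `metadata.gz` and `data.tar.gz`):

```ruby
require "digest"

# Published SHA256 for data.tar.gz in this release (from the diff above).
expected = "2ff92cdb856c24e3997e0e3c04b5c96fd9a807965581cec91beb3902b072ef88"

# Digest::SHA256.file streams the file, so large artifacts are fine.
actual = Digest::SHA256.file("data.tar.gz").hexdigest

puts(actual == expected ? "checksum OK" : "checksum mismatch")
```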
checksums.yaml.gz.sig
CHANGED
Binary file
data/.github/FUNDING.yml
ADDED
@@ -0,0 +1 @@
+custom: ['https://karafka.io/#become-pro']
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,13 @@
 # Rdkafka Changelog
 
+## 0.14.1 (2023-12-02)
+- **[Feature]** Add `Admin#metadata` (mensfeld)
+- **[Feature]** Add `Admin#create_partitions` (mensfeld)
+- **[Feature]** Add `Admin#delete_group` utility (piotaixr)
+- **[Feature]** Add Create and Delete ACL Feature To Admin Functions (vgnanasekaran)
+- **[Enhancement]** Improve error reporting on `unknown_topic_or_part` and include missing topic (mensfeld)
+- **[Enhancement]** Improve error reporting on consumer polling errors (mensfeld)
+
 ## 0.14.0 (2023-11-17)
 - [Enhancement] Bump librdkafka to 2.3.0
 - [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote clusters operations.
@@ -16,162 +24,162 @@
 - [Fix] Fix dangling Opaque references.
 
 ## 0.13.6 (2023-10-17)
-
-
-
-
+- **[Feature]** Support transactions API in the producer
+- [Enhancement] Add `raise_response_error` flag to the `Rdkafka::AbstractHandle`.
+- [Enhancement] Provide `#purge` to remove any outstanding requests from the producer.
+- [Enhancement] Fix `#flush` not handling timeout errors by making it return true if all flushed or false if failed. We do **not** raise an exception here to keep it backwards compatible.
 
 ## 0.13.5
-
+- Fix DeliveryReport `create_result#error` being nil despite an error being associated with it
 
 ## 0.13.4
-
+- Always call initial poll on librdkafka to make sure oauth bearer cb is handled pre-operations.
 
 ## 0.13.3
-
+- Bump librdkafka to 2.2.0
 
 ## 0.13.2
-
-
+- Ensure operations counter decrement is fully thread-safe
+- Bump librdkafka to 2.1.1
 
 ## 0.13.1
-
+- Add offsets_for_times method on consumer (timflapper)
 
 ## 0.13.0 (2023-07-24)
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+- Support cooperative sticky partition assignment in the rebalance callback (methodmissing)
+- Support both string and symbol header keys (ColinDKelley)
+- Handle tombstone messages properly (kgalieva)
+- Add topic name to delivery report (maeve)
+- Allow string partitioner config (mollyegibson)
+- Fix documented type for DeliveryReport#error (jimmydo)
+- Bump librdkafka to 2.0.2 (lmaia)
+- Use finalizers to cleanly exit producer and admin (thijsc)
+- Lock access to the native kafka client (thijsc)
+- Fix potential race condition in multi-threaded producer (mensfeld)
+- Fix leaking FFI resources in specs (mensfeld)
+- Improve specs stability (mensfeld)
+- Make metadata request timeout configurable (mensfeld)
+- call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc)
+- Support `#assignment_lost?` on a consumer to check for involuntary assignment revocation (mensfeld)
+- Expose `#name` on the consumer and producer (mensfeld)
+- Introduce producer partitions count metadata cache (mensfeld)
+- Retry metadata fetches on certain errors with a backoff (mensfeld)
+- Do not lock access to underlying native kafka client and rely on Karafka granular locking (mensfeld)
 
 ## 0.12.3
 - Include backtrace in non-raised binded errors.
 - Include topic name in the delivery reports
 
 ## 0.12.2
-
+- Increase the metadata default timeout from 250ms to 2 seconds. This should allow for working with remote clusters.
 
 ## 0.12.1
-
-
+- Bumps librdkafka to 2.0.2 (lmaia)
+- Add support for adding more partitions via Admin API
 
 ## 0.12.0 (2022-06-17)
-
-
-
+- Bumps librdkafka to 1.9.0
+- Fix crash on empty partition key (mensfeld)
+- Pass the delivery handle to the callback (gvisokinskas)
 
 ## 0.11.0 (2021-11-17)
-
-
-
+- Upgrade librdkafka to 1.8.2
+- Bump supported minimum Ruby version to 2.6
+- Better homebrew path detection
 
 ## 0.10.0 (2021-09-07)
-
-
+- Upgrade librdkafka to 1.5.0
+- Add error callback config
 
 ## 0.9.0 (2021-06-23)
-
-
-
-
-
-
+- Fixes for Ruby 3.0
+- Allow any callable object for callbacks (gremerritt)
+- Reduce memory allocations in Rdkafka::Producer#produce (jturkel)
+- Use queue as log callback to avoid unsafe calls from trap context (breunigs)
+- Allow passing in topic configuration on create_topic (dezka)
+- Add each_batch method to consumer (mgrosso)
 
 ## 0.8.1 (2020-12-07)
-
-
-
-
+- Fix topic_flag behaviour and add tests for Metadata (geoff2k)
+- Add topic admin interface (geoff2k)
+- Raise an exception if @native_kafka is nil (geoff2k)
+- Option to use zstd compression (jasonmartens)
 
 ## 0.8.0 (2020-06-02)
-
-
-
-
-
-
+- Upgrade librdkafka to 1.4.0
+- Integrate librdkafka metadata API and add partition_key (by Adithya-copart)
+- Ruby 2.7 compatibility fix (by Geoff Thé)
+- Add error to delivery report (by Alex Stanovsky)
+- Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake)
+- Allow use of Rake 13.x and up (by Tomasz Pajor)
 
 ## 0.7.0 (2019-09-21)
-
-
+- Bump librdkafka to 1.2.0 (by rob-as)
+- Allow customizing the wait time for delivery report availability (by mensfeld)
 
 ## 0.6.0 (2019-07-23)
-
-
+- Bump librdkafka to 1.1.0 (by Chris Gaffney)
+- Implement seek (by breunigs)
 
 ## 0.5.0 (2019-04-11)
-
-
-
-
-
+- Bump librdkafka to 1.0.0 (by breunigs)
+- Add cluster and member information (by dmexe)
+- Support message headers for consumer & producer (by dmexe)
+- Add consumer rebalance listener (by dmexe)
+- Implement pause/resume partitions (by dmexe)
 
 ## 0.4.2 (2019-01-12)
-
-
-
-
-
-
-
+- Delivery callback for producer
+- Document list param of commit method
+- Use default Homebrew openssl location if present
+- Consumer lag handles empty topics
+- End iteration in consumer when it is closed
+- Add support for storing message offsets
+- Add missing runtime dependency to rake
 
 ## 0.4.1 (2018-10-19)
-
+- Bump librdkafka to 0.11.6
 
 ## 0.4.0 (2018-09-24)
-
-
-
+- Improvements in librdkafka archive download
+- Add global statistics callback
+- Use Time for timestamps, potentially breaking change if you
 rely on the previous behavior where it returns an integer with
 the number of milliseconds.
-
-
+- Bump librdkafka to 0.11.5
+- Implement TopicPartitionList in Ruby so we don't have to keep
 track of native objects.
-
-
+- Support committing a topic partition list
+- Add consumer assignment method
 
 ## 0.3.5 (2018-01-17)
-
-
+- Fix crash when not waiting for delivery handles
+- Run specs on Ruby 2.5
 
 ## 0.3.4 (2017-12-05)
-
+- Bump librdkafka to 0.11.3
 
 ## 0.3.3 (2017-10-27)
-
+- Fix bug that prevented display of `RdkafkaError` message
 
 ## 0.3.2 (2017-10-25)
-
-
-
-
+- `add_topic` now supports using a partition count
+- Add way to make errors clearer with an extra message
+- Show topics in subscribe error message
+- Show partition and topic in query watermark offsets error message
 
 ## 0.3.1 (2017-10-23)
-
-
-
+- Bump librdkafka to 0.11.1
+- Officially support ranges in `add_topic` for topic partition list.
+- Add consumer lag calculator
 
 ## 0.3.0 (2017-10-17)
-
-
-
+- Move both add topic methods to one `add_topic` in `TopicPartitionList`
+- Add committed offsets to consumer
+- Add query watermark offset to consumer
 
 ## 0.2.0 (2017-10-13)
-
+- Some refactoring and add inline documentation
 
 ## 0.1.x (2017-09-10)
-
+- Initial working version including producing and consuming
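The 0.14.1 entries above introduce several new admin-level calls. A minimal sketch of how they might be used together against a local broker; the exact `create_partitions` and `delete_group` signatures are inferred from the changelog and from the handle/report pattern used throughout this diff, so treat them as assumptions rather than documented API:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Admin#metadata (new in 0.14.1): fetch cluster/topic metadata.
# An argument-free call is assumed here.
puts admin.metadata.inspect

# Admin#create_partitions (new in 0.14.1): grow a topic to a target
# partition count; the (topic name, total count) signature is assumed.
admin.create_partitions("example_topic", 4).wait

# Admin#delete_group (new in 0.14.1): remove an inactive consumer group.
admin.delete_group("example_group").wait

admin.close
```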
data/README.md
CHANGED
@@ -18,22 +18,31 @@ become EOL.
 
 `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.
 
-The most important pieces of a Kafka client are implemented.
-working towards feature completeness. You can track that here:
-https://github.com/appsignal/rdkafka-ruby/milestone/1
+The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.
 
 ## Table of content
 
+- [Project Scope](#project-scope)
 - [Installation](#installation)
 - [Usage](#usage)
-  * [Consuming
-  * [Producing
-- [Higher
-  * [Message
-  * [Message
+  * [Consuming Messages](#consuming-messages)
+  * [Producing Messages](#producing-messages)
+- [Higher Level Libraries](#higher-level-libraries)
+  * [Message Processing Frameworks](#message-processing-frameworks)
+  * [Message Publishing Libraries](#message-publishing-libraries)
 - [Development](#development)
 - [Example](#example)
 
+## Project Scope
+
+While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications, it's important to understand the limitations of this library:
+
+- **No Complex Producers/Consumers**: This library does not intend to offer complex producers or consumers. The aim is to stick closely to the functionalities provided by librdkafka itself.
+
+- **Focus on librdkafka Capabilities**: Features that can be achieved directly in Ruby, without specific needs from librdkafka, are outside the scope of this library.
+
+- **Existing High-Level Functionalities**: Certain high-level functionalities like producer metadata cache and simple consumer are already part of the library. Although they fall slightly outside the primary goal, they will remain part of the contract, given their existing usage.
+
 
 ## Installation
 
@@ -42,9 +51,9 @@ If you have any problems installing the gem, please open an issue.
 
 ## Usage
 
-See the [documentation](https://
+See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
 
-### Consuming
+### Consuming Messages
 
 Subscribe to a topic and get messages. Kafka will automatically spread
 the available partitions over consumers with the same group id.
@@ -62,7 +71,7 @@ consumer.each do |message|
 end
 ```
 
-### Producing
+### Producing Messages
 
 Produce a number of messages, put the delivery handles in an array, and
 wait for them before exiting. This way the messages will be batched and
@@ -89,41 +98,42 @@ Note that creating a producer consumes some resources that will not be
 released until it `#close` is explicitly called, so be sure to call
 `Config#producer` only as necessary.
 
-## Higher
+## Higher Level Libraries
 
 Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages.
 
-### Message
+### Message Processing Frameworks
 
 * [Karafka](https://github.com/karafka/karafka) - Ruby and Rails efficient Kafka processing framework.
 * [Racecar](https://github.com/zendesk/racecar) - A simple framework for Kafka consumers in Ruby
 
-### Message
+### Message Publishing Libraries
 
 * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.
 
 ## Development
 
-
-
+Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
+
+- Implement functionalities that can be achieved using standard Ruby capabilities without changes to the underlying rdkafka-ruby bindings.
+- Deviate significantly from the primary aim of providing librdkafka bindings with Ruby-friendly interfaces.
+
+A Docker Compose file is included to run Kafka. To run that:
 
 ```
 docker-compose up
 ```
 
-Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and
-compile `librdkafka`.
+Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and compile `librdkafka`.
 
-You can then run `bundle exec rspec` to run the tests. To see rdkafka
-debug output:
+You can then run `bundle exec rspec` to run the tests. To see rdkafka debug output:
 
 ```
 DEBUG_PRODUCER=true bundle exec rspec
 DEBUG_CONSUMER=true bundle exec rspec
 ```
 
-After running the tests, you can bring the cluster down to start with a
-clean slate:
+After running the tests, you can bring the cluster down to start with a clean slate:
 
 ```
 docker-compose down
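The consuming example referenced in the Usage section appears above only as diff context. For orientation, a complete version as it might look; the broker address and topic name are placeholders:

```ruby
require "rdkafka"

config = {
  "bootstrap.servers": "localhost:9092",
  "group.id": "ruby-test"
}
consumer = Rdkafka::Config.new(config).consumer
consumer.subscribe("ruby-test-topic")

# Kafka spreads the topic's partitions over all consumers in the group.
consumer.each do |message|
  puts "Message received: #{message}"
end
```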
data/docker-compose.yml
CHANGED
@@ -23,3 +23,5 @@ services:
       KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
       KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
       KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
+      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
+      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
data/lib/rdkafka/admin/acl_binding_result.rb ADDED
@@ -0,0 +1,37 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+
+    # Extracts attributes of rd_kafka_AclBinding_t
+    #
+    class AclBindingResult
+      attr_reader :result_error, :error_string, :matching_acl_resource_type, :matching_acl_resource_name, :matching_acl_pattern_type, :matching_acl_principal, :matching_acl_host, :matching_acl_operation, :matching_acl_permission_type
+
+      def initialize(matching_acl)
+        rd_kafka_error_pointer = Rdkafka::Bindings.rd_kafka_AclBinding_error(matching_acl)
+        @result_error = Rdkafka::Bindings.rd_kafka_error_code(rd_kafka_error_pointer)
+        error_string = Rdkafka::Bindings.rd_kafka_error_string(rd_kafka_error_pointer)
+        if error_string != FFI::Pointer::NULL
+          @error_string = error_string.read_string
+        end
+        @matching_acl_resource_type = Rdkafka::Bindings.rd_kafka_AclBinding_restype(matching_acl)
+        matching_acl_resource_name = Rdkafka::Bindings.rd_kafka_AclBinding_name(matching_acl)
+        if matching_acl_resource_name != FFI::Pointer::NULL
+          @matching_acl_resource_name = matching_acl_resource_name.read_string
+        end
+        @matching_acl_pattern_type = Rdkafka::Bindings.rd_kafka_AclBinding_resource_pattern_type(matching_acl)
+        matching_acl_principal = Rdkafka::Bindings.rd_kafka_AclBinding_principal(matching_acl)
+        if matching_acl_principal != FFI::Pointer::NULL
+          @matching_acl_principal = matching_acl_principal.read_string
+        end
+        matching_acl_host = Rdkafka::Bindings.rd_kafka_AclBinding_host(matching_acl)
+        if matching_acl_host != FFI::Pointer::NULL
+          @matching_acl_host = matching_acl_host.read_string
+        end
+        @matching_acl_operation = Rdkafka::Bindings.rd_kafka_AclBinding_operation(matching_acl)
+        @matching_acl_permission_type = Rdkafka::Bindings.rd_kafka_AclBinding_permission_type(matching_acl)
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/create_acl_handle.rb ADDED
@@ -0,0 +1,28 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class CreateAclHandle < AbstractHandle
+      layout :pending, :bool,
+             :response, :int,
+             :response_string, :pointer
+
+      # @return [String] the name of the operation
+      def operation_name
+        "create acl"
+      end
+
+      # @return [CreateAclReport] instance with rdkafka_response value as 0 and rdkafka_response_string value as empty string if the acl creation was successful
+      def create_result
+        CreateAclReport.new(rdkafka_response: self[:response], rdkafka_response_string: self[:response_string])
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: self[:response_string].read_string
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/create_acl_report.rb ADDED
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class CreateAclReport
+
+      # Upon successful creation of an ACL, RD_KAFKA_RESP_ERR_NO_ERROR (0) is returned as rdkafka_response
+      # @return [Integer]
+      attr_reader :rdkafka_response
+
+
+      # Upon successful creation of an ACL, an empty string is returned as rdkafka_response_string
+      # @return [String]
+      attr_reader :rdkafka_response_string
+
+      def initialize(rdkafka_response:, rdkafka_response_string:)
+        @rdkafka_response = rdkafka_response
+        if rdkafka_response_string != FFI::Pointer::NULL
+          @rdkafka_response_string = rdkafka_response_string.read_string
+        end
+      end
+    end
+  end
+end
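Together, `CreateAclHandle` and `CreateAclReport` back an `Admin#create_acl` call. A hedged usage sketch follows; the keyword arguments and the `Rdkafka::Bindings` constant names are assumptions based on the handle/report contract and the spec files added in this release, not a documented API reference:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Allow User:anonymous to read example_topic; constant names are assumed.
handle = admin.create_acl(
  resource_type:         Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name:         "example_topic",
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL,
  principal:             "User:anonymous",
  host:                  "*",
  operation:             Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ,
  permission_type:       Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW
)

# #wait raises (via CreateAclHandle#raise_error) when the broker reports a failure.
report = handle.wait(max_wait_timeout: 15)
report.rdkafka_response        #=> 0 (RD_KAFKA_RESP_ERR_NO_ERROR) on success
report.rdkafka_response_string #=> "" on success
```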
data/lib/rdkafka/admin/delete_acl_handle.rb ADDED
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DeleteAclHandle < AbstractHandle
+      layout :pending, :bool,
+             :response, :int,
+             :response_string, :pointer,
+             :matching_acls, :pointer,
+             :matching_acls_count, :int
+
+      # @return [String] the name of the operation
+      def operation_name
+        "delete acl"
+      end
+
+      # @return [DeleteAclReport] instance with an array of matching_acls
+      def create_result
+        DeleteAclReport.new(matching_acls: self[:matching_acls], matching_acls_count: self[:matching_acls_count])
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: self[:response_string].read_string
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/delete_acl_report.rb ADDED
@@ -0,0 +1,23 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DeleteAclReport
+
+      # Deleted ACLs
+      # @return [Array<Rdkafka::Admin::AclBindingResult>]
+      attr_reader :deleted_acls
+
+      def initialize(matching_acls:, matching_acls_count:)
+        @deleted_acls = []
+        if matching_acls != FFI::Pointer::NULL
+          acl_binding_result_pointers = matching_acls.read_array_of_pointer(matching_acls_count)
+          (1..matching_acls_count).map do |matching_acl_index|
+            acl_binding_result = AclBindingResult.new(acl_binding_result_pointers[matching_acl_index - 1])
+            @deleted_acls << acl_binding_result
+          end
+        end
+      end
+    end
+  end
+end
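`DeleteAclReport#deleted_acls` exposes the matched bindings as `AclBindingResult` objects (defined earlier in this diff). A sketch of an assumed `Admin#delete_acl` call mirroring the create example; the filter keywords and constants remain assumptions:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Delete the ACL created in the previous example. In librdkafka's ACL
# filter semantics, nil fields would act as wildcards.
handle = admin.delete_acl(
  resource_type:         Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name:         "example_topic",
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL,
  principal:             "User:anonymous",
  host:                  "*",
  operation:             Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ,
  permission_type:       Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW
)

report = handle.wait(max_wait_timeout: 15)
report.deleted_acls.each do |acl|
  puts "deleted: #{acl.matching_acl_principal} on #{acl.matching_acl_resource_name}"
end
```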
data/lib/rdkafka/admin/delete_groups_handle.rb ADDED
@@ -0,0 +1,28 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DeleteGroupsHandle < AbstractHandle
+      layout :pending, :bool, # TODO: ???
+             :response, :int,
+             :error_string, :pointer,
+             :result_name, :pointer
+
+      # @return [String] the name of the operation
+      def operation_name
+        "delete groups"
+      end
+
+      def create_result
+        DeleteGroupsReport.new(self[:error_string], self[:result_name])
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: create_result.error_string
+        )
+      end
+    end
+  end
+end
data/lib/rdkafka/admin/delete_groups_report.rb ADDED
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DeleteGroupsReport
+      # Any error message generated from the delete groups operation
+      # @return [String]
+      attr_reader :error_string
+
+      # The name of the group that was deleted
+      # @return [String]
+      attr_reader :result_name
+
+      def initialize(error_string, result_name)
+        if error_string != FFI::Pointer::NULL
+          @error_string = error_string.read_string
+        end
+        if result_name != FFI::Pointer::NULL
+          @result_name = result_name.read_string
+        end
+      end
+    end
+  end
+end
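These two classes back the `Admin#delete_group` utility listed in the 0.14.1 changelog. A minimal sketch; the single group-id argument is an assumption consistent with the report fields above:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Deleting a consumer group only succeeds when the group has no active members.
handle = admin.delete_group("example_consumer_group")

# #wait raises an RdkafkaError (built from error_string) on failure.
report = handle.wait(max_wait_timeout: 15)
puts "deleted group: #{report.result_name}"
```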
data/lib/rdkafka/admin/describe_acl_handle.rb ADDED
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+module Rdkafka
+  class Admin
+    class DescribeAclHandle < AbstractHandle
+      layout :pending, :bool,
+             :response, :int,
+             :response_string, :pointer,
+             :acls, :pointer,
+             :acls_count, :int
+
+      # @return [String] the name of the operation.
+      def operation_name
+        "describe acl"
+      end
+
+      # @return [DescribeAclReport] instance with an array of acls that match the request filters.
+      def create_result
+        DescribeAclReport.new(acls: self[:acls], acls_count: self[:acls_count])
+      end
+
+      def raise_error
+        raise RdkafkaError.new(
+          self[:response],
+          broker_message: self[:response_string].read_string
+        )
+      end
+    end
+  end
+end
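`DescribeAclHandle` implies a matching `Admin#describe_acl` query whose report lists the bindings that match the given filters. A hedged sketch; the filter keywords, the `..._ANY`-style wildcard constants, and the report's `#acls` reader are all assumptions consistent with the classes in this diff (the `DescribeAclReport` file itself is listed in the summary above but not shown in this section):

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers": "localhost:9092").admin

# Describe every topic ACL; nil fields and the assumed ANY constants act
# as wildcards in the filter.
handle = admin.describe_acl(
  resource_type:         Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name:         nil,
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_ANY,
  principal:             nil,
  host:                  nil,
  operation:             Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_ANY,
  permission_type:       Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ANY
)

report = handle.wait(max_wait_timeout: 15)
report.acls.each do |acl|
  # Operation and permission readers return librdkafka enum integers.
  puts "#{acl.matching_acl_principal} -> #{acl.matching_acl_resource_name} " \
       "(operation #{acl.matching_acl_operation}, " \
       "permission #{acl.matching_acl_permission_type})"
end
```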