rdkafka 0.14.0.rc1 → 0.15.0

Files changed (41)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +2 -1
  3. data/.github/FUNDING.yml +1 -0
  4. data/CHANGELOG.md +104 -92
  5. data/README.md +32 -22
  6. data/docker-compose.yml +2 -0
  7. data/lib/rdkafka/abstract_handle.rb +3 -2
  8. data/lib/rdkafka/admin/acl_binding_result.rb +37 -0
  9. data/lib/rdkafka/admin/create_acl_handle.rb +28 -0
  10. data/lib/rdkafka/admin/create_acl_report.rb +24 -0
  11. data/lib/rdkafka/admin/create_partitions_handle.rb +27 -0
  12. data/lib/rdkafka/admin/create_partitions_report.rb +6 -0
  13. data/lib/rdkafka/admin/delete_acl_handle.rb +30 -0
  14. data/lib/rdkafka/admin/delete_acl_report.rb +23 -0
  15. data/lib/rdkafka/admin/delete_groups_handle.rb +28 -0
  16. data/lib/rdkafka/admin/delete_groups_report.rb +24 -0
  17. data/lib/rdkafka/admin/describe_acl_handle.rb +30 -0
  18. data/lib/rdkafka/admin/describe_acl_report.rb +23 -0
  19. data/lib/rdkafka/admin.rb +443 -0
  20. data/lib/rdkafka/bindings.rb +119 -0
  21. data/lib/rdkafka/callbacks.rb +187 -0
  22. data/lib/rdkafka/config.rb +24 -3
  23. data/lib/rdkafka/consumer/headers.rb +1 -1
  24. data/lib/rdkafka/consumer/topic_partition_list.rb +8 -7
  25. data/lib/rdkafka/consumer.rb +46 -10
  26. data/lib/rdkafka/producer.rb +2 -2
  27. data/lib/rdkafka/version.rb +3 -3
  28. data/lib/rdkafka.rb +11 -0
  29. data/spec/rdkafka/admin/create_acl_handle_spec.rb +56 -0
  30. data/spec/rdkafka/admin/create_acl_report_spec.rb +18 -0
  31. data/spec/rdkafka/admin/delete_acl_handle_spec.rb +85 -0
  32. data/spec/rdkafka/admin/delete_acl_report_spec.rb +71 -0
  33. data/spec/rdkafka/admin/describe_acl_handle_spec.rb +85 -0
  34. data/spec/rdkafka/admin/describe_acl_report_spec.rb +72 -0
  35. data/spec/rdkafka/admin_spec.rb +204 -0
  36. data/spec/rdkafka/config_spec.rb +8 -0
  37. data/spec/rdkafka/consumer_spec.rb +69 -0
  38. data/spec/spec_helper.rb +3 -1
  39. data.tar.gz.sig +0 -0
  40. metadata +28 -4
  41. metadata.gz.sig +0 -0
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e50643841e7ff26de5ee8a05339cdf92d78df620215594f4672d8fa0d03b4022
- data.tar.gz: 8c95de343d98a03132a1e8646c334e54576981ba88d66d045bb39d5fff31b3c5
+ metadata.gz: d906b2e71dae5b5f45459e915c48dc8cb88e0d51ebb90ded80cef3c8e5531b77
+ data.tar.gz: 8f0df2688bbc3b264de22b5943b18462ad41898781cc12e6e534804409133ce0
  SHA512:
- metadata.gz: bf82c6a67542d64d915cfbb15f946107da6a63f46f0b94969d67365c95e0d026c1b4bb3d40f06cac720896b22e52b845b1ea413c7dffcf278f89f5843c15e3f8
- data.tar.gz: 350b8490222dcda953c2b65543b95641d2cd566c4bf9df6616123ab57c715fb4d823f1f97702eb247c6ecc8ee9e41b45ea0cd60e7e94a5c04cf0a9dcb7b24468
+ metadata.gz: c35d392b326f4d47077f419bced92b929436be548651afc9364f5ada2eda51883ad75feeeb30183369aa6b15db3ac4630410f408eae449d1b0cc5a007cf011fc
+ data.tar.gz: 1487bb54713e6330ce55fd95f656dffd2edc34bcd8bc151d94faf2d6f043b8183276e3407d5f45c6152e99463c8323ccb83c15fff7007ebb96c4b369533002d5
checksums.yaml.gz.sig CHANGED
@@ -1 +1,2 @@
(binary signature data changed; contents not human-readable)
data/.github/FUNDING.yml ADDED
@@ -0,0 +1 @@
+ custom: ['https://karafka.io/#become-pro']
data/CHANGELOG.md CHANGED
@@ -1,140 +1,152 @@
  # Rdkafka Changelog

- ## 0.14.0 (Unreleased)
+ ## 0.15.0 (2023-12-03)
+ - **[Feature]** Add `Admin#metadata` (mensfeld)
+ - **[Feature]** Add `Admin#create_partitions` (mensfeld)
+ - **[Feature]** Add `Admin#delete_group` utility (piotaixr)
+ - **[Feature]** Add Create and Delete ACL Feature To Admin Functions (vgnanasekaran)
+ - **[Feature]** Support `#assignment_lost?` on a consumer to check for involuntary assignment revocation (mensfeld)
+ - [Enhancement] Expose an alternative way of managing consumer events via a separate queue (mensfeld)
+ - [Enhancement] **Bump** librdkafka to 2.3.0 (mensfeld)
+ - [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote cluster operations (mensfeld)
+ - [Change] Use `SecureRandom.uuid` instead of `random` for test consumer groups (mensfeld)
+
+ ## 0.14.0 (2023-11-21)
+ - [Enhancement] Add `raise_response_error` flag to the `Rdkafka::AbstractHandle`.
  - [Enhancement] Allow for setting `statistics_callback` as nil to reset predefined settings configured by a different gem (mensfeld)
- * [Enhancement] Get consumer position (thijsc & mensfeld)
- * [Enhancement] Provide `#purge` to remove any outstanding requests from the producer (mensfeld)
- * [Enhancement] Update `librdkafka` to `2.2.0` (mensfeld)
- * [Enhancement] Introduce producer partitions count metadata cache (mensfeld)
- * [Enhancement] Increase metadata timeout request from `250 ms` to `2000 ms` default to allow for remote cluster operations via `rdkafka-ruby` (mensfeld)
- * [Enhancement] Introduce `#name` for producers and consumers (mensfeld)
- * [Enhancement] Include backtrace in non-raised binded errors (mensfeld)
- * [Fix] Reference to Opaque is not released when Admin, Consumer or Producer is closed (mensfeld)
- * [Fix] Trigger `#poll` on native kafka creation to handle oauthbearer cb (mensfeld)
- * [Fix] `#flush` does not handle the timeouts errors by making it return `true` if all flushed or `false` if failed. We do **not** raise an exception here to keep it backwards compatible (mensfeld)
- * [Change] Remove support for Ruby 2.6 due to it being EOL and WeakMap incompatibilities (mensfeld)
- * [Change] Update Kafka Docker with Confluent KRaft (mensfeld)
- * [Change] Update librdkafka repo reference from edenhill to confluentinc (mensfeld)
+ - [Enhancement] Get consumer position (thijsc & mensfeld)
+ - [Enhancement] Provide `#purge` to remove any outstanding requests from the producer (mensfeld)
+ - [Enhancement] Update `librdkafka` to `2.2.0` (mensfeld)
+ - [Enhancement] Introduce producer partitions count metadata cache (mensfeld)
+ - [Enhancement] Increase metadata timeout request from `250 ms` to `2000 ms` default to allow for remote cluster operations via `rdkafka-ruby` (mensfeld)
+ - [Enhancement] Introduce `#name` for producers and consumers (mensfeld)
+ - [Enhancement] Include backtrace in non-raised bound errors (mensfeld)
+ - [Fix] Reference to Opaque is not released when Admin, Consumer or Producer is closed (mensfeld)
+ - [Fix] Trigger `#poll` on native kafka creation to handle oauthbearer cb (mensfeld)
+ - [Fix] `#flush` now handles timeout errors by returning `true` if everything flushed or `false` if it failed. We do **not** raise an exception here to keep it backwards compatible (mensfeld)
+ - [Change] Remove support for Ruby 2.6 due to it being EOL and WeakMap incompatibilities (mensfeld)
+ - [Change] Update Kafka Docker with Confluent KRaft (mensfeld)
+ - [Change] Update librdkafka repo reference from edenhill to confluentinc (mensfeld)

  ## 0.13.0 (2023-07-24)
- * Support cooperative sticky partition assignment in the rebalance callback (methodmissing)
- * Support both string and symbol header keys (ColinDKelley)
- * Handle tombstone messages properly (kgalieva)
- * Add topic name to delivery report (maeve)
- * Allow string partitioner config (mollyegibson)
- * Fix documented type for DeliveryReport#error (jimmydo)
- * Bump librdkafka to 2.0.2 (lmaia)
- * Use finalizers to cleanly exit producer and admin (thijsc)
- * Lock access to the native kafka client (thijsc)
- * Fix potential race condition in multi-threaded producer (mensfeld)
- * Fix leaking FFI resources in specs (mensfeld)
- * Improve specs stability (mensfeld)
- * Make metadata request timeout configurable (mensfeld)
- * call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc)
+ - Support cooperative sticky partition assignment in the rebalance callback (methodmissing)
+ - Support both string and symbol header keys (ColinDKelley)
+ - Handle tombstone messages properly (kgalieva)
+ - Add topic name to delivery report (maeve)
+ - Allow string partitioner config (mollyegibson)
+ - Fix documented type for DeliveryReport#error (jimmydo)
+ - Bump librdkafka to 2.0.2 (lmaia)
+ - Use finalizers to cleanly exit producer and admin (thijsc)
+ - Lock access to the native kafka client (thijsc)
+ - Fix potential race condition in multi-threaded producer (mensfeld)
+ - Fix leaking FFI resources in specs (mensfeld)
+ - Improve specs stability (mensfeld)
+ - Make metadata request timeout configurable (mensfeld)
+ - call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc)

  ## 0.12.0 (2022-06-17)
- * Bumps librdkafka to 1.9.0
- * Fix crash on empty partition key (mensfeld)
- * Pass the delivery handle to the callback (gvisokinskas)
+ - Bumps librdkafka to 1.9.0
+ - Fix crash on empty partition key (mensfeld)
+ - Pass the delivery handle to the callback (gvisokinskas)

  ## 0.11.0 (2021-11-17)
- * Upgrade librdkafka to 1.8.2
- * Bump supported minimum Ruby version to 2.6
- * Better homebrew path detection
+ - Upgrade librdkafka to 1.8.2
+ - Bump supported minimum Ruby version to 2.6
+ - Better homebrew path detection

  ## 0.10.0 (2021-09-07)
- * Upgrade librdkafka to 1.5.0
- * Add error callback config
+ - Upgrade librdkafka to 1.5.0
+ - Add error callback config

  ## 0.9.0 (2021-06-23)
- * Fixes for Ruby 3.0
- * Allow any callable object for callbacks (gremerritt)
- * Reduce memory allocations in Rdkafka::Producer#produce (jturkel)
- * Use queue as log callback to avoid unsafe calls from trap context (breunigs)
- * Allow passing in topic configuration on create_topic (dezka)
- * Add each_batch method to consumer (mgrosso)
+ - Fixes for Ruby 3.0
+ - Allow any callable object for callbacks (gremerritt)
+ - Reduce memory allocations in Rdkafka::Producer#produce (jturkel)
+ - Use queue as log callback to avoid unsafe calls from trap context (breunigs)
+ - Allow passing in topic configuration on create_topic (dezka)
+ - Add each_batch method to consumer (mgrosso)

  ## 0.8.1 (2020-12-07)
- * Fix topic_flag behaviour and add tests for Metadata (geoff2k)
- * Add topic admin interface (geoff2k)
- * Raise an exception if @native_kafka is nil (geoff2k)
- * Option to use zstd compression (jasonmartens)
+ - Fix topic_flag behaviour and add tests for Metadata (geoff2k)
+ - Add topic admin interface (geoff2k)
+ - Raise an exception if @native_kafka is nil (geoff2k)
+ - Option to use zstd compression (jasonmartens)

  ## 0.8.0 (2020-06-02)
- * Upgrade librdkafka to 1.4.0
- * Integrate librdkafka metadata API and add partition_key (by Adithya-copart)
- * Ruby 2.7 compatibility fix (by Geoff Thé)
- * Add error to delivery report (by Alex Stanovsky)
- * Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake)
- * Allow use of Rake 13.x and up (by Tomasz Pajor)
+ - Upgrade librdkafka to 1.4.0
+ - Integrate librdkafka metadata API and add partition_key (by Adithya-copart)
+ - Ruby 2.7 compatibility fix (by Geoff Thé)
+ - Add error to delivery report (by Alex Stanovsky)
+ - Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake)
+ - Allow use of Rake 13.x and up (by Tomasz Pajor)

  ## 0.7.0 (2019-09-21)
- * Bump librdkafka to 1.2.0 (by rob-as)
- * Allow customizing the wait time for delivery report availability (by mensfeld)
+ - Bump librdkafka to 1.2.0 (by rob-as)
+ - Allow customizing the wait time for delivery report availability (by mensfeld)

  ## 0.6.0 (2019-07-23)
- * Bump librdkafka to 1.1.0 (by Chris Gaffney)
- * Implement seek (by breunigs)
+ - Bump librdkafka to 1.1.0 (by Chris Gaffney)
+ - Implement seek (by breunigs)

  ## 0.5.0 (2019-04-11)
- * Bump librdkafka to 1.0.0 (by breunigs)
- * Add cluster and member information (by dmexe)
- * Support message headers for consumer & producer (by dmexe)
- * Add consumer rebalance listener (by dmexe)
- * Implement pause/resume partitions (by dmexe)
+ - Bump librdkafka to 1.0.0 (by breunigs)
+ - Add cluster and member information (by dmexe)
+ - Support message headers for consumer & producer (by dmexe)
+ - Add consumer rebalance listener (by dmexe)
+ - Implement pause/resume partitions (by dmexe)

  ## 0.4.2 (2019-01-12)
- * Delivery callback for producer
- * Document list param of commit method
- * Use default Homebrew openssl location if present
- * Consumer lag handles empty topics
- * End iteration in consumer when it is closed
- * Add support for storing message offsets
- * Add missing runtime dependency to rake
+ - Delivery callback for producer
+ - Document list param of commit method
+ - Use default Homebrew openssl location if present
+ - Consumer lag handles empty topics
+ - End iteration in consumer when it is closed
+ - Add support for storing message offsets
+ - Add missing runtime dependency to rake

  ## 0.4.1 (2018-10-19)
- * Bump librdkafka to 0.11.6
+ - Bump librdkafka to 0.11.6

  ## 0.4.0 (2018-09-24)
- * Improvements in librdkafka archive download
- * Add global statistics callback
- * Use Time for timestamps, potentially breaking change if you
+ - Improvements in librdkafka archive download
+ - Add global statistics callback
+ - Use Time for timestamps, potentially breaking change if you
  rely on the previous behavior where it returns an integer with
  the number of milliseconds.
- * Bump librdkafka to 0.11.5
- * Implement TopicPartitionList in Ruby so we don't have to keep
+ - Bump librdkafka to 0.11.5
+ - Implement TopicPartitionList in Ruby so we don't have to keep
  track of native objects.
- * Support committing a topic partition list
- * Add consumer assignment method
+ - Support committing a topic partition list
+ - Add consumer assignment method

  ## 0.3.5 (2018-01-17)
- * Fix crash when not waiting for delivery handles
- * Run specs on Ruby 2.5
+ - Fix crash when not waiting for delivery handles
+ - Run specs on Ruby 2.5

  ## 0.3.4 (2017-12-05)
- * Bump librdkafka to 0.11.3
+ - Bump librdkafka to 0.11.3

  ## 0.3.3 (2017-10-27)
- * Fix bug that prevent display of `RdkafkaError` message
+ - Fix bug that prevented display of `RdkafkaError` message

  ## 0.3.2 (2017-10-25)
- * `add_topic` now supports using a partition count
- * Add way to make errors clearer with an extra message
- * Show topics in subscribe error message
- * Show partition and topic in query watermark offsets error message
+ - `add_topic` now supports using a partition count
+ - Add way to make errors clearer with an extra message
+ - Show topics in subscribe error message
+ - Show partition and topic in query watermark offsets error message

  ## 0.3.1 (2017-10-23)
- * Bump librdkafka to 0.11.1
- * Officially support ranges in `add_topic` for topic partition list.
- * Add consumer lag calculator
+ - Bump librdkafka to 0.11.1
+ - Officially support ranges in `add_topic` for topic partition list.
+ - Add consumer lag calculator

  ## 0.3.0 (2017-10-17)
- * Move both add topic methods to one `add_topic` in `TopicPartitionList`
- * Add committed offsets to consumer
- * Add query watermark offset to consumer
+ - Move both add topic methods to one `add_topic` in `TopicPartitionList`
+ - Add committed offsets to consumer
+ - Add query watermark offset to consumer

  ## 0.2.0 (2017-10-13)
- * Some refactoring and add inline documentation
+ - Some refactoring and add inline documentation

  ## 0.1.x (2017-09-10)
- * Initial working version including producing and consuming
+ - Initial working version including producing and consuming
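
A note on the 0.15.0 consumer addition above: `#assignment_lost?` reports whether the current assignment was revoked involuntarily (for example, after exceeding `max.poll.interval.ms` or losing the session), as opposed to an orderly rebalance. A minimal sketch of how it might be used, assuming an already configured `consumer`:

```ruby
# Sketch: after partitions are revoked, only commit when the rebalance was
# orderly; a lost assignment means another consumer already owns them.
consumer.commit unless consumer.assignment_lost?
```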
data/README.md CHANGED
@@ -18,22 +18,31 @@ become EOL.

  `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.

- The most important pieces of a Kafka client are implemented. We're
- working towards feature completeness. You can track that here:
- https://github.com/appsignal/rdkafka-ruby/milestone/1
+ The most important pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs.

  ## Table of content

+ - [Project Scope](#project-scope)
  - [Installation](#installation)
  - [Usage](#usage)
-   * [Consuming messages](#consuming-messages)
-   * [Producing messages](#producing-messages)
- - [Higher level libraries](#higher-level-libraries)
-   * [Message processing frameworks](#message-processing-frameworks)
-   * [Message publishing libraries](#message-publishing-libraries)
+   * [Consuming Messages](#consuming-messages)
+   * [Producing Messages](#producing-messages)
+ - [Higher Level Libraries](#higher-level-libraries)
+   * [Message Processing Frameworks](#message-processing-frameworks)
+   * [Message Publishing Libraries](#message-publishing-libraries)
  - [Development](#development)
  - [Example](#example)

+ ## Project Scope
+
+ While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications, it's important to understand the limitations of this library:
+
+ - **No Complex Producers/Consumers**: This library does not intend to offer complex producers or consumers. The aim is to stick closely to the functionalities provided by librdkafka itself.
+
+ - **Focus on librdkafka Capabilities**: Features that can be achieved directly in Ruby, without specific needs from librdkafka, are outside the scope of this library.
+
+ - **Existing High-Level Functionalities**: Certain high-level functionalities like the producer metadata cache and the simple consumer are already part of the library. Although they fall slightly outside the primary goal, they will remain part of the contract, given their existing usage.
+

  ## Installation

@@ -42,9 +51,9 @@ If you have any problems installing the gem, please open an issue.

  ## Usage

- See the [documentation](https://www.rubydoc.info/github/appsignal/rdkafka-ruby) for full details on how to use this gem. Two quick examples:
+ See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:

- ### Consuming messages
+ ### Consuming Messages

  Subscribe to a topic and get messages. Kafka will automatically spread
  the available partitions over consumers with the same group id.
@@ -62,7 +71,7 @@ consumer.each do |message|
  end
  ```

- ### Producing messages
+ ### Producing Messages

  Produce a number of messages, put the delivery handles in an array, and
  wait for them before exiting. This way the messages will be batched and
@@ -89,41 +98,42 @@ Note that creating a producer consumes some resources that will not be
  released until `#close` is explicitly called, so be sure to call
  `Config#producer` only as necessary.

- ## Higher level libraries
+ ## Higher Level Libraries

  Currently, there are two actively developed frameworks based on rdkafka-ruby that provide a higher-level API for working with Kafka messages, and one library for publishing messages.

- ### Message processing frameworks
+ ### Message Processing Frameworks

  * [Karafka](https://github.com/karafka/karafka) - Ruby and Rails efficient Kafka processing framework.
  * [Racecar](https://github.com/zendesk/racecar) - A simple framework for Kafka consumers in Ruby

- ### Message publishing libraries
+ ### Message Publishing Libraries

  * [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.

  ## Development

- A Docker Compose file is included to run Kafka. To run
- that:
+ Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
+
+ - Implement functionalities that can be achieved using standard Ruby capabilities without changes to the underlying rdkafka-ruby bindings.
+ - Deviate significantly from the primary aim of providing librdkafka bindings with Ruby-friendly interfaces.
+
+ A Docker Compose file is included to run Kafka. To run that:

  ```
  docker-compose up
  ```

- Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and
- compile `librdkafka`.
+ Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and compile `librdkafka`.

- You can then run `bundle exec rspec` to run the tests. To see rdkafka
- debug output:
+ You can then run `bundle exec rspec` to run the tests. To see rdkafka debug output:

  ```
  DEBUG_PRODUCER=true bundle exec rspec
  DEBUG_CONSUMER=true bundle exec rspec
  ```

- After running the tests, you can bring the cluster down to start with a
- clean slate:
+ After running the tests, you can bring the cluster down to start with a clean slate:

  ```
  docker-compose down
data/docker-compose.yml CHANGED
@@ -23,3 +23,5 @@ services:
  KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
  KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
+ KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
+ KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
data/lib/rdkafka/abstract_handle.rb CHANGED
@@ -48,12 +48,13 @@ module Rdkafka
  # If this is nil it does not time out.
  # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the
  # operation has completed
+ # @param raise_response_error [Boolean] whether to raise an error when waiting finishes
  #
  # @return [Object] Operation-specific result
  #
  # @raise [RdkafkaError] When the operation failed
  # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
- def wait(max_wait_timeout: 60, wait_timeout: 0.1)
+ def wait(max_wait_timeout: 60, wait_timeout: 0.1, raise_response_error: true)
  timeout = if max_wait_timeout
  monotonic_now + max_wait_timeout
  else
@@ -67,7 +68,7 @@ module Rdkafka
  )
  end
  sleep wait_timeout
- elsif self[:response] != 0
+ elsif self[:response] != 0 && raise_response_error
  raise_error
  else
  return create_result
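
The `raise_response_error` flag added above lets a caller receive the operation's report even when the broker responded with an error, instead of rescuing `RdkafkaError`. A minimal sketch, assuming `handle` is any `AbstractHandle` subclass instance returned by an asynchronous operation:

```ruby
# Sketch: wait up to 30s; with raise_response_error: false, a non-zero
# broker response returns the operation-specific report instead of raising.
report = handle.wait(max_wait_timeout: 30, raise_response_error: false)
```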
data/lib/rdkafka/admin/acl_binding_result.rb ADDED
@@ -0,0 +1,37 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+
+     # Extracts attributes of rd_kafka_AclBinding_t
+     #
+     class AclBindingResult
+       attr_reader :result_error, :error_string, :matching_acl_resource_type, :matching_acl_resource_name, :matching_acl_pattern_type, :matching_acl_principal, :matching_acl_host, :matching_acl_operation, :matching_acl_permission_type
+
+       def initialize(matching_acl)
+         rd_kafka_error_pointer = Rdkafka::Bindings.rd_kafka_AclBinding_error(matching_acl)
+         @result_error = Rdkafka::Bindings.rd_kafka_error_code(rd_kafka_error_pointer)
+         error_string = Rdkafka::Bindings.rd_kafka_error_string(rd_kafka_error_pointer)
+         if error_string != FFI::Pointer::NULL
+           @error_string = error_string.read_string
+         end
+         @matching_acl_resource_type = Rdkafka::Bindings.rd_kafka_AclBinding_restype(matching_acl)
+         matching_acl_resource_name = Rdkafka::Bindings.rd_kafka_AclBinding_name(matching_acl)
+         if matching_acl_resource_name != FFI::Pointer::NULL
+           @matching_acl_resource_name = matching_acl_resource_name.read_string
+         end
+         @matching_acl_pattern_type = Rdkafka::Bindings.rd_kafka_AclBinding_resource_pattern_type(matching_acl)
+         matching_acl_principal = Rdkafka::Bindings.rd_kafka_AclBinding_principal(matching_acl)
+         if matching_acl_principal != FFI::Pointer::NULL
+           @matching_acl_principal = matching_acl_principal.read_string
+         end
+         matching_acl_host = Rdkafka::Bindings.rd_kafka_AclBinding_host(matching_acl)
+         if matching_acl_host != FFI::Pointer::NULL
+           @matching_acl_host = matching_acl_host.read_string
+         end
+         @matching_acl_operation = Rdkafka::Bindings.rd_kafka_AclBinding_operation(matching_acl)
+         @matching_acl_permission_type = Rdkafka::Bindings.rd_kafka_AclBinding_permission_type(matching_acl)
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/create_acl_handle.rb ADDED
@@ -0,0 +1,28 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class CreateAclHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer
+
+       # @return [String] the name of the operation
+       def operation_name
+         "create acl"
+       end
+
+       # @return [CreateAclReport] instance with rdkafka_response value as 0 and rdkafka_response_string value as empty string if the acl creation was successful
+       def create_result
+         CreateAclReport.new(rdkafka_response: self[:response], rdkafka_response_string: self[:response_string])
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/create_acl_report.rb ADDED
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class CreateAclReport
+
+       # Upon successful creation of an ACL, RD_KAFKA_RESP_ERR_NO_ERROR (0) is returned as rdkafka_response
+       # @return [Integer]
+       attr_reader :rdkafka_response
+
+       # Upon successful creation of an ACL, an empty string is returned as rdkafka_response_string
+       # @return [String]
+       attr_reader :rdkafka_response_string
+
+       def initialize(rdkafka_response:, rdkafka_response_string:)
+         @rdkafka_response = rdkafka_response
+         if rdkafka_response_string != FFI::Pointer::NULL
+           @rdkafka_response_string = rdkafka_response_string.read_string
+         end
+       end
+     end
+   end
+ end
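
The handle and report above back the new `Admin#create_acl`. A minimal sketch of driving them end to end; the keyword arguments and `Rdkafka::Bindings` constants below mirror the 0.15.0 admin specs and librdkafka's enums, but treat the exact names as assumptions:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

# Request an ACL allowing an anonymous user to read one topic. The call is
# asynchronous and returns a CreateAclHandle; #wait blocks for the report.
handle = admin.create_acl(
  resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name: "example_topic",
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL,
  principal: "User:anonymous",
  host: "*",
  operation: Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ,
  permission_type: Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW
)

report = handle.wait(max_wait_timeout: 15)
report.rdkafka_response # => 0 (RD_KAFKA_RESP_ERR_NO_ERROR) on success
```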
data/lib/rdkafka/admin/create_partitions_handle.rb ADDED
@@ -0,0 +1,27 @@
+ module Rdkafka
+   class Admin
+     class CreatePartitionsHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :error_string, :pointer,
+              :result_name, :pointer
+
+       # @return [String] the name of the operation
+       def operation_name
+         "create partitions"
+       end
+
+       # @return [CreatePartitionsReport] report for the partition creation request
+       def create_result
+         CreatePartitionsReport.new(self[:error_string], self[:result_name])
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: CreateTopicReport.new(self[:error_string], self[:result_name]).error_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/create_partitions_report.rb ADDED
@@ -0,0 +1,6 @@
+ module Rdkafka
+   class Admin
+     class CreatePartitionsReport < CreateTopicReport
+     end
+   end
+ end
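
These two classes back the new `Admin#create_partitions`. A minimal sketch, assuming the method takes the topic name and the desired total partition count (partitions can be added, never removed), reusing the `admin` instance from the previous sketch:

```ruby
# Sketch: grow example_topic to 5 partitions and wait for the report.
handle = admin.create_partitions("example_topic", 5)
report = handle.wait(max_wait_timeout: 15)
report.result_name # => "example_topic" on success (see CreateTopicReport)
```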
data/lib/rdkafka/admin/delete_acl_handle.rb ADDED
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DeleteAclHandle < AbstractHandle
+       layout :pending, :bool,
+              :response, :int,
+              :response_string, :pointer,
+              :matching_acls, :pointer,
+              :matching_acls_count, :int
+
+       # @return [String] the name of the operation
+       def operation_name
+         "delete acl"
+       end
+
+       # @return [DeleteAclReport] instance with an array of matching_acls
+       def create_result
+         DeleteAclReport.new(matching_acls: self[:matching_acls], matching_acls_count: self[:matching_acls_count])
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: self[:response_string].read_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/delete_acl_report.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DeleteAclReport
+
+       # Deleted ACLs
+       # @return [Array<AclBindingResult>]
+       attr_reader :deleted_acls
+
+       def initialize(matching_acls:, matching_acls_count:)
+         @deleted_acls = []
+         if matching_acls != FFI::Pointer::NULL
+           acl_binding_result_pointers = matching_acls.read_array_of_pointer(matching_acls_count)
+           (1..matching_acls_count).map do |matching_acl_index|
+             acl_binding_result = AclBindingResult.new(acl_binding_result_pointers[matching_acl_index - 1])
+             @deleted_acls << acl_binding_result
+           end
+         end
+       end
+     end
+   end
+ end
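
Deletion mirrors creation: `Admin#delete_acl` takes a filter describing the bindings to remove and reports the matches it deleted. A sketch under the same assumptions as the `create_acl` example above:

```ruby
# Sketch: delete the ACL created earlier and inspect what matched.
handle = admin.delete_acl(
  resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name: "example_topic",
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL,
  principal: "User:anonymous",
  host: "*",
  operation: Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ,
  permission_type: Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW
)
report = handle.wait(max_wait_timeout: 15)
report.deleted_acls.each do |acl|
  # Each entry is an AclBindingResult extracted from rd_kafka_AclBinding_t
  puts "#{acl.matching_acl_principal} @ #{acl.matching_acl_resource_name}"
end
```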
data/lib/rdkafka/admin/delete_groups_handle.rb ADDED
@@ -0,0 +1,28 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DeleteGroupsHandle < AbstractHandle
+       layout :pending, :bool, # TODO: ???
+              :response, :int,
+              :error_string, :pointer,
+              :result_name, :pointer
+
+       # @return [String] the name of the operation
+       def operation_name
+         "delete groups"
+       end
+
+       def create_result
+         DeleteGroupsReport.new(self[:error_string], self[:result_name])
+       end
+
+       def raise_error
+         raise RdkafkaError.new(
+           self[:response],
+           broker_message: create_result.error_string
+         )
+       end
+     end
+   end
+ end
data/lib/rdkafka/admin/delete_groups_report.rb ADDED
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ module Rdkafka
+   class Admin
+     class DeleteGroupsReport
+       # Any error message generated from the delete groups operation
+       # @return [String]
+       attr_reader :error_string
+
+       # The name of the group that was deleted
+       # @return [String]
+       attr_reader :result_name
+
+       def initialize(error_string, result_name)
+         if error_string != FFI::Pointer::NULL
+           @error_string = error_string.read_string
+         end
+         if result_name != FFI::Pointer::NULL
+           @result_name = result_name.read_string
+         end
+       end
+     end
+   end
+ end
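
These last two classes back the 0.15.0 `Admin#delete_group` utility. A minimal sketch, assuming the method takes the consumer group name and that the group is inactive (brokers reject deletion of groups with active members):

```ruby
# Sketch: delete an inactive consumer group and wait for the report.
handle = admin.delete_group("obsolete-consumer-group")
report = handle.wait(max_wait_timeout: 15)
report.result_name # => "obsolete-consumer-group" on success
```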