rdkafka 0.13.0.beta.3 → 0.13.0.beta.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.semaphore/semaphore.yml +1 -1
- data/CHANGELOG.md +6 -0
- data/README.md +26 -0
- data/lib/rdkafka/admin.rb +30 -18
- data/lib/rdkafka/bindings.rb +33 -36
- data/lib/rdkafka/config.rb +4 -4
- data/lib/rdkafka/consumer.rb +63 -31
- data/lib/rdkafka/metadata.rb +2 -2
- data/lib/rdkafka/native_kafka.rb +30 -9
- data/lib/rdkafka/producer.rb +15 -6
- data/lib/rdkafka/version.rb +1 -1
- data/spec/rdkafka/config_spec.rb +7 -6
- data/spec/rdkafka/consumer/message_spec.rb +1 -1
- data/spec/rdkafka/consumer_spec.rb +15 -9
- data/spec/rdkafka/metadata_spec.rb +1 -1
- data/spec/rdkafka/native_kafka_spec.rb +7 -40
- data/spec/spec_helper.rb +1 -1
- metadata +3 -5
- data/bin/console +0 -11
    
        checksums.yaml
    CHANGED
    
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1d9baa27220c65598729e786ff2c585cdb23db317b964fd96aee7067b5922dd1
+  data.tar.gz: 4b072016e30266c9ef8ed1ff5e88ff6caa16dd5ede48eec79e2ea22b14cbf222
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a42852f8a2bb57d28d8a55df808ebcc2cd7ca08daf1acdd139c6e8662ac81ad31da417193e5e6a6f9ed0df9f842e3842b5ad01461f1338e12deb1ca01d56830f
+  data.tar.gz: 8dde39b477b76a7afae0f9f51ec3e1942b2d4d599828bbe7acdd777c17f8fec49f4cc359e3f850db30921d0119bbe32b04c1dbef49e16f0e46d3de313cd56744
    
        data/.semaphore/semaphore.yml
    CHANGED
    
@@ -20,8 +20,8 @@ blocks:
           commands:
             - sem-version ruby $RUBY_VERSION
            - checkout
+            - docker-compose up -d --no-recreate
             - bundle install --path vendor/bundle
             - cd ext && bundle exec rake && cd ..
-            - docker-compose up -d --no-recreate
             - ulimit -c unlimited
             - valgrind -v bundle exec rspec
    
        data/CHANGELOG.md
    CHANGED
    
@@ -7,6 +7,12 @@
 * Fix documented type for DeliveryReport#error (jimmydo)
 * Bump librdkafka to 1.9.2 (thijsc)
 * Use finalizers to cleanly exit producer and admin (thijsc)
+* Lock access to the native kafka client (thijsc)
+* Fix potential race condition in multi-threaded producer (mensfeld)
+* Fix leaking FFI resources in specs (mensfeld)
+* Improve specs stability (mensfeld)
+* Make metadata request timeout configurable (mensfeld)
+* call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc)

 # 0.12.0
 * Bumps librdkafka to 1.9.0
    
        data/README.md
    CHANGED
    
@@ -23,6 +23,19 @@ The most important pieces of a Kafka client are implemented. We're
 working towards feature completeness, you can track that here:
 https://github.com/appsignal/rdkafka-ruby/milestone/1

+## Table of content
+
+- [Installation](#installation)
+- [Usage](#usage)
+  * [Consuming messages](#consuming-messages)
+  * [Producing messages](#producing-messages)
+- [Higher level libraries](#higher-level-libraries)
+  * [Message processing frameworks](#message-processing-frameworks)
+  * [Message publishing libraries](#message-publishing-libraries)
+- [Development](#development)
+- [Example](#example)
+
+
 ## Installation

 This gem downloads and compiles librdkafka when it is installed. If you
@@ -77,6 +90,19 @@ Note that creating a producer consumes some resources that will not be
 released until it `#close` is explicitly called, so be sure to call
 `Config#producer` only as necessary.

+## Higher level libraries
+
+Currently, there are two actively developed frameworks based on rdkafka-ruby, that provide higher level API that can be used to work with Kafka messages and one library for publishing messages.
+
+### Message processing frameworks
+
+* [Karafka](https://github.com/karafka/karafka) - Ruby and Rails efficient Kafka processing framework.
+* [Racecar](https://github.com/zendesk/racecar) - A simple framework for Kafka consumers in Ruby
+
+### Message publishing libraries
+
+* [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages.
+
 ## Development

 A Docker Compose file is included to run Kafka and Zookeeper. To run
    
        data/lib/rdkafka/admin.rb
    CHANGED
    
@@ -67,7 +67,9 @@ module Rdkafka
       topics_array_ptr.write_array_of_pointer(pointer_array)

       # Get a pointer to the queue that our request will be enqueued on
-      queue_ptr =
+      queue_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+      end
       if queue_ptr.null?
         Rdkafka::Bindings.rd_kafka_NewTopic_destroy(new_topic_ptr)
         raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
@@ -78,17 +80,21 @@ module Rdkafka
       create_topic_handle[:pending] = true
       create_topic_handle[:response] = -1
       CreateTopicHandle.register(create_topic_handle)
-      admin_options_ptr =
+      admin_options_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATETOPICS)
+      end
       Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, create_topic_handle.to_ptr)

       begin
-
-
-
-
-
-
-
+        @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_CreateTopics(
+            inner,
+            topics_array_ptr,
+            1,
+            admin_options_ptr,
+            queue_ptr
+          )
+        end
       rescue Exception
         CreateTopicHandle.remove(create_topic_handle.to_ptr.address)
         raise
@@ -118,7 +124,9 @@ module Rdkafka
       topics_array_ptr.write_array_of_pointer(pointer_array)

       # Get a pointer to the queue that our request will be enqueued on
-      queue_ptr =
+      queue_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_queue_get_background(inner)
+      end
       if queue_ptr.null?
         Rdkafka::Bindings.rd_kafka_DeleteTopic_destroy(delete_topic_ptr)
         raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
@@ -129,17 +137,21 @@ module Rdkafka
       delete_topic_handle[:pending] = true
       delete_topic_handle[:response] = -1
       DeleteTopicHandle.register(delete_topic_handle)
-      admin_options_ptr =
+      admin_options_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETETOPICS)
+      end
       Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, delete_topic_handle.to_ptr)

       begin
-
-
-
-
-
-
-
+        @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_DeleteTopics(
+            inner,
+            topics_array_ptr,
+            1,
+            admin_options_ptr,
+            queue_ptr
+          )
+        end
       rescue Exception
         DeleteTopicHandle.remove(delete_topic_handle.to_ptr.address)
         raise
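
The admin changes only swap direct handle access for `with_inner` blocks; the public `create_topic`/`delete_topic` API is unchanged. A rough usage sketch, with an assumed broker address and illustrative topic settings:

    require "rdkafka"

    # Hypothetical broker address; adjust to your environment.
    config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")
    admin  = config.admin

    # create_topic(name, partition_count, replication_factor) returns a handle
    # that resolves once the result arrives on the background queue.
    handle = admin.create_topic("example-topic", 3, 1)
    handle.wait(max_wait_timeout: 15.0)

    admin.delete_topic("example-topic").wait(max_wait_timeout: 15.0)
    admin.close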
    
        data/lib/rdkafka/bindings.rb
    CHANGED
    
@@ -41,10 +41,10 @@ module Rdkafka

   # Metadata

-  attach_function :rd_kafka_memberid, [:pointer], :string
-  attach_function :rd_kafka_clusterid, [:pointer], :string
-  attach_function :rd_kafka_metadata, [:pointer, :int, :pointer, :pointer, :int], :int
-  attach_function :rd_kafka_metadata_destroy, [:pointer], :void
+  attach_function :rd_kafka_memberid, [:pointer], :string, blocking: true
+  attach_function :rd_kafka_clusterid, [:pointer], :string, blocking: true
+  attach_function :rd_kafka_metadata, [:pointer, :int, :pointer, :pointer, :int], :int, blocking: true
+  attach_function :rd_kafka_metadata_destroy, [:pointer], :void, blocking: true

   # Message struct

@@ -170,27 +170,26 @@ module Rdkafka

   attach_function :rd_kafka_new, [:kafka_type, :pointer, :pointer, :int], :pointer

-
-  attach_function :rd_kafka_destroy_flags, [:pointer, :int], :void
+  attach_function :rd_kafka_destroy, [:pointer], :void

   # Consumer

-  attach_function :rd_kafka_subscribe, [:pointer, :pointer], :int
-  attach_function :rd_kafka_unsubscribe, [:pointer], :int
-  attach_function :rd_kafka_subscription, [:pointer, :pointer], :int
-  attach_function :rd_kafka_assign, [:pointer, :pointer], :int
-  attach_function :rd_kafka_incremental_assign, [:pointer, :pointer], :int
-  attach_function :rd_kafka_incremental_unassign, [:pointer, :pointer], :int
-  attach_function :rd_kafka_assignment, [:pointer, :pointer], :int
-  attach_function :rd_kafka_committed, [:pointer, :pointer, :int], :int
+  attach_function :rd_kafka_subscribe, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_unsubscribe, [:pointer], :int, blocking: true
+  attach_function :rd_kafka_subscription, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_assign, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_incremental_assign, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_incremental_unassign, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_assignment, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_committed, [:pointer, :pointer, :int], :int, blocking: true
   attach_function :rd_kafka_commit, [:pointer, :pointer, :bool], :int, blocking: true
-  attach_function :rd_kafka_poll_set_consumer, [:pointer], :void
+  attach_function :rd_kafka_poll_set_consumer, [:pointer], :void, blocking: true
   attach_function :rd_kafka_consumer_poll, [:pointer, :int], :pointer, blocking: true
   attach_function :rd_kafka_consumer_close, [:pointer], :void, blocking: true
-  attach_function :rd_kafka_offset_store, [:pointer, :int32, :int64], :int
-  attach_function :rd_kafka_pause_partitions, [:pointer, :pointer], :int
-  attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int
-  attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int
+  attach_function :rd_kafka_offset_store, [:pointer, :int32, :int64], :int, blocking: true
+  attach_function :rd_kafka_pause_partitions, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int, blocking: true
+  attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int, blocking: true

   # Headers
   attach_function :rd_kafka_header_get_all, [:pointer, :size_t, :pointer, :pointer, SizePtr], :int
@@ -199,7 +198,7 @@ module Rdkafka
   # Rebalance

   callback :rebalance_cb_function, [:pointer, :int, :pointer, :pointer], :void
-  attach_function :rd_kafka_conf_set_rebalance_cb, [:pointer, :rebalance_cb_function], :void
+  attach_function :rd_kafka_conf_set_rebalance_cb, [:pointer, :rebalance_cb_function], :void, blocking: true

   RebalanceCallback = FFI::Function.new(
     :void, [:pointer, :int, :pointer, :pointer]
@@ -223,14 +222,12 @@ module Rdkafka
     return unless opaque

     tpl = Rdkafka::Consumer::TopicPartitionList.from_native_tpl(partitions_ptr).freeze
-    consumer = Rdkafka::Consumer.new(client_ptr)
-
     begin
       case code
       when RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
-        opaque.call_on_partitions_assigned(
+        opaque.call_on_partitions_assigned(tpl)
       when RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS
-        opaque.call_on_partitions_revoked(
+        opaque.call_on_partitions_revoked(tpl)
       end
     rescue Exception => err
       Rdkafka::Config.logger.error("Unhandled exception: #{err.class} - #{err.message}")
@@ -257,7 +254,7 @@ module Rdkafka

   RD_KAFKA_MSG_F_COPY = 0x2

-  attach_function :rd_kafka_producev, [:pointer, :varargs], :int
+  attach_function :rd_kafka_producev, [:pointer, :varargs], :int, blocking: true
   callback :delivery_cb, [:pointer, :pointer, :pointer], :void
   attach_function :rd_kafka_conf_set_dr_msg_cb, [:pointer, :delivery_cb], :void

@@ -284,23 +281,23 @@ module Rdkafka
   RD_KAFKA_ADMIN_OP_CREATETOPICS     = 1   # rd_kafka_admin_op_t
   RD_KAFKA_EVENT_CREATETOPICS_RESULT = 100 # rd_kafka_event_type_t

-  attach_function :rd_kafka_CreateTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :void
-  attach_function :rd_kafka_NewTopic_new, [:pointer, :size_t, :size_t, :pointer, :size_t], :pointer
-  attach_function :rd_kafka_NewTopic_set_config, [:pointer, :string, :string], :int32
-  attach_function :rd_kafka_NewTopic_destroy, [:pointer], :void
-  attach_function :rd_kafka_event_CreateTopics_result, [:pointer], :pointer
-  attach_function :rd_kafka_CreateTopics_result_topics, [:pointer, :pointer], :pointer
+  attach_function :rd_kafka_CreateTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true
+  attach_function :rd_kafka_NewTopic_new, [:pointer, :size_t, :size_t, :pointer, :size_t], :pointer, blocking: true
+  attach_function :rd_kafka_NewTopic_set_config, [:pointer, :string, :string], :int32, blocking: true
+  attach_function :rd_kafka_NewTopic_destroy, [:pointer], :void, blocking: true
+  attach_function :rd_kafka_event_CreateTopics_result, [:pointer], :pointer, blocking: true
+  attach_function :rd_kafka_CreateTopics_result_topics, [:pointer, :pointer], :pointer, blocking: true

   # Delete Topics

   RD_KAFKA_ADMIN_OP_DELETETOPICS     = 2   # rd_kafka_admin_op_t
   RD_KAFKA_EVENT_DELETETOPICS_RESULT = 101 # rd_kafka_event_type_t

-  attach_function :rd_kafka_DeleteTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :int32
-  attach_function :rd_kafka_DeleteTopic_new, [:pointer], :pointer
-  attach_function :rd_kafka_DeleteTopic_destroy, [:pointer], :void
-  attach_function :rd_kafka_event_DeleteTopics_result, [:pointer], :pointer
-  attach_function :rd_kafka_DeleteTopics_result_topics, [:pointer, :pointer], :pointer
+  attach_function :rd_kafka_DeleteTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :int32, blocking: true
+  attach_function :rd_kafka_DeleteTopic_new, [:pointer], :pointer, blocking: true
+  attach_function :rd_kafka_DeleteTopic_destroy, [:pointer], :void, blocking: true
+  attach_function :rd_kafka_event_DeleteTopics_result, [:pointer], :pointer, blocking: true
+  attach_function :rd_kafka_DeleteTopics_result_topics, [:pointer, :pointer], :pointer, blocking: true

   # Background Queue and Callback

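
The bindings change is essentially one flag: most `attach_function` declarations now pass `blocking: true`, which tells Ruby-FFI the native call may block so the GVL is released for its duration and other Ruby threads keep running. A minimal standalone sketch of that FFI option, using libc rather than librdkafka purely for illustration:

    require "ffi"

    # Not part of rdkafka: demonstrates FFI's blocking: true flag in isolation.
    module LibC
      extend FFI::Library
      ffi_lib FFI::Library::LIBC
      # blocking: true releases the GVL while the native sleep() call runs.
      attach_function :sleep, [:uint], :uint, blocking: true
    end

    ticker = Thread.new { 3.times { puts "still responsive"; Kernel.sleep 0.5 } }
    LibC.sleep(2) # blocks in native code without freezing the other thread
    ticker.join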
    
        data/lib/rdkafka/config.rb
    CHANGED
    
@@ -285,18 +285,18 @@ module Rdkafka
       producer.call_delivery_callback(delivery_report, delivery_handle) if producer
     end

-    def call_on_partitions_assigned(
+    def call_on_partitions_assigned(list)
       return unless consumer_rebalance_listener
       return unless consumer_rebalance_listener.respond_to?(:on_partitions_assigned)

-      consumer_rebalance_listener.on_partitions_assigned(
+      consumer_rebalance_listener.on_partitions_assigned(list)
     end

-    def call_on_partitions_revoked(
+    def call_on_partitions_revoked(list)
       return unless consumer_rebalance_listener
       return unless consumer_rebalance_listener.respond_to?(:on_partitions_revoked)

-      consumer_rebalance_listener.on_partitions_revoked(
+      consumer_rebalance_listener.on_partitions_revoked(list)
     end
   end
 end
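
With this change the rebalance hooks receive only the topic partition list; a consumer instance is no longer constructed and passed in. A hedged sketch of a listener under the new signature, assuming the existing `consumer_rebalance_listener` config hook (listener class and broker address are illustrative):

    # Listener methods now take a single argument: the TopicPartitionList.
    class LoggingRebalanceListener
      def on_partitions_assigned(list)
        puts "assigned: #{list.to_h.inspect}"
      end

      def on_partitions_revoked(list)
        puts "revoked: #{list.to_h.inspect}"
      end
    end

    config = Rdkafka::Config.new(
      "bootstrap.servers" => "localhost:9092",
      "group.id"          => "example-group"
    )
    config.consumer_rebalance_listener = LoggingRebalanceListener.new
    consumer = config.consumer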
    
        data/lib/rdkafka/consumer.rb
    CHANGED
    
@@ -27,7 +27,9 @@ module Rdkafka
     def close
       return if closed?
       ObjectSpace.undefine_finalizer(self)
-
+      @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_consumer_close(inner)
+      end
       @native_kafka.close
     end

@@ -54,7 +56,9 @@ module Rdkafka
       end

       # Subscribe to topic partition list and check this was successful
-      response =
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_subscribe(inner, tpl)
+      end
       if response != 0
         raise Rdkafka::RdkafkaError.new(response, "Error subscribing to '#{topics.join(', ')}'")
       end
@@ -70,7 +74,9 @@ module Rdkafka
     def unsubscribe
       closed_consumer_check(__method__)

-      response =
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_unsubscribe(inner)
+      end
       if response != 0
         raise Rdkafka::RdkafkaError.new(response)
       end
@@ -93,7 +99,9 @@ module Rdkafka
       tpl = list.to_native_tpl

       begin
-        response =
+        response = @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_pause_partitions(inner, tpl)
+        end

         if response != 0
           list = TopicPartitionList.from_native_tpl(tpl)
@@ -121,7 +129,9 @@ module Rdkafka
       tpl = list.to_native_tpl

       begin
-        response =
+        response = @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_resume_partitions(inner, tpl)
+        end
         if response != 0
           raise Rdkafka::RdkafkaError.new(response, "Error resume '#{list.to_h}'")
         end
@@ -139,7 +149,9 @@ module Rdkafka
       closed_consumer_check(__method__)

       ptr = FFI::MemoryPointer.new(:pointer)
-      response =
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_subscription(inner, ptr)
+      end

       if response != 0
         raise Rdkafka::RdkafkaError.new(response)
@@ -169,7 +181,9 @@ module Rdkafka
       tpl = list.to_native_tpl

       begin
-        response =
+        response = @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_assign(inner, tpl)
+        end
         if response != 0
           raise Rdkafka::RdkafkaError.new(response, "Error assigning '#{list.to_h}'")
         end
@@ -187,7 +201,9 @@ module Rdkafka
       closed_consumer_check(__method__)

       ptr = FFI::MemoryPointer.new(:pointer)
-      response =
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_assignment(inner, ptr)
+      end
       if response != 0
         raise Rdkafka::RdkafkaError.new(response)
       end
@@ -226,7 +242,9 @@ module Rdkafka
       tpl = list.to_native_tpl

       begin
-        response =
+        response = @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_committed(inner, tpl, timeout_ms)
+        end
         if response != 0
           raise Rdkafka::RdkafkaError.new(response)
         end
@@ -251,14 +269,16 @@ module Rdkafka
       low = FFI::MemoryPointer.new(:int64, 1)
       high = FFI::MemoryPointer.new(:int64, 1)

-      response =
-
-
-
-
-
-
-
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_query_watermark_offsets(
+          inner,
+          topic,
+          partition,
+          low,
+          high,
+          timeout_ms,
+        )
+      end
       if response != 0
         raise Rdkafka::RdkafkaError.new(response, "Error querying watermark offsets for partition #{partition} of #{topic}")
       end
@@ -306,7 +326,9 @@ module Rdkafka
     # @return [String, nil]
     def cluster_id
       closed_consumer_check(__method__)
-
+      @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_clusterid(inner)
+      end
     end

     # Returns this client's broker-assigned group member id
@@ -316,7 +338,9 @@ module Rdkafka
     # @return [String, nil]
     def member_id
       closed_consumer_check(__method__)
-
+      @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_memberid(inner)
+      end
     end

     # Store offset of a message to be used in the next commit of this consumer
@@ -333,11 +357,13 @@ module Rdkafka

       # rd_kafka_offset_store is one of the few calls that does not support
       # a string as the topic, so create a native topic for it.
-      native_topic =
-
-
-
-
+      native_topic = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_topic_new(
+          inner,
+          message.topic,
+          nil
+        )
+      end
       response = Rdkafka::Bindings.rd_kafka_offset_store(
         native_topic,
         message.partition,
@@ -365,11 +391,13 @@ module Rdkafka

       # rd_kafka_offset_store is one of the few calls that does not support
       # a string as the topic, so create a native topic for it.
-      native_topic =
-
-
-
-
+      native_topic = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_topic_new(
+          inner,
+          message.topic,
+          nil
+        )
+      end
       response = Rdkafka::Bindings.rd_kafka_seek(
         native_topic,
         message.partition,
@@ -410,7 +438,9 @@ module Rdkafka
       tpl = list ? list.to_native_tpl : nil

       begin
-        response =
+        response = @native_kafka.with_inner do |inner|
+          Rdkafka::Bindings.rd_kafka_commit(inner, tpl, async)
+        end
         if response != 0
           raise Rdkafka::RdkafkaError.new(response)
         end
@@ -429,7 +459,9 @@ module Rdkafka
     def poll(timeout_ms)
       closed_consumer_check(__method__)

-      message_ptr =
+      message_ptr = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_consumer_poll(inner, timeout_ms)
+      end
       if message_ptr.null?
         nil
       else
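
Every consumer method now routes its librdkafka call through `@native_kafka.with_inner`, but the public API is unchanged. For reference, a typical consume loop still looks like this (broker address, group and topic are made up):

    consumer = Rdkafka::Config.new(
      "bootstrap.servers" => "localhost:9092",
      "group.id"          => "example-group",
      "auto.offset.reset" => "earliest"
    ).consumer

    consumer.subscribe("example-topic")

    begin
      consumer.each do |message|
        puts "#{message.topic}/#{message.partition} @ #{message.offset}: #{message.payload}"
      end
    ensure
      # close now calls rd_kafka_consumer_close under the access lock.
      consumer.close
    end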
    
        data/lib/rdkafka/metadata.rb
    CHANGED
    
@@ -4,7 +4,7 @@ module Rdkafka
   class Metadata
     attr_reader :brokers, :topics

-    def initialize(native_client, topic_name = nil)
+    def initialize(native_client, topic_name = nil, timeout_ms = 250)
       native_topic = if topic_name
         Rdkafka::Bindings.rd_kafka_topic_new(native_client, topic_name, nil)
       end
@@ -16,7 +16,7 @@ module Rdkafka
       topic_flag = topic_name.nil? ? 1 : 0

       # Retrieve the Metadata
-      result = Rdkafka::Bindings.rd_kafka_metadata(native_client, topic_flag, native_topic, ptr,
+      result = Rdkafka::Bindings.rd_kafka_metadata(native_client, topic_flag, native_topic, ptr, timeout_ms)

       # Error Handling
       raise Rdkafka::RdkafkaError.new(result) unless result.zero?
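
`Rdkafka::Metadata.new` gains a third positional argument, `timeout_ms`, defaulting to the previously hard-coded 250 ms. A minimal sketch of the new signature; `native_client` is a placeholder for an rd_kafka_t pointer (Metadata is normally built internally, e.g. by `Producer#partition_count`), and the topic name is illustrative:

    # Ask for metadata on one topic with a 2 s timeout instead of 250 ms.
    metadata = Rdkafka::Metadata.new(native_client, "example-topic", 2_000)
    puts metadata.topics&.first[:partition_count]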
    
        data/lib/rdkafka/native_kafka.rb
    CHANGED
    
@@ -6,19 +6,27 @@ module Rdkafka
   class NativeKafka
     def initialize(inner, run_polling_thread:)
       @inner = inner
+      # Lock around external access
+      @access_mutex = Mutex.new
+      # Lock around internal polling
+      @poll_mutex = Mutex.new

       if run_polling_thread
         # Start thread to poll client for delivery callbacks,
         # not used in consumer.
         @polling_thread = Thread.new do
           loop do
-
+            @poll_mutex.synchronize do
+              Rdkafka::Bindings.rd_kafka_poll(inner, 100)
+            end
+
             # Exit thread if closing and the poll queue is empty
             if Thread.current[:closing] && Rdkafka::Bindings.rd_kafka_outq_len(inner) == 0
               break
             end
           end
         end
+
         @polling_thread.abort_on_exception = true
         @polling_thread[:closing] = false
       end
@@ -26,8 +34,12 @@ module Rdkafka
       @closing = false
     end

-    def
-      @inner
+    def with_inner
+      return if @inner.nil?
+
+      @access_mutex.synchronize do
+        yield @inner
+      end
     end

     def finalizer
@@ -41,21 +53,30 @@ module Rdkafka
     def close(object_id=nil)
       return if closed?

+      @access_mutex.lock
+
       # Indicate to the outside world that we are closing
       @closing = true

       if @polling_thread
         # Indicate to polling thread that we're closing
         @polling_thread[:closing] = true
-
+
+        # Wait for the polling thread to finish up,
+        # this can be aborted in practice if this
+        # code runs from a finalizer.
         @polling_thread.join
       end

-      # Destroy the client
-
-
-
-
+      # Destroy the client after locking both mutexes
+      @poll_mutex.lock
+
+      # This check prevents a race condition, where we would enter the close in two threads
+      # and after unlocking the primary one that hold the lock but finished, ours would be unlocked
+      # and would continue to run, trying to destroy inner twice
+      return unless @inner
+
+      Rdkafka::Bindings.rd_kafka_destroy(@inner)
       @inner = nil
     end
   end
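
The `NativeKafka` change is the core of this release: external access to the librdkafka handle is serialized through `with_inner` on `@access_mutex`, the polling thread runs under `@poll_mutex`, and `close` takes both locks before destroying the handle. A stripped-down sketch of the same locking pattern, not the gem's actual class, with placeholder methods standing in for the librdkafka calls:

    class GuardedHandle
      def initialize(inner)
        @inner        = inner
        @access_mutex = Mutex.new
        @poll_mutex   = Mutex.new
        @poller = Thread.new do
          loop do
            @poll_mutex.synchronize { poke(@inner) } # stands in for rd_kafka_poll
            break if Thread.current[:closing]
          end
        end
        @poller[:closing] = false
      end

      # All callers funnel through here, so no one touches the handle
      # concurrently with close.
      def with_inner
        return if @inner.nil?
        @access_mutex.synchronize { yield @inner }
      end

      def close
        @access_mutex.lock           # block new external callers
        @poller[:closing] = true
        @poller.join                 # let the poll loop drain and exit
        @poll_mutex.lock             # make sure no poll is in flight
        return unless @inner         # guard against a double close
        release(@inner)              # stands in for rd_kafka_destroy
        @inner = nil
      end

      private

      def poke(handle); end    # placeholder for the native poll call
      def release(handle); end # placeholder for the native destroy call
    end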
    
        data/lib/rdkafka/producer.rb
    CHANGED
    
@@ -52,9 +52,14 @@ module Rdkafka

     # Wait until all outstanding producer requests are completed, with the given timeout
     # in seconds. Call this before closing a producer to ensure delivery of all messages.
+    #
+    # @param timeout_ms [Integer] how long should we wait for flush of all messages
     def flush(timeout_ms=5_000)
       closed_producer_check(__method__)
-
+
+      @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_flush(inner, timeout_ms)
+      end
     end

     # Partition count for a given topic.
@@ -65,7 +70,9 @@ module Rdkafka
     # @return partition count [Integer,nil]
     def partition_count(topic)
       closed_producer_check(__method__)
-
+      @native_kafka.with_inner do |inner|
+        Rdkafka::Metadata.new(inner, topic).topics&.first[:partition_count]
+      end
     end

     # Produces a message to a Kafka topic. The message is added to rdkafka's queue, call {DeliveryHandle#wait wait} on the returned delivery handle to make sure it is delivered.
@@ -156,10 +163,12 @@ module Rdkafka
       args << :int << Rdkafka::Bindings::RD_KAFKA_VTYPE_END

       # Produce the message
-      response =
-
-
-
+      response = @native_kafka.with_inner do |inner|
+        Rdkafka::Bindings.rd_kafka_producev(
+          inner,
+          *args
+        )
+      end

       # Raise error if the produce call was not successful
       if response != 0
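
`Producer#flush` now forwards to `rd_kafka_flush` with the given `timeout_ms`, and `produce` goes through `with_inner` like everything else. A short produce-and-flush sketch with an assumed broker address and made-up topic and payload:

    producer = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").producer

    handle = producer.produce(topic: "example-topic", payload: "hello", key: "k1")
    handle.wait(max_wait_timeout: 10)

    # flush waits (up to timeout_ms) for anything still sitting in librdkafka's
    # internal queue before the producer is closed.
    producer.flush(10_000)
    producer.close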
    
        data/lib/rdkafka/version.rb
    CHANGED
    
    
    
        data/spec/rdkafka/config_spec.rb
    CHANGED
    
    | @@ -151,22 +151,23 @@ describe Rdkafka::Config do | |
| 151 151 | 
             
                end
         | 
| 152 152 |  | 
| 153 153 | 
             
                it "allows string partitioner key" do
         | 
| 154 | 
            -
                  expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2")
         | 
| 154 | 
            +
                  expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2").and_call_original
         | 
| 155 155 | 
             
                  config = Rdkafka::Config.new("partitioner" => "murmur2")
         | 
| 156 | 
            -
                  config.producer
         | 
| 156 | 
            +
                  config.producer.close
         | 
| 157 157 | 
             
                end
         | 
| 158 158 |  | 
| 159 159 | 
             
                it "allows symbol partitioner key" do
         | 
| 160 | 
            -
                  expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2")
         | 
| 160 | 
            +
                  expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2").and_call_original
         | 
| 161 161 | 
             
                  config = Rdkafka::Config.new(:partitioner => "murmur2")
         | 
| 162 | 
            -
                  config.producer
         | 
| 162 | 
            +
                  config.producer.close
         | 
| 163 163 | 
             
                end
         | 
| 164 164 |  | 
| 165 165 | 
             
                it "should allow configuring zstd compression" do
         | 
| 166 166 | 
             
                  config = Rdkafka::Config.new('compression.codec' => 'zstd')
         | 
| 167 167 | 
             
                  begin
         | 
| 168 | 
            -
                     | 
| 169 | 
            -
                     | 
| 168 | 
            +
                    producer = config.producer
         | 
| 169 | 
            +
                    expect(producer).to be_a Rdkafka::Producer
         | 
| 170 | 
            +
                    producer.close
         | 
| 170 171 | 
             
                  rescue Rdkafka::Config::ConfigError => ex
         | 
| 171 172 | 
             
                    pending "Zstd compression not supported on this machine"
         | 
| 172 173 | 
             
                    raise ex
         | 
| @@ -28,7 +28,7 @@ describe Rdkafka::Consumer::Message do | |
| 28 28 | 
             
              end
         | 
| 29 29 |  | 
| 30 30 | 
             
              after(:each) do
         | 
| 31 | 
            -
                Rdkafka::Bindings. | 
| 31 | 
            +
                Rdkafka::Bindings.rd_kafka_destroy(native_client)
         | 
| 32 32 | 
             
              end
         | 
| 33 33 |  | 
| 34 34 | 
             
              subject { Rdkafka::Consumer::Message.new(native_message) }
         | 
| @@ -55,7 +55,7 @@ describe Rdkafka::Consumer do | |
| 55 55 |  | 
| 56 56 | 
             
              describe "#pause and #resume" do
         | 
| 57 57 | 
             
                context "subscription" do
         | 
| 58 | 
            -
                  let(:timeout) {  | 
| 58 | 
            +
                  let(:timeout) { 2000 }
         | 
| 59 59 |  | 
| 60 60 | 
             
                  before { consumer.subscribe("consume_test_topic") }
         | 
| 61 61 | 
             
                  after { consumer.unsubscribe }
         | 
| @@ -738,7 +738,8 @@ describe Rdkafka::Consumer do | |
| 738 738 | 
             
                  #
         | 
| 739 739 | 
             
                  # This is, in effect, an integration test and the subsequent specs are
         | 
| 740 740 | 
             
                  # unit tests.
         | 
| 741 | 
            -
                   | 
| 741 | 
            +
                  admin = rdkafka_config.admin
         | 
| 742 | 
            +
                  create_topic_handle = admin.create_topic(topic_name, 1, 1)
         | 
| 742 743 | 
             
                  create_topic_handle.wait(max_wait_timeout: 15.0)
         | 
| 743 744 | 
             
                  consumer.subscribe(topic_name)
         | 
| 744 745 | 
             
                  produce_n 42
         | 
| @@ -751,6 +752,7 @@ describe Rdkafka::Consumer do | |
| 751 752 | 
             
                  expect(all_yields.flatten.size).to eq 42
         | 
| 752 753 | 
             
                  expect(all_yields.size).to be > 4
         | 
| 753 754 | 
             
                  expect(all_yields.flatten.map(&:key)).to eq (0..41).map { |x| x.to_s }
         | 
| 755 | 
            +
                  admin.close
         | 
| 754 756 | 
             
                end
         | 
| 755 757 |  | 
| 756 758 | 
             
                it "should batch poll results and yield arrays of messages" do
         | 
| @@ -793,13 +795,15 @@ describe Rdkafka::Consumer do | |
| 793 795 | 
             
                end
         | 
| 794 796 |  | 
| 795 797 | 
             
                it "should yield [] if nothing is received before the timeout" do
         | 
| 796 | 
            -
                   | 
| 798 | 
            +
                  admin = rdkafka_config.admin
         | 
| 799 | 
            +
                  create_topic_handle = admin.create_topic(topic_name, 1, 1)
         | 
| 797 800 | 
             
                  create_topic_handle.wait(max_wait_timeout: 15.0)
         | 
| 798 801 | 
             
                  consumer.subscribe(topic_name)
         | 
| 799 802 | 
             
                  consumer.each_batch do |batch|
         | 
| 800 803 | 
             
                    expect(batch).to eq([])
         | 
| 801 804 | 
             
                    break
         | 
| 802 805 | 
             
                  end
         | 
| 806 | 
            +
                  admin.close
         | 
| 803 807 | 
             
                end
         | 
| 804 808 |  | 
| 805 809 | 
             
                it "should yield batchs of max_items in size if messages are already fetched" do
         | 
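Both hunks above keep a reference to the admin client used for topic creation and close it afterwards. A sketch of that setup pattern, reusing the specs' rdkafka_config helper; the topic name is a placeholder:

```ruby
# Sketch only: "example_topic" is illustrative; 1 partition and replication
# factor 1, as in the specs above.
admin = rdkafka_config.admin
begin
  create_topic_handle = admin.create_topic("example_topic", 1, 1)
  create_topic_handle.wait(max_wait_timeout: 15.0)
ensure
  admin.close # release the native admin client once the topic exists
end
```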
| @@ -876,6 +880,7 @@ describe Rdkafka::Consumer do | |
| 876 880 | 
             
                    expect(batches_yielded.first.size).to eq 2
         | 
| 877 881 | 
             
                    expect(exceptions_yielded.flatten.size).to eq 1
         | 
| 878 882 | 
             
                    expect(exceptions_yielded.flatten.first).to be_instance_of(Rdkafka::RdkafkaError)
         | 
| 883 | 
            +
                    consumer.close
         | 
| 879 884 | 
             
                  end
         | 
| 880 885 | 
             
                end
         | 
| 881 886 |  | 
| @@ -917,6 +922,7 @@ describe Rdkafka::Consumer do | |
| 917 922 | 
             
                    expect(each_batch_iterations).to eq 0
         | 
| 918 923 | 
             
                    expect(batches_yielded.size).to eq 0
         | 
| 919 924 | 
             
                    expect(exceptions_yielded.size).to eq 0
         | 
| 925 | 
            +
                    consumer.close
         | 
| 920 926 | 
             
                  end
         | 
| 921 927 | 
             
                end
         | 
| 922 928 | 
             
              end
         | 
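The two each_batch error-handling hunks above add an explicit consumer.close at the end of the example. The same cleanup pattern in isolation, with placeholder connection settings:

```ruby
# Sketch only: broker address, group id and topic are placeholders.
consumer = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "group.id"          => "example-group"
).consumer

begin
  consumer.subscribe("example_topic")
  message = consumer.poll(250) # nil if nothing arrives within 250 ms
  puts message.payload if message
ensure
  consumer.close # release the native consumer handle between examples
end
```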
| @@ -931,11 +937,11 @@ describe Rdkafka::Consumer do | |
| 931 937 | 
             
                context "with a working listener" do
         | 
| 932 938 | 
             
                  let(:listener) do
         | 
| 933 939 | 
             
                    Struct.new(:queue) do
         | 
| 934 | 
            -
                      def on_partitions_assigned( | 
| 940 | 
            +
                      def on_partitions_assigned(list)
         | 
| 935 941 | 
             
                        collect(:assign, list)
         | 
| 936 942 | 
             
                      end
         | 
| 937 943 |  | 
| 938 | 
            -
                      def on_partitions_revoked( | 
| 944 | 
            +
                      def on_partitions_revoked(list)
         | 
| 939 945 | 
             
                        collect(:revoke, list)
         | 
| 940 946 | 
             
                      end
         | 
| 941 947 |  | 
| @@ -959,12 +965,12 @@ describe Rdkafka::Consumer do | |
| 959 965 | 
             
                context "with a broken listener" do
         | 
| 960 966 | 
             
                  let(:listener) do
         | 
| 961 967 | 
             
                    Struct.new(:queue) do
         | 
| 962 | 
            -
                      def on_partitions_assigned( | 
| 968 | 
            +
                      def on_partitions_assigned(list)
         | 
| 963 969 | 
             
                        queue << :assigned
         | 
| 964 970 | 
             
                        raise 'boom'
         | 
| 965 971 | 
             
                      end
         | 
| 966 972 |  | 
| 967 | 
            -
                      def on_partitions_revoked( | 
| 973 | 
            +
                      def on_partitions_revoked(list)
         | 
| 968 974 | 
             
                        queue << :revoked
         | 
| 969 975 | 
             
                        raise 'boom'
         | 
| 970 976 | 
             
                      end
         | 
| @@ -1031,11 +1037,11 @@ describe Rdkafka::Consumer do | |
| 1031 1037 |  | 
| 1032 1038 | 
             
                let(:listener) do
         | 
| 1033 1039 | 
             
                  Struct.new(:queue) do
         | 
| 1034 | 
            -
                    def on_partitions_assigned( | 
| 1040 | 
            +
                    def on_partitions_assigned(list)
         | 
| 1035 1041 | 
             
                      collect(:assign, list)
         | 
| 1036 1042 | 
             
                    end
         | 
| 1037 1043 |  | 
| 1038 | 
            -
                    def on_partitions_revoked( | 
| 1044 | 
            +
                    def on_partitions_revoked(list)
         | 
| 1039 1045 | 
             
                      collect(:revoke, list)
         | 
| 1040 1046 | 
             
                    end
         | 
| 1041 1047 |  | 
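The listener structs above now implement both rebalance callbacks with a single topic partition list argument. A sketch of a standalone listener with that arity; the class name and connection settings are illustrative:

```ruby
# Sketch only: the callbacks receive an Rdkafka::Consumer::TopicPartitionList.
class LoggingRebalanceListener
  def on_partitions_assigned(list)
    puts "assigned: #{list.to_h.keys.join(', ')}"
  end

  def on_partitions_revoked(list)
    puts "revoked: #{list.to_h.keys.join(', ')}"
  end
end

config = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "group.id"          => "example-group"
)
config.consumer_rebalance_listener = LoggingRebalanceListener.new
consumer = config.consumer
```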
data/spec/rdkafka/metadata_spec.rb
CHANGED

| @@ -10,7 +10,7 @@ describe Rdkafka::Metadata do | |
| 10 10 |  | 
| 11 11 | 
             
              after do
         | 
| 12 12 | 
             
                Rdkafka::Bindings.rd_kafka_consumer_close(native_kafka)
         | 
| 13 | 
            -
                Rdkafka::Bindings. | 
| 13 | 
            +
                Rdkafka::Bindings.rd_kafka_destroy(native_kafka)
         | 
| 14 14 | 
             
              end
         | 
| 15 15 |  | 
| 16 16 | 
             
              context "passing in a topic name" do
         | 
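The metadata spec teardown above closes the consumer side of the raw handle before destroying it. The same two-step order in isolation, assuming native_kafka is a handle created earlier through the bindings:

```ruby
# Sketch only: native_kafka stands for a raw librdkafka handle built elsewhere in the spec.
Rdkafka::Bindings.rd_kafka_consumer_close(native_kafka) # stop the consumer side first
Rdkafka::Bindings.rd_kafka_destroy(native_kafka)        # then free the handle
```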
data/spec/rdkafka/native_kafka_spec.rb
CHANGED

| @@ -11,9 +11,6 @@ describe Rdkafka::NativeKafka do | |
| 11 11 | 
             
              subject(:client) { described_class.new(native, run_polling_thread: true) }
         | 
| 12 12 |  | 
| 13 13 | 
             
              before do
         | 
| 14 | 
            -
                allow(Rdkafka::Bindings).to receive(:rd_kafka_poll).with(instance_of(FFI::Pointer), 250).and_call_original
         | 
| 15 | 
            -
                allow(Rdkafka::Bindings).to receive(:rd_kafka_outq_len).with(instance_of(FFI::Pointer)).and_return(0).and_call_original
         | 
| 16 | 
            -
                allow(Rdkafka::Bindings).to receive(:rd_kafka_destroy_flags)
         | 
| 17 14 | 
             
                allow(Thread).to receive(:new).and_return(thread)
         | 
| 18 15 |  | 
| 19 16 | 
             
                allow(thread).to receive(:[]=).with(:closing, anything)
         | 
| @@ -21,6 +18,8 @@ describe Rdkafka::NativeKafka do | |
| 21 18 | 
             
                allow(thread).to receive(:abort_on_exception=).with(anything)
         | 
| 22 19 | 
             
              end
         | 
| 23 20 |  | 
| 21 | 
            +
              after { client.close }
         | 
| 22 | 
            +
             | 
| 24 23 | 
             
              context "defaults" do
         | 
| 25 24 | 
             
                it "sets the thread to abort on exception" do
         | 
| 26 25 | 
             
                  expect(thread).to receive(:abort_on_exception=).with(true)
         | 
| @@ -41,42 +40,12 @@ describe Rdkafka::NativeKafka do | |
| 41 40 |  | 
| 42 41 | 
             
                  client
         | 
| 43 42 | 
             
                end
         | 
| 44 | 
            -
             | 
| 45 | 
            -
                it "polls the native with default 250ms timeout" do
         | 
| 46 | 
            -
                  polling_loop_expects do
         | 
| 47 | 
            -
                    expect(Rdkafka::Bindings).to receive(:rd_kafka_poll).with(instance_of(FFI::Pointer), 250).at_least(:once)
         | 
| 48 | 
            -
                  end
         | 
| 49 | 
            -
                end
         | 
| 50 | 
            -
             | 
| 51 | 
            -
                it "check the out queue of native client" do
         | 
| 52 | 
            -
                  polling_loop_expects do
         | 
| 53 | 
            -
                    expect(Rdkafka::Bindings).to receive(:rd_kafka_outq_len).with(native).at_least(:once)
         | 
| 54 | 
            -
                  end
         | 
| 55 | 
            -
                end
         | 
| 56 | 
            -
             | 
| 57 | 
            -
                context "if not enabled" do
         | 
| 58 | 
            -
                  subject(:client) { described_class.new(native, run_polling_thread: false) }
         | 
| 59 | 
            -
             | 
| 60 | 
            -
                  it "is not created" do
         | 
| 61 | 
            -
                    expect(Thread).not_to receive(:new)
         | 
| 62 | 
            -
             | 
| 63 | 
            -
                    client
         | 
| 64 | 
            -
                  end
         | 
| 65 | 
            -
                end
         | 
| 66 | 
            -
              end
         | 
| 67 | 
            -
             | 
| 68 | 
            -
              def polling_loop_expects(&block)
         | 
| 69 | 
            -
                Thread.current[:closing] = true # this forces the loop break with line #12
         | 
| 70 | 
            -
             | 
| 71 | 
            -
                allow(Thread).to receive(:new).and_yield do |_|
         | 
| 72 | 
            -
                  block.call
         | 
| 73 | 
            -
                end.and_return(thread)
         | 
| 74 | 
            -
             | 
| 75 | 
            -
                client
         | 
| 76 43 | 
             
              end
         | 
| 77 44 |  | 
| 78 | 
            -
              it "exposes inner client" do
         | 
| 79 | 
            -
                 | 
| 45 | 
            +
              it "exposes the inner client" do
         | 
| 46 | 
            +
                client.with_inner do |inner|
         | 
| 47 | 
            +
                  expect(inner).to eq(native)
         | 
| 48 | 
            +
                end
         | 
| 80 49 | 
             
              end
         | 
| 81 50 |  | 
| 82 51 | 
             
              context "when client was not yet closed (`nil`)" do
         | 
| @@ -86,7 +55,7 @@ describe Rdkafka::NativeKafka do | |
| 86 55 |  | 
| 87 56 | 
             
                context "and attempt to close" do
         | 
| 88 57 | 
             
                  it "calls the `destroy` binding" do
         | 
| 89 | 
            -
                    expect(Rdkafka::Bindings).to receive(: | 
| 58 | 
            +
                    expect(Rdkafka::Bindings).to receive(:rd_kafka_destroy).with(native).and_call_original
         | 
| 90 59 |  | 
| 91 60 | 
             
                    client.close
         | 
| 92 61 | 
             
                  end
         | 
| @@ -106,7 +75,6 @@ describe Rdkafka::NativeKafka do | |
| 106 75 | 
             
                  it "closes and unassign the native client" do
         | 
| 107 76 | 
             
                    client.close
         | 
| 108 77 |  | 
| 109 | 
            -
                    expect(client.inner).to eq(nil)
         | 
| 110 78 | 
             
                    expect(client.closed?).to eq(true)
         | 
| 111 79 | 
             
                  end
         | 
| 112 80 | 
             
                end
         | 
| @@ -141,7 +109,6 @@ describe Rdkafka::NativeKafka do | |
| 141 109 | 
             
                  it "does not close and unassign the native client again" do
         | 
| 142 110 | 
             
                    client.close
         | 
| 143 111 |  | 
| 144 | 
            -
                    expect(client.inner).to eq(nil)
         | 
| 145 112 | 
             
                    expect(client.closed?).to eq(true)
         | 
| 146 113 | 
             
                  end
         | 
| 147 114 | 
             
                end
         | 
    
        data/spec/spec_helper.rb
    CHANGED
    
    | @@ -73,7 +73,7 @@ def new_native_topic(topic_name="topic_name", native_client: ) | |
| 73 73 | 
             
            end
         | 
| 74 74 |  | 
| 75 75 | 
             
            def wait_for_message(topic:, delivery_report:, timeout_in_seconds: 30, consumer: nil)
         | 
| 76 | 
            -
              new_consumer =  | 
| 76 | 
            +
              new_consumer = consumer.nil?
         | 
| 77 77 | 
             
              consumer ||= rdkafka_consumer_config.consumer
         | 
| 78 78 | 
             
              consumer.subscribe(topic)
         | 
| 79 79 | 
             
              timeout = Time.now.to_i + timeout_in_seconds
         | 
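The spec_helper change above records whether the helper created the consumer itself (consumer.nil?) before falling back to a fresh one. A simplified stand-in showing the intent, under the assumption that only a consumer created inside the helper is closed at the end; the method name is illustrative:

```ruby
# Sketch only: not a quote of the real helper.
def wait_for_any_message(topic:, timeout_in_seconds: 30, consumer: nil)
  new_consumer = consumer.nil?
  consumer ||= rdkafka_consumer_config.consumer
  consumer.subscribe(topic)
  deadline = Time.now.to_i + timeout_in_seconds

  loop do
    message = consumer.poll(100)
    return message if message
    raise "no message on #{topic} within #{timeout_in_seconds}s" if Time.now.to_i > deadline
  end
ensure
  consumer.close if new_consumer # only close what this helper created
end
```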
    
        metadata
    CHANGED
    
    | @@ -1,14 +1,14 @@ | |
| 1 1 | 
             
            --- !ruby/object:Gem::Specification
         | 
| 2 2 | 
             
            name: rdkafka
         | 
| 3 3 | 
             
            version: !ruby/object:Gem::Version
         | 
| 4 | 
            -
              version: 0.13.0.beta. | 
| 4 | 
            +
              version: 0.13.0.beta.6
         | 
| 5 5 | 
             
            platform: ruby
         | 
| 6 6 | 
             
            authors:
         | 
| 7 7 | 
             
            - Thijs Cadier
         | 
| 8 8 | 
             
            autorequire:
         | 
| 9 9 | 
             
            bindir: bin
         | 
| 10 10 | 
             
            cert_chain: []
         | 
| 11 | 
            -
            date:  | 
| 11 | 
            +
            date: 2023-04-28 00:00:00.000000000 Z
         | 
| 12 12 | 
             
            dependencies:
         | 
| 13 13 | 
             
            - !ruby/object:Gem::Dependency
         | 
| 14 14 | 
             
              name: ffi
         | 
| @@ -139,8 +139,7 @@ dependencies: | |
| 139 139 | 
             
            description: Modern Kafka client library for Ruby based on librdkafka
         | 
| 140 140 | 
             
            email:
         | 
| 141 141 | 
             
            - thijs@appsignal.com
         | 
| 142 | 
            -
            executables:
         | 
| 143 | 
            -
            - console
         | 
| 142 | 
            +
            executables: []
         | 
| 144 143 | 
             
            extensions:
         | 
| 145 144 | 
             
            - ext/Rakefile
         | 
| 146 145 | 
             
            extra_rdoc_files: []
         | 
| @@ -155,7 +154,6 @@ files: | |
| 155 154 | 
             
            - LICENSE
         | 
| 156 155 | 
             
            - README.md
         | 
| 157 156 | 
             
            - Rakefile
         | 
| 158 | 
            -
            - bin/console
         | 
| 159 157 | 
             
            - docker-compose.yml
         | 
| 160 158 | 
             
            - ext/README.md
         | 
| 161 159 | 
             
            - ext/Rakefile
         |