ruby-kafka 0.7.1 → 0.7.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 07f5c2801a1dda787213437b1b27aba5bd23b25e53c64687b5910ee0e1ccc221
-  data.tar.gz: 4f277fc15146a78d2374da0dff2e8201bf521012a07b160fc8f0630164656509
+  metadata.gz: cae08d9f12b1f6758def92d1d02ab046993bb11eed7b985b91be9bd5eee57b51
+  data.tar.gz: 89564d3428e93b5d377d995a8400a27fe07d40e09bb860371c78cca550549d7a
 SHA512:
-  metadata.gz: 58eb8f78f5d2cea9793416ab9e01d74f8c7737e7d7a5174a13a58278fa6b5849a2ee3b88155c97f37915664d0c9f393738ae7b8e06c2b78d48543188514a9cf8
-  data.tar.gz: 714ce12cca6bc8dd34c20835cdbc89257cb0f5a1cb020d1312e66adfc3d0ec91e1cec3207af0d708cdc936afb803d13576b2fe565d73495e45fd7cc89fc90f1f
+  metadata.gz: b602a8d11ceb4a489a091b7c316493a58e3c009d1f02d3b9008e882408bd4ff9c4f5ce51f4f50aec6c9478eab782b617268b206fee76e50ead20eff260bfa6a2
+  data.tar.gz: ddaae218d4d56885b610e89cc9aa9cd70d12ea196e888c5b50a2ff7be7d53e35932403b81c99e131cbaecc9d56340cd607b381d09ed1a25db324440a35e2ddb5
@@ -4,6 +4,13 @@ Changes and additions to the library will be listed here.
 
 ## Unreleased
 
+## 0.7.2
+
+- Handle the case where a paused partition no longer belongs to the group on resume (#656).
+- Fix the compatibility version in the documentation (#651).
+- Fix message set backward compatibility (#648).
+- Refresh metadata on connection errors when listing topics (#644).
+
 ## 0.7.1
 
 - Compatibility with dogstatsd-ruby v4.0.0.
data/README.md CHANGED
@@ -90,7 +90,7 @@ Or install it yourself as:
   </tr>
   <tr>
     <th>Kafka 0.11</th>
-    <td>Limited support</td>
+    <td>Full support in v0.7.x</td>
     <td>Limited support</td>
   </tr>
   <tr>
@@ -105,8 +105,8 @@ This library is targeting Kafka 0.9 with the v0.4.x series and Kafka 0.10 with t
 - **Kafka 0.8:** Full support for the Producer API in ruby-kafka v0.4.x, but no support for consumer groups. Simple message fetching works.
 - **Kafka 0.9:** Full support for the Producer and Consumer API in ruby-kafka v0.4.x.
 - **Kafka 0.10:** Full support for the Producer and Consumer API in ruby-kafka v0.5.x. Note that you _must_ run version 0.10.1 or higher of Kafka due to limitations in 0.10.0.
-- **Kafka 0.11:** Everything that works with Kafka 0.10 should still work, but so far no features specific to Kafka 0.11 have been added.
-- **Kafka 0.11:** Everything that works with Kafka 0.10 should still work, but so far no features specific to Kafka 1.0 have been added.
+- **Kafka 0.11:** Full support for the Producer API and limited support for the Consumer API in ruby-kafka v0.7.x. New features in 0.11 include the new Record Batch format and idempotent and transactional production. The missing feature is dirty reading with the Consumer API.
+- **Kafka 1.0:** Everything that works with Kafka 0.11 should still work, but so far no features specific to Kafka 1.0 have been added.
 
 This library requires Ruby 2.1 or higher.
 
@@ -749,7 +749,7 @@ All notifications have `group_id` in the payload, referring to the Kafka consume
   * `key` is the message key.
   * `topic` is the topic that the message was consumed from.
   * `partition` is the topic partition that the message was consumed from.
-  * `offset` is the message's offset within the topic partition.
+  * `offset` is the message's offset within the topic partition.
   * `offset_lag` is the number of messages within the topic partition that have not yet been consumed.
 
 * `start_process_message.consumer.kafka` is sent before `process_message.consumer.kafka`, and contains the same payload. It is delivered _before_ the message is processed, rather than _after_.
@@ -758,7 +758,7 @@ All notifications have `group_id` in the payload, referring to the Kafka consume
   * `message_count` is the number of messages in the batch.
   * `topic` is the topic that the message batch was consumed from.
   * `partition` is the topic partition that the message batch was consumed from.
-  * `highwater_mark_offset` is the message batch's highest offset within the topic partition.
+  * `highwater_mark_offset` is the message batch's highest offset within the topic partition.
   * `offset_lag` is the number of messages within the topic partition that have not yet been consumed.
 
 * `start_process_batch.consumer.kafka` is sent before `process_batch.consumer.kafka`, and contains the same payload. It is delivered _before_ the batch is processed, rather than _after_.
@@ -767,21 +767,21 @@ All notifications have `group_id` in the payload, referring to the Kafka consume
   * `group_id` is the consumer group id.
 
 * `sync_group.consumer.kafka` is sent whenever a consumer is assigned topic partitions within a consumer group. It includes the following payload:
-  * `group_id` is the consumer group id.
-
+  * `group_id` is the consumer group id.
+
 * `leave_group.consumer.kafka` is sent whenever a consumer leaves a consumer group. It includes the following payload:
   * `group_id` is the consumer group id.
-
+
 * `seek.consumer.kafka` is sent when a consumer first seeks to an offset. It includes the following payload:
   * `group_id` is the consumer group id.
   * `topic` is the topic we are seeking in.
   * `partition` is the partition we are seeking in.
   * `offset` is the offset we have seeked to.
-
+
 * `heartbeat.consumer.kafka` is sent when a consumer group completes a heartbeat. It includes the following payload:
   * `group_id` is the consumer group id.
   * `topic_partitions` is a hash of { topic_name => array of assigned partition IDs }
-
+
 #### Connection Notifications
 
 * `request.connection.kafka` is sent whenever a network request is sent to a Kafka broker. It includes the following payload:
@@ -908,7 +908,7 @@ can use:
 kafka = Kafka.new(["kafka1:9092"], ssl_ca_certs_from_system: true)
 ```
 
-This configures the store to look up CA certificates from the system default certificate store on an as-needed basis. The location of the store can usually be determined by:
+This configures the store to look up CA certificates from the system default certificate store on an as-needed basis. The location of the store can usually be determined by:
 `OpenSSL::X509::DEFAULT_CERT_FILE`
 
 ##### Client Authentication
@@ -963,7 +963,7 @@ kafka = Kafka.new(
 ```
 
 ##### SCRAM
-Since 0.11, Kafka supports [SCRAM](https://kafka.apache.org/documentation.html#security_sasl_scram).
+Since 0.11, Kafka supports [SCRAM](https://kafka.apache.org/documentation.html#security_sasl_scram).
 
 ```ruby
 kafka = Kafka.new(
@@ -618,7 +618,15 @@ module Kafka
   #
   # @return [Array<String>] the list of topic names.
   def topics
-    @cluster.list_topics
+    attempts = 0
+    begin
+      attempts += 1
+      @cluster.list_topics
+    rescue Kafka::ConnectionError
+      @cluster.mark_as_stale!
+      retry unless attempts > 1
+      raise
+    end
   end
 
   # Lists all consumer groups in the cluster
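The change above retries `list_topics` exactly once: on a `Kafka::ConnectionError` it marks the cached cluster metadata stale (so the retry re-discovers brokers) and re-raises if the retry also fails. The same retry-once pattern in isolation — `FlakyCluster` and `IOError` below are hypothetical stand-ins for the cluster object and `Kafka::ConnectionError`:

```ruby
# Hypothetical stand-in for Kafka::Cluster: the first list_topics call
# fails, and the retry succeeds after the metadata is marked stale.
class FlakyCluster
  attr_reader :stale

  def initialize
    @calls = 0
    @stale = false
  end

  def mark_as_stale!
    @stale = true
  end

  def list_topics
    @calls += 1
    raise IOError, "connection reset" if @calls == 1 # stands in for Kafka::ConnectionError
    ["greetings", "page-views"]
  end
end

def list_topics_with_retry(cluster)
  attempts = 0
  begin
    attempts += 1
    cluster.list_topics
  rescue IOError
    cluster.mark_as_stale!    # invalidate cached metadata before retrying
    retry unless attempts > 1 # retry exactly once
    raise                     # a second failure propagates to the caller
  end
end

puts list_topics_with_retry(FlakyCluster.new).inspect
```

Capping the retry at one attempt keeps a dead cluster from looping forever while still absorbing a single transient connection error.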
@@ -397,6 +397,7 @@ module Kafka
   end
 
   def random_broker
+    refresh_metadata_if_necessary!
     node_id = cluster_info.brokers.sample.node_id
     connect_to_broker(node_id)
   end
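The one-line fix above makes `random_broker` refresh stale metadata before sampling a broker, so a connection error that marked the cluster stale (as in the `topics` change) leads to a fresh broker list rather than a sample from an outdated one. A minimal sketch of the refresh-before-sample pattern, with a hypothetical `TinyCluster` standing in for `Kafka::Cluster`:

```ruby
# Illustrative stand-in only, not ruby-kafka's actual internals:
# metadata starts out stale and is fetched lazily before sampling.
Broker = Struct.new(:node_id)

class TinyCluster
  def initialize
    @stale = true
    @brokers = []
  end

  def refresh_metadata_if_necessary!
    return unless @stale
    @brokers = [Broker.new(1), Broker.new(2), Broker.new(3)] # pretend metadata fetch
    @stale = false
  end

  def random_broker
    refresh_metadata_if_necessary! # the fix: never sample from stale (possibly empty) metadata
    @brokers.sample.node_id
  end
end

puts TinyCluster.new.random_broker
```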
@@ -153,7 +153,8 @@ module Kafka
   def resume(topic, partition)
     pause_for(topic, partition).resume!
 
-    seek_to_next(topic, partition)
+    # During rebalancing we might have lost the paused partition. Check that the partition is still assigned to the group before seeking.
+    seek_to_next(topic, partition) if @group.assigned_to?(topic, partition)
   end
 
   # Whether the topic partition is currently paused.
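The guard above skips the seek when a rebalance has taken the paused partition away from this consumer. The assignment check can be sketched like this (`StubGroup` is a hypothetical stand-in for ruby-kafka's consumer group object, whose real check is `@group.assigned_to?(topic, partition)`):

```ruby
# Hypothetical stand-in for the consumer group's partition assignment.
class StubGroup
  def initialize(assignment)
    @assignment = assignment # { topic => [partition, ...] }
  end

  def assigned_to?(topic, partition)
    Array(@assignment[topic]).include?(partition)
  end
end

group = StubGroup.new("events" => [0, 1])
seeks = []

# Resuming a partition the group still owns triggers a seek...
seeks << ["events", 1] if group.assigned_to?("events", 1)
# ...but a partition lost during rebalancing is skipped instead of crashing.
seeks << ["events", 3] if group.assigned_to?("events", 3)

puts seeks.inspect
```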
@@ -102,6 +102,15 @@ module Kafka
     new(key: key, value: value, codec_id: codec_id, offset: offset, create_time: create_time)
   end
 
+  # Ensure backward compatibility with the message format from Kafka 0.11.x.
+  def is_control_record
+    false
+  end
+
+  def headers
+    {}
+  end
+
   private
 
   # Offsets may be relative with regards to wrapped message offset, but there are special cases.
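The two stub methods above give old-format messages the same duck-typed surface as 0.11 record-batch records, so consuming code can call `headers` and `is_control_record` without branching on the wire format. A sketch of the idea (the class names below are illustrative, not ruby-kafka's actual ones):

```ruby
# Illustrative classes only: LegacyMessage mimics the pre-0.11 message
# format (no headers, no control-record flag), while RecordV2 mimics the
# 0.11 record-batch format that carries both fields.
class LegacyMessage
  # Stubbed so old-format messages satisfy the 0.11 record interface.
  def is_control_record
    false
  end

  def headers
    {}
  end
end

class RecordV2
  attr_reader :headers, :is_control_record

  def initialize(headers: {}, is_control_record: false)
    @headers = headers
    @is_control_record = is_control_record
  end
end

# Downstream code can now treat both wire formats uniformly:
records = [LegacyMessage.new, RecordV2.new(headers: { "trace-id" => "abc" })]
headers = records.reject(&:is_control_record).map(&:headers)
puts headers.inspect
```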
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Kafka
-  VERSION = "0.7.1"
+  VERSION = "0.7.2"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-kafka
 version: !ruby/object:Gem::Version
-  version: 0.7.1
+  version: 0.7.2
 platform: ruby
 authors:
 - Daniel Schierbeck
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2018-09-12 00:00:00.000000000 Z
+date: 2018-09-24 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: digest-crc