ruby-kafka 1.2.0 → 1.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 0f8391cc7b1989cb5f669796bc4ad647b77d882e6506fae42bab18acb8a6bcc6
- data.tar.gz: 012baaff5d2cc9eb17e3a7b7342f49f7c905a5f91d26078fa0ecf2f0fa81a2ad
+ metadata.gz: b16e9e52014e784610725bb2ba5c5a431694ce6da7878b6c1ff024504d6ef6c4
+ data.tar.gz: '039026011e9cd5e5dce59aed4f53b2a8970aa8025fe3b008620bdef8a802bbf5'
  SHA512:
- metadata.gz: 7f4e9302ca0ab41a6fded75f95ed866d959a2a027b90051ed7f3d7fba573aa63e57be7692d004dbeb3bacd99fe44b24188f81b0f3bed50f68e1da5189262271f
- data.tar.gz: 7266bdd50e66a7ab9b3c71025468af0a4bea778fd312c41d4c699e2609420c11c28accb3246407ce98330bf521c109691bd1d4943dd532a44ee6732f1a410922
+ metadata.gz: 1d13c2032f4bd38e09a714fe40d59467ff61b2c51e0a604daf7f9e1a3431af32f77e62433b863862a68e6bc74f53ff4eceed7e88145acc102613ac6fa2600b9f
+ data.tar.gz: f424e6e2bee5f318766880f3b7eb338ce706783ea647908902a8a6d409fcf8f9ac3f0afe7ca8a932710a9d8f192e44a25a346da6639861fb7d377f3dd3257309
data/CHANGELOG.md CHANGED
@@ -4,6 +4,11 @@ Changes and additions to the library will be listed here.
  
  ## Unreleased
  
+ ## 1.3.0
+
+ - Support custom assignment strategy (#846).
+ - Improved Exceptions in TransactionManager (#862).
+
  ## 1.2.0
  
  - Add producer consumer interceptors (#837).
data/README.md CHANGED
@@ -26,6 +26,7 @@ A Ruby client library for [Apache Kafka](http://kafka.apache.org/), a distribute
     4. [Shutting Down a Consumer](#shutting-down-a-consumer)
     5. [Consuming Messages in Batches](#consuming-messages-in-batches)
     6. [Balancing Throughput and Latency](#balancing-throughput-and-latency)
+    7. [Customizing Partition Assignment Strategy](#customizing-partition-assignment-strategy)
  4. [Thread Safety](#thread-safety)
  5. [Logging](#logging)
  6. [Instrumentation](#instrumentation)
@@ -743,6 +744,88 @@ consumer.each_message do |message|
  end
  ```
  
+ #### Customizing Partition Assignment Strategy
+
+ In some cases, you might want to assign more partitions to some consumers. For example, in applications that insert records into a database, consumers running on hosts near the database can process more messages than consumers running on other hosts.
+ You can use a custom assignment strategy by passing an object that implements `#call` as the `assignment_strategy` argument, like below:
+
+ ```ruby
+ class CustomAssignmentStrategy
+   def initialize(user_data)
+     @user_data = user_data
+   end
+
+   # Assign the topic partitions to the group members.
+   #
+   # @param cluster [Kafka::Cluster]
+   # @param members [Hash<String, Kafka::Protocol::JoinGroupResponse::Metadata>] a hash
+   #   mapping member ids to metadata
+   # @param partitions [Array<Kafka::ConsumerGroup::Assignor::Partition>] a list of
+   #   partitions the consumer group processes
+   # @return [Hash<String, Array<Kafka::ConsumerGroup::Assignor::Partition>>] a hash
+   #   mapping member ids to partitions.
+   def call(cluster:, members:, partitions:)
+     ...
+   end
+ end
+
+ strategy = CustomAssignmentStrategy.new("some-host-information")
+ consumer = kafka.consumer(group_id: "some-group", assignment_strategy: strategy)
+ ```
+
+ `members` is a hash mapping member IDs to metadata, and `partitions` is a list of partitions the consumer group processes. The method `call` must return a hash mapping member IDs to partitions. For example, the following strategy assigns partitions randomly:
+
+ ```ruby
+ class RandomAssignmentStrategy
+   def call(cluster:, members:, partitions:)
+     member_ids = members.keys
+     partitions.each_with_object(Hash.new {|h, k| h[k] = [] }) do |partition, partitions_per_member|
+       partitions_per_member[member_ids[rand(member_ids.count)]] << partition
+     end
+   end
+ end
+ ```
+
+ If the strategy needs user data, define the method `user_data` that returns user data on each consumer. For example, the following strategy uses the consumers' IP addresses as user data:
+
+ ```ruby
+ class NetworkTopologyAssignmentStrategy
+   def user_data
+     Socket.ip_address_list.find(&:ipv4_private?).ip_address
+   end
+
+   def call(cluster:, members:, partitions:)
+     # Display the pair of member ID and IP address
+     members.each do |id, metadata|
+       puts "#{id}: #{metadata.user_data}"
+     end
+
+     # Assign partitions considering the network topology
+     ...
+   end
+ end
+ ```
+
+ Note that the strategy uses its class name as the default protocol name. You can change it by defining the method `protocol_name`:
+
+ ```ruby
+ class NetworkTopologyAssignmentStrategy
+   def protocol_name
+     "networktopology"
+   end
+
+   def user_data
+     Socket.ip_address_list.find(&:ipv4_private?).ip_address
+   end
+
+   def call(cluster:, members:, partitions:)
+     ...
+   end
+ end
+ ```
+
+ As the method `call` might receive different user data from what it expects, avoid using the same protocol name as another strategy that uses different user data.
+
  
  ### Thread Safety
  
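The `#call` contract documented above is plain Ruby, so a strategy can be exercised without a broker. A minimal sketch, using a hypothetical stand-in `Partition` struct in place of `Kafka::ConsumerGroup::Assignor::Partition`:

```ruby
# Stand-in for Kafka::ConsumerGroup::Assignor::Partition (hypothetical here).
Partition = Struct.new(:topic, :partition_id)

class RandomAssignmentStrategy
  def call(cluster:, members:, partitions:)
    member_ids = members.keys
    # Each partition goes to one randomly chosen member.
    partitions.each_with_object(Hash.new { |h, k| h[k] = [] }) do |partition, acc|
      acc[member_ids[rand(member_ids.count)]] << partition
    end
  end
end

members = { "member-a" => nil, "member-b" => nil }
partitions = (0..5).map { |i| Partition.new("events", i) }

assignment = RandomAssignmentStrategy.new.call(cluster: nil, members: members, partitions: partitions)

# Every partition ends up assigned to exactly one member.
raise unless assignment.values.flatten.map(&:partition_id).sort == (0..5).to_a
```

The distribution is random per rebalance; a real strategy would use member metadata rather than `rand`.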
@@ -972,7 +1055,7 @@ This configures the store to look up CA certificates from the system default cer
  
  In order to authenticate the client to the cluster, you need to pass in a certificate and key created for the client and trusted by the brokers.
  
- **NOTE**: You can disable hostname validation by passing `verify_hostname: false`.
+ **NOTE**: You can disable hostname validation by passing `ssl_verify_hostname: false`.
  
  ```ruby
  kafka = Kafka.new(
@@ -981,6 +1064,7 @@ kafka = Kafka.new(
    ssl_client_cert: File.read('my_client_cert.pem'),
    ssl_client_cert_key: File.read('my_client_cert_key.pem'),
    ssl_client_cert_key_password: 'my_client_cert_key_password',
+   ssl_verify_hostname: false,
    # ...
  )
  ```
data/lib/kafka/async_producer.rb CHANGED
@@ -59,8 +59,6 @@ module Kafka
  #     producer.shutdown
  #
  class AsyncProducer
-   THREAD_MUTEX = Mutex.new
-
    # Initializes a new AsyncProducer.
    #
    # @param sync_producer [Kafka::Producer] the synchronous producer that should
@@ -94,6 +92,8 @@ module Kafka
  
      # The timer will no-op if the delivery interval is zero.
      @timer = Timer.new(queue: @queue, interval: delivery_interval)
+
+     @thread_mutex = Mutex.new
    end
  
    # Produces a message to the specified topic.
@@ -131,6 +131,8 @@ module Kafka
    # @see Kafka::Producer#deliver_messages
    # @return [nil]
    def deliver_messages
+     ensure_threads_running!
+
      @queue << [:deliver_messages, nil]
  
      nil
@@ -142,6 +144,8 @@ module Kafka
    # @see Kafka::Producer#shutdown
    # @return [nil]
    def shutdown
+     ensure_threads_running!
+
      @timer_thread && @timer_thread.exit
      @queue << [:shutdown, nil]
      @worker_thread && @worker_thread.join
@@ -152,17 +156,22 @@ module Kafka
    private
  
    def ensure_threads_running!
-     THREAD_MUTEX.synchronize do
-       @worker_thread = nil unless @worker_thread && @worker_thread.alive?
-       @worker_thread ||= Thread.new { @worker.run }
-     end
+     return if worker_thread_alive? && timer_thread_alive?
  
-     THREAD_MUTEX.synchronize do
-       @timer_thread = nil unless @timer_thread && @timer_thread.alive?
-       @timer_thread ||= Thread.new { @timer.run }
+     @thread_mutex.synchronize do
+       @worker_thread = Thread.new { @worker.run } unless worker_thread_alive?
+       @timer_thread = Thread.new { @timer.run } unless timer_thread_alive?
      end
    end
  
+   def worker_thread_alive?
+     !!@worker_thread && @worker_thread.alive?
+   end
+
+   def timer_thread_alive?
+     !!@timer_thread && @timer_thread.alive?
+   end
+
    def buffer_overflow(topic, message)
      @instrumenter.instrument("buffer_overflow.async_producer", {
        topic: topic,
@@ -208,7 +217,7 @@ module Kafka
  
      case operation
      when :produce
-       produce(*payload)
+       produce(payload[0], **payload[1])
        deliver_messages if threshold_reached?
      when :deliver_messages
        deliver_messages
data/lib/kafka/client.rb CHANGED
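The change above replaces the class-level `THREAD_MUTEX` with a per-instance mutex and lazily restarts dead worker/timer threads from `deliver_messages` and `shutdown`. A minimal stand-alone sketch of that pattern (`ThreadedRunner` is a hypothetical stand-in, not part of ruby-kafka):

```ruby
require "thread"

# Sketch of the per-instance "restart the background thread if it died"
# pattern used by AsyncProducer above.
class ThreadedRunner
  def initialize(&work)
    @work = work
    @mutex = Mutex.new
    @thread = nil
  end

  # Start a background thread unless one is already alive; the per-instance
  # mutex prevents two callers from both spawning a thread.
  def ensure_running!
    return if alive?
    @mutex.synchronize do
      @thread = Thread.new(&@work) unless alive?
    end
  end

  def alive?
    !!@thread && @thread.alive?
  end
end

counter = Queue.new
runner = ThreadedRunner.new { counter << :ran }
runner.ensure_running!
counter.pop  # blocks until the background thread has run once
```

The early `return if alive?` mirrors the diff: the common case skips the mutex entirely, and the check is repeated inside `synchronize` so only one thread is ever spawned.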
@@ -357,6 +357,8 @@ module Kafka
    # If it is n (n > 0), the topic list will be refreshed every n seconds
    # @param interceptors [Array<Object>] a list of consumer interceptors that implement
    #   `call(Kafka::FetchedBatch)`.
+   # @param assignment_strategy [Object] a partition assignment strategy that
+   #   implements `protocol_name`, `user_data`, and `call(cluster:, members:, partitions:)`
    # @return [Consumer]
    def consumer(
      group_id:,
@@ -368,7 +370,8 @@ module Kafka
      offset_retention_time: nil,
      fetcher_max_queue_size: 100,
      refresh_topic_interval: 0,
-     interceptors: []
+     interceptors: [],
+     assignment_strategy: nil
    )
      cluster = initialize_cluster
  
@@ -387,6 +390,7 @@ module Kafka
        rebalance_timeout: rebalance_timeout,
        retention_time: retention_time,
        instrumenter: instrumenter,
+       assignment_strategy: assignment_strategy
      )
  
      fetcher = Fetcher.new(
@@ -1,13 +1,14 @@
  # frozen_string_literal: true
  
  require "set"
+ require "kafka/consumer_group/assignor"
  require "kafka/round_robin_assignment_strategy"
  
  module Kafka
    class ConsumerGroup
      attr_reader :assigned_partitions, :generation_id, :group_id
  
-     def initialize(cluster:, logger:, group_id:, session_timeout:, rebalance_timeout:, retention_time:, instrumenter:)
+     def initialize(cluster:, logger:, group_id:, session_timeout:, rebalance_timeout:, retention_time:, instrumenter:, assignment_strategy:)
        @cluster = cluster
        @logger = TaggedLogger.new(logger)
        @group_id = group_id
@@ -19,7 +20,10 @@ module Kafka
        @members = {}
        @topics = Set.new
        @assigned_partitions = {}
-       @assignment_strategy = RoundRobinAssignmentStrategy.new(cluster: @cluster)
+       @assignor = Assignor.new(
+         cluster: cluster,
+         strategy: assignment_strategy || RoundRobinAssignmentStrategy.new
+       )
        @retention_time = retention_time
      end
  
@@ -113,9 +117,12 @@ module Kafka
  
        Protocol.handle_error(response.error_code)
      end
-   rescue ConnectionError, UnknownMemberId, RebalanceInProgress, IllegalGeneration => e
+   rescue ConnectionError, UnknownMemberId, IllegalGeneration => e
      @logger.error "Error sending heartbeat: #{e}"
      raise HeartbeatError, e
+   rescue RebalanceInProgress => e
+     @logger.warn "Error sending heartbeat: #{e}"
+     raise HeartbeatError, e
    rescue NotCoordinatorForGroup
      @logger.error "Failed to find coordinator for group `#{@group_id}`; retrying..."
      sleep 1
@@ -144,6 +151,8 @@ module Kafka
        rebalance_timeout: @rebalance_timeout,
        member_id: @member_id,
        topics: @topics,
+       protocol_name: @assignor.protocol_name,
+       user_data: @assignor.user_data,
      )
  
      Protocol.handle_error(response.error_code)
@@ -180,8 +189,8 @@ module Kafka
      if group_leader?
        @logger.info "Chosen as leader of group `#{@group_id}`"
  
-       group_assignment = @assignment_strategy.assign(
-         members: @members.keys,
+       group_assignment = @assignor.assign(
+         members: @members,
          topics: @topics,
        )
      end
@@ -0,0 +1,63 @@
+ # frozen_string_literal: true
+
+ require "kafka/protocol/member_assignment"
+
+ module Kafka
+   class ConsumerGroup
+
+     # A consumer group partition assignor
+     class Assignor
+       Partition = Struct.new(:topic, :partition_id)
+
+       # @param cluster [Kafka::Cluster]
+       # @param strategy [Object] an object that implements #protocol_name,
+       #   #user_data, and #call.
+       def initialize(cluster:, strategy:)
+         @cluster = cluster
+         @strategy = strategy
+       end
+
+       def protocol_name
+         @strategy.respond_to?(:protocol_name) ? @strategy.protocol_name : @strategy.class.to_s
+       end
+
+       def user_data
+         @strategy.user_data if @strategy.respond_to?(:user_data)
+       end
+
+       # Assign the topic partitions to the group members.
+       #
+       # @param members [Hash<String, Kafka::Protocol::JoinGroupResponse::Metadata>] a hash
+       #   mapping member ids to metadata.
+       # @param topics [Array<String>] topics
+       # @return [Hash<String, Kafka::Protocol::MemberAssignment>] a hash mapping member
+       #   ids to assignments.
+       def assign(members:, topics:)
+         topic_partitions = topics.flat_map do |topic|
+           begin
+             partition_ids = @cluster.partitions_for(topic).map(&:partition_id)
+           rescue UnknownTopicOrPartition
+             raise UnknownTopicOrPartition, "unknown topic #{topic}"
+           end
+           partition_ids.map {|partition_id| Partition.new(topic, partition_id) }
+         end
+
+         group_assignment = {}
+
+         members.each_key do |member_id|
+           group_assignment[member_id] = Protocol::MemberAssignment.new
+         end
+         @strategy.call(cluster: @cluster, members: members, partitions: topic_partitions).each do |member_id, partitions|
+           Array(partitions).each do |partition|
+             group_assignment[member_id].assign(partition.topic, [partition.partition_id])
+           end
+         end
+
+         group_assignment
+       rescue Kafka::LeaderNotAvailable
+         sleep 1
+         retry
+       end
+     end
+   end
+ end
data/lib/kafka/protocol/join_group_request.rb CHANGED
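The `protocol_name` fallback in `Assignor` above (use the strategy's class name unless it defines `protocol_name`) can be seen in isolation. A trimmed-down sketch; `MiniAssignor`, `PlainStrategy`, and `NamedStrategy` are illustrative names only:

```ruby
# Trimmed-down copy of the protocol_name / user_data dispatch in Assignor.
class MiniAssignor
  def initialize(strategy:)
    @strategy = strategy
  end

  # Use the strategy's own protocol name if it defines one,
  # otherwise fall back to its class name.
  def protocol_name
    @strategy.respond_to?(:protocol_name) ? @strategy.protocol_name : @strategy.class.to_s
  end

  # user_data is optional on the strategy; nil when absent.
  def user_data
    @strategy.user_data if @strategy.respond_to?(:user_data)
  end
end

class PlainStrategy; end

class NamedStrategy
  def protocol_name
    "networktopology"
  end
end

raise unless MiniAssignor.new(strategy: PlainStrategy.new).protocol_name == "PlainStrategy"
raise unless MiniAssignor.new(strategy: NamedStrategy.new).protocol_name == "networktopology"
```

This is why the README above says the class name is the default protocol name, and why strategies with different user data should not share one.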
@@ -7,14 +7,14 @@ module Kafka
    class JoinGroupRequest
      PROTOCOL_TYPE = "consumer"
  
-     def initialize(group_id:, session_timeout:, rebalance_timeout:, member_id:, topics: [])
+     def initialize(group_id:, session_timeout:, rebalance_timeout:, member_id:, topics: [], protocol_name:, user_data: nil)
        @group_id = group_id
        @session_timeout = session_timeout * 1000 # Kafka wants ms.
        @rebalance_timeout = rebalance_timeout * 1000 # Kafka wants ms.
        @member_id = member_id || ""
        @protocol_type = PROTOCOL_TYPE
        @group_protocols = {
-         "roundrobin" => ConsumerGroupProtocol.new(topics: topics),
+         protocol_name => ConsumerGroupProtocol.new(topics: topics, user_data: user_data),
        }
      end
@@ -3,6 +3,8 @@
  module Kafka
    module Protocol
      class JoinGroupResponse
+       Metadata = Struct.new(:version, :topics, :user_data)
+
        attr_reader :error_code
  
        attr_reader :generation_id, :group_protocol
@@ -25,7 +27,13 @@ module Kafka
          group_protocol: decoder.string,
          leader_id: decoder.string,
          member_id: decoder.string,
-         members: Hash[decoder.array { [decoder.string, decoder.bytes] }],
+         members: Hash[
+           decoder.array do
+             member_id = decoder.string
+             d = Decoder.from_string(decoder.bytes)
+             [member_id, Metadata.new(d.int16, d.array { d.string }, d.bytes)]
+           end
+         ],
        )
      end
    end
@@ -1,54 +1,31 @@
  # frozen_string_literal: true
  
- require "kafka/protocol/member_assignment"
-
  module Kafka
  
    # A consumer group partition assignment strategy that assigns partitions to
    # consumers in a round-robin fashion.
    class RoundRobinAssignmentStrategy
-     def initialize(cluster:)
-       @cluster = cluster
+     def protocol_name
+       "roundrobin"
      end
  
      # Assign the topic partitions to the group members.
      #
-     # @param members [Array<String>] member ids
-     # @param topics [Array<String>] topics
-     # @return [Hash<String, Protocol::MemberAssignment>] a hash mapping member
-     #   ids to assignments.
-     def assign(members:, topics:)
-       group_assignment = {}
-
-       members.each do |member_id|
-         group_assignment[member_id] = Protocol::MemberAssignment.new
-       end
-
-       topic_partitions = topics.flat_map do |topic|
-         begin
-           partitions = @cluster.partitions_for(topic).map(&:partition_id)
-         rescue UnknownTopicOrPartition
-           raise UnknownTopicOrPartition, "unknown topic #{topic}"
-         end
-         Array.new(partitions.count) { topic }.zip(partitions)
-       end
-
-       partitions_per_member = topic_partitions.group_by.with_index do |_, index|
-         index % members.count
-       end.values
-
-       members.zip(partitions_per_member).each do |member_id, member_partitions|
-         unless member_partitions.nil?
-           member_partitions.each do |topic, partition|
-             group_assignment[member_id].assign(topic, [partition])
-           end
-         end
+     # @param cluster [Kafka::Cluster]
+     # @param members [Hash<String, Kafka::Protocol::JoinGroupResponse::Metadata>] a hash
+     #   mapping member ids to metadata
+     # @param partitions [Array<Kafka::ConsumerGroup::Assignor::Partition>] a list of
+     #   partitions the consumer group processes
+     # @return [Hash<String, Array<Kafka::ConsumerGroup::Assignor::Partition>>] a hash
+     #   mapping member ids to partitions.
+     def call(cluster:, members:, partitions:)
+       member_ids = members.keys
+       partitions_per_member = Hash.new {|h, k| h[k] = [] }
+       partitions.each_with_index do |partition, index|
+         partitions_per_member[member_ids[index % member_ids.count]] << partition
        end
  
-       group_assignment
-     rescue Kafka::LeaderNotAvailable
-       sleep 1
-       retry
+       partitions_per_member
      end
    end
  end
data/lib/kafka/transaction_manager.rb CHANGED
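The rewritten strategy is now a pure function of members and partitions, so it can run stand-alone. A sketch re-stating it with a hypothetical `Partition` struct mirroring `Assignor::Partition`:

```ruby
# Stand-in for Kafka::ConsumerGroup::Assignor::Partition (hypothetical here).
Partition = Struct.new(:topic, :partition_id)

class RoundRobinAssignmentStrategy
  def protocol_name
    "roundrobin"
  end

  # Deal partitions to members in order, like dealing cards.
  def call(cluster:, members:, partitions:)
    member_ids = members.keys
    partitions_per_member = Hash.new { |h, k| h[k] = [] }
    partitions.each_with_index do |partition, index|
      partitions_per_member[member_ids[index % member_ids.count]] << partition
    end
    partitions_per_member
  end
end

members = { "m1" => nil, "m2" => nil }
partitions = (0..4).map { |i| Partition.new("logs", i) }
assignment = RoundRobinAssignmentStrategy.new.call(cluster: nil, members: members, partitions: partitions)
# m1 gets partitions 0, 2, 4; m2 gets 1, 3.
```

Compared with the old `assign(members:, topics:)`, all cluster lookups and `MemberAssignment` bookkeeping now live in `Assignor`, which makes strategies like this trivially testable.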
@@ -95,7 +95,7 @@ module Kafka
      force_transactional!
  
      if @transaction_state.uninitialized?
-       raise 'Transaction is uninitialized'
+       raise Kafka::InvalidTxnStateError, 'Transaction is uninitialized'
      end
  
      # Extract newly created partitions
@@ -138,8 +138,8 @@ module Kafka
  
    def begin_transaction
      force_transactional!
-     raise 'Transaction has already started' if @transaction_state.in_transaction?
-     raise 'Transaction is not ready' unless @transaction_state.ready?
+     raise Kafka::InvalidTxnStateError, 'Transaction has already started' if @transaction_state.in_transaction?
+     raise Kafka::InvalidTxnStateError, 'Transaction is not ready' unless @transaction_state.ready?
      @transaction_state.transition_to!(TransactionStateMachine::IN_TRANSACTION)
  
      @logger.info "Begin transaction #{@transactional_id}, Producer ID: #{@producer_id} (Epoch #{@producer_epoch})"
@@ -159,7 +159,7 @@ module Kafka
    end
  
    unless @transaction_state.in_transaction?
-     raise 'Transaction is not valid to commit'
+     raise Kafka::InvalidTxnStateError, 'Transaction is not valid to commit'
    end
  
    @transaction_state.transition_to!(TransactionStateMachine::COMMITTING_TRANSACTION)
@@ -192,7 +192,8 @@ module Kafka
    end
  
    unless @transaction_state.in_transaction?
-     raise 'Transaction is not valid to abort'
+     @logger.warn('Aborting transaction that was never opened on brokers')
+     return
    end
  
    @transaction_state.transition_to!(TransactionStateMachine::ABORTING_TRANSACTION)
@@ -221,7 +222,7 @@ module Kafka
    force_transactional!
  
    unless @transaction_state.in_transaction?
-     raise 'Transaction is not valid to send offsets'
+     raise Kafka::InvalidTxnStateError, 'Transaction is not valid to send offsets'
    end
  
    add_response = transaction_coordinator.add_offsets_to_txn(
@@ -250,6 +251,10 @@ module Kafka
      @transaction_state.error?
    end
  
+   def ready?
+     @transaction_state.ready?
+   end
+
    def close
      if in_transaction?
        @logger.warn("Aborting pending transaction ...")
@@ -264,11 +269,11 @@ module Kafka
  
    def force_transactional!
      unless transactional?
-       raise 'Please turn on transactional mode to use transaction'
+       raise Kafka::InvalidTxnStateError, 'Please turn on transactional mode to use transaction'
      end
  
      if @transactional_id.nil? || @transactional_id.empty?
-       raise 'Please provide a transaction_id to use transactional mode'
+       raise Kafka::InvalidTxnStateError, 'Please provide a transaction_id to use transactional mode'
      end
    end
  
data/lib/kafka/version.rb CHANGED
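The diff above replaces bare `RuntimeError`s with `Kafka::InvalidTxnStateError`, so callers can rescue transaction-state problems specifically. A stand-alone sketch of that pattern (the error class is re-declared here only to keep the example self-contained; `TinyTransactionState` is hypothetical):

```ruby
# Declared here only so the sketch runs on its own; the gem defines this class.
module Kafka
  class InvalidTxnStateError < StandardError; end
end

# Minimal guard-and-transition sketch, echoing begin_transaction above.
class TinyTransactionState
  def initialize
    @state = :uninitialized
  end

  def begin!
    raise Kafka::InvalidTxnStateError, "Transaction is not ready" unless @state == :ready
    @state = :in_transaction
  end

  def ready!
    @state = :ready
  end

  def in_transaction?
    @state == :in_transaction
  end
end

txn = TinyTransactionState.new
begin
  txn.begin!
rescue Kafka::InvalidTxnStateError => e
  caught = e.message
end

txn.ready!
txn.begin!
```

A typed error lets application code distinguish "my transaction is in the wrong state" from unrelated `RuntimeError`s, which bare `raise 'message'` could not.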
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
  
  module Kafka
-   VERSION = "1.2.0"
+   VERSION = "1.3.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
-   version: 1.2.0
+   version: 1.3.0
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-08-03 00:00:00.000000000 Z
+ date: 2020-10-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: digest-crc
@@ -376,6 +376,7 @@ files:
  - lib/kafka/connection_builder.rb
  - lib/kafka/consumer.rb
  - lib/kafka/consumer_group.rb
+ - lib/kafka/consumer_group/assignor.rb
  - lib/kafka/datadog.rb
  - lib/kafka/fetch_operation.rb
  - lib/kafka/fetched_batch.rb
@@ -494,8 +495,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.6
+ rubygems_version: 3.1.2
  signing_key:
  specification_version: 4
  summary: A client library for the Kafka distributed commit log.