ruby-kafka 0.4.0 → 0.4.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 8e5c2b78f0d4883922641e90ac150a738959c00e
- data.tar.gz: 0166be6c6bef06356abdf4f771dd633043e996ba
+ metadata.gz: 9845964409d385d935b55e4ded95f210721fe7ce
+ data.tar.gz: d7bebf3d0222a80c0fe95ebec4d83b992abe306d
  SHA512:
- metadata.gz: 1207e355018718c240fabd89ba2286949dc156f5ce9734f9cf48635154ee9d20587d4401e4ca8c5f31edb4fda3b1e049766ad0a93030b5df99b8762e160f5970
- data.tar.gz: 3c7b974bf3347c841d9bd85118dc8bf66dc892f6c48df7fc32599949bca6b1bf71d301559ce28709eac108dd92acced6a9f15557226d12d0c3311ac4b3c1115e
+ metadata.gz: 5656f9723720414c954a7087385d52d9618322d31e70f9600bd4b8f34bb76432a31fb0bc88bc6baf5506fe518c47a7a5c9cb0134c91f224acdaa6ef76bbd8fce
+ data.tar.gz: 042d3a1bf5f8847e9b50a93e2bc8b151e78c4fac463e999e32e9d0acd7f1d5498ebb301d166a8a255adcc2efd15f7bd53de6397348a94822f2462c626c49741f
data/CHANGELOG.md CHANGED
@@ -4,6 +4,9 @@ Changes and additions to the library will be listed here.
 
  ## Unreleased
 
+ - Allow seeking the consumer position (#386).
+ - Reopen idle connections after 5 minutes (#399).
+
  ## v0.4.0
 
  - Support SASL authentication (#334 and #370)
data/Gemfile.lock CHANGED
@@ -1,8 +1,7 @@
  PATH
  remote: .
  specs:
- ruby-kafka (0.3.18.beta2)
- gssapi (>= 1.2.0)
+ ruby-kafka (0.4.0)
 
  GEM
  remote: https://rubygems.org/
@@ -74,6 +73,7 @@ DEPENDENCIES
  docker-api
  dogstatsd-ruby (>= 2.0.0)
  dotenv
+ gssapi (>= 1.2.0)
  pry
  rake (~> 10.0)
  rspec
data/README.md CHANGED
@@ -42,7 +42,9 @@ Although parts of this library work with Kafka 0.8 – specifically, the Produce
  2. [Asynchronous Producer Design](#asynchronous-producer-design)
  3. [Consumer Design](#consumer-design)
  5. [Development](#development)
- 6. [Roadmap](#roadmap)
+ 6. [Support and Discussion](#support-and-discussion)
+ 7. [Roadmap](#roadmap)
+ 8. [Higher level libraries](#higher-level-libraries)
 
  ## Installation
 
@@ -68,25 +70,30 @@ Or install it yourself as:
  <th>Kafka 0.8</th>
  <th>Kafka 0.9</th>
  <th>Kafka 0.10</th>
+ <th>Kafka 0.11</th>
  </tr>
  <tr>
  <th>Producer API</th>
  <td>Full support</td>
- <td>Full support</td>
+ <td>Full support in v0.4.x</td>
+ <td>Full support in v0.5.x</td>
  <td>Limited support</td>
  </tr>
  <tr>
  <th>Consumer API</th>
  <td>Unsupported</td>
- <td>Full support</td>
+ <td>Full support in v0.4.x</td>
+ <td>Full support in v0.5.x</td>
  <td>Limited support</td>
  </tr>
  </table>
 
- This library is targeting Kafka 0.9, although there is limited support for versions 0.8 and 0.10:
+ This library is targeting Kafka 0.9 with the v0.4.x series and Kafka 0.10 with the v0.5.x series. There's limited support for Kafka 0.8, and things should work with Kafka 0.11, although there may be performance issues due to changes in the protocol.
 
  - **Kafka 0.8:** Full support for the Producer API, but no support for consumer groups. Simple message fetching works.
- - **Kafka 0.10:** Full support for both the Producer and Consumer APIs, but the addition of message timestamps is not supported. However, ruby-kafka should be completely compatible with Kafka 0.10 brokers.
+ - **Kafka 0.9:** Full support for the Producer and Consumer APIs in ruby-kafka v0.4.x.
+ - **Kafka 0.10:** Full support for the Producer and Consumer APIs in ruby-kafka v0.5.x.
+ - **Kafka 0.11:** Everything that works with Kafka 0.10 should still work, but so far no features specific to Kafka 0.11 have been added.
 
  This library requires Ruby 2.1 or higher.
 
@@ -452,7 +459,9 @@ end
 
  ### Consuming Messages from Kafka
 
- Consuming messages from a Kafka topic is simple:
+ **Note:** If you're just looking to get started with Kafka consumers, you might want to visit the [Higher level libraries](#higher-level-libraries) section, which lists ruby-kafka based frameworks. Read on if you're interested in rolling your own executable consumers, or if you want to learn more about how consumers work in Kafka.
+
+ Consuming messages from a Kafka topic with ruby-kafka is simple:
 
  ```ruby
  require "kafka"
@@ -489,6 +498,10 @@ consumer = kafka.consumer(group_id: "my-consumer")
  # repeatedly.
  consumer.subscribe("greetings")
 
+ # Stop the consumer when the SIGTERM signal is sent to the process.
+ # It's better to shut down gracefully than to kill the process.
+ trap("TERM") { consumer.stop }
+
  # This will loop indefinitely, yielding each message in turn.
  consumer.each_message do |message|
  puts message.topic, message.partition
@@ -591,8 +604,8 @@ In order to shut down a running consumer process cleanly, call `#stop` on it. A
  consumer = kafka.consumer(...)
 
  # The consumer can be stopped from the command line by executing
- # `kill -s QUIT <process-id>`.
- trap("QUIT") { consumer.stop }
+ # `kill -s TERM <process-id>`.
+ trap("TERM") { consumer.stop }
 
  consumer.each_message do |message|
  ...
@@ -741,7 +754,7 @@ You can quite easily build monitoring on top of the provided [instrumentation ho
  The Statsd reporter is automatically enabled when the `kafka/statsd` library is required. You can optionally change the configuration.
 
  ```ruby
- require "kafka/statds"
+ require "kafka/statsd"
 
  # Default is "ruby_kafka".
  Kafka::Statsd.namespace = "custom-namespace"
@@ -897,19 +910,37 @@ After checking out the repo, run `bin/setup` to install dependencies. Then, run
 
  **Note:** the specs require a working [Docker](https://www.docker.com/) instance, but should work out of the box if you have Docker installed. Please create an issue if that's not the case.
 
+ If you would like to contribute to ruby-kafka, please [join our Slack team](https://ruby-kafka-slack.herokuapp.com/) and ask how best to do it.
+
  [![Circle CI](https://circleci.com/gh/zendesk/ruby-kafka.svg?style=shield)](https://circleci.com/gh/zendesk/ruby-kafka/tree/master)
 
+ ## Support and Discussion
+
+ If you've discovered a bug, please file a [Github issue](https://github.com/zendesk/ruby-kafka/issues/new) and make sure to include all the relevant information, including the versions of ruby-kafka and Kafka that you're using.
+
+ If you have other questions, would like to discuss best practices or how to contribute to the project, or want to talk about any other ruby-kafka related topic, [join our Slack team](https://ruby-kafka-slack.herokuapp.com/)!
+
  ## Roadmap
 
- The current stable release is v0.3. This release is running in production at Zendesk, but it's still not recommended that you use it when data loss is unacceptable. It will take a little while until all edge cases have been uncovered and handled.
+ Version 0.4 will be the last minor release with support for the Kafka 0.9 protocol. It is recommended that you pin your dependency on ruby-kafka to `~> 0.4.0` in order to receive bugfixes and security updates. New features will only target version 0.5 and up, which will be incompatible with the Kafka 0.9 protocol.
 
  ### v0.4
 
- Beta release of the Consumer API, allowing balanced Consumer Groups coordinating access to partitions. Kafka 0.9 only.
+ Current stable release, with support for the Kafka 0.9 protocol.
+
+ ### v0.5
+
+ Next stable release, with support for the Kafka 0.10 protocol and eventually newer protocol versions.
+
+ ## Higher level libraries
+
+ Currently, there are three actively developed frameworks based on ruby-kafka that provide a higher-level API for working with Kafka messages:
+
+ * [Racecar](https://github.com/zendesk/racecar) - A simple framework that integrates with Ruby on Rails to provide a seamless way to write, test, configure, and run Kafka consumers. It comes with sensible defaults and conventions.
 
- ### v1.0
+ * [Karafka](https://github.com/karafka/karafka) - A framework that simplifies the development of Apache Kafka based Ruby and Rails applications. Karafka provides higher abstraction layers, including Capistrano, Docker and Heroku support.
 
- API freeze. All new changes will be backwards compatible.
+ * [Phobos](https://github.com/klarna/phobos) - A micro framework and library for applications dealing with Apache Kafka. It wraps common behaviors needed by consumers and producers in an easy and convenient API.
 
  ## Why Create A New Library?
 
@@ -114,7 +114,7 @@ module Kafka
  class ReplicaNotAvailable < ProtocolError
  end
 
- #
+ # 25
  class UnknownMemberId < ProtocolError
  end
 
@@ -163,37 +163,40 @@ module Kafka
  end
 
  # 36
- class TopicAlreadyExists < Error
+ class TopicAlreadyExists < ProtocolError
  end
 
  # 37
- class InvalidPartitions < Error
+ class InvalidPartitions < ProtocolError
  end
 
  # 38
- class InvalidReplicationFactor < Error
+ class InvalidReplicationFactor < ProtocolError
  end
 
  # 39
- class InvalidReplicaAssignment < Error
+ class InvalidReplicaAssignment < ProtocolError
  end
 
  # 40
- class InvalidConfig < Error
+ class InvalidConfig < ProtocolError
  end
 
  # 41
- class NotController < Error
+ class NotController < ProtocolError
  end
 
  # 42
- class InvalidRequest < Error
+ class InvalidRequest < ProtocolError
  end
 
  # Raised when there's a network connection error.
  class ConnectionError < Error
  end
 
+ class NoSuchBroker < Error
+ end
+
  # Raised when a producer buffer has reached its maximum size.
  class BufferOverflow < Error
  end
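The reclassification above matters for error handling: the topic-management errors (codes 36–42) now inherit from `Kafka::ProtocolError` rather than `Kafka::Error`, and the new `Kafka::NoSuchBroker` gives callers a specific class to rescue. A minimal sketch of the effect, assuming ruby-kafka 0.4.1 is installed (the raise is simulated for illustration):

```ruby
require "kafka"

begin
  # Simulate a broker response carrying error code 36.
  raise Kafka::TopicAlreadyExists, "topic `greetings` already exists"
rescue Kafka::ProtocolError => e
  # Before this change, TopicAlreadyExists subclassed Error directly, so a
  # `rescue Kafka::ProtocolError` clause would not have caught it.
  puts "Protocol error: #{e.message}"
end
```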
@@ -100,7 +100,6 @@ module Kafka
  partition,
  partition_key,
  create_time,
- key.to_s.bytesize + value.to_s.bytesize
  )
 
  if partition.nil?
@@ -28,6 +28,9 @@ module Kafka
  SOCKET_TIMEOUT = 10
  CONNECT_TIMEOUT = 10
 
+ # Time after which an idle connection will be reopened.
+ IDLE_TIMEOUT = 60 * 5
+
  attr_reader :encoder
  attr_reader :decoder
 
@@ -88,13 +91,18 @@ module Kafka
 
  @instrumenter.instrument("request.connection", notification) do
  open unless open?
+ reopen if idle?
 
  @correlation_id += 1
 
  write_request(request, notification)
 
  response_class = request.response_class
- wait_for_response(response_class, notification) unless response_class.nil?
+ response = wait_for_response(response_class, notification) unless response_class.nil?
+
+ @last_request = Time.now
+
+ response
  end
  rescue Errno::EPIPE, Errno::ECONNRESET, Errno::ETIMEDOUT, EOFError => e
  close
@@ -118,6 +126,8 @@
 
  # Correlation id is initialized to zero and bumped for each request.
  @correlation_id = 0
+
+ @last_request = nil
  rescue Errno::ETIMEDOUT => e
  @logger.error "Timed out while trying to connect to #{self}: #{e}"
  raise ConnectionError, e
@@ -126,6 +136,15 @@
  raise ConnectionError, e
  end
 
+ def reopen
+ close
+ open
+ end
+
+ def idle?
+ @last_request && @last_request < Time.now - IDLE_TIMEOUT
+ end
+
  # Writes a request over the connection.
  #
  # @param request [#encode] the request that should be encoded and written.
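A standalone sketch of the idle check introduced above, using the same five-minute threshold. The real `Connection` tracks `@last_request` internally; this just illustrates the predicate:

```ruby
IDLE_TIMEOUT = 60 * 5

def idle?(last_request, now: Time.now)
  # A connection that has never sent a request is not considered idle;
  # otherwise it's idle once the last request is older than the timeout,
  # and will be closed and reopened before the next request is written.
  !last_request.nil? && last_request < now - IDLE_TIMEOUT
end

puts idle?(nil)               #=> false
puts idle?(Time.now - 60)     #=> false
puts idle?(Time.now - 6 * 60) #=> true
```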
@@ -4,9 +4,6 @@ require "kafka/fetch_operation"
 
  module Kafka
 
- # @note This code is still alpha level. Don't use this for anything important.
- # The API may also change without warning.
- #
  # A client that consumes messages from a Kafka cluster in coordination with
  # other clients.
  #
@@ -90,6 +87,11 @@ module Kafka
  nil
  end
 
+ # Stop the consumer.
+ #
+ # The consumer will finish any in-progress work and shut down.
+ #
+ # @return [nil]
  def stop
  @running = false
  end
@@ -280,6 +282,21 @@
  end
  end
 
+ # Move the consumer's position in a topic partition to the specified offset.
+ #
+ # Note that this has to be done prior to calling {#each_message} or {#each_batch}
+ # and only has an effect if the consumer is assigned the partition. Typically,
+ # you will want to do this in every consumer group member in order to make sure
+ # that the member that's assigned the partition knows where to start.
+ #
+ # @param topic [String]
+ # @param partition [Integer]
+ # @param offset [Integer]
+ # @return [nil]
+ def seek(topic, partition, offset)
+ @offset_manager.seek_to(topic, partition, offset)
+ end
+
  def commit_offsets
  @offset_manager.commit_offsets
  end
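A hedged usage sketch for the new `#seek` method (#386); the broker address, group id, and topic here are placeholders, not part of the diff:

```ruby
require "kafka"

kafka = Kafka.new(seed_brokers: ["localhost:9092"])
consumer = kafka.consumer(group_id: "my-consumer")
consumer.subscribe("greetings")

# Seeking must happen before the processing loop starts, and it only takes
# effect if this group member ends up assigned partition 0 of the topic.
consumer.seek("greetings", 0, 42)

consumer.each_message do |message|
  # The first message yielded from partition 0 should have offset 42.
  puts [message.partition, message.offset].inspect
end
```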
@@ -46,6 +46,7 @@ module Kafka
  retry
  rescue ConnectionError
  @logger.error "Connection error while trying to join group `#{@group_id}`; retrying..."
+ sleep 1
  @coordinator = nil
  retry
  end
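The added `sleep 1` keeps the join-group retry loop from hammering a flapping coordinator. The shape of the pattern, sketched generically (the raise is a stand-in for the real coordinator call):

```ruby
attempts = 0
begin
  attempts += 1
  raise "connection error" if attempts < 3 # stand-in for the coordinator call
rescue RuntimeError
  sleep 1 # pause briefly so retries don't spin in a tight loop
  retry
end
puts "joined after #{attempts} attempts"
```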
@@ -141,6 +141,7 @@ module Kafka
  buffer_size = event.payload.fetch(:buffer_size)
  max_buffer_size = event.payload.fetch(:max_buffer_size)
  buffer_fill_ratio = buffer_size.to_f / max_buffer_size.to_f
+ buffer_fill_percentage = buffer_fill_ratio * 100.0
 
  tags = {
  client: client,
@@ -156,6 +157,7 @@
 
  # This gets us the avg/max buffer fill ratio per producer.
  histogram("producer.buffer.fill_ratio", buffer_fill_ratio, tags: tags)
+ histogram("producer.buffer.fill_percentage", buffer_fill_percentage, tags: tags)
  end
 
  def buffer_overflow(event)
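The new metric is simply the existing fill ratio scaled to a 0–100 range. A quick arithmetic check:

```ruby
buffer_size = 250
max_buffer_size = 1000

buffer_fill_ratio = buffer_size.to_f / max_buffer_size.to_f
buffer_fill_percentage = buffer_fill_ratio * 100.0

puts buffer_fill_ratio      #=> 0.25
puts buffer_fill_percentage #=> 25.0
```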
@@ -36,8 +36,12 @@ module Kafka
  end
 
  def seek_to_default(topic, partition)
+ seek_to(topic, partition, -1)
+ end
+
+ def seek_to(topic, partition, offset)
  @processed_offsets[topic] ||= {}
- @processed_offsets[topic][partition] = -1
+ @processed_offsets[topic][partition] = offset
  end
 
  def next_offset_for(topic, partition)
@@ -1,11 +1,15 @@
  module Kafka
- PendingMessage = Struct.new(
- "PendingMessage",
- :value,
- :key,
- :topic,
- :partition,
- :partition_key,
- :create_time,
- :bytesize)
+ class PendingMessage
+ attr_reader :value, :key, :topic, :partition, :partition_key, :create_time, :bytesize
+
+ def initialize(value, key, topic, partition, partition_key, create_time)
+ @value = value
+ @key = key
+ @topic = topic
+ @partition = partition
+ @partition_key = partition_key
+ @create_time = create_time
+ @bytesize = key.to_s.bytesize + value.to_s.bytesize
+ end
+ end
  end
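With the Struct replaced by a plain class, `bytesize` is derived in the initializer instead of being computed by every caller. A small sketch of the new behavior; constructing a `PendingMessage` directly is unusual in application code and shown here only to illustrate the derived attribute:

```ruby
require "kafka"

message = Kafka::PendingMessage.new(
  "hello",     # value
  "greeting",  # key
  "greetings", # topic
  0,           # partition
  nil,         # partition_key
  Time.now     # create_time
)

puts message.bytesize #=> 13, i.e. "greeting".bytesize + "hello".bytesize
```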
@@ -185,13 +185,12 @@ module Kafka
  create_time = Time.now
 
  message = PendingMessage.new(
- value,
- key,
- topic,
- partition,
- partition_key,
+ value && value.to_s,
+ key && key.to_s,
+ topic.to_s,
+ partition && Integer(partition),
+ partition_key && partition_key.to_s,
  create_time,
- key.to_s.bytesize + value.to_s.bytesize
  )
 
  if buffer_size >= @max_buffer_size
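The coercion above normalizes arguments at the point the message is buffered: values and keys become strings, and `Integer()` rejects a malformed partition outright instead of silently truncating it the way `#to_i` would. A hedged illustration of the same pattern in plain Ruby:

```ruby
value = :hello
partition = "3"

puts(value && value.to_s)             #=> "hello"
puts(partition && Integer(partition)) #=> 3

begin
  # Unlike "not-a-partition".to_i (which returns 0), Integer() raises.
  Integer("not-a-partition")
rescue ArgumentError => e
  puts e.message
end
```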
@@ -1,4 +1,13 @@
  module Kafka
+
+ # The protocol layer of the library.
+ #
+ # The Kafka protocol (https://kafka.apache.org/protocol) defines a set of API
+ # requests, each with a well-known numeric API key, as well as a set of error
+ # codes with specific meanings.
+ #
+ # This module, and the classes contained in it, implement the client side of
+ # the protocol.
  module Protocol
  # The replica id of non-brokers is always -1.
  REPLICA_ID = -1
@@ -16,6 +25,7 @@ module Kafka
  SYNC_GROUP_API = 14
  SASL_HANDSHAKE_API = 17
 
+ # A mapping from numeric API keys to symbolic API names.
  APIS = {
  PRODUCE_API => :produce,
  FETCH_API => :fetch,
@@ -31,6 +41,7 @@ module Kafka
  SASL_HANDSHAKE_API => :sasl_handshake,
  }
 
+ # A mapping from numeric error codes to exception classes.
  ERRORS = {
  -1 => UnknownError,
  1 => OffsetOutOfRange,
@@ -72,6 +83,12 @@ module Kafka
  42 => InvalidRequest
  }
 
+ # Handles an error code by either doing nothing (if there was no error) or
+ # by raising an appropriate exception.
+ #
+ # @param error_code [Integer]
+ # @raise [ProtocolError]
+ # @return [nil]
  def self.handle_error(error_code)
  if error_code == 0
  # No errors, yay!
@@ -82,6 +99,10 @@ module Kafka
  end
  end
 
+ # Returns the symbolic name for an API key.
+ #
+ # @param api_key [Integer]
+ # @return [Symbol]
  def self.api_name(api_key)
  APIS.fetch(api_key, :unknown)
  end
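A sketch of the documented behavior, assuming ruby-kafka 0.4.1 and that error code 36 maps to `TopicAlreadyExists` in the `ERRORS` table (as the numbered comments above suggest): a zero error code is a no-op, a known non-zero code raises its mapped exception, and unknown API keys map to `:unknown`:

```ruby
require "kafka"

Kafka::Protocol.handle_error(0) # no error, returns nil

begin
  Kafka::Protocol.handle_error(36)
rescue Kafka::TopicAlreadyExists => e
  puts "error code 36 raised #{e.class}"
end

puts Kafka::Protocol.api_name(0)  #=> :produce
puts Kafka::Protocol.api_name(99) #=> :unknown
```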
@@ -123,7 +123,7 @@ module Kafka
  def find_broker(node_id)
  broker = @brokers.find {|broker| broker.node_id == node_id }
 
- raise Kafka::Error, "No broker with id #{node_id}" if broker.nil?
+ raise Kafka::NoSuchBroker, "No broker with id #{node_id}" if broker.nil?
 
  broker
  end
@@ -1,5 +1,3 @@
- require 'gssapi'
-
  module Kafka
  class SaslGssapiAuthenticator
  GSSAPI_IDENT = "GSSAPI"
@@ -11,6 +9,7 @@ module Kafka
  @principal = sasl_gssapi_principal
  @keytab = sasl_gssapi_keytab
 
+ load_gssapi
  initialize_gssapi_context
  end
 
@@ -59,6 +58,15 @@ module Kafka
  @decoder.bytes
  end
 
+ def load_gssapi
+ begin
+ require "gssapi"
+ rescue LoadError
+ @logger.error "In order to use GSSAPI authentication you need to install the `gssapi` gem."
+ raise
+ end
+ end
+
  def initialize_gssapi_context
  @logger.debug "GSSAPI: Initializing context with #{@connection.to_s}, principal #{@principal}"
 
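Making `gssapi` a lazily required, optional dependency means the gem loads even when GSSAPI is unused; the error only surfaces when the feature is exercised. The same lazy-require technique, sketched generically (`load_optional_dependency` is a hypothetical helper, not part of ruby-kafka):

```ruby
def load_optional_dependency(gem_name, feature:)
  require gem_name
rescue LoadError
  warn "In order to use #{feature} you need to install the `#{gem_name}` gem."
  raise
end

# Only raises if the gssapi gem is missing *and* the feature is used:
# load_optional_dependency("gssapi", feature: "GSSAPI authentication")
```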
@@ -10,7 +10,7 @@ require "active_support/subscriber"
  module Kafka
  # Reports operational metrics to a Statsd agent.
  #
- # require "kafka/statds"
+ # require "kafka/statsd"
  #
  # # Default is "ruby_kafka".
  # Kafka::Statsd.namespace = "custom-namespace"
@@ -122,6 +122,7 @@ module Kafka
  buffer_size = event.payload.fetch(:buffer_size)
  max_buffer_size = event.payload.fetch(:max_buffer_size)
  buffer_fill_ratio = buffer_size.to_f / max_buffer_size.to_f
+ buffer_fill_percentage = buffer_fill_ratio * 100.0
 
  # This gets us the write rate.
  increment("producer.#{client}.#{topic}.produce.messages")
@@ -133,6 +134,7 @@
 
  # This gets us the avg/max buffer fill ratio per producer.
  timing("producer.#{client}.buffer.fill_ratio", buffer_fill_ratio)
+ timing("producer.#{client}.buffer.fill_percentage", buffer_fill_percentage)
  end
 
  def buffer_overflow(event)
data/lib/kafka/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Kafka
- VERSION = "0.4.0"
+ VERSION = "0.4.1"
  end
data/ruby-kafka.gemspec CHANGED
@@ -25,8 +25,6 @@ Gem::Specification.new do |spec|
  spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]
 
- spec.add_runtime_dependency 'gssapi', '>=1.2.0'
-
  spec.add_development_dependency "bundler", ">= 1.9.5"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "rspec"
@@ -42,4 +40,5 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency "statsd-ruby"
  spec.add_development_dependency "ruby-prof"
  spec.add_development_dependency "timecop"
+ spec.add_development_dependency "gssapi", '>=1.2.0'
  end
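Since `gssapi` moved from a runtime to a development dependency, applications that authenticate via SASL/GSSAPI now have to declare the gem themselves. A Gemfile sketch (the version constraints are illustrative):

```ruby
# Gemfile
source "https://rubygems.org"

gem "ruby-kafka", "~> 0.4.1"
gem "gssapi", ">= 1.2.0" # only needed when using SASL/GSSAPI authentication
```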
metadata CHANGED
@@ -1,29 +1,15 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
- version: 0.4.0
+ version: 0.4.1
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2017-07-19 00:00:00.000000000 Z
+ date: 2017-08-25 00:00:00.000000000 Z
  dependencies:
- - !ruby/object:Gem::Dependency
- name: gssapi
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: 1.2.0
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: 1.2.0
  - !ruby/object:Gem::Dependency
  name: bundler
  requirement: !ruby/object:Gem::Requirement
@@ -234,6 +220,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: gssapi
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 1.2.0
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 1.2.0
  description: A client library for the Kafka distributed commit log.
  email:
  - daniel.schierbeck@gmail.com