ruby-kafka 0.3.18.beta2 → 0.4.0.beta1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: a315e2a5db26fa2430705e5dc25757593682703b
-   data.tar.gz: d80c0b9f184d4ec2da61139de39bf97177c7de4a
+   metadata.gz: 9ac9f20c365f90e85bd6df6000838fbc349b893b
+   data.tar.gz: fc16d9bf3784a21d1a82ec106b5b528bf57bfbe2
  SHA512:
-   metadata.gz: 63f090c16636aff10749d7e20628996207bf07dcd68da7fc58a76f49113972eb35e995b46af91d4b418692effefdc737e652750a583b6d7c3f066d5e797ff1e6
-   data.tar.gz: d5e813fbdf4d9663ca7e7718493f6b6b34523828eefd1697f69da2fb715fa1b533f84681be2fbd38ecd023c9b70bfb5085b04873764ab6f3cc040cc95c2f1ef0
+   metadata.gz: 22c77ea79b8e971916e1671253b709b207501a3e068d29e5629efea30733e9ab224748f068cc916c6472f7507c2e1073f7b0a3e9ee2cdbcafee21bd27e21f277
+   data.tar.gz: e5ecd931e2348ee989bcaf70387b30e64d9b24560231745b1c4d0e275cfcc0f65202cdf018b45bcc1678699ac05648bb4f0a5e5689eee030958823a864e73fb1
data/CHANGELOG.md CHANGED
@@ -4,6 +4,12 @@ Changes and additions to the library will be listed here.
 
  ## Unreleased
 
+ ## v0.4.0
+
+ - Support SASL authentication (#334 and #370)
+ - Allow loading SSL certificates from files (#371)
+ - Add Statsd metric reporting (#373)
+
  ## v0.3.17
 
  - Re-commit previously committed offsets periodically with an interval of half
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
    remote: .
    specs:
-     ruby-kafka (0.3.18.beta1)
+     ruby-kafka (0.3.18.beta2)
        gssapi (>= 1.2.0)
 
  GEM
@@ -58,6 +58,7 @@ GEM
      ruby-prof (0.15.9)
      slop (3.6.0)
      snappy (0.0.12)
+     statsd-ruby (1.4.0)
      thread_safe (0.3.5)
      timecop (0.8.0)
      tzinfo (1.2.2)
@@ -81,6 +82,7 @@ DEPENDENCIES
    ruby-kafka!
    ruby-prof
    snappy
+   statsd-ruby
    timecop
 
  RUBY VERSION
data/README.md CHANGED
@@ -31,7 +31,8 @@ Although parts of this library work with Kafka 0.8 – specifically, the Produce
  5. [Logging](#logging)
  6. [Instrumentation](#instrumentation)
  7. [Monitoring](#monitoring)
-    1. [Reporting Metrics to Datadog](#reporting-metrics-to-datadog)
+    1. [Reporting Metrics to Statsd](#reporting-metrics-to-statsd)
+    2. [Reporting Metrics to Datadog](#reporting-metrics-to-datadog)
  8. [Understanding Timeouts](#understanding-timeouts)
  9. [Security](#security)
     1. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
@@ -732,7 +733,25 @@ It is highly recommended that you monitor your Kafka client applications in prod
  * consumer processing errors, indicating exceptions are being raised in the processing code;
  * frequent consumer rebalances, which may indicate unstable network conditions or consumer configurations.
 
- You can quite easily build monitoring on top of the provided [instrumentation hooks](#instrumentation). In order to further help with monitoring, a prebuilt [Datadog](https://www.datadoghq.com/) reporter is included with ruby-kafka.
+ You can quite easily build monitoring on top of the provided [instrumentation hooks](#instrumentation). In order to further help with monitoring, prebuilt [Statsd](https://github.com/etsy/statsd) and [Datadog](https://www.datadoghq.com/) reporters are included with ruby-kafka.
+
+
+ #### Reporting Metrics to Statsd
+
+ The Statsd reporter is automatically enabled when the `kafka/statsd` library is required. You can optionally change the configuration.
+
+ ```ruby
+ require "kafka/statsd"
+
+ # Default is "ruby_kafka".
+ Kafka::Statsd.namespace = "custom-namespace"
+
+ # Default is "127.0.0.1".
+ Kafka::Statsd.host = "statsd.something.com"
+
+ # Default is 8125.
+ Kafka::Statsd.port = 1234
+ ```
 
 
  #### Reporting Metrics to Datadog
@@ -815,9 +834,9 @@ Typically, Kafka certificates come in the JKS format, which isn't supported by r
 
  #### Authentication using SASL
 
- Kafka has support for using SASL to authenticate clients. Currently only the GSSAPI mechanism is supported by ruby-kafka.
+ Kafka has support for using SASL to authenticate clients. Currently GSSAPI and PLAIN mechanisms are supported by ruby-kafka.
 
- In order to authenticate using SASL, set your principal and optionally your keytab when initializing the Kafka client:
+ In order to authenticate using GSSAPI, set your principal and optionally your keytab when initializing the Kafka client:
 
  ```ruby
  kafka = Kafka.new(
@@ -827,6 +846,19 @@ kafka = Kafka.new(
  )
  ```
 
+ In order to authenticate using PLAIN, you must set your username and password when initializing the Kafka client:
+
+ ```ruby
+ kafka = Kafka.new(
+   ssl_ca_cert: File.read('/etc/openssl/cert.pem'), # Optional but highly recommended
+   sasl_plain_username: 'username',
+   sasl_plain_password: 'password'
+   # ...
+ )
+ ```
+
+ **NOTE**: It is __highly__ recommended that you use SSL for encryption when using SASL/PLAIN.
+
  ## Design
 
  The library has been designed as a layered system, with each layer having a clear responsibility:
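
For orientation, here is a minimal sketch of GSSAPI client initialization using the `sasl_gssapi_principal` and `sasl_gssapi_keytab` options accepted by `Kafka.new` (both visible in the `client.rb` changes below); the principal, keytab path, and broker address are purely illustrative:

```ruby
require "kafka"

# Illustrative values; substitute your own principal, keytab, and brokers.
kafka = Kafka.new(
  seed_brokers: ["kafka1.example.com:9092"],
  sasl_gssapi_principal: "kafka/kafka.example.com@EXAMPLE.COM",
  sasl_gssapi_keytab: "/etc/keytabs/kafka.keytab"
)
```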
data/lib/kafka/client.rb CHANGED
@@ -34,6 +34,9 @@ module Kafka
    # @param ssl_ca_cert [String, Array<String>, nil] a PEM encoded CA cert, or an Array of
    #   PEM encoded CA certs, to use with an SSL connection.
    #
+   # @param ssl_ca_cert_file_path [String, nil] a path on the filesystem to a PEM encoded CA cert
+   #   to use with an SSL connection.
+   #
    # @param ssl_client_cert [String, nil] a PEM encoded client cert to use with an
    #   SSL connection. Must be used in combination with ssl_client_cert_key.
    #
@@ -46,13 +49,14 @@ module Kafka
    #
    # @return [Client]
    def initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil,
-                  ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil,
-                  sasl_gssapi_principal: nil, sasl_gssapi_keytab: nil)
+                  ssl_ca_cert_file_path: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil,
+                  sasl_gssapi_principal: nil, sasl_gssapi_keytab: nil,
+                  sasl_plain_authzid: '', sasl_plain_username: nil, sasl_plain_password: nil)
      @logger = logger || Logger.new(nil)
      @instrumenter = Instrumenter.new(client_id: client_id)
      @seed_brokers = normalize_seed_brokers(seed_brokers)
 
-     ssl_context = build_ssl_context(ssl_ca_cert, ssl_client_cert, ssl_client_cert_key)
+     ssl_context = build_ssl_context(ssl_ca_cert_file_path, ssl_ca_cert, ssl_client_cert, ssl_client_cert_key)
 
      @connection_builder = ConnectionBuilder.new(
        client_id: client_id,
@@ -62,7 +66,10 @@ module Kafka
        logger: @logger,
        instrumenter: @instrumenter,
        sasl_gssapi_principal: sasl_gssapi_principal,
-       sasl_gssapi_keytab: sasl_gssapi_keytab
+       sasl_gssapi_keytab: sasl_gssapi_keytab,
+       sasl_plain_authzid: sasl_plain_authzid,
+       sasl_plain_username: sasl_plain_username,
+       sasl_plain_password: sasl_plain_password
      )
 
      @cluster = initialize_cluster
@@ -464,8 +471,8 @@ module Kafka
      )
    end
 
-   def build_ssl_context(ca_cert, client_cert, client_cert_key)
-     return nil unless ca_cert || client_cert || client_cert_key
+   def build_ssl_context(ca_cert_file_path, ca_cert, client_cert, client_cert_key)
+     return nil unless ca_cert_file_path || ca_cert || client_cert || client_cert_key
 
      ssl_context = OpenSSL::SSL::SSLContext.new
 
@@ -480,11 +487,14 @@ module Kafka
        raise ArgumentError, "Kafka client initialized with `ssl_client_cert_key`, but no `ssl_client_cert`. Please provide both."
      end
 
-     if ca_cert
+     if ca_cert || ca_cert_file_path
        store = OpenSSL::X509::Store.new
        Array(ca_cert).each do |cert|
          store.add_cert(OpenSSL::X509::Certificate.new(cert))
        end
+       if ca_cert_file_path
+         store.add_file(ca_cert_file_path)
+       end
        ssl_context.cert_store = store
      end
 
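
The `ssl_ca_cert_file_path` option introduced above lets the CA certificate be loaded straight from disk via `OpenSSL::X509::Store#add_file`, instead of (or alongside) passing PEM data with `ssl_ca_cert`. A minimal sketch; the broker address and file path are illustrative:

```ruby
require "kafka"

kafka = Kafka.new(
  seed_brokers: ["kafka1.example.com:9093"],
  # Passing a path enables SSL and adds the CA cert to the trust store,
  # without reading the PEM into memory first.
  ssl_ca_cert_file_path: "/etc/openssl/ca.pem"
)
```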
data/lib/kafka/connection_builder.rb CHANGED
@@ -1,8 +1,9 @@
  require 'kafka/sasl_gssapi_authenticator'
+ require 'kafka/sasl_plain_authenticator'
 
  module Kafka
    class ConnectionBuilder
-     def initialize(client_id:, logger:, instrumenter:, connect_timeout:, socket_timeout:, ssl_context:, sasl_gssapi_principal:, sasl_gssapi_keytab:)
+     def initialize(client_id:, logger:, instrumenter:, connect_timeout:, socket_timeout:, ssl_context:, sasl_gssapi_principal:, sasl_gssapi_keytab:, sasl_plain_authzid:, sasl_plain_username:, sasl_plain_password:)
        @client_id = client_id
        @logger = logger
        @instrumenter = instrumenter
@@ -11,6 +12,9 @@ module Kafka
        @ssl_context = ssl_context
        @sasl_gssapi_principal = sasl_gssapi_principal
        @sasl_gssapi_keytab = sasl_gssapi_keytab
+       @sasl_plain_authzid = sasl_plain_authzid
+       @sasl_plain_username = sasl_plain_username
+       @sasl_plain_password = sasl_plain_password
      end
 
      def build_connection(host, port)
@@ -27,6 +31,8 @@ module Kafka
 
        if authenticate_using_sasl_gssapi?
          sasl_gssapi_authenticate(connection)
+       elsif authenticate_using_sasl_plain?
+         sasl_plain_authenticate(connection)
        end
 
        connection
@@ -45,8 +51,24 @@ module Kafka
        auth.authenticate!
      end
 
+     def sasl_plain_authenticate(connection)
+       auth = SaslPlainAuthenticator.new(
+         connection: connection,
+         logger: @logger,
+         authzid: @sasl_plain_authzid,
+         username: @sasl_plain_username,
+         password: @sasl_plain_password
+       )
+
+       auth.authenticate!
+     end
+
      def authenticate_using_sasl_gssapi?
        !@ssl_context && @sasl_gssapi_principal && !@sasl_gssapi_principal.empty?
      end
+
+     def authenticate_using_sasl_plain?
+       @sasl_plain_authzid && @sasl_plain_username && @sasl_plain_password
+     end
    end
  end
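
With the two predicates above, GSSAPI is only attempted when no SSL context is configured and a principal is present, while PLAIN is attempted whenever an authzid (defaulting to `''`), username, and password are all set. A minimal sketch of the internal wiring, using only the constructor arguments and `build_connection(host, port)` shown in this diff; every value is illustrative, and applications would normally go through `Kafka.new` instead:

```ruby
require "kafka"
require "logger"

builder = Kafka::ConnectionBuilder.new(
  client_id: "my-app",
  logger: Logger.new($stdout),
  instrumenter: Kafka::Instrumenter.new(client_id: "my-app"),
  connect_timeout: 10,
  socket_timeout: 30,
  ssl_context: nil,
  sasl_gssapi_principal: nil,
  sasl_gssapi_keytab: nil,
  sasl_plain_authzid: "",
  sasl_plain_username: "username",
  sasl_plain_password: "password"
)

# authenticate_using_sasl_plain? is true here, so the connection is
# authenticated with SASL/PLAIN right after it is built.
connection = builder.build_connection("kafka1.example.com", 9092)
```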
data/lib/kafka/consumer.rb CHANGED
@@ -258,13 +258,13 @@ module Kafka
          begin
            yield batch
          rescue => e
-           offset_range = batch.empty? ? "N/A" : [batch.first_offset, batch.last_offset].join("..")
+           offset_range = (batch.first_offset .. batch.last_offset)
            location = "#{batch.topic}/#{batch.partition} in offset range #{offset_range}"
            backtrace = e.backtrace.join("\n")
 
            @logger.error "Exception raised when processing #{location} -- #{e.class}: #{e}\n#{backtrace}"
 
-           raise
+           raise ProcessingError.new(batch.topic, batch.partition, offset_range)
          end
        end
 
@@ -308,6 +308,9 @@ module Kafka
          @logger.error "Leader not available; waiting 1s before retrying"
          @cluster.mark_as_stale!
          sleep 1
+       rescue SignalException => e
+         @logger.warn "Received signal #{e.message}, shutting down"
+         @running = false
        end
      end
    ensure
@@ -320,7 +323,7 @@ module Kafka
 
    def make_final_offsets_commit!(attempts = 3)
      @offset_manager.commit_offsets
-   rescue ConnectionError
+   rescue ConnectionError, EOFError
      # It's important to make sure final offsets commit is done
      # As otherwise messages that have been processed after last auto-commit
      # will be processed again and that may be huge amount of messages
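
Since processing failures now surface as `ProcessingError` instead of the bare exception, calling code can catch it around the consumer loop and log or alert with the failing topic, partition, and offset range. A minimal sketch, assuming the standard `consumer.each_batch` API and that `Kafka::ProcessingError` exposes the values it is constructed with above; `consumer`, `process`, and `logger` stand in for application objects:

```ruby
begin
  consumer.each_batch do |batch|
    process(batch) # application-defined; may raise
  end
rescue Kafka::ProcessingError => e
  # Assumption: readers named topic/partition/offset mirror the constructor arguments above.
  logger.error "Processing failed for #{e.topic}/#{e.partition} at #{e.offset}"
  raise
end
```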
data/lib/kafka/protocol/sasl_handshake_request.rb CHANGED
@@ -6,7 +6,7 @@ module Kafka
 
    class SaslHandshakeRequest
 
-     SUPPORTED_MECHANISMS = %w(GSSAPI)
+     SUPPORTED_MECHANISMS = %w(GSSAPI PLAIN)
 
      def initialize(mechanism)
        unless SUPPORTED_MECHANISMS.include?(mechanism)
data/lib/kafka/sasl_plain_authenticator.rb ADDED
@@ -0,0 +1,37 @@
+ module Kafka
+   class SaslPlainAuthenticator
+     PLAIN_IDENT = "PLAIN"
+
+     def initialize(connection:, logger:, authzid:, username:, password:)
+       @connection = connection
+       @logger = logger
+       @authzid = authzid
+       @username = username
+       @password = password
+     end
+
+     def authenticate!
+       response = @connection.send_request(Kafka::Protocol::SaslHandshakeRequest.new(PLAIN_IDENT))
+
+       @encoder = @connection.encoder
+       @decoder = @connection.decoder
+
+       unless response.error_code == 0 && response.enabled_mechanisms.include?(PLAIN_IDENT)
+         raise Kafka::Error, "#{PLAIN_IDENT} is not supported."
+       end
+
+       # SASL PLAIN
+       msg = [@authzid.to_s,
+              @username.to_s,
+              @password.to_s].join("\000").force_encoding('utf-8')
+       @encoder.write_bytes(msg)
+       begin
+         msg = @decoder.bytes
+         raise Kafka::Error, 'SASL PLAIN authentication failed: unknown error' unless msg
+       rescue Errno::ETIMEDOUT, EOFError => e
+         raise Kafka::Error, "SASL PLAIN authentication failed: #{e.message}"
+       end
+       @logger.debug 'SASL PLAIN authentication successful.'
+     end
+   end
+ end
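
The NUL-joined message built in `authenticate!` is the standard SASL/PLAIN initial response from RFC 4616: authorization identity, username, and password separated by NUL bytes. A quick illustration with made-up credentials:

```ruby
["", "alice", "secret"].join("\000")
# => "\u0000alice\u0000secret" (empty authzid, then the username and password)
```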
data/lib/kafka/statsd.rb ADDED
@@ -0,0 +1,209 @@
+ begin
+   require "statsd"
+ rescue LoadError
+   $stderr.puts "In order to report Kafka client metrics to Statsd you need to install the `statsd-ruby` gem."
+   raise
+ end
+
+ require "active_support/subscriber"
+
+ module Kafka
+   # Reports operational metrics to a Statsd agent.
+   #
+   #     require "kafka/statsd"
+   #
+   #     # Default is "ruby_kafka".
+   #     Kafka::Statsd.namespace = "custom-namespace"
+   #
+   #     # Default is "127.0.0.1".
+   #     Kafka::Statsd.host = "statsd.something.com"
+   #
+   #     # Default is 8125.
+   #     Kafka::Statsd.port = 1234
+   #
+   # Once the file has been required, no further configuration is needed – all operational
+   # metrics are automatically emitted.
+   module Statsd
+     DEFAULT_NAMESPACE = "ruby_kafka"
+     DEFAULT_HOST = '127.0.0.1'
+     DEFAULT_PORT = 8125
+
+     def self.statsd
+       @statsd ||= ::Statsd.new(DEFAULT_HOST, DEFAULT_PORT).tap{ |sd| sd.namespace = DEFAULT_NAMESPACE }
+     end
+
+     def self.host=(host)
+       statsd.host = host
+       statsd.connect if statsd.respond_to?(:connect)
+     end
+
+     def self.port=(port)
+       statsd.port = port
+       statsd.connect if statsd.respond_to?(:connect)
+     end
+
+     def self.namespace=(namespace)
+       statsd.namespace = namespace
+     end
+
+     class StatsdSubscriber < ActiveSupport::Subscriber
+       private
+
+       %w[increment count timing gauge].each do |type|
+         define_method(type) do |*args|
+           Kafka::Statsd.statsd.send(type, *args)
+         end
+       end
+     end
+
+     class ConnectionSubscriber < StatsdSubscriber
+       def request(event)
+         client = event.payload.fetch(:client_id)
+         api = event.payload.fetch(:api, "unknown")
+         request_size = event.payload.fetch(:request_size, 0)
+         response_size = event.payload.fetch(:response_size, 0)
+         broker = event.payload.fetch(:broker_host)
+
+         timing("api.#{client}.#{api}.#{broker}.latency", event.duration)
+         increment("api.#{client}.#{api}.#{broker}.calls")
+
+         timing("api.#{client}.#{api}.#{broker}.request_size", request_size)
+         timing("api.#{client}.#{api}.#{broker}.response_size", response_size)
+
+         if event.payload.key?(:exception)
+           increment("api.#{client}.#{api}.#{broker}.errors")
+         end
+       end
+
+       attach_to "connection.kafka"
+     end
+
+     class ConsumerSubscriber < StatsdSubscriber
+       def process_message(event)
+         lag = event.payload.fetch(:offset_lag)
+         client = event.payload.fetch(:client_id)
+         group_id = event.payload.fetch(:group_id)
+         topic = event.payload.fetch(:topic)
+         partition = event.payload.fetch(:partition)
+
+         if event.payload.key?(:exception)
+           increment("consumer.#{client}.#{group_id}.#{topic}.#{partition}.process_message.errors")
+         else
+           timing("consumer.#{client}.#{group_id}.#{topic}.#{partition}.process_message.latency", event.duration)
+           increment("consumer.#{client}.#{group_id}.#{topic}.#{partition}.messages")
+         end
+
+         gauge("consumer.#{client}.#{group_id}.#{topic}.#{partition}.lag", lag)
+       end
+
+       def process_batch(event)
+         messages = event.payload.fetch(:message_count)
+         client = event.payload.fetch(:client_id)
+         group_id = event.payload.fetch(:group_id)
+         topic = event.payload.fetch(:topic)
+         partition = event.payload.fetch(:partition)
+
+         if event.payload.key?(:exception)
+           increment("consumer.#{client}.#{group_id}.#{topic}.#{partition}.process_batch.errors")
+         else
+           timing("consumer.#{client}.#{group_id}.#{topic}.#{partition}.process_batch.latency", event.duration)
+           count("consumer.#{client}.#{group_id}.#{topic}.#{partition}.messages", messages)
+         end
+       end
+
+       attach_to "consumer.kafka"
+     end
+
+     class ProducerSubscriber < StatsdSubscriber
+       def produce_message(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+         message_size = event.payload.fetch(:message_size)
+         buffer_size = event.payload.fetch(:buffer_size)
+         max_buffer_size = event.payload.fetch(:max_buffer_size)
+         buffer_fill_ratio = buffer_size.to_f / max_buffer_size.to_f
+
+         # This gets us the write rate.
+         increment("producer.#{client}.#{topic}.produce.messages")
+
+         timing("producer.#{client}.#{topic}.produce.message_size", message_size)
+
+         # This gets us the avg/max buffer size per producer.
+         timing("producer.#{client}.buffer.size", buffer_size)
+
+         # This gets us the avg/max buffer fill ratio per producer.
+         timing("producer.#{client}.buffer.fill_ratio", buffer_fill_ratio)
+       end
+
+       def buffer_overflow(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+
+         increment("producer.#{client}.#{topic}.produce.errors")
+       end
+
+       def deliver_messages(event)
+         client = event.payload.fetch(:client_id)
+         message_count = event.payload.fetch(:delivered_message_count)
+         attempts = event.payload.fetch(:attempts)
+
+         if event.payload.key?(:exception)
+           increment("producer.#{client}.deliver.errors")
+         end
+
+         timing("producer.#{client}.deliver.latency", event.duration)
+
+         # Messages delivered to Kafka:
+         count("producer.#{client}.deliver.messages", message_count)
+
+         # Number of attempts to deliver messages:
+         timing("producer.#{client}.deliver.attempts", attempts)
+       end
+
+       def ack_message(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+
+         # Number of messages ACK'd for the topic.
+         increment("producer.#{client}.#{topic}.ack.messages")
+
+         # Histogram of delay between a message being produced and it being ACK'd.
+         timing("producer.#{client}.#{topic}.ack.delay", event.payload.fetch(:delay))
+       end
+
+       def topic_error(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+
+         increment("producer.#{client}.#{topic}.ack.errors")
+       end
+
+       attach_to "producer.kafka"
+     end
+
+     class AsyncProducerSubscriber < StatsdSubscriber
+       def enqueue_message(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+         queue_size = event.payload.fetch(:queue_size)
+         max_queue_size = event.payload.fetch(:max_queue_size)
+         queue_fill_ratio = queue_size.to_f / max_queue_size.to_f
+
+         # This gets us the avg/max queue size per producer.
+         timing("async_producer.#{client}.#{topic}.queue.size", queue_size)
+
+         # This gets us the avg/max queue fill ratio per producer.
+         timing("async_producer.#{client}.#{topic}.queue.fill_ratio", queue_fill_ratio)
+       end
+
+       def buffer_overflow(event)
+         client = event.payload.fetch(:client_id)
+         topic = event.payload.fetch(:topic)
+
+         increment("async_producer.#{client}.#{topic}.produce.errors")
+       end
+
+       attach_to "async_producer.kafka"
+     end
+   end
+ end
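
All of the subscribers above prefix their keys with the configured namespace, so a fully-qualified metric follows a `<namespace>.<category>.<client_id>....` pattern. A minimal sketch of enabling the reporter alongside a producer; the broker, client id, and topic are illustrative, and the metric name in the comment assumes the default `ruby_kafka` namespace:

```ruby
require "kafka"
require "kafka/statsd"

kafka = Kafka.new(seed_brokers: ["kafka1.example.com:9092"], client_id: "my-app")
producer = kafka.producer

producer.produce("hello", topic: "greetings")
producer.deliver_messages
# ProducerSubscriber#produce_message increments
# "ruby_kafka.producer.my-app.greetings.produce.messages" for each produced message.
producer.shutdown
```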
data/lib/kafka/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Kafka
-   VERSION = "0.3.18.beta2"
+   VERSION = "0.4.0.beta1"
  end
data/ruby-kafka.gemspec CHANGED
@@ -39,6 +39,7 @@ Gem::Specification.new do |spec|
    spec.add_development_dependency "colored"
    spec.add_development_dependency "rspec_junit_formatter", "0.2.2"
    spec.add_development_dependency "dogstatsd-ruby", ">= 2.0.0"
+   spec.add_development_dependency "statsd-ruby"
    spec.add_development_dependency "ruby-prof"
    spec.add_development_dependency "timecop"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
-   version: 0.3.18.beta2
+   version: 0.4.0.beta1
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2017-06-29 00:00:00.000000000 Z
+ date: 2017-07-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: gssapi
@@ -192,6 +192,20 @@ dependencies:
    - - ">="
      - !ruby/object:Gem::Version
        version: 2.0.0
+ - !ruby/object:Gem::Dependency
+   name: statsd-ruby
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: ruby-prof
    requirement: !ruby/object:Gem::Requirement
@@ -309,9 +323,11 @@ files:
  - lib/kafka/protocol/topic_metadata_request.rb
  - lib/kafka/round_robin_assignment_strategy.rb
  - lib/kafka/sasl_gssapi_authenticator.rb
+ - lib/kafka/sasl_plain_authenticator.rb
  - lib/kafka/snappy_codec.rb
  - lib/kafka/socket_with_timeout.rb
  - lib/kafka/ssl_socket_with_timeout.rb
+ - lib/kafka/statsd.rb
  - lib/kafka/version.rb
  - lib/ruby-kafka.rb
  - performance/profile.rb