ruby-kafka 0.1.4 → 0.1.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 19434a542527d7e7ccc4b2c5d768f42a6f6e6837
-  data.tar.gz: 23392b9a3567d5ef80acf3a31ed6f7d97e4c0648
+  metadata.gz: add61b5ca7d42ab4a606bf411d78d9fb8baa0224
+  data.tar.gz: c8704b6429bed2e7e1159c5fc32e03db9476717f
 SHA512:
-  metadata.gz: 6acc0c5c6b58a6cd6e44573d26d86e1c1eb6d1222cb37f81d820514cc8bb0cfd9d8821e05aff8e46b0172247cc4803dcffbffe32f7cde32274cdea4f1fd3941e
-  data.tar.gz: f4d57945d640265d13c75efaa91c3f3ba182b5d33f0c33adadbea8d6f41a9b0d50d215a6841e3dc2f9431585a05193879e8185044ad4c35b015ac3ee8e05c71c
+  metadata.gz: 1669663bd73bee8b0ac73d0aefe93b8219a996f1c482ad6e7867c0b46ce5d3f91857acb21e418880d7754b67d687ff785596383b31eff7793f1211af13c943d8
+  data.tar.gz: f1de6fec8420801ca562d99925da270e61b955495a486117cc1659ccafe3c431fe8872000ebb4346af69ea8306ff4cceb237d047a957f1bebce3d8e48a87ef78
data/README.md CHANGED
@@ -26,57 +26,122 @@ Or install it yourself as:
 
 Please see the [documentation site](http://www.rubydoc.info/gems/ruby-kafka) for detailed documentation on the latest release.
 
-An example of a fairly simple Kafka producer:
+### Producing Messages to Kafka
+
+A client must be initialized with at least one Kafka broker. Each client keeps a separate pool of broker connections. Don't use the same client from more than one thread.
 
 ```ruby
 require "kafka"
 
-# A client must be initialized with at least one Kafka broker. Each client keeps
-# a separate pool of broker connections. Don't use the same client from more than
-# one thread.
-kafka = Kafka.new(
-  seed_brokers: ["kafka1:9092", "kafka2:9092"],
-  client_id: "my-app",
-  logger: Logger.new($stderr),
-)
-
-# A producer buffers messages and sends them to the broker that is the leader of
-# the partition a given message is being produced to.
+kafka = Kafka.new(seed_brokers: ["kafka1:9092", "kafka2:9092"])
+```
+
+A producer buffers messages and sends them to the broker that is the leader of the partition a given message is assigned to.
+
+```ruby
 producer = kafka.get_producer
+```
+
+`produce` will buffer the message in the producer but will _not_ actually send it to the Kafka cluster.
 
-# `produce` will buffer the message in the producer.
+```ruby
 producer.produce("hello1", topic: "test-messages")
+```
+
+It's possible to specify a message key.
 
-# It's possible to specify a message key:
+```ruby
 producer.produce("hello2", key: "x", topic: "test-messages")
+```
+
+If you need to control which partition a message should be assigned to, you can pass in the `partition` parameter.
 
-# If you need to control which partition a message should be written to, you
-# can pass in the `partition` parameter:
+```ruby
 producer.produce("hello3", topic: "test-messages", partition: 1)
+```
 
-# If you don't know exactly how many partitions are in the topic, or you'd
-# rather have some level of indirection, you can pass in `partition_key`.
-# Two messages with the same partition key will always be written to the
-# same partition.
+If you don't know exactly how many partitions are in the topic, or you'd rather have some level of indirection, you can pass in `partition_key`. Two messages with the same partition key will always be assigned to the same partition.
+
+```ruby
 producer.produce("hello4", topic: "test-messages", partition_key: "yo")
+```
+
+`deliver_messages` will send the buffered messages to the cluster. Since messages may be destined for different partitions, this could involve writing to more than one Kafka broker. Note that a failure to send all buffered messages after the configured number of retries will result in `Kafka::FailedToSendMessages` being raised. This can be rescued and ignored; the messages will be kept in the buffer until the next attempt.
 
-# `send_messages` will send the buffered messages to the cluster. Since messages
-# may be destined for different partitions, this could involve writing to more
-# than one Kafka broker.
-producer.send_messages
+```ruby
+producer.deliver_messages
 ```
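
A minimal sketch of the rescue-and-retry-later pattern the new paragraph describes, using only the API shown in this README:

```ruby
begin
  producer.deliver_messages
rescue Kafka::FailedToSendMessages
  # The failed messages are still in the buffer; the next call to
  # `deliver_messages` will attempt to send them again.
end
```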
 
 Read the docs for [Kafka::Producer](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Producer) for more details.
 
+### Partitioning
+
+Kafka topics are partitioned, with messages being assigned to a partition by the client. This allows a great deal of flexibility for users. This section describes several strategies for partitioning and how they impact performance, data locality, etc.
+
+
+#### Load Balanced Partitioning
+
+When optimizing for efficiency, we either distribute messages as evenly as possible to all partitions, or make sure each producer always writes to a single partition. The former ensures an even load for downstream consumers; the latter ensures the highest producer performance, since message batching is done per partition.
+
+If no explicit partition is specified, the producer will look to the partition key or the message key for a value that can be used to deterministically assign the message to a partition. If there is a large number of different keys, the resulting distribution will be pretty even. If no keys are passed, the producer will randomly assign a partition. Random partitioning can be achieved even if you use message keys by passing a random partition key, e.g. `partition_key: rand(100)`.
+
+If you wish to have the producer write all messages to a single partition, simply generate a random value and re-use that as the partition key:
+
+```ruby
+partition_key = rand(100)
+
+producer.produce(msg1, topic: "messages", partition_key: partition_key)
+producer.produce(msg2, topic: "messages", partition_key: partition_key)
+
+# ...
+```
+
+You can also base the partition key on some property of the producer, for example the host name.
+
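A hedged sketch of that idea; `Socket.gethostname` is just one possible producer property:

```ruby
require "socket"

# Every message produced from this host ends up in the same partition.
producer.produce(msg, topic: "messages", partition_key: Socket.gethostname)
```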
+#### Semantic Partitioning
+
+By assigning messages to a partition based on some property of the message, e.g. making sure all events tracked in a user session are assigned to the same partition, downstream consumers can make simplifying assumptions about data locality. In this example, a consumer can keep process-local state pertaining to a user session, knowing that all events for the session will be read from a single partition. This is also called _semantic partitioning_, since the partition assignment is part of the application behavior.
+
+Typically it's sufficient to simply pass a partition key in order to guarantee that a set of messages will be assigned to the same partition, e.g.
+
+```ruby
+# All messages with the same `session_id` will be assigned to the same partition.
+producer.produce(event, topic: "user-events", partition_key: session_id)
+```
+
+However, sometimes it's necessary to select a specific partition. When doing this, make sure that you don't pick a partition number outside the range of partitions for the topic:
+
+```ruby
+partitions = kafka.partitions_for("events")
+
+# Make sure that we don't exceed the partition count!
+partition = some_number % partitions
+
+producer.produce(event, topic: "events", partition: partition)
+```
+
+#### Compatibility with Other Clients
+
+There's no standardized way to assign messages to partitions across different Kafka client implementations. If you have a heterogeneous set of clients producing messages to the same topics, it may be important to ensure a consistent partitioning scheme. This library doesn't try to implement all schemes, so you'll have to figure out which scheme the other client is using and replicate it. An example:
+
+```ruby
+partitions = kafka.partitions_for("events")
+
+# Insert your custom partitioning scheme here:
+partition = PartitioningScheme.assign(partitions, event)
+
+producer.produce(event, topic: "events", partition: partition)
+```
+
 ### Buffering and Error Handling
 
 The producer is designed for resilience in the face of temporary network errors, Kafka broker failovers, and other issues that prevent the client from writing messages to the destination topics. It does this by employing local, in-memory buffers. Only when messages are acknowledged by a Kafka broker will they be removed from the buffer.
 
-Typically, you'd configure the producer to retry failed attempts at sending messages, but sometimes all retries are exhausted. In that case, `Kafka::FailedToSendMessages` is raised from `Kafka::Producer#send_messages`. If you wish to have your application be resilient to this happening (e.g. if you're logging to Kafka from a web application) you can rescue this exception. The failed messages are still retained in the buffer, so a subsequent call to `#send_messages` will still attempt to send them.
+Typically, you'd configure the producer to retry failed attempts at sending messages, but sometimes all retries are exhausted. In that case, `Kafka::FailedToSendMessages` is raised from `Kafka::Producer#deliver_messages`. If you wish to have your application be resilient to this happening (e.g. if you're logging to Kafka from a web application) you can rescue this exception. The failed messages are still retained in the buffer, so a subsequent call to `#deliver_messages` will still attempt to send them.
 
 Note that there's a maximum buffer size; pass in a different value for `max_buffer_size` when calling `#get_producer` in order to configure this.
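A minimal sketch of that configuration; the keyword argument name `max_buffer_size` is taken from the paragraph above, and the value is arbitrary:

```ruby
# Allow up to 10,000 buffered messages before BufferOverflow is raised.
producer = kafka.get_producer(max_buffer_size: 10_000)
```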
 
79
- A final note on buffers: local buffers give resilience against broker and network failures, and allow higher throughput due to message batching, but they also trade off consistency guarantees for higher availibility and resilience. If your local process dies while messages are buffered, those messages will be lost. If you require high levels of consistency, you should call `#send_messages` immediately after `#produce`.
144
+ A final note on buffers: local buffers give resilience against broker and network failures, and allow higher throughput due to message batching, but they also trade off consistency guarantees for higher availibility and resilience. If your local process dies while messages are buffered, those messages will be lost. If you require high levels of consistency, you should call `#deliver_messages` immediately after `#produce`.
80
145
 
81
146
  ### Understanding Timeouts
82
147
 
@@ -92,7 +157,7 @@ It's important to understand how timeouts work if you have a latency sensitive a
 * `ack_timeout` is a timeout executed by a broker when the client is sending messages to it. It defines the number of seconds the broker should wait for replicas to acknowledge the write before responding to the client with an error. As such, it relates to the `required_acks` setting. It should be set lower than `socket_timeout`.
 * `retry_backoff` configures the number of seconds to wait after a failed attempt to send messages to a Kafka broker before retrying. The `max_retries` setting defines the maximum number of retries to attempt, and so the total duration could be up to `max_retries * retry_backoff` seconds. The timeout can be arbitrarily long, and shouldn't be too short: if a broker goes down its partitions will be handed off to another broker, and that can take tens of seconds.
 
-When sending many messages, it's likely that the client needs to send some messages to each broker in the cluster. Given `n` brokers in the cluster, the total wait time when calling `Kafka::Producer#send_messages` can be up to
+When sending many messages, it's likely that the client needs to send some messages to each broker in the cluster. Given `n` brokers in the cluster, the total wait time when calling `Kafka::Producer#deliver_messages` can be up to
 
     n * (connect_timeout + socket_timeout + retry_backoff) * max_retries
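To make that concrete with hypothetical settings: given 3 brokers, `connect_timeout = 10`, `socket_timeout = 10`, `retry_backoff = 1`, and `max_retries = 2`, the worst case is 3 * (10 + 10 + 1) * 2 = 126 seconds.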
 
@@ -30,11 +30,11 @@ begin
     producer.produce(line, topic: topic)
 
     # Send messages for every 10 lines.
-    producer.send_messages if index % 10 == 0
+    producer.deliver_messages if index % 10 == 0
   end
 ensure
   # Make sure to send any remaining messages.
-  producer.send_messages
+  producer.deliver_messages
 
   producer.shutdown
 end
@@ -31,9 +31,8 @@ module Kafka
     # @return [Kafka::Protocol::MetadataResponse]
     def fetch_metadata(**options)
       request = Protocol::TopicMetadataRequest.new(**options)
-      response_class = Protocol::MetadataResponse
 
-      @connection.send_request(request, response_class)
+      @connection.send_request(request)
     end
 
     # Fetches messages from a specified topic and partition.
@@ -42,9 +41,8 @@ module Kafka
     # @return [Kafka::Protocol::FetchResponse]
     def fetch_messages(**options)
       request = Protocol::FetchRequest.new(**options)
-      response_class = Protocol::FetchResponse
 
-      @connection.send_request(request, response_class)
+      @connection.send_request(request)
     end
 
     # Lists the offset of the specified topics and partitions.
@@ -53,9 +51,8 @@ module Kafka
     # @return [Kafka::Protocol::ListOffsetResponse]
     def list_offsets(**options)
       request = Protocol::ListOffsetRequest.new(**options)
-      response_class = Protocol::ListOffsetResponse
 
-      @connection.send_request(request, response_class)
+      @connection.send_request(request)
     end
 
     # Produces a set of messages to the broker.
@@ -64,9 +61,8 @@ module Kafka
     # @return [Kafka::Protocol::ProduceResponse]
     def produce(**options)
       request = Protocol::ProduceRequest.new(**options)
-      response_class = request.requires_acks? ? Protocol::ProduceResponse : nil
 
-      @connection.send_request(request, response_class)
+      @connection.send_request(request)
     end
   end
 end
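The pattern behind these call-site changes: each request object now exposes the class that can decode its response (see the `response_class` additions to the protocol classes further down), so `send_request` needs only the request. An illustrative sketch under that assumption, not the library's actual classes:

```ruby
# A request that produces no response returns nil from #response_class,
# which send_request treats as "fire and forget".
class ExampleRequest
  def api_key
    0
  end

  def response_class
    ExampleResponse
  end

  def encode(encoder)
    # Write the request fields using the encoder...
  end
end
```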
@@ -132,6 +132,14 @@ module Kafka
       @cluster.topics
     end
 
+    # Counts the number of partitions in a topic.
+    #
+    # @param topic [String]
+    # @return [Integer] the number of partitions in the topic.
+    def partitions_for(topic)
+      @cluster.partitions_for(topic).count
+    end
+
     def close
       @cluster.disconnect
     end
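Usage of the new method might look like this (the return value is illustrative). Note that the `Cluster#partitions_for` change in the next hunk registers the topic as a metadata target first, so this also works for topics the client hasn't produced to yet:

```ruby
kafka = Kafka.new(seed_brokers: ["kafka1:9092"])

kafka.partitions_for("events") #=> 8 (illustrative)
```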
@@ -67,6 +67,7 @@ module Kafka
     end
 
     def partitions_for(topic)
+      add_target_topics([topic])
       cluster_info.partitions_for(topic)
     end
 
@@ -15,7 +15,7 @@ module Kafka
   #
   # ## Instrumentation
   #
-  # Connections emit a `request.kafka` notification on each request. The following
+  # Connections emit a `request.connection.kafka` notification on each request. The following
   # keys will be found in the payload:
   #
   # * `:api` — the name of the API being invoked.
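A hedged example of hooking into this notification, assuming `Kafka::Instrumentation` delegates to ActiveSupport::Notifications; the payload keys are the ones documented above:

```ruby
require "active_support/notifications"

ActiveSupport::Notifications.subscribe("request.connection.kafka") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  puts "#{event.payload[:api]} took #{event.duration.round(1)}ms"
end
```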
@@ -68,23 +68,26 @@ module Kafka
 
     # Sends a request over the connection.
     #
-    # @param request [#encode] the request that should be encoded and written.
-    # @param response_class [#decode] an object that can decode the response.
+    # @param request [#encode, #response_class] the request that should be
+    #   encoded and written.
     #
-    # @return [Object] the response that was decoded by `response_class`.
-    def send_request(request, response_class)
-      Instrumentation.instrument("request.kafka") do |notification|
+    # @return [Object] the response.
+    def send_request(request)
+      # Default notification payload.
+      notification = {
+        api: Protocol.api_name(request.api_key),
+        request_size: 0,
+        response_size: 0,
+      }
+
+      Instrumentation.instrument("request.connection.kafka", notification) do
         open unless open?
 
         @correlation_id += 1
 
-        # Look up the API name.
-        notification[:api] = Protocol.api_name(request.api_key)
-
-        # We may not read a response, in which case the size is zero.
-        notification[:response_size] = 0
-
         write_request(request, notification)
+
+        response_class = request.response_class
         wait_for_response(response_class, notification) unless response_class.nil?
       end
     rescue Errno::EPIPE, Errno::ECONNRESET, Errno::ETIMEDOUT, EOFError => e
@@ -5,7 +5,7 @@ module Kafka
   #
   # ## Instrumentation
   #
-  # When executing the operation, an `append_message_set.kafka` notification will be
+  # When executing the operation, an `ack_messages.producer.kafka` notification will be
   # emitted for each message set that was successfully appended to a topic partition.
   # The following keys will be found in the payload:
   #
@@ -14,10 +14,13 @@ module Kafka
   # * `:offset` — the offset of the first message in the message set.
   # * `:message_count` — the number of messages that were appended.
   #
-  # If there was an error appending the message set, the key `:exception` will be set
-  # in the payload. In that case, the message set will most likely not have been
-  # appended and will possibly be retried later. Check this key before reporting the
-  # operation as successful.
+  # In addition to these notifications, a `send_messages.producer.kafka` notification will
+  # be emitted after the operation completes, regardless of whether it succeeds. This
+  # notification will have the following keys:
+  #
+  # * `message_count` – the total number of messages that the operation tried to
+  #   send. Note that not all messages may get delivered.
+  # * `sent_message_count` – the number of messages that were successfully sent.
   #
   class ProduceOperation
     def initialize(cluster:, buffer:, required_acks:, ack_timeout:, logger:)
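A sketch of consuming the per-partition `ack_messages.producer.kafka` notification, under the same ActiveSupport::Notifications assumption as above:

```ruby
require "active_support/notifications"

ActiveSupport::Notifications.subscribe("ack_messages.producer.kafka") do |*args|
  payload = ActiveSupport::Notifications::Event.new(*args).payload

  puts "Acked #{payload[:message_count]} message(s) for " \
       "#{payload[:topic]}/#{payload[:partition]} at offset #{payload[:offset]}"
end
```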
@@ -29,6 +32,22 @@ module Kafka
     end
 
     def execute
+      Instrumentation.instrument("send_messages.producer.kafka") do |notification|
+        message_count = @buffer.size
+
+        notification[:message_count] = message_count
+
+        begin
+          send_buffered_messages
+        ensure
+          notification[:sent_message_count] = message_count - @buffer.size
+        end
+      end
+    end
+
+    private
+
+    def send_buffered_messages
       messages_for_broker = {}
 
       @buffer.each do |topic, partition, messages|
@@ -68,8 +87,6 @@ module Kafka
       end
     end
 
-    private
-
     def handle_response(response)
       response.each_partition do |topic_info, partition_info|
         topic = topic_info.topic
@@ -78,16 +95,14 @@ module Kafka
           message_count = @buffer.message_count_for_partition(topic: topic, partition: partition)
 
           begin
-            payload = {
+            Protocol.handle_error(partition_info.error_code)
+
+            Instrumentation.instrument("ack_messages.producer.kafka", {
               topic: topic,
               partition: partition,
               offset: offset,
               message_count: message_count,
-            }
-
-            Instrumentation.instrument("append_message_set.kafka", payload) do
-              Protocol.handle_error(partition_info.error_code)
-            end
+            })
           rescue Kafka::CorruptMessage
             @logger.error "Corrupt message when writing to #{topic}/#{partition}"
           rescue Kafka::UnknownTopicOrPartition
@@ -23,10 +23,10 @@ module Kafka
   #
   # ## Buffering
   #
-  # The producer buffers pending messages until {#send_messages} is called. Note that there is
+  # The producer buffers pending messages until {#deliver_messages} is called. Note that there is
   # a maximum buffer size (default is 1,000 messages) and writing messages after the
   # buffer has reached this size will result in a BufferOverflow exception. Make sure
-  # to periodically call {#send_messages} or set `max_buffer_size` to an appropriate value.
+  # to periodically call {#deliver_messages} or set `max_buffer_size` to an appropriate value.
   #
   # Buffering messages and sending them in batches greatly improves performance, so
   # try to avoid sending messages after every write. The tradeoff between throughput and
@@ -46,6 +46,17 @@ module Kafka
   # not, we do another round of requests, this time with just the remaining messages.
   # We do this for as long as `max_retries` permits.
   #
+  # ## Instrumentation
+  #
+  # After {#deliver_messages} completes, the notification
+  # `deliver_messages.producer.kafka` will be emitted.
+  #
+  # * `message_count` – the total number of messages that the producer tried to
+  #   deliver. Note that not all messages may get delivered.
+  # * `delivered_message_count` – the number of messages that were successfully
+  #   delivered.
+  # * `attempts` – the number of attempts made to deliver the messages.
+  #
   # ## Example
   #
   # This is an example of an application which reads lines from stdin and writes them
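A sketch of tracking delivery rates from this notification, again assuming an ActiveSupport::Notifications backend behind `Kafka::Instrumentation`:

```ruby
require "active_support/notifications"

ActiveSupport::Notifications.subscribe("deliver_messages.producer.kafka") do |*args|
  payload = ActiveSupport::Notifications::Event.new(*args).payload

  delivered = payload[:delivered_message_count]
  total = payload[:message_count]

  puts "Delivered #{delivered}/#{total} messages in #{payload[:attempts]} attempt(s)"
end
```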
@@ -73,11 +84,11 @@ module Kafka
   #     producer.produce(line, topic: topic)
   #
   #     # Send messages for every 10 lines.
-  #     producer.send_messages if index % 10 == 0
+  #     producer.deliver_messages if index % 10 == 0
   #   end
   # ensure
   #   # Make sure to send any remaining messages.
-  #   producer.send_messages
+  #   producer.deliver_messages
   #
   #   producer.shutdown
   # end
@@ -123,7 +134,7 @@ module Kafka
     end
 
     # Produces a message to the specified topic. Note that messages are buffered in
-    # the producer until {#send_messages} is called.
+    # the producer until {#deliver_messages} is called.
     #
     # ## Partitioning
     #
@@ -176,7 +187,41 @@ module Kafka
     #
     # @raise [FailedToSendMessages] if not all messages could be successfully sent.
     # @return [nil]
-    def send_messages
+    def deliver_messages
+      # There's no need to do anything if the buffer is empty.
+      return if buffer_size == 0
+
+      Instrumentation.instrument("deliver_messages.producer.kafka") do |notification|
+        message_count = buffer_size
+
+        notification[:message_count] = message_count
+        notification[:attempts] = 0
+
+        begin
+          deliver_messages_with_retries(notification)
+        ensure
+          notification[:delivered_message_count] = message_count - buffer_size
+        end
+      end
+    end
+
+    # Returns the number of messages currently held in the buffer.
+    #
+    # @return [Integer] buffer size.
+    def buffer_size
+      @pending_messages.size + @buffer.size
+    end
+
+    # Closes all connections to the brokers.
+    #
+    # @return [nil]
+    def shutdown
+      @cluster.disconnect
+    end
+
+    private
+
+    def deliver_messages_with_retries(notification)
       attempt = 0
 
       # Make sure we get metadata for this topic.
@@ -194,6 +239,8 @@ module Kafka
       loop do
         attempt += 1
 
+        notification[:attempts] = attempt
+
         @cluster.refresh_metadata_if_necessary!
 
         assign_partitions!
@@ -225,22 +272,6 @@ module Kafka
       end
     end
 
-    # Returns the number of messages currently held in the buffer.
-    #
-    # @return [Integer] buffer size.
-    def buffer_size
-      @pending_messages.size + @buffer.size
-    end
-
-    # Closes all connections to the brokers.
-    #
-    # @return [nil]
-    def shutdown
-      @cluster.disconnect
-    end
-
-    private
-
     def assign_partitions!
       until @pending_messages.empty?
         # We want to keep the message in the first-stage buffer in case there's an error.
@@ -30,6 +30,10 @@ module Kafka
       1
     end
 
+    def response_class
+      Protocol::FetchResponse
+    end
+
     def encode(encoder)
       encoder.write_int32(@replica_id)
       encoder.write_int32(@max_wait_time)
@@ -23,6 +23,10 @@ module Kafka
       2
     end
 
+    def response_class
+      Protocol::ListOffsetResponse
+    end
+
     def encode(encoder)
       encoder.write_int32(@replica_id)
 
@@ -40,6 +40,10 @@ module Kafka
       0
     end
 
+    def response_class
+      requires_acks? ? Protocol::ProduceResponse : nil
+    end
+
     # Whether this request requires any acknowledgements at all. If no acknowledgements
     # are required, the server will not send back a response at all.
     #
@@ -13,6 +13,10 @@ module Kafka
       3
     end
 
+    def response_class
+      Protocol::MetadataResponse
+    end
+
     def encode(encoder)
       encoder.write_array(@topics) {|topic| encoder.write_string(topic) }
     end
@@ -1,3 +1,3 @@
 module Kafka
-  VERSION = "0.1.4"
+  VERSION = "0.1.5"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-kafka
 version: !ruby/object:Gem::Version
-  version: 0.1.4
+  version: 0.1.5
 platform: ruby
 authors:
 - Daniel Schierbeck
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-02-15 00:00:00.000000000 Z
+date: 2016-02-18 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler