ruby-kafka 0.3.1 → 0.3.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 2842846dbe233b635e3dae0c3e14d9d660d19089
-   data.tar.gz: f7a9b7a9aea3df0e3175b0b239fe9d72f743fed5
+   metadata.gz: c742496ae66067502319aecdfd06536baf455636
+   data.tar.gz: c5b8d1c26990d1d00460907fb4d5838d75d64635
  SHA512:
-   metadata.gz: a4c354e1677e76d0d0eb5483fd56b9d0fba7b8d16baf0f1fe450960977e8c18edba2c00d0a650168d280d03784170e078ae77e7914b6451bd20fc1784aea4867
-   data.tar.gz: 25e871eb218789b094a167371560db89edb1be38387969c2c74fa892701c527abb9d1cd9be0b14b41b65e4805cbbab19dd729c56c963b7555f6d7e15136ea20d
+   metadata.gz: 70392fc8aed22ceabdfaeef0f4d41bc076537edc4a188d927999ab4ecdca8b1c9169d8441e6a2d0b7f2cbea136e8e87fa13fb05ef5d1cb68699770c24ba28acf
+   data.tar.gz: cac553e7b994d4e4bce613d1fa08f6757d82f088e6add7c8f6cce51233901882d4f6297e54890fb7852afd5fd9a64a356faa8bfe17baa1ebc70132a74eca18c8
@@ -4,6 +4,10 @@ Changes and additions to the library will be listed here.

  ## Unreleased

+ ## v0.3.2
+
+ - Experimental batch consumer API.
+
  ## v0.3.1

  - Simplify the heartbeat algorithm.
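The batch API mentioned in this entry is not documented in the README yet. A minimal usage sketch, assuming the `Consumer#each_batch` and `Kafka::FetchedBatch` code added further down in this diff (broker addresses, group id, and topic are illustrative):

```ruby
require "kafka"

kafka = Kafka.new(seed_brokers: ["kafka1:9092", "kafka2:9092"])

consumer = kafka.consumer(group_id: "my-consumer")
consumer.subscribe("greetings")

# Each yielded batch covers a single topic/partition and exposes the
# highwater mark offset, so per-batch consumer lag can be derived the same
# way the consumer itself computes `offset_lag`.
consumer.each_batch do |batch|
  lag = batch.highwater_mark_offset - batch.messages.last.offset

  puts "#{batch.topic}/#{batch.partition}: #{batch.messages.count} messages, lag #{lag}"

  batch.messages.each do |message|
    puts message.offset, message.key, message.value
  end
end
```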
data/README.md CHANGED
@@ -17,10 +17,12 @@ Although parts of this library work with Kafka 0.8 – specifically, the Produce
          4. [Buffering and Error Handling](#buffering-and-error-handling)
          5. [Message Delivery Guarantees](#message-delivery-guarantees)
          6. [Compression](#compression)
+         7. [Producing Messages from a Rails Application](#producing-messages-from-a-rails-application)
      2. [Consuming Messages from Kafka](#consuming-messages-from-kafka)
      3. [Logging](#logging)
-     4. [Understanding Timeouts](#understanding-timeouts)
-     5. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
+     4. [Instrumentation](#instrumentation)
+     5. [Understanding Timeouts](#understanding-timeouts)
+     6. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
  3. [Development](#development)
  4. [Roadmap](#roadmap)

@@ -275,6 +277,52 @@ producer = kafka.producer(
  )
  ```

+ #### Producing Messages from a Rails Application
+
+ A typical use case for Kafka is tracking events that occur in web applications. Oftentimes it's advisable to avoid having a hard dependency on Kafka being available, allowing your application to survive a Kafka outage. By using an asynchronous producer, you can avoid doing IO within the individual request/response cycles, instead pushing that to the producer's internal background thread.
+
+ In this example, a producer is configured in a Rails initializer:
+
+ ```ruby
+ # config/initializers/kafka_producer.rb
+ require "kafka"
+
+ # Configure the Kafka client with the broker hosts and the Rails
+ # logger.
+ $kafka = Kafka.new(
+   seed_brokers: ["kafka1:9092", "kafka2:9092"],
+   logger: Rails.logger,
+ )
+
+ # Set up an asynchronous producer that delivers its buffered messages
+ # every ten seconds:
+ $kafka_producer = $kafka.async_producer(
+   delivery_interval: 10,
+ )
+
+ # Make sure to shut down the producer when exiting.
+ at_exit { $kafka_producer.shutdown }
+ ```
+
+ In your controllers, simply call the producer directly:
+
+ ```ruby
+ # app/controllers/orders_controller.rb
+ class OrdersController
+   def create
+     @order = Order.create!(params[:order])
+
+     event = {
+       order_id: @order.id,
+       amount: @order.amount,
+       timestamp: Time.now,
+     }
+
+     $kafka_producer.produce(event.to_json, topic: "order_events")
+   end
+ end
+ ```
+
  ### Consuming Messages from Kafka

  **Warning:** The Consumer API is still alpha level and will likely change. The consumer code should not be considered stable, as it hasn't been exhaustively tested in production environments yet.
@@ -311,17 +359,14 @@ kafka = Kafka.new(seed_brokers: ["kafka1:9092", "kafka2:9092"])
  # Consumers with the same group id will form a Consumer Group together.
  consumer = kafka.consumer(group_id: "my-consumer")

+ # It's possible to subscribe to multiple topics by calling `subscribe`
+ # repeatedly.
  consumer.subscribe("greetings")

- begin
-   # This will loop indefinitely, yielding each message in turn.
-   consumer.each_message do |message|
-     puts message.topic, message.partition
-     puts message.offset, message.key, message.value
-   end
- ensure
-   # Always make sure to shut down the consumer properly.
-   consumer.shutdown
+ # This will loop indefinitely, yielding each message in turn.
+ consumer.each_message do |message|
+   puts message.topic, message.partition
+   puts message.offset, message.key, message.value
  end
  ```
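The `begin`/`ensure` wrapper disappears from this example because `each_message` now commits offsets and leaves the group in its own `ensure` block (see the consumer changes below). The release also adds a `Consumer#stop` method that clears the consumer's running flag so the loop can exit cleanly. A hedged sketch of using it for graceful shutdown; the signal-handler usage is an assumption, not documented behavior:

```ruby
# Sketch only: `stop` comes from the Consumer changes in this diff.
trap("TERM") { consumer.stop }

consumer.each_message do |message|
  puts message.value
end

# Once `stop` is called, each_message finishes the message it is processing,
# commits offsets, and leaves the consumer group.
```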
 
@@ -339,6 +384,47 @@ kafka = Kafka.new(logger: logger, ...)

  By default, nothing is logged.

+ ### Instrumentation
+
+ Most operations are instrumented using [Active Support Notifications](http://api.rubyonrails.org/classes/ActiveSupport/Notifications.html). In order to subscribe to notifications, make sure to require the notifications library _before_ you require ruby-kafka, e.g.
+
+ ```ruby
+ require "active_support/notifications"
+ require "kafka"
+ ```
+
+ The notifications are namespaced based on their origin, with separate namespaces for the producer and the consumer.
+
+ In order to receive notifications you can either subscribe to individual notification names or use regular expressions to subscribe to entire namespaces. This example will subscribe to _all_ notifications sent by ruby-kafka:
+
+ ```ruby
+ ActiveSupport::Notifications.subscribe(/.*\.kafka$/) do |*args|
+   event = ActiveSupport::Notifications::Event.new(*args)
+   puts "Received notification `#{event.name}` with payload: #{event.payload.inspect}"
+ end
+ ```
+
+ #### Producer Notifications
+
+ * `produce_message.producer.kafka` is sent whenever a message is produced to a buffer. It includes the following payload:
+   * `value` is the message value.
+   * `key` is the message key.
+   * `topic` is the topic that the message was produced to.
+   * `buffer_size` is the size of the producer buffer after adding the message.
+   * `max_buffer_size` is the maximum size of the producer buffer.
+
+ * `deliver_messages.producer.kafka` is sent whenever a producer attempts to deliver its buffered messages to the Kafka brokers. It includes the following payload:
+   * `attempts` is the number of times delivery was attempted.
+   * `message_count` is the number of messages for which delivery was attempted.
+   * `delivered_message_count` is the number of messages that were acknowledged by the brokers - if this number is smaller than `message_count` not all messages were successfully delivered.
+
+ #### Connection Notifications
+
+ * `request.connection.kafka` is sent whenever a network request is sent to a Kafka broker. It includes the following payload:
+   * `api` is the name of the API that was called, e.g. `produce` or `fetch`.
+   * `request_size` is the number of bytes in the request.
+   * `response_size` is the number of bytes in the response.
+
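The payload fields above make it straightforward to watch for partial deliveries. A small illustrative subscriber, assuming only the notification name and payload keys documented above (the handling logic is just an example):

```ruby
require "active_support/notifications"

ActiveSupport::Notifications.subscribe("deliver_messages.producer.kafka") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  attempted = event.payload.fetch(:message_count)
  delivered = event.payload.fetch(:delivered_message_count)

  # Warn whenever some buffered messages were not acknowledged by the brokers.
  warn "only #{delivered} of #{attempted} messages delivered" if delivered < attempted
end
```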
  ### Understanding Timeouts

  It's important to understand how timeouts work if you have a latency sensitive application. This library allows configuring timeouts on different levels:
@@ -405,7 +491,7 @@ After checking out the repo, run `bin/setup` to install dependencies. Then, run

  The current stable release is v0.2. This release is running in production at Zendesk, but it's still not recommended that you use it when data loss is unacceptable. It will take a little while until all edge cases have been uncovered and handled.

- ### v0.3
+ ### v0.4

  Beta release of the Consumer API, allowing balanced Consumer Groups coordinating access to partitions. Kafka 0.9 only.

@@ -35,20 +35,16 @@ threads = NUM_THREADS.times.map do |worker_id|
      consumer = kafka.consumer(group_id: "firehose")
      consumer.subscribe(KAFKA_TOPIC)

-     begin
-       i = 0
-       consumer.each_message do |message|
-         i += 1
+     i = 0
+     consumer.each_message do |message|
+       i += 1

-         if i % 1000 == 0
-           queue << i
-           i = 0
-         end
-
-         sleep 0.01
+       if i % 1000 == 0
+         queue << i
+         i = 0
        end
-     ensure
-       consumer.shutdown
+
+       sleep 0.01
      end
    end
  end
@@ -235,7 +235,7 @@ module Kafka

    operation.fetch_from_partition(topic, partition, offset: offset, max_bytes: max_bytes)

-   operation.execute
+   operation.execute.flat_map {|batch| batch.messages }
  end

  # Lists all topics in the cluster.
@@ -34,7 +34,7 @@ module Kafka

    wrapper_message = Protocol::Message.new(
      value: compressed_data,
-     attributes: @codec.codec_id,
+     codec_id: @codec.codec_id,
    )

    Protocol::MessageSet.new(messages: [wrapper_message])
@@ -31,22 +31,14 @@ module Kafka
  #     # Subscribe to a Kafka topic:
  #     consumer.subscribe("messages")
  #
- #     begin
- #       # Loop forever, reading in messages from all topics that have been
- #       # subscribed to.
- #       consumer.each_message do |message|
- #         puts message.topic
- #         puts message.partition
- #         puts message.key
- #         puts message.value
- #         puts message.offset
- #       end
- #     ensure
- #       # Make sure to shut down the consumer after use. This lets
- #       # the consumer notify the Kafka cluster that it's leaving
- #       # the group, causing a synchronization and re-balancing of
- #       # the group.
- #       consumer.shutdown
+ #     # Loop forever, reading in messages from all topics that have been
+ #     # subscribed to.
+ #     consumer.each_message do |message|
+ #       puts message.topic
+ #       puts message.partition
+ #       puts message.key
+ #       puts message.value
+ #       puts message.offset
  #     end
  #
  class Consumer
@@ -60,6 +52,9 @@ module Kafka

    # Send two heartbeats in each session window, just to be sure.
    @heartbeat_interval = @session_timeout / 2
+
+   # Whether or not the consumer is currently consuming messages.
+   @running = false
  end

  # Subscribes the consumer to a topic.
@@ -94,27 +89,81 @@ module Kafka
  # @yieldparam message [Kafka::FetchedMessage] a message fetched from Kafka.
  # @return [nil]
  def each_message
+   @running = true
+
+   while @running
+     begin
+       fetch_batches.each do |batch|
+         batch.messages.each do |message|
+           Instrumentation.instrument("process_message.consumer.kafka") do |notification|
+             notification.update(
+               topic: message.topic,
+               partition: message.partition,
+               offset: message.offset,
+               offset_lag: batch.highwater_mark_offset - message.offset,
+               key: message.key,
+               value: message.value,
+             )
+
+             yield message
+           end
+
+           @offset_manager.commit_offsets_if_necessary
+
+           send_heartbeat_if_necessary
+           mark_message_as_processed(message)
+
+           break if !@running
+         end
+       end
+     rescue ConnectionError => e
+       @logger.error "Connection error while sending heartbeat; rejoining"
+       join_group
+     rescue UnknownMemberId
+       @logger.error "Kicked out of group; rejoining"
+       join_group
+     rescue RebalanceInProgress
+       @logger.error "Group is rebalancing; rejoining"
+       join_group
+     rescue IllegalGeneration
+       @logger.error "Group has transitioned to a new generation; rejoining"
+       join_group
+     end
+   end
+ ensure
+   # In order to quickly have the consumer group re-balance itself, it's
+   # important that members explicitly tell Kafka when they're leaving.
+   @offset_manager.commit_offsets
+   @group.leave
+   @running = false
+ end
+
+ def stop
+   @running = false
+ end
+
+ def each_batch
    loop do
      begin
-       batch = fetch_batch
-
-       batch.each do |message|
-         Instrumentation.instrument("process_message.consumer.kafka") do |notification|
-           notification.update(
-             topic: message.topic,
-             partition: message.partition,
-             offset: message.offset,
-             key: message.key,
-             value: message.value,
-           )
-
-           yield message
+       fetch_batches.each do |batch|
+         unless batch.empty?
+           Instrumentation.instrument("process_batch.consumer.kafka") do |notification|
+             notification.update(
+               topic: batch.topic,
+               partition: batch.partition,
+               highwater_mark_offset: batch.highwater_mark_offset,
+               message_count: batch.messages.count,
+             )
+
+             yield batch
+           end
+
+           mark_message_as_processed(batch.messages.last)
          end

          @offset_manager.commit_offsets_if_necessary

          send_heartbeat_if_necessary
-         mark_message_as_processed(message)
        end
      rescue ConnectionError => e
        @logger.error "Connection error while sending heartbeat; rejoining"
@@ -130,24 +179,11 @@ module Kafka
        join_group
      end
    end
- end
-
- # Shuts down the consumer.
- #
- # In order to quickly have the consumer group re-balance itself, it's
- # important that members explicitly tell Kafka when they're leaving.
- # Therefore it's a good idea to call this method whenever your consumer
- # is about to quit. If this method is not called, it may take up to
- # the amount of time defined by the `session_timeout` parameter for
- # Kafka to realize that this consumer is no longer present and trigger
- # a group re-balance. In that period of time, the partitions that used
- # to be assigned to this consumer won't be processed.
- #
- # @return [nil]
- def shutdown
+ ensure
+   # In order to quickly have the consumer group re-balance itself, it's
+   # important that members explicitly tell Kafka when they're leaving.
    @offset_manager.commit_offsets
    @group.leave
- rescue ConnectionError
  end

  private
@@ -157,11 +193,9 @@ module Kafka
    @group.join
  end

- def fetch_batch
+ def fetch_batches
    join_group unless @group.member?

-   @logger.debug "Fetching a batch of messages"
-
    assigned_partitions = @group.assigned_partitions

    send_heartbeat_if_necessary
@@ -179,17 +213,13 @@ module Kafka
      partitions.each do |partition|
        offset = @offset_manager.next_offset_for(topic, partition)

-       @logger.debug "Fetching from #{topic}/#{partition} starting at offset #{offset}"
+       @logger.debug "Fetching batch from #{topic}/#{partition} starting at offset #{offset}"

        operation.fetch_from_partition(topic, partition, offset: offset)
      end
    end

-   messages = operation.execute
-
-   @logger.info "Fetched #{messages.count} messages"
-
-   messages
+   operation.execute
  rescue ConnectionError => e
    @logger.error "Connection error while fetching messages: #{e}"

@@ -1,3 +1,5 @@
+ require "kafka/fetched_batch"
+
  module Kafka

    # Fetches messages from one or more partitions.
@@ -66,10 +68,10 @@ module Kafka
    response = broker.fetch_messages(**options)

    response.topics.flat_map {|fetched_topic|
-     fetched_topic.partitions.flat_map {|fetched_partition|
+     fetched_topic.partitions.map {|fetched_partition|
        Protocol.handle_error(fetched_partition.error_code)

-       fetched_partition.messages.map {|message|
+       messages = fetched_partition.messages.map {|message|
          FetchedMessage.new(
            value: message.value,
            key: message.key,
@@ -78,6 +80,13 @@ module Kafka
            offset: message.offset,
          )
        }
+
+       FetchedBatch.new(
+         topic: fetched_topic.name,
+         partition: fetched_partition.partition,
+         highwater_mark_offset: fetched_partition.highwater_mark_offset,
+         messages: messages,
+       )
      }
    }
  }
@@ -0,0 +1,16 @@
+ module Kafka
+   class FetchedBatch
+     attr_reader :topic, :partition, :highwater_mark_offset, :messages
+
+     def initialize(topic:, partition:, highwater_mark_offset:, messages:)
+       @topic = topic
+       @partition = partition
+       @highwater_mark_offset = highwater_mark_offset
+       @messages = messages
+     end
+
+     def empty?
+       @messages.empty?
+     end
+   end
+ end
@@ -14,8 +14,8 @@ module Kafka
    @bytesize = 0
  end

- def write(value:, key:, topic:, partition:)
-   message = Protocol::Message.new(key: key, value: value)
+ def write(value:, key:, topic:, partition:, create_time: Time.now)
+   message = Protocol::Message.new(key: key, value: value, create_time: create_time)

    buffer_for(topic, partition) << message

@@ -60,8 +60,8 @@ module Kafka
    @buffer.delete(topic) if @buffer[topic].empty?
  end

- def message_count_for_partition(topic:, partition:)
-   buffer_for(topic, partition).count
+ def messages_for(topic:, partition:)
+   buffer_for(topic, partition)
  end

  # Clears messages across all topics and partitions.
@@ -2,14 +2,15 @@ module Kafka
  class PendingMessage
    attr_reader :value, :key, :topic, :partition, :partition_key

-   attr_reader :bytesize
+   attr_reader :bytesize, :create_time

-   def initialize(value:, key:, topic:, partition:, partition_key:)
+   def initialize(value:, key:, topic:, partition:, partition_key:, create_time:)
      @key = key
      @value = value
      @topic = topic
      @partition = partition
      @partition_key = partition_key
+     @create_time = create_time

      @bytesize = key.to_s.bytesize + value.to_s.bytesize
    end
@@ -104,18 +104,21 @@ module Kafka
  response.each_partition do |topic_info, partition_info|
    topic = topic_info.topic
    partition = partition_info.partition
-   offset = partition_info.offset
-   message_count = @buffer.message_count_for_partition(topic: topic, partition: partition)
+   messages = @buffer.messages_for(topic: topic, partition: partition)
+   ack_time = Time.now

    begin
      Protocol.handle_error(partition_info.error_code)

-     Instrumentation.instrument("ack_messages.producer.kafka", {
-       topic: topic,
-       partition: partition,
-       offset: offset,
-       message_count: message_count,
-     })
+     messages.each do |message|
+       Instrumentation.instrument("ack_message.producer.kafka", {
+         key: message.key,
+         value: message.value,
+         topic: topic,
+         partition: partition,
+         delay: ack_time - message.create_time,
+       })
+     end
    rescue Kafka::CorruptMessage
      @logger.error "Corrupt message when writing to #{topic}/#{partition}"
    rescue Kafka::UnknownTopicOrPartition
@@ -133,7 +136,7 @@ module Kafka
    rescue Kafka::NotEnoughReplicasAfterAppend
      @logger.error "Messages written, but to fewer in-sync replicas than required for #{topic}/#{partition}"
    else
-     @logger.debug "Successfully appended #{message_count} messages to #{topic}/#{partition} at offset #{offset}"
+     @logger.debug "Successfully appended #{messages.count} messages to #{topic}/#{partition}"

      # The messages were successfully written; clear them from the buffer.
      @buffer.clear_messages(topic: topic, partition: partition)
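The per-message `ack_message.producer.kafka` notification above replaces the old per-partition `ack_messages.producer.kafka` one and adds a `delay` field (acknowledgement time minus the message's `create_time`). A hypothetical subscriber for tracking that latency, using only the payload keys shown in the change:

```ruby
require "active_support/notifications"

# Illustrative only: the notification name and payload keys come from the
# producer change above; the reporting logic is an assumption.
ActiveSupport::Notifications.subscribe("ack_message.producer.kafka") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  topic = event.payload.fetch(:topic)
  delay = event.payload.fetch(:delay)

  puts format("message acknowledged on %s after %.3fs", topic, delay)
end
```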
@@ -181,12 +181,15 @@ module Kafka
  # @raise [BufferOverflow] if the maximum buffer size has been reached.
  # @return [nil]
  def produce(value, key: nil, topic:, partition: nil, partition_key: nil)
+   create_time = Time.now
+
    message = PendingMessage.new(
      value: value,
      key: key,
      topic: topic,
      partition: partition,
      partition_key: partition_key,
+     create_time: create_time,
    )

    if buffer_size >= @max_buffer_size
@@ -204,6 +207,7 @@ module Kafka
    value: value,
    key: key,
    topic: topic,
+   create_time: create_time,
    buffer_size: buffer_size,
    max_buffer_size: @max_buffer_size,
  })
@@ -322,6 +326,7 @@ module Kafka
        key: message.key,
        topic: message.topic,
        partition: partition,
+       create_time: message.create_time,
      )
    end
  rescue Kafka::Error => e
@@ -92,7 +92,13 @@ module Kafka
      #
      # @return [String]
      def read(number_of_bytes)
-       @io.read(number_of_bytes) or raise EOFError
+       data = @io.read(number_of_bytes) or raise EOFError
+
+       # If the `read` call returned less data than expected we should not
+       # proceed.
+       raise EOFError if data.size != number_of_bytes
+
+       data
      end
    end
  end
@@ -16,15 +16,16 @@ module Kafka
  class Message
    MAGIC_BYTE = 0

-   attr_reader :key, :value, :attributes, :offset
+   attr_reader :key, :value, :codec_id, :offset

-   attr_reader :bytesize
+   attr_reader :bytesize, :create_time

-   def initialize(value:, key: nil, attributes: 0, offset: -1)
+   def initialize(value:, key: nil, create_time: Time.now, codec_id: 0, offset: -1)
      @key = key
      @value = value
-     @attributes = attributes
+     @codec_id = codec_id
      @offset = offset
+     @create_time = create_time

      @bytesize = @key.to_s.bytesize + @value.to_s.bytesize
    end
@@ -39,17 +40,17 @@ module Kafka
  def ==(other)
    @key == other.key &&
      @value == other.value &&
-     @attributes == other.attributes &&
+     @codec_id == other.codec_id &&
      @offset == other.offset
  end

  def compressed?
-   @attributes != 0
+   @codec_id != 0
  end

  # @return [Kafka::Protocol::MessageSet]
  def decompress
-   codec = Compression.find_codec_by_id(@attributes)
+   codec = Compression.find_codec_by_id(@codec_id)

    # For some weird reason we need to cut out the first 20 bytes.
    data = codec.decompress(value)
@@ -73,7 +74,11 @@ module Kafka
    key = message_decoder.bytes
    value = message_decoder.bytes

-   new(key: key, value: value, attributes: attributes, offset: offset)
+   # The codec id is encoded in the three least significant bits of the
+   # attributes.
+   codec_id = attributes & 0b111
+
+   new(key: key, value: value, codec_id: codec_id, offset: offset)
  end

  private
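In the Kafka wire protocol the compression codec occupies the three least significant bits of a message's attributes byte (0 = none, 1 = GZIP, 2 = Snappy), which is what the `& 0b111` mask extracts. A small illustrative check:

```ruby
# Illustrative only: decode the codec id from a raw attributes byte the same
# way the change above does.
attributes = 0b00000010        # e.g. a Snappy-compressed wrapper message
codec_id   = attributes & 0b111

puts codec_id  # => 2
```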
@@ -96,7 +101,7 @@ module Kafka
    encoder = Encoder.new(buffer)

    encoder.write_int8(MAGIC_BYTE)
-   encoder.write_int8(@attributes)
+   encoder.write_int8(@codec_id)
    encoder.write_bytes(@key)
    encoder.write_bytes(@value)

@@ -1,3 +1,3 @@
  module Kafka
-   VERSION = "0.3.1"
+   VERSION = "0.3.2"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
-   version: 0.3.1
+   version: 0.3.2
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2016-03-08 00:00:00.000000000 Z
+ date: 2016-03-15 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -180,6 +180,7 @@ files:
  - lib/kafka/consumer.rb
  - lib/kafka/consumer_group.rb
  - lib/kafka/fetch_operation.rb
+ - lib/kafka/fetched_batch.rb
  - lib/kafka/fetched_message.rb
  - lib/kafka/gzip_codec.rb
  - lib/kafka/instrumentation.rb