ruby-kafka-temp-fork 0.0.1

Files changed (144)
  1. checksums.yaml +7 -0
  2. data/.circleci/config.yml +393 -0
  3. data/.github/workflows/stale.yml +19 -0
  4. data/.gitignore +13 -0
  5. data/.readygo +1 -0
  6. data/.rspec +3 -0
  7. data/.rubocop.yml +44 -0
  8. data/.ruby-version +1 -0
  9. data/.yardopts +3 -0
  10. data/CHANGELOG.md +310 -0
  11. data/Gemfile +5 -0
  12. data/ISSUE_TEMPLATE.md +23 -0
  13. data/LICENSE.txt +176 -0
  14. data/Procfile +2 -0
  15. data/README.md +1342 -0
  16. data/Rakefile +8 -0
  17. data/benchmarks/message_encoding.rb +23 -0
  18. data/bin/console +8 -0
  19. data/bin/setup +5 -0
  20. data/docker-compose.yml +39 -0
  21. data/examples/consumer-group.rb +35 -0
  22. data/examples/firehose-consumer.rb +64 -0
  23. data/examples/firehose-producer.rb +54 -0
  24. data/examples/simple-consumer.rb +34 -0
  25. data/examples/simple-producer.rb +42 -0
  26. data/examples/ssl-producer.rb +44 -0
  27. data/lib/kafka.rb +373 -0
  28. data/lib/kafka/async_producer.rb +291 -0
  29. data/lib/kafka/broker.rb +217 -0
  30. data/lib/kafka/broker_info.rb +16 -0
  31. data/lib/kafka/broker_pool.rb +41 -0
  32. data/lib/kafka/broker_uri.rb +43 -0
  33. data/lib/kafka/client.rb +833 -0
  34. data/lib/kafka/cluster.rb +513 -0
  35. data/lib/kafka/compression.rb +45 -0
  36. data/lib/kafka/compressor.rb +86 -0
  37. data/lib/kafka/connection.rb +223 -0
  38. data/lib/kafka/connection_builder.rb +33 -0
  39. data/lib/kafka/consumer.rb +642 -0
  40. data/lib/kafka/consumer_group.rb +231 -0
  41. data/lib/kafka/consumer_group/assignor.rb +63 -0
  42. data/lib/kafka/crc32_hash.rb +15 -0
  43. data/lib/kafka/datadog.rb +420 -0
  44. data/lib/kafka/digest.rb +22 -0
  45. data/lib/kafka/fetch_operation.rb +115 -0
  46. data/lib/kafka/fetched_batch.rb +58 -0
  47. data/lib/kafka/fetched_batch_generator.rb +120 -0
  48. data/lib/kafka/fetched_message.rb +48 -0
  49. data/lib/kafka/fetched_offset_resolver.rb +48 -0
  50. data/lib/kafka/fetcher.rb +224 -0
  51. data/lib/kafka/gzip_codec.rb +34 -0
  52. data/lib/kafka/heartbeat.rb +25 -0
  53. data/lib/kafka/instrumenter.rb +38 -0
  54. data/lib/kafka/interceptors.rb +33 -0
  55. data/lib/kafka/lz4_codec.rb +27 -0
  56. data/lib/kafka/message_buffer.rb +87 -0
  57. data/lib/kafka/murmur2_hash.rb +17 -0
  58. data/lib/kafka/offset_manager.rb +259 -0
  59. data/lib/kafka/partitioner.rb +40 -0
  60. data/lib/kafka/pause.rb +92 -0
  61. data/lib/kafka/pending_message.rb +29 -0
  62. data/lib/kafka/pending_message_queue.rb +41 -0
  63. data/lib/kafka/produce_operation.rb +205 -0
  64. data/lib/kafka/producer.rb +528 -0
  65. data/lib/kafka/prometheus.rb +316 -0
  66. data/lib/kafka/protocol.rb +225 -0
  67. data/lib/kafka/protocol/add_offsets_to_txn_request.rb +29 -0
  68. data/lib/kafka/protocol/add_offsets_to_txn_response.rb +21 -0
  69. data/lib/kafka/protocol/add_partitions_to_txn_request.rb +34 -0
  70. data/lib/kafka/protocol/add_partitions_to_txn_response.rb +47 -0
  71. data/lib/kafka/protocol/alter_configs_request.rb +44 -0
  72. data/lib/kafka/protocol/alter_configs_response.rb +49 -0
  73. data/lib/kafka/protocol/api_versions_request.rb +21 -0
  74. data/lib/kafka/protocol/api_versions_response.rb +53 -0
  75. data/lib/kafka/protocol/consumer_group_protocol.rb +19 -0
  76. data/lib/kafka/protocol/create_partitions_request.rb +42 -0
  77. data/lib/kafka/protocol/create_partitions_response.rb +28 -0
  78. data/lib/kafka/protocol/create_topics_request.rb +45 -0
  79. data/lib/kafka/protocol/create_topics_response.rb +26 -0
  80. data/lib/kafka/protocol/decoder.rb +175 -0
  81. data/lib/kafka/protocol/delete_topics_request.rb +33 -0
  82. data/lib/kafka/protocol/delete_topics_response.rb +26 -0
  83. data/lib/kafka/protocol/describe_configs_request.rb +35 -0
  84. data/lib/kafka/protocol/describe_configs_response.rb +73 -0
  85. data/lib/kafka/protocol/describe_groups_request.rb +27 -0
  86. data/lib/kafka/protocol/describe_groups_response.rb +73 -0
  87. data/lib/kafka/protocol/encoder.rb +184 -0
  88. data/lib/kafka/protocol/end_txn_request.rb +29 -0
  89. data/lib/kafka/protocol/end_txn_response.rb +19 -0
  90. data/lib/kafka/protocol/fetch_request.rb +70 -0
  91. data/lib/kafka/protocol/fetch_response.rb +136 -0
  92. data/lib/kafka/protocol/find_coordinator_request.rb +29 -0
  93. data/lib/kafka/protocol/find_coordinator_response.rb +29 -0
  94. data/lib/kafka/protocol/heartbeat_request.rb +27 -0
  95. data/lib/kafka/protocol/heartbeat_response.rb +17 -0
  96. data/lib/kafka/protocol/init_producer_id_request.rb +26 -0
  97. data/lib/kafka/protocol/init_producer_id_response.rb +27 -0
  98. data/lib/kafka/protocol/join_group_request.rb +47 -0
  99. data/lib/kafka/protocol/join_group_response.rb +41 -0
  100. data/lib/kafka/protocol/leave_group_request.rb +25 -0
  101. data/lib/kafka/protocol/leave_group_response.rb +17 -0
  102. data/lib/kafka/protocol/list_groups_request.rb +23 -0
  103. data/lib/kafka/protocol/list_groups_response.rb +35 -0
  104. data/lib/kafka/protocol/list_offset_request.rb +53 -0
  105. data/lib/kafka/protocol/list_offset_response.rb +89 -0
  106. data/lib/kafka/protocol/member_assignment.rb +42 -0
  107. data/lib/kafka/protocol/message.rb +172 -0
  108. data/lib/kafka/protocol/message_set.rb +55 -0
  109. data/lib/kafka/protocol/metadata_request.rb +31 -0
  110. data/lib/kafka/protocol/metadata_response.rb +185 -0
  111. data/lib/kafka/protocol/offset_commit_request.rb +47 -0
  112. data/lib/kafka/protocol/offset_commit_response.rb +29 -0
  113. data/lib/kafka/protocol/offset_fetch_request.rb +38 -0
  114. data/lib/kafka/protocol/offset_fetch_response.rb +56 -0
  115. data/lib/kafka/protocol/produce_request.rb +94 -0
  116. data/lib/kafka/protocol/produce_response.rb +63 -0
  117. data/lib/kafka/protocol/record.rb +88 -0
  118. data/lib/kafka/protocol/record_batch.rb +223 -0
  119. data/lib/kafka/protocol/request_message.rb +26 -0
  120. data/lib/kafka/protocol/sasl_handshake_request.rb +33 -0
  121. data/lib/kafka/protocol/sasl_handshake_response.rb +28 -0
  122. data/lib/kafka/protocol/sync_group_request.rb +33 -0
  123. data/lib/kafka/protocol/sync_group_response.rb +26 -0
  124. data/lib/kafka/protocol/txn_offset_commit_request.rb +46 -0
  125. data/lib/kafka/protocol/txn_offset_commit_response.rb +47 -0
  126. data/lib/kafka/round_robin_assignment_strategy.rb +52 -0
  127. data/lib/kafka/sasl/gssapi.rb +76 -0
  128. data/lib/kafka/sasl/oauth.rb +64 -0
  129. data/lib/kafka/sasl/plain.rb +39 -0
  130. data/lib/kafka/sasl/scram.rb +180 -0
  131. data/lib/kafka/sasl_authenticator.rb +61 -0
  132. data/lib/kafka/snappy_codec.rb +29 -0
  133. data/lib/kafka/socket_with_timeout.rb +96 -0
  134. data/lib/kafka/ssl_context.rb +66 -0
  135. data/lib/kafka/ssl_socket_with_timeout.rb +188 -0
  136. data/lib/kafka/statsd.rb +296 -0
  137. data/lib/kafka/tagged_logger.rb +77 -0
  138. data/lib/kafka/transaction_manager.rb +306 -0
  139. data/lib/kafka/transaction_state_machine.rb +72 -0
  140. data/lib/kafka/version.rb +5 -0
  141. data/lib/kafka/zstd_codec.rb +27 -0
  142. data/lib/ruby-kafka-temp-fork.rb +5 -0
  143. data/ruby-kafka-temp-fork.gemspec +54 -0
  144. metadata +520 -0
data/Procfile ADDED
@@ -0,0 +1,2 @@
1
+ producer: bundle exec ruby ci/producer.rb
2
+ consumer: bundle exec ruby ci/consumer.rb
data/README.md ADDED
@@ -0,0 +1,1342 @@
1
+ # ruby-kafka
2
+
3
+ A Ruby client library for [Apache Kafka](http://kafka.apache.org/), a distributed log and message bus. The focus of this library is operational simplicity, with good logging and metrics that make debugging issues easier.
4
+
5
+
6
+ ## Table of Contents
7
+
8
+ 1. [Installation](#installation)
9
+ 2. [Compatibility](#compatibility)
10
+ 3. [Usage](#usage)
11
+ 1. [Setting up the Kafka Client](#setting-up-the-kafka-client)
12
+ 2. [Producing Messages to Kafka](#producing-messages-to-kafka)
13
+ 1. [Efficiently Producing Messages](#efficiently-producing-messages)
14
+ 1. [Asynchronously Producing Messages](#asynchronously-producing-messages)
15
+ 2. [Serialization](#serialization)
16
+ 3. [Partitioning](#partitioning)
17
+ 4. [Buffering and Error Handling](#buffering-and-error-handling)
18
+ 5. [Message Durability](#message-durability)
19
+ 6. [Message Delivery Guarantees](#message-delivery-guarantees)
20
+ 7. [Compression](#compression)
21
+ 8. [Producing Messages from a Rails Application](#producing-messages-from-a-rails-application)
22
+ 3. [Consuming Messages from Kafka](#consuming-messages-from-kafka)
23
+ 1. [Consumer Groups](#consumer-groups)
24
+ 2. [Consumer Checkpointing](#consumer-checkpointing)
25
+ 3. [Topic Subscriptions](#topic-subscriptions)
26
+ 4. [Shutting Down a Consumer](#shutting-down-a-consumer)
27
+ 5. [Consuming Messages in Batches](#consuming-messages-in-batches)
28
+ 6. [Balancing Throughput and Latency](#balancing-throughput-and-latency)
29
+ 7. [Customizing Partition Assignment Strategy](#customizing-partition-assignment-strategy)
30
+ 4. [Thread Safety](#thread-safety)
31
+ 5. [Logging](#logging)
32
+ 6. [Instrumentation](#instrumentation)
33
+ 7. [Monitoring](#monitoring)
34
+ 1. [What to Monitor](#what-to-monitor)
35
+ 2. [Reporting Metrics to Statsd](#reporting-metrics-to-statsd)
36
+ 3. [Reporting Metrics to Datadog](#reporting-metrics-to-datadog)
37
+ 8. [Understanding Timeouts](#understanding-timeouts)
38
+ 9. [Security](#security)
39
+ 1. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
40
+ 2. [Authentication using SASL](#authentication-using-sasl)
41
+ 10. [Topic management](#topic-management)
42
+ 4. [Design](#design)
43
+ 1. [Producer Design](#producer-design)
44
+ 2. [Asynchronous Producer Design](#asynchronous-producer-design)
45
+ 3. [Consumer Design](#consumer-design)
46
+ 5. [Development](#development)
47
+ 6. [Support and Discussion](#support-and-discussion)
48
+ 7. [Roadmap](#roadmap)
49
+ 8. [Higher level libraries](#higher-level-libraries)
50
+ 1. [Message processing frameworks](#message-processing-frameworks)
51
+ 2. [Message publishing libraries](#message-publishing-libraries)
52
+
53
+ ## Installation
54
+
55
+ Add this line to your application's Gemfile:
56
+
57
+ ```ruby
58
+ gem 'ruby-kafka'
59
+ ```
60
+
61
+ And then execute:
62
+
63
+ $ bundle
64
+
65
+ Or install it yourself as:
66
+
67
+ $ gem install ruby-kafka
68
+
69
+ ## Compatibility
70
+
71
+ <table>
72
+ <tr>
73
+ <th></th>
74
+ <th>Producer API</th>
75
+ <th>Consumer API</th>
76
+ </tr>
77
+ <tr>
78
+ <th>Kafka 0.8</th>
79
+ <td>Full support in v0.4.x</td>
80
+ <td>Unsupported</td>
81
+ </tr>
82
+ <tr>
83
+ <th>Kafka 0.9</th>
84
+ <td>Full support in v0.4.x</td>
85
+ <td>Full support in v0.4.x</td>
86
+ </tr>
87
+ <tr>
88
+ <th>Kafka 0.10</th>
89
+ <td>Full support in v0.5.x</td>
90
+ <td>Full support in v0.5.x</td>
91
+ </tr>
92
+ <tr>
93
+ <th>Kafka 0.11</th>
94
+ <td>Full support in v0.7.x</td>
95
+ <td>Limited support</td>
96
+ </tr>
97
+ <tr>
98
+ <th>Kafka 1.0</th>
99
+ <td>Limited support</td>
100
+ <td>Limited support</td>
101
+ </tr>
102
+ <tr>
103
+ <th>Kafka 2.0</th>
104
+ <td>Limited support</td>
105
+ <td>Limited support</td>
106
+ </tr>
107
+ <tr>
108
+ <th>Kafka 2.1</th>
109
+ <td>Limited support</td>
110
+ <td>Limited support</td>
111
+ </tr>
112
+ <tr>
113
+ <th>Kafka 2.2</th>
114
+ <td>Limited support</td>
115
+ <td>Limited support</td>
116
+ </tr>
117
+ <tr>
118
+ <th>Kafka 2.3</th>
119
+ <td>Limited support</td>
120
+ <td>Limited support</td>
121
+ </tr>
122
+ <tr>
123
+ <th>Kafka 2.4</th>
124
+ <td>Limited support</td>
125
+ <td>Limited support</td>
126
+ </tr>
127
+ <tr>
128
+ <th>Kafka 2.5</th>
129
+ <td>Limited support</td>
130
+ <td>Limited support</td>
131
+ </tr>
132
+ <tr>
133
+ <th>Kafka 2.6</th>
134
+ <td>Limited support</td>
135
+ <td>Limited support</td>
136
+ </tr>
137
+ <tr>
138
+ <th>Kafka 2.7</th>
139
+ <td>Limited support</td>
140
+ <td>Limited support</td>
141
+ </tr>
142
+ </table>
143
+
144
+ This library is targeting Kafka 0.9 with the v0.4.x series and Kafka 0.10 with the v0.5.x series. There's limited support for Kafka 0.8, and things should work with Kafka 0.11, although there may be performance issues due to changes in the protocol.
145
+
146
+ - **Kafka 0.8:** Full support for the Producer API in ruby-kafka v0.4.x, but no support for consumer groups. Simple message fetching works.
147
+ - **Kafka 0.9:** Full support for the Producer and Consumer API in ruby-kafka v0.4.x.
148
+ - **Kafka 0.10:** Full support for the Producer and Consumer API in ruby-kafka v0.5.x. Note that you _must_ run version 0.10.1 or higher of Kafka due to limitations in 0.10.0.
149
+ - **Kafka 0.11:** Full support for the Producer API and limited support for the Consumer API in ruby-kafka v0.7.x. New features in 0.11 include the new record batch format as well as idempotent and transactional production; the missing piece is dirty reading in the Consumer API.
150
+ - **Kafka 1.0:** Everything that works with Kafka 0.11 should still work, but so far no features specific to Kafka 1.0 have been added.
151
+ - **Kafka 2.0:** Everything that works with Kafka 1.0 should still work, but so far no features specific to Kafka 2.0 have been added.
152
+ - **Kafka 2.1:** Everything that works with Kafka 2.0 should still work, but so far no features specific to Kafka 2.1 have been added.
153
+ - **Kafka 2.2:** Everything that works with Kafka 2.1 should still work, but so far no features specific to Kafka 2.2 have been added.
154
+ - **Kafka 2.3:** Everything that works with Kafka 2.2 should still work, but so far no features specific to Kafka 2.3 have been added.
155
+ - **Kafka 2.4:** Everything that works with Kafka 2.3 should still work, but so far no features specific to Kafka 2.4 have been added.
156
+ - **Kafka 2.5:** Everything that works with Kafka 2.4 should still work, but so far no features specific to Kafka 2.5 have been added.
157
+ - **Kafka 2.6:** Everything that works with Kafka 2.5 should still work, but so far no features specific to Kafka 2.6 have been added.
158
+ - **Kafka 2.7:** Everything that works with Kafka 2.6 should still work, but so far no features specific to Kafka 2.7 have been added.
159
+
160
+ This library requires Ruby 2.1 or higher.
161
+
162
+ ## Usage
163
+
164
+ Please see the [documentation site](http://www.rubydoc.info/gems/ruby-kafka) for detailed documentation on the latest release. Note that the documentation on GitHub may not match the version of the library you're using – the API is still changing frequently.
165
+
166
+ ### Setting up the Kafka Client
167
+
168
+ A client must be initialized with at least one Kafka broker, from which the entire Kafka cluster will be discovered. Each client keeps a separate pool of broker connections. Don't use the same client from more than one thread.
169
+
170
+ ```ruby
171
+ require "kafka"
172
+
173
+ # The first argument is a list of "seed brokers" that will be queried for the full
174
+ # cluster topology. At least one of these *must* be available. `client_id` is
175
+ # used to identify this client in logs and metrics. It's optional but recommended.
176
+ kafka = Kafka.new(["kafka1:9092", "kafka2:9092"], client_id: "my-application")
177
+ ```
178
+
179
+ You can also pass a single hostname that resolves to the seed brokers' IP addresses:
180
+
181
+ ```ruby
182
+ kafka = Kafka.new("seed-brokers:9092", client_id: "my-application", resolve_seed_brokers: true)
183
+ ```
184
+
185
+ ### Producing Messages to Kafka
186
+
187
+ The simplest way to write a message to a Kafka topic is to call `#deliver_message`:
188
+
189
+ ```ruby
190
+ kafka = Kafka.new(...)
191
+ kafka.deliver_message("Hello, World!", topic: "greetings")
192
+ ```
193
+
194
+ This will write the message to a random partition in the `greetings` topic. If you want to write to a _specific_ partition, pass the `partition` parameter:
195
+
196
+ ```ruby
197
+ # Will write to partition 42.
198
+ kafka.deliver_message("Hello, World!", topic: "greetings", partition: 42)
199
+ ```
200
+
201
+ If you don't know exactly how many partitions are in the topic, or if you'd rather have some level of indirection, you can pass in `partition_key` instead. Two messages with the same partition key will always be assigned to the same partition. This is useful if you want to make sure all messages with a given attribute are always written to the same partition, e.g. all purchase events for a given customer id.
202
+
203
+ ```ruby
204
+ # Partition keys assign a partition deterministically.
205
+ kafka.deliver_message("Hello, World!", topic: "greetings", partition_key: "hello")
206
+ ```
207
+
208
+ Kafka also supports _message keys_. When passed, a message key can be used instead of a partition key. The message key is written alongside the message value and can be read by consumers. Message keys in Kafka can be used for interesting things such as [Log Compaction](http://kafka.apache.org/documentation.html#compaction). See [Partitioning](#partitioning) for more information.
209
+
210
+ ```ruby
211
+ # Set a message key; the key will be used for partitioning since no explicit
212
+ # `partition_key` is set.
213
+ kafka.deliver_message("Hello, World!", key: "hello", topic: "greetings")
214
+ ```
215
+
216
+
217
+ #### Efficiently Producing Messages
218
+
219
+ While `#deliver_message` works fine for infrequent writes, there are a number of downsides:
220
+
221
+ * Kafka is optimized for transmitting messages in _batches_ rather than individually, so there's a significant overhead and performance penalty in using the single-message API.
222
+ * The message delivery can fail in a number of different ways, but this simplistic API does not provide automatic retries.
223
+ * The message is not buffered, so if there is an error, it is lost.
224
+
225
+ The Producer API solves all these problems and more:
226
+
227
+ ```ruby
228
+ # Instantiate a new producer.
229
+ producer = kafka.producer
230
+
231
+ # Add a message to the producer buffer.
232
+ producer.produce("hello1", topic: "test-messages")
233
+
234
+ # Deliver the messages to Kafka.
235
+ producer.deliver_messages
236
+ ```
237
+
238
+ `#produce` will buffer the message in the producer but will _not_ actually send it to the Kafka cluster. Buffered messages are only delivered to the Kafka cluster once `#deliver_messages` is called. Since messages may be destined for different partitions, this could involve writing to more than one Kafka broker. Note that a failure to send all buffered messages after the configured number of retries will result in `Kafka::DeliveryFailed` being raised. This can be rescued and ignored; the messages will be kept in the buffer until the next attempt.
239
+
240
+ Read the docs for [Kafka::Producer](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Producer) for more details.
241
+
242
+ #### Asynchronously Producing Messages
243
+
244
+ A normal producer will block while `#deliver_messages` is sending messages to Kafka, possibly for tens of seconds or even minutes at a time, depending on your timeout and retry settings. Furthermore, you have to call `#deliver_messages` manually, with a frequency that balances batch size with message delay.
245
+
246
+ In order to avoid blocking during message deliveries you can use the _asynchronous producer_ API. It is mostly similar to the synchronous API, with calls to `#produce` and `#deliver_messages`. The main difference is that rather than blocking, these calls will return immediately. The actual work will be done in a background thread, with the messages and operations being sent from the caller over a thread safe queue.
247
+
248
+ ```ruby
249
+ # `#async_producer` will create a new asynchronous producer.
250
+ producer = kafka.async_producer
251
+
252
+ # The `#produce` API works as normal.
253
+ producer.produce("hello", topic: "greetings")
254
+
255
+ # `#deliver_messages` will return immediately.
256
+ producer.deliver_messages
257
+
258
+ # Make sure to call `#shutdown` on the producer in order to avoid leaking
259
+ # resources. `#shutdown` will wait for any pending messages to be delivered
260
+ # before returning.
261
+ producer.shutdown
262
+ ```
263
+
264
+ By default, the delivery policy will be the same as for a synchronous producer: only when `#deliver_messages` is called will the messages be delivered. However, the asynchronous producer offers two complementary policies for _automatic delivery_:
265
+
266
+ 1. Trigger a delivery once the producer's message buffer reaches a specified _threshold_. This can be used to improve efficiency by increasing the batch size when sending messages to the Kafka cluster.
267
+ 2. Trigger a delivery at a _fixed time interval_. This puts an upper bound on message delays.
268
+
269
+ These policies can be used alone or in combination.
270
+
271
+ ```ruby
272
+ # `async_producer` will create a new asynchronous producer.
273
+ producer = kafka.async_producer(
274
+ # Trigger a delivery once 100 messages have been buffered.
275
+ delivery_threshold: 100,
276
+
277
+ # Trigger a delivery every 30 seconds.
278
+ delivery_interval: 30,
279
+ )
280
+
281
+ producer.produce("hello", topic: "greetings")
282
+
283
+ # ...
284
+ ```
285
+
286
+ When calling `#shutdown`, the producer will attempt to deliver the messages and the method call will block until that has happened. Note that there's no _guarantee_ that the messages will be delivered.
287
+
288
+ **Note:** if the calling thread produces messages faster than the producer can write them to Kafka, you'll eventually run into problems. The internal queue used for sending messages from the calling thread to the background worker has a size limit; once this limit is reached, a call to `#produce` will raise `Kafka::BufferOverflow`.
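+
+ One possible way to deal with this – a minimal sketch only, assuming `payload` holds the message value and that waiting briefly is acceptable for your workload – is to rescue the exception and retry:
+
+ ```ruby
+ begin
+   producer.produce(payload, topic: "greetings")
+ rescue Kafka::BufferOverflow
+   # The background worker can't keep up; wait a moment and try once more.
+   # Dropping the message or re-raising may be more appropriate elsewhere.
+   sleep 1
+   producer.produce(payload, topic: "greetings")
+ end
+ ```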
289
+
290
+ #### Serialization
291
+
292
+ This library is agnostic to which serialization format you prefer. Both the value and key of a message are treated as binary strings of data. This makes it easier to use whatever serialization format you want, since you don't have to do anything special to make it work with ruby-kafka. Here's an example of encoding data with JSON:
293
+
294
+ ```ruby
295
+ require "json"
296
+
297
+ # ...
298
+
299
+ event = {
300
+ "name" => "pageview",
301
+ "url" => "https://example.com/posts/123",
302
+ # ...
303
+ }
304
+
305
+ data = JSON.dump(event)
306
+
307
+ producer.produce(data, topic: "events")
308
+ ```
309
+
310
+ There's also an example of [encoding messages with Apache Avro](https://github.com/zendesk/ruby-kafka/wiki/Encoding-messages-with-Avro).
311
+
312
+ #### Partitioning
313
+
314
+ Kafka topics are partitioned, with messages being assigned to a partition by the client. This allows a great deal of flexibility for the users. This section describes several strategies for partitioning and how they impact performance, data locality, etc.
315
+
316
+
317
+ ##### Load Balanced Partitioning
318
+
319
+ When optimizing for efficiency, we either distribute messages as evenly as possible to all partitions, or make sure each producer always writes to a single partition. The former ensures an even load for downstream consumers; the latter ensures the highest producer performance, since message batching is done per partition.
320
+
321
+ If no explicit partition is specified, the producer will look to the partition key or the message key for a value that can be used to deterministically assign the message to a partition. If there is a large number of different keys, the resulting distribution will be fairly even. If no keys are passed, the producer will randomly assign a partition. Random partitioning can be achieved even if you use message keys by passing a random partition key, e.g. `partition_key: rand(100)`.
322
+
323
+ If you wish to have the producer write all messages to a single partition, simply generate a random value and re-use that as the partition key:
324
+
325
+ ```ruby
326
+ partition_key = rand(100)
327
+
328
+ producer.produce(msg1, topic: "messages", partition_key: partition_key)
329
+ producer.produce(msg2, topic: "messages", partition_key: partition_key)
330
+
331
+ # ...
332
+ ```
333
+
334
+ You can also base the partition key on some property of the producer, for example the host name.
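+
+ For example, a minimal sketch using the host name (here `msg` is just a placeholder for your message value):
+
+ ```ruby
+ require "socket"
+
+ # Every message produced from this host is assigned to the same partition.
+ producer.produce(msg, topic: "messages", partition_key: Socket.gethostname)
+ ```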
335
+
336
+ ##### Semantic Partitioning
337
+
338
+ By assigning messages to a partition based on some property of the message, e.g. making sure all events tracked in a user session are assigned to the same partition, downstream consumers can make simplifying assumptions about data locality. In that case, a consumer can keep process-local state pertaining to a user session, knowing that all events for the session will be read from a single partition. This is also called _semantic partitioning_, since the partition assignment is part of the application behavior.
339
+
340
+ Typically it's sufficient to simply pass a partition key in order to guarantee that a set of messages will be assigned to the same partition, e.g.
341
+
342
+ ```ruby
343
+ # All messages with the same `session_id` will be assigned to the same partition.
344
+ producer.produce(event, topic: "user-events", partition_key: session_id)
345
+ ```
346
+
347
+ However, sometimes it's necessary to select a specific partition. When doing this, make sure that you don't pick a partition number outside the range of partitions for the topic:
348
+
349
+ ```ruby
350
+ partitions = kafka.partitions_for("events")
351
+
352
+ # Make sure that we don't exceed the partition count!
353
+ partition = some_number % partitions
354
+
355
+ producer.produce(event, topic: "events", partition: partition)
356
+ ```
357
+
358
+ ##### Compatibility with Other Clients
359
+
360
+ There's no standardized way to assign messages to partitions across different Kafka client implementations. If you have a heterogeneous set of clients producing messages to the same topics it may be important to ensure a consistent partitioning scheme. This library doesn't try to implement all schemes, so you'll have to figure out which scheme the other client is using and replicate it. An example:
361
+
362
+ ```ruby
363
+ partitions = kafka.partitions_for("events")
364
+
365
+ # Insert your custom partitioning scheme here:
366
+ partition = PartitioningScheme.assign(partitions, event)
367
+
368
+ producer.produce(event, topic: "events", partition: partition)
369
+ ```
370
+
371
+ Another option is to configure a custom client partitioner that implements `call(partition_count, message)` and uses the same scheme as the other client. For example:
372
+
373
+ ```ruby
374
+ class CustomPartitioner
375
+ def call(partition_count, message)
376
+ ...
377
+ end
378
+ end
379
+
380
+ partitioner = CustomPartitioner.new
381
+ Kafka.new(partitioner: partitioner, ...)
382
+ ```
383
+
384
+ Or, simply create a Proc handling the partitioning logic instead of having to add a new class. For example:
385
+
386
+ ```ruby
387
+ partitioner = -> (partition_count, message) { ... }
388
+ Kafka.new(partitioner: partitioner, ...)
389
+ ```
390
+
391
+ ##### Supported partitioning schemes
392
+
393
+ In order for semantic partitioning to work a `partition_key` must map to the same partition number every time. The general approach, and the one used by this library, is to hash the key and mod it by the number of partitions. There are many different algorithms that can be used to calculate a hash. By default `crc32` is used. `murmur2` is also supported for compatibility with Java based Kafka producers.
394
+
395
+ To use `murmur2` hashing pass it as an argument to `Partitioner`. For example:
396
+
397
+ ```ruby
398
+ Kafka.new(partitioner: Kafka::Partitioner.new(hash_function: :murmur2))
399
+ ```
400
+
401
+ #### Buffering and Error Handling
402
+
403
+ The producer is designed for resilience in the face of temporary network errors, Kafka broker failovers, and other issues that prevent the client from writing messages to the destination topics. It does this by employing local, in-memory buffers. Only when messages are acknowledged by a Kafka broker will they be removed from the buffer.
404
+
405
+ Typically, you'd configure the producer to retry failed attempts at sending messages, but sometimes all retries are exhausted. In that case, `Kafka::DeliveryFailed` is raised from `Kafka::Producer#deliver_messages`. If you wish to have your application be resilient to this happening (e.g. if you're logging to Kafka from a web application) you can rescue this exception. The failed messages are still retained in the buffer, so a subsequent call to `#deliver_messages` will still attempt to send them.
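+
+ A minimal sketch of that pattern, assuming a `logger` is available and that deferring delivery to the next call is acceptable:
+
+ ```ruby
+ begin
+   producer.deliver_messages
+ rescue Kafka::DeliveryFailed => e
+   # The undelivered messages remain in the buffer; log the failure and let a
+   # later call to `#deliver_messages` try again.
+   logger.error("Delivery failed: #{e.message}")
+ end
+ ```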
406
+
407
+ Note that there's a maximum buffer size; by default, it's set to 1,000 messages and 10MB. It's possible to configure both these numbers:
408
+
409
+ ```ruby
410
+ producer = kafka.producer(
411
+ max_buffer_size: 5_000, # Allow at most 5K messages to be buffered.
412
+ max_buffer_bytesize: 100_000_000, # Allow at most 100MB to be buffered.
413
+ ...
414
+ )
415
+ ```
416
+
417
+ A final note on buffers: local buffers give resilience against broker and network failures, and allow higher throughput due to message batching, but they also trade off consistency guarantees for higher availability and resilience. If your local process dies while messages are buffered, those messages will be lost. If you require high levels of consistency, you should call `#deliver_messages` immediately after `#produce`.
418
+
419
+ #### Message Durability
420
+
421
+ Once the client has delivered a set of messages to a Kafka broker the broker will forward them to its replicas, thus ensuring that a single broker failure will not result in message loss. However, the client can choose _when the leader acknowledges the write_. At one extreme, the client can choose fire-and-forget delivery, not even bothering to check whether the messages have been acknowledged. At the other end, the client can ask the broker to wait until _all_ its replicas have acknowledged the write before returning. This is the safest option, and the default. It's also possible to have the broker return as soon as it has written the messages to its own log but before the replicas have done so. This leaves a window of time where a failure of the leader will result in the messages being lost, although this should not be a common occurrence.
422
+
423
+ Write latency and throughput are negatively impacted by having more replicas acknowledge a write, so if you require low-latency, high throughput writes you may want to accept lower durability.
424
+
425
+ This behavior is controlled by the `required_acks` option to `#producer` and `#async_producer`:
426
+
427
+ ```ruby
428
+ # This is the default: all replicas must acknowledge.
429
+ producer = kafka.producer(required_acks: :all)
430
+
431
+ # This is fire-and-forget: messages can easily be lost.
432
+ producer = kafka.producer(required_acks: 0)
433
+
434
+ # This only waits for the leader to acknowledge.
435
+ producer = kafka.producer(required_acks: 1)
436
+ ```
437
+
438
+ Unless you absolutely need lower latency it's highly recommended to use the default setting (`:all`).
439
+
440
+
441
+ #### Message Delivery Guarantees
442
+
443
+ There are basically two different and incompatible guarantees that can be made in a message delivery system such as Kafka:
444
+
445
+ 1. _at-most-once_ delivery guarantees that a message is delivered to the recipient at most _once_. This is useful when delivering a message twice carries some risk and should be avoided. The implicit downside is that there's no guarantee the message will be delivered at all.
446
+ 2. _at-least-once_ delivery guarantees that a message is delivered, but it may be delivered more than once. If the final recipient de-duplicates messages, e.g. by checking a unique message id, then it's even possible to implement _exactly-once_ delivery.
447
+
448
+ Of these two options, ruby-kafka implements the second one: when in doubt about whether a message has been delivered, a producer will try to deliver it again.
449
+
450
+ The guarantee is made only for the synchronous producer and boils down to this:
451
+
452
+ ```ruby
453
+ producer = kafka.producer
454
+
455
+ producer.produce("hello", topic: "greetings")
456
+
457
+ # If this line fails with Kafka::DeliveryFailed we *may* have succeeded in delivering
458
+ # the message to Kafka but won't know for sure.
459
+ producer.deliver_messages
460
+
461
+ # If we get to this line we can be sure that the message has been delivered to Kafka!
462
+ ```
463
+
464
+ That is, once `#deliver_messages` returns we can be sure that Kafka has received the message. Note that there are some big caveats here:
465
+
466
+ - Depending on how your cluster and topic are configured, the message could still be lost by Kafka.
467
+ - If you configure the producer to not require acknowledgements from the Kafka brokers by setting `required_acks` to zero there is no guarantee that the message will ever make it to a Kafka broker.
468
+ - If you use the asynchronous producer there's no guarantee that messages will have been delivered after `#deliver_messages` returns. A way of blocking until a message has been delivered with the asynchronous producer may be implemented in the future.
469
+
470
+ It's possible to improve your chances of success when calling `#deliver_messages`, at the price of a longer max latency:
471
+
472
+ ```ruby
473
+ producer = kafka.producer(
474
+ # The number of retries when attempting to deliver messages. The default is
475
+ # 2, so 3 attempts in total, but you can configure a higher or lower number:
476
+ max_retries: 5,
477
+
478
+ # The number of seconds to wait between retries. In order to handle longer
479
+ # periods of Kafka being unavailable, increase this number. The default is
480
+ # 1 second.
481
+ retry_backoff: 5,
482
+ )
483
+ ```
484
+
485
+ Note that these values affect the max latency of the operation; see [Understanding Timeouts](#understanding-timeouts) for an explanation of the various timeouts and latencies.
486
+
487
+ If you use the asynchronous producer you typically don't have to worry too much about this, as retries will be done in the background.
488
+
489
+ #### Compression
490
+
491
+ Depending on what kind of data you produce, enabling compression may yield improved bandwidth and space usage. Compression in Kafka is done on entire message sets rather than on individual messages. This improves the compression rate and generally means that compression works better the larger your buffers get, since the message sets will be larger by the time they're compressed.
492
+
493
+ Since many workloads have variations in throughput and distribution across partitions, it's possible to configure a threshold for when to enable compression by setting `compression_threshold`. Only if the defined number of messages are buffered for a partition will the messages be compressed.
494
+
495
+ Compression is enabled by passing the `compression_codec` parameter to `#producer` with the name of one of the algorithms allowed by Kafka:
496
+
497
+ * `:snappy` for [Snappy](http://google.github.io/snappy/) compression.
498
+ * `:gzip` for [gzip](https://en.wikipedia.org/wiki/Gzip) compression.
499
+ * `:lz4` for [LZ4](https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)) compression.
500
+ * `:zstd` for [zstd](https://facebook.github.io/zstd/) compression.
501
+
502
+ By default, all message sets will be compressed if you specify a compression codec. To increase the compression threshold, set `compression_threshold` to an integer value higher than one.
503
+
504
+ ```ruby
505
+ producer = kafka.producer(
506
+ compression_codec: :snappy,
507
+ compression_threshold: 10,
508
+ )
509
+ ```
510
+
511
+ #### Producing Messages from a Rails Application
512
+
513
+ A typical use case for Kafka is tracking events that occur in web applications. Oftentimes it's advisable to avoid having a hard dependency on Kafka being available, allowing your application to survive a Kafka outage. By using an asynchronous producer, you can avoid doing IO within the individual request/response cycles, instead pushing that to the producer's internal background thread.
514
+
515
+ In this example, a producer is configured in a Rails initializer:
516
+
517
+ ```ruby
518
+ # config/initializers/kafka_producer.rb
519
+ require "kafka"
520
+
521
+ # Configure the Kafka client with the broker hosts and the Rails
522
+ # logger.
523
+ $kafka = Kafka.new(["kafka1:9092", "kafka2:9092"], logger: Rails.logger)
524
+
525
+ # Set up an asynchronous producer that delivers its buffered messages
526
+ # every ten seconds:
527
+ $kafka_producer = $kafka.async_producer(
528
+ delivery_interval: 10,
529
+ )
530
+
531
+ # Make sure to shut down the producer when exiting.
532
+ at_exit { $kafka_producer.shutdown }
533
+ ```
534
+
535
+ In your controllers, simply call the producer directly:
536
+
537
+ ```ruby
538
+ # app/controllers/orders_controller.rb
539
+ class OrdersController
540
+ def create
541
+ @order = Order.create!(params[:order])
542
+
543
+ event = {
544
+ order_id: @order.id,
545
+ amount: @order.amount,
546
+ timestamp: Time.now,
547
+ }
548
+
549
+ $kafka_producer.produce(event.to_json, topic: "order_events")
550
+ end
551
+ end
552
+ ```
553
+
554
+ ### Consuming Messages from Kafka
555
+
556
+ **Note:** If you're just looking to get started with Kafka consumers, you might be interested in visiting the [Higher level libraries](#higher-level-libraries) section that lists ruby-kafka based frameworks. Read on, if you're interested in either rolling your own executable consumers or if you want to learn more about how consumers work in Kafka.
557
+
558
+ Consuming messages from a Kafka topic with ruby-kafka is simple:
559
+
560
+ ```ruby
561
+ require "kafka"
562
+
563
+ kafka = Kafka.new(["kafka1:9092", "kafka2:9092"])
564
+
565
+ kafka.each_message(topic: "greetings") do |message|
566
+ puts message.offset, message.key, message.value
567
+ end
568
+ ```
569
+
570
+ While this is great for extremely simple use cases, there are a number of downsides:
571
+
572
+ - You can only fetch from a single topic at a time.
573
+ - If you want to have multiple processes consume from the same topic, there's no way of coordinating which processes should fetch from which partitions.
574
+ - If the process dies, there's no way to have another process resume fetching from the point in the partition that the original process had reached.
575
+
576
+
577
+ #### Consumer Groups
578
+
579
+ The Consumer API solves all of the above issues, and more. It uses the Consumer Groups feature released in Kafka 0.9 to allow multiple consumer processes to coordinate access to a topic, assigning each partition to a single consumer. When a consumer fails, the partitions that were assigned to it are re-assigned to other members of the group.
580
+
581
+ Using the API is simple:
582
+
583
+ ```ruby
584
+ require "kafka"
585
+
586
+ kafka = Kafka.new(["kafka1:9092", "kafka2:9092"])
587
+
588
+ # Consumers with the same group id will form a Consumer Group together.
589
+ consumer = kafka.consumer(group_id: "my-consumer")
590
+
591
+ # It's possible to subscribe to multiple topics by calling `subscribe`
592
+ # repeatedly.
593
+ consumer.subscribe("greetings")
594
+
595
+ # Stop the consumer when the SIGTERM signal is sent to the process.
596
+ # It's better to shut down gracefully than to kill the process.
597
+ trap("TERM") { consumer.stop }
598
+
599
+ # This will loop indefinitely, yielding each message in turn.
600
+ consumer.each_message do |message|
601
+ puts message.topic, message.partition
602
+ puts message.offset, message.key, message.value
603
+ end
604
+ ```
605
+
606
+ Each consumer process will be assigned one or more partitions from each topic that the group subscribes to. In order to handle more messages, simply start more processes.
607
+
608
+ #### Consumer Checkpointing
609
+
610
+ In order to be able to resume processing after a consumer crashes, each consumer will periodically _checkpoint_ its position within each partition it reads from. Since each partition has a monotonically increasing sequence of message offsets, this works by _committing_ the offset of the last message that was processed in a given partition. Kafka handles these commits and allows another consumer in a group to resume from the last commit when a member crashes or becomes unresponsive.
611
+
612
+ By default, offsets are committed every 10 seconds. You can commit more frequently by lowering the _offset commit interval_, which limits the duration of double-processing scenarios at the cost of lower throughput due to the added coordination. If double-processing is of less concern to you and you want to improve throughput, commit less frequently instead. Set the commit interval to zero in order to disable the timer-based commit trigger entirely.
613
+
614
+ In addition to the time based trigger it's possible to trigger checkpointing in response to _n_ messages having been processed, known as the _offset commit threshold_. This puts a bound on the number of messages that can be double-processed before the problem is detected. Setting this to 1 will cause an offset commit to take place every time a message has been processed. By default this trigger is disabled (set to zero).
615
+
616
+ It is possible to trigger an immediate offset commit by calling `Consumer#commit_offsets`. This blocks the caller until the Kafka cluster has acknowledged the commit.
617
+
618
+ Stale offsets are periodically purged by the broker. The broker setting `offsets.retention.minutes` controls the retention window for committed offsets, and defaults to 1 day. The length of the retention window, known as _offset retention time_, can be changed for the consumer.
619
+
620
+ Previously committed offsets are re-committed, to reset the retention window, at the first commit and periodically at an interval of half the _offset retention time_.
621
+
622
+ ```ruby
623
+ consumer = kafka.consumer(
624
+ group_id: "some-group",
625
+
626
+ # Increase offset commit frequency to once every 5 seconds.
627
+ offset_commit_interval: 5,
628
+
629
+ # Commit offsets when 100 messages have been processed.
630
+ offset_commit_threshold: 100,
631
+
632
+ # Change the length of time that committed offsets are kept.
633
+ offset_retention_time: 7 * 60 * 60
634
+ )
635
+ ```
636
+
637
+ For some use cases it may be necessary to control when messages are marked as processed. Note that since only the consumer position within each partition can be saved, marking a message as processed implies that all messages in the partition with a lower offset should also be considered as having been processed.
638
+
639
+ The method `Consumer#mark_message_as_processed` marks a message (and all those that precede it in a partition) as having been processed. This is an advanced API that you should only use if you know what you're doing.
640
+
641
+ ```ruby
642
+ # Manually controlling checkpointing:
643
+
644
+ # Typically you want to use this API in order to buffer messages until some
645
+ # special "commit" message is received, e.g. in order to group together
646
+ # transactions consisting of several items.
647
+ buffer = []
648
+
649
+ # Messages will not be marked as processed automatically. If you shut down the
650
+ # consumer without calling `#mark_message_as_processed` first, the consumer will
651
+ # not resume where you left off!
652
+ consumer.each_message(automatically_mark_as_processed: false) do |message|
653
+ # Our messages are JSON with a `type` field and other stuff.
654
+ event = JSON.parse(message.value)
655
+
656
+ case event.fetch("type")
657
+ when "add_to_cart"
658
+ buffer << event
659
+ when "complete_purchase"
660
+ # We've received all the messages we need, time to save the transaction.
661
+ save_transaction(buffer)
662
+
663
+ # Now we can set the checkpoint by marking the last message as processed.
664
+ consumer.mark_message_as_processed(message)
665
+
666
+ # We can optionally trigger an immediate, blocking offset commit in order
667
+ # to minimize the risk of crashing before the automatic triggers have
668
+ # kicked in.
669
+ consumer.commit_offsets
670
+
671
+ # Make the buffer ready for the next transaction.
672
+ buffer.clear
673
+ end
674
+ end
675
+ ```
676
+
677
+
678
+ #### Topic Subscriptions
679
+
680
+ For each topic subscription it's possible to decide whether to consume messages starting at the beginning of the topic or to just consume new messages that are produced to the topic. This policy is configured by setting the `start_from_beginning` argument when calling `#subscribe`:
681
+
682
+ ```ruby
683
+ # Consume messages from the very beginning of the topic. This is the default.
684
+ consumer.subscribe("users", start_from_beginning: true)
685
+
686
+ # Only consume new messages.
687
+ consumer.subscribe("notifications", start_from_beginning: false)
688
+ ```
689
+
690
+ Once the consumer group has checkpointed its progress in the topic's partitions, the consumers will always start from the checkpointed offsets, regardless of `start_from_beginning`. As such, this setting only applies when the consumer initially starts consuming from a topic.
691
+
692
+
693
+ #### Shutting Down a Consumer
694
+
695
+ In order to shut down a running consumer process cleanly, call `#stop` on it. A common pattern is to trap a process signal and initiate the shutdown from there:
696
+
697
+ ```ruby
698
+ consumer = kafka.consumer(...)
699
+
700
+ # The consumer can be stopped from the command line by executing
701
+ # `kill -s TERM <process-id>`.
702
+ trap("TERM") { consumer.stop }
703
+
704
+ consumer.each_message do |message|
705
+ ...
706
+ end
707
+ ```
708
+
709
+
710
+ #### Consuming Messages in Batches
711
+
712
+ Sometimes it is easier to deal with messages in batches rather than individually. A _batch_ is a sequence of one or more Kafka messages that all belong to the same topic and partition. One common reason to want to use batches is when some external system has a batch or transactional API.
713
+
714
+ ```ruby
715
+ # A mock search index that we'll be keeping up to date with new Kafka messages.
716
+ index = SearchIndex.new
717
+
718
+ consumer.subscribe("posts")
719
+
720
+ consumer.each_batch do |batch|
721
+ puts "Received batch: #{batch.topic}/#{batch.partition}"
722
+
723
+ transaction = index.transaction
724
+
725
+ batch.messages.each do |message|
726
+ # Let's assume that adding a document is idempotent.
727
+ transaction.add(id: message.key, body: message.value)
728
+ end
729
+
730
+ # Once this method returns, the messages have been successfully written to the
731
+ # search index. The consumer will only checkpoint a batch *after* the block
732
+ # has completed without an exception.
733
+ transaction.commit!
734
+ end
735
+ ```
736
+
737
+ One important thing to note is that the client commits the offset of the batch's messages only after the _entire_ batch has been processed.
738
+
739
+
740
+ #### Balancing Throughput and Latency
741
+
742
+ There are two performance properties that can at times be at odds: _throughput_ and _latency_. Throughput is the number of messages that can be processed in a given timespan; latency is the time from when a message is written to a topic until it has been processed.
743
+
744
+ In order to optimize for throughput, you want to make sure to fetch as many messages as possible every time you do a round trip to the Kafka cluster. This minimizes network overhead and allows processing data in big chunks.
745
+
746
+ In order to optimize for low latency, you want to process a message as soon as possible, even if that means fetching a smaller batch of messages.
747
+
748
+ There are three values that can be tuned in order to balance these two concerns.
749
+
750
+ * `min_bytes` is the minimum number of bytes to return from a single message fetch. By setting this to a high value you can increase the processing throughput. The default value is one byte.
751
+ * `max_wait_time` is the maximum number of seconds to wait before returning data from a single message fetch. By setting this high you also increase the processing throughput – and by setting it low you set a bound on latency. This configuration overrides `min_bytes`, so you'll _always_ get data back within the time specified. The default value is one second. If you want to have at most five seconds of latency, set `max_wait_time` to 5. You should make sure `max_wait_time` * num brokers + `heartbeat_interval` is less than `session_timeout`.
752
+ * `max_bytes_per_partition` is the maximum amount of data a broker will return for a single partition when fetching new messages. The default is 1MB, but increasing this number may lead to better throughput since you'll need to fetch less frequently. Setting it to a lower value is not recommended unless you have so many partitions that it's causing network and latency issues to transfer a fetch response from a broker to a client. Setting the number too high may result in instability, so be careful.
753
+
754
+ The first two settings can be passed to either `#each_message` or `#each_batch`, e.g.
755
+
756
+ ```ruby
757
+ # Waits for data for up to 5 seconds on each broker, preferring to fetch at least 5KB at a time.
758
+ # This can wait up to num brokers * 5 seconds.
759
+ consumer.each_message(min_bytes: 1024 * 5, max_wait_time: 5) do |message|
760
+ # ...
761
+ end
762
+ ```
763
+
764
+ The last setting is configured when subscribing to a topic, and can vary between topics:
765
+
766
+ ```ruby
767
+ # Fetches up to 5MB per partition at a time for better throughput.
768
+ consumer.subscribe("greetings", max_bytes_per_partition: 5 * 1024 * 1024)
769
+
770
+ consumer.each_message do |message|
771
+ # ...
772
+ end
773
+ ```
774
+
775
+ #### Customizing Partition Assignment Strategy
776
+
777
+ In some cases, you might want to assign more partitions to some consumers. For example, in applications inserting records into a database, the consumers running on hosts near the database can process more messages than the consumers running on other hosts.
778
+ You can use a custom assignment strategy by passing an object that implements `#call` as the argument `assignment_strategy` like below:
779
+
780
+ ```ruby
781
+ class CustomAssignmentStrategy
782
+ def initialize(user_data)
783
+ @user_data = user_data
784
+ end
785
+
786
+ # Assign the topic partitions to the group members.
787
+ #
788
+ # @param cluster [Kafka::Cluster]
789
+ # @param members [Hash<String, Kafka::Protocol::JoinGroupResponse::Metadata>] a hash
790
+ # mapping member ids to metadata
791
+ # @param partitions [Array<Kafka::ConsumerGroup::Assignor::Partition>] a list of
792
+ # partitions the consumer group processes
793
+ # @return [Hash<String, Array<Kafka::ConsumerGroup::Assignor::Partition>>] a hash
794
+ # mapping member ids to partitions.
795
+ def call(cluster:, members:, partitions:)
796
+ ...
797
+ end
798
+ end
799
+
800
+ strategy = CustomAssignmentStrategy.new("some-host-information")
801
+ consumer = kafka.consumer(group_id: "some-group", assignment_strategy: strategy)
802
+ ```
803
+
804
+ `members` is a hash mapping member IDs to metadata, and `partitions` is a list of the partitions the consumer group processes. The method `call` must return a hash mapping member IDs to partitions. For example, the following strategy assigns partitions randomly:
805
+
806
+ ```ruby
807
+ class RandomAssignmentStrategy
808
+ def call(cluster:, members:, partitions:)
809
+ member_ids = members.keys
810
+ partitions.each_with_object(Hash.new {|h, k| h[k] = [] }) do |partition, partitions_per_member|
811
+ partitions_per_member[member_ids[rand(member_ids.count)]] << partition
812
+ end
813
+ end
814
+ end
815
+ ```
816
+
817
+ If the strategy needs user data, you should define the method `user_data` that returns user data on each consumer. For example, the following strategy uses the consumers' IP addresses as user data:
818
+
819
+ ```ruby
820
+ class NetworkTopologyAssignmentStrategy
821
+ def user_data
822
+ Socket.ip_address_list.find(&:ipv4_private?).ip_address
823
+ end
824
+
825
+ def call(cluster:, members:, partitions:)
826
+ # Display the pair of the member ID and IP address
827
+ members.each do |id, metadata|
828
+ puts "#{id}: #{metadata.user_data}"
829
+ end
830
+
831
+ # Assign partitions considering the network topology
832
+ ...
833
+ end
834
+ end
835
+ ```
836
+
837
+ Note that the strategy uses the class name as the default protocol name. You can change it by defining the method `protocol_name`:
838
+
839
+ ```ruby
840
+ class NetworkTopologyAssignmentStrategy
841
+ def protocol_name
842
+ "networktopology"
843
+ end
844
+
845
+ def user_data
846
+ Socket.ip_address_list.find(&:ipv4_private?).ip_address
847
+ end
848
+
849
+ def call(cluster:, members:, partitions:)
850
+ ...
851
+ end
852
+ end
853
+ ```
854
+
855
+ Because `call` might otherwise receive user data in a format it doesn't expect, you should avoid reusing a protocol name from another strategy that uses a different user data format.
856
+
857
+
858
+ ### Thread Safety
859
+
860
+ You typically don't want to share a Kafka client object between threads, since the network communication is not synchronized. Furthermore, you should avoid using threads in a consumer unless you're very careful about waiting for all work to complete before returning from the `#each_message` or `#each_batch` block. This is because _checkpointing_ assumes that returning from the block means that the messages that have been yielded have been successfully processed.
861
+
862
+ You should also avoid sharing a synchronous producer between threads, as the internal buffers are not thread safe. However, the _asynchronous_ producer should be safe to use in a multi-threaded environment. This is because producers, when instantiated, get their own copy of any non-thread-safe data such as network sockets. Furthermore, the asynchronous producer is designed so that only a single background thread operates on this data, while any foreground thread with a reference to the producer object can only send messages to that background thread over a thread-safe queue. It is therefore safe to share an async producer object between many threads.
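+
+ A rough sketch of the pattern described above, assuming a `kafka` client has already been set up:
+
+ ```ruby
+ # One async producer for the whole process...
+ producer = kafka.async_producer(delivery_interval: 10)
+
+ # ...can be shared between threads, because `#produce` only places the message
+ # on a thread-safe queue that the background worker reads from.
+ threads = 4.times.map do
+   Thread.new { producer.produce("hello", topic: "greetings") }
+ end
+
+ threads.each(&:join)
+ producer.shutdown
+ ```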
863
+
864
+ ### Logging
865
+
866
+ It's a very good idea to configure the Kafka client with a logger. All important operations and errors are logged. When instantiating your client, simply pass in a valid logger:
867
+
868
+ ```ruby
869
+ logger = Logger.new("log/kafka.log")
870
+ kafka = Kafka.new(logger: logger, ...)
871
+ ```
872
+
873
+ By default, nothing is logged.
874
+
875
+ ### Instrumentation
876
+
877
+ Most operations are instrumented using [Active Support Notifications](http://api.rubyonrails.org/classes/ActiveSupport/Notifications.html). In order to subscribe to notifications, make sure to require the notifications library:
878
+
879
+ ```ruby
880
+ require "active_support/notifications"
881
+ require "kafka"
882
+ ```
883
+
884
+ The notifications are namespaced based on their origin, with separate namespaces for the producer and the consumer.
885
+
886
+ In order to receive notifications you can either subscribe to individual notification names or use regular expressions to subscribe to entire namespaces. This example will subscribe to _all_ notifications sent by ruby-kafka:
887
+
888
+ ```ruby
889
+ ActiveSupport::Notifications.subscribe(/.*\.kafka$/) do |*args|
890
+ event = ActiveSupport::Notifications::Event.new(*args)
891
+ puts "Received notification `#{event.name}` with payload: #{event.payload.inspect}"
892
+ end
893
+ ```
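+
+ You can also subscribe to a single event by its full name. As a small sketch, this logs request sizes using the connection notification payload documented below:
+
+ ```ruby
+ ActiveSupport::Notifications.subscribe("request.connection.kafka") do |*args|
+   event = ActiveSupport::Notifications::Event.new(*args)
+   payload = event.payload
+   puts "#{payload[:api]}: #{payload[:request_size]} bytes sent, #{payload[:response_size]} bytes received"
+ end
+ ```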
894
+
895
+ All notification events have the `client_id` key in the payload, referring to the Kafka client id.
896
+
897
+ #### Producer Notifications
898
+
899
+ * `produce_message.producer.kafka` is sent whenever a message is produced to a buffer. It includes the following payload:
900
+ * `value` is the message value.
901
+ * `key` is the message key.
902
+ * `topic` is the topic that the message was produced to.
903
+ * `buffer_size` is the size of the producer buffer after adding the message.
904
+ * `max_buffer_size` is the maximum size of the producer buffer.
905
+
906
+ * `deliver_messages.producer.kafka` is sent whenever a producer attempts to deliver its buffered messages to the Kafka brokers. It includes the following payload:
907
+ * `attempts` is the number of times delivery was attempted.
908
+ * `message_count` is the number of messages for which delivery was attempted.
909
+ * `delivered_message_count` is the number of messages that were acknowledged by the brokers - if this number is smaller than `message_count` not all messages were successfully delivered.
910
+
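+ As a small sketch (the warning logic is illustrative), you could subscribe to `deliver_messages.producer.kafka` and flag incomplete deliveries:
+
+ ```ruby
+ require "active_support/notifications"
+
+ ActiveSupport::Notifications.subscribe("deliver_messages.producer.kafka") do |*args|
+   event = ActiveSupport::Notifications::Event.new(*args)
+   payload = event.payload
+
+   delivered = payload.fetch(:delivered_message_count)
+   attempted = payload.fetch(:message_count)
+
+   if delivered < attempted
+     # Some messages were not acknowledged by the brokers.
+     warn "Only #{delivered}/#{attempted} messages delivered after #{payload.fetch(:attempts)} attempts"
+   end
+ end
+ ```
+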
911
+ #### Consumer Notifications
912
+
913
+ All notifications have `group_id` in the payload, referring to the Kafka consumer group id.
914
+
915
+ * `process_message.consumer.kafka` is sent whenever a message is processed by a consumer. It includes the following payload:
916
+ * `value` is the message value.
917
+ * `key` is the message key.
918
+ * `topic` is the topic that the message was consumed from.
919
+ * `partition` is the topic partition that the message was consumed from.
920
+ * `offset` is the message's offset within the topic partition.
921
+   * `offset_lag` is the number of messages within the topic partition that have not yet been consumed (see the lag-tracking sketch after this list).
922
+
923
+ * `start_process_message.consumer.kafka` is sent before `process_message.consumer.kafka`, and contains the same payload. It is delivered _before_ the message is processed, rather than _after_.
924
+
925
+ * `process_batch.consumer.kafka` is sent whenever a message batch is processed by a consumer. It includes the following payload:
926
+ * `message_count` is the number of messages in the batch.
927
+ * `topic` is the topic that the message batch was consumed from.
928
+ * `partition` is the topic partition that the message batch was consumed from.
929
+ * `highwater_mark_offset` is the message batch's highest offset within the topic partition.
930
+ * `offset_lag` is the number of messages within the topic partition that have not yet been consumed.
931
+
932
+ * `start_process_batch.consumer.kafka` is sent before `process_batch.consumer.kafka`, and contains the same payload. It is delivered _before_ the batch is processed, rather than _after_.
933
+
934
+ * `join_group.consumer.kafka` is sent whenever a consumer joins a consumer group. It includes the following payload:
935
+ * `group_id` is the consumer group id.
936
+
937
+ * `sync_group.consumer.kafka` is sent whenever a consumer is assigned topic partitions within a consumer group. It includes the following payload:
938
+ * `group_id` is the consumer group id.
939
+
940
+ * `leave_group.consumer.kafka` is sent whenever a consumer leaves a consumer group. It includes the following payload:
941
+ * `group_id` is the consumer group id.
942
+
943
+ * `seek.consumer.kafka` is sent when a consumer first seeks to an offset. It includes the following payload:
944
+ * `group_id` is the consumer group id.
945
+ * `topic` is the topic we are seeking in.
946
+ * `partition` is the partition we are seeking in.
947
+ * `offset` is the offset we have seeked to.
948
+
949
+ * `heartbeat.consumer.kafka` is sent when a consumer group completes a heartbeat. It includes the following payload:
950
+ * `group_id` is the consumer group id.
951
+   * `topic_partitions` is a hash of `{ topic_name => array of assigned partition IDs }`.
952
+
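+ As a sketch of how these events can be used (the reporting is illustrative), you could track consumer lag from `process_message.consumer.kafka`:
+
+ ```ruby
+ require "active_support/notifications"
+
+ ActiveSupport::Notifications.subscribe("process_message.consumer.kafka") do |*args|
+   event = ActiveSupport::Notifications::Event.new(*args)
+   payload = event.payload
+
+   topic = payload.fetch(:topic)
+   partition = payload.fetch(:partition)
+   lag = payload.fetch(:offset_lag)
+
+   # Replace this with a call to your metrics system of choice.
+   puts "Lag for #{topic}/#{partition}: #{lag} messages"
+ end
+ ```
+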
953
+ #### Connection Notifications
954
+
955
+ * `request.connection.kafka` is sent whenever a network request is sent to a Kafka broker. It includes the following payload:
956
+ * `api` is the name of the API that was called, e.g. `produce` or `fetch`.
957
+ * `request_size` is the number of bytes in the request.
958
+ * `response_size` is the number of bytes in the response.
959
+
960
+
961
+ ### Monitoring
962
+
963
+ It is highly recommended that you monitor your Kafka client applications in production. Typical problems you'll see are:
964
+
965
+ * high network error rates, which may impact performance and time-to-delivery;
966
+ * producer buffer growth, which may indicate that producers are unable to deliver messages at the rate they're being produced;
967
+ * consumer processing errors, indicating exceptions are being raised in the processing code;
968
+ * frequent consumer rebalances, which may indicate unstable network conditions or consumer configurations.
969
+
970
+ You can quite easily build monitoring on top of the provided [instrumentation hooks](#instrumentation). To further help with monitoring, prebuilt [Statsd](https://github.com/etsy/statsd) and [Datadog](https://www.datadoghq.com/) reporters are included with ruby-kafka.
971
+
972
+
973
+ #### What to Monitor
974
+
975
+ We recommend monitoring the following:
976
+
977
+ * Low-level Kafka API calls:
978
+ * The rate of API call errors to the total number of calls by both API and broker.
979
+ * The API call throughput by both API and broker.
980
+ * The API call latency by both API and broker.
981
+ * Producer-level metrics:
982
+ * Delivery throughput by topic.
983
+ * The latency of deliveries.
984
+ * The producer buffer fill ratios.
985
+ * The async producer queue sizes.
986
+ * Message delivery delays.
987
+ * Failed delivery attempts.
988
+ * Consumer-level metrics:
989
+ * Message processing throughput by topic.
990
+ * Processing latency by topic.
991
+ * Processing errors by topic.
992
+ * Consumer lag (how many messages are yet to be processed) by topic/partition.
993
+ * Group join/sync/leave by client host.
994
+
995
+
996
+ #### Reporting Metrics to Statsd
997
+
998
+ The Statsd reporter is automatically enabled when the `kafka/statsd` library is required. You can optionally change the configuration.
999
+
1000
+ ```ruby
1001
+ require "kafka/statsd"
1002
+
1003
+ # Default is "ruby_kafka".
1004
+ Kafka::Statsd.namespace = "custom-namespace"
1005
+
1006
+ # Default is "127.0.0.1".
1007
+ Kafka::Statsd.host = "statsd.something.com"
1008
+
1009
+ # Default is 8125.
1010
+ Kafka::Statsd.port = 1234
1011
+ ```
1012
+
1013
+
1014
+ #### Reporting Metrics to Datadog
1015
+
1016
+ The Datadog reporter is automatically enabled when the `kafka/datadog` library is required. You can optionally change the configuration.
1017
+
1018
+ ```ruby
1019
+ # This enables the reporter:
1020
+ require "kafka/datadog"
1021
+
1022
+ # Default is "ruby_kafka".
1023
+ Kafka::Datadog.namespace = "custom-namespace"
1024
+
1025
+ # Default is "127.0.0.1".
1026
+ Kafka::Datadog.host = "statsd.something.com"
1027
+
1028
+ # Default is 8125.
1029
+ Kafka::Datadog.port = 1234
1030
+ ```
1031
+
1032
+ ### Understanding Timeouts
1033
+
1034
+ It's important to understand how timeouts work if you have a latency-sensitive application. This library allows configuring timeouts at different levels:
1035
+
1036
+ **Network timeouts** apply to network connections to individual Kafka brokers. There are two config keys here, each passed to `Kafka.new`:
1037
+
1038
+ * `connect_timeout` sets the number of seconds to wait while connecting to a broker for the first time. When ruby-kafka initializes, it needs to connect to at least one host in `seed_brokers` in order to discover the Kafka cluster. Each host is tried until there's one that works. Usually that means the first one, but if your entire cluster is down, or there's a network partition, you could wait up to `n * connect_timeout` seconds, where `n` is the number of seed brokers.
1039
+ * `socket_timeout` sets the number of seconds to wait when reading from or writing to a socket connection to a broker. After this timeout expires, the connection will be killed. Note that some Kafka operations are by definition long-running, such as waiting for new messages to arrive in a partition, so don't set this value too low. When configuring timeouts for specific Kafka operations, make sure they are shorter than this one.
1040
+
1041
+ **Producer timeouts** can be configured when calling `#producer` on a client instance:
1042
+
1043
+ * `ack_timeout` is a timeout enforced by the broker when the client is sending messages to it. It defines the number of seconds the broker should wait for replicas to acknowledge the write before responding to the client with an error. As such, it relates to the `required_acks` setting. It should be set lower than `socket_timeout`.
1044
+ * `retry_backoff` configures the number of seconds to wait after a failed attempt to send messages to a Kafka broker before retrying. The `max_retries` setting defines the maximum number of retries to attempt, and so the total duration could be up to `max_retries * retry_backoff` seconds. The timeout can be arbitrarily long, and shouldn't be too short: if a broker goes down its partitions will be handed off to another broker, and that can take tens of seconds.
1045
+
1046
+ When sending many messages, it's likely that the client needs to send some messages to each broker in the cluster. Given `n` brokers in the cluster, the total wait time when calling `Kafka::Producer#deliver_messages` can be up to
1047
+
1048
+ n * (connect_timeout + socket_timeout + retry_backoff) * max_retries
1049
+
1050
+ Make sure your application can survive being blocked for so long.
1051
+
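+ As a rough sketch of how these settings fit together (the values are illustrative, not recommendations):
+
+ ```ruby
+ require "kafka"
+
+ kafka = Kafka.new(
+   ["kafka1:9092", "kafka2:9092"],
+   connect_timeout: 10, # seconds to wait when connecting to a broker
+   socket_timeout: 30,  # seconds to wait on socket reads and writes
+ )
+
+ producer = kafka.producer(
+   required_acks: :all,
+   ack_timeout: 5,      # should be lower than socket_timeout
+   max_retries: 2,
+   retry_backoff: 1,
+ )
+
+ # Worst case for #deliver_messages with these 2 seed brokers:
+ # 2 * (10 + 30 + 1) * 2 = 164 seconds.
+ ```
+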
1052
+ ### Security
1053
+
1054
+ #### Encryption and Authentication using SSL
1055
+
1056
+ By default, communication between Kafka clients and brokers is unencrypted and unauthenticated. Kafka 0.9 added optional support for [encryption and client authentication and authorization](http://kafka.apache.org/documentation.html#security_ssl). There are two layers of security made possible by this:
1057
+
1058
+ ##### Encryption of Communication
1059
+
1060
+ By enabling SSL encryption you can have some confidence that messages can be sent to Kafka over an untrusted network without being intercepted.
1061
+
1062
+ In this case you just need to pass a valid CA certificate as a string when configuring your `Kafka` client:
1063
+
1064
+ ```ruby
1065
+ kafka = Kafka.new(["kafka1:9092"], ssl_ca_cert: File.read('my_ca_cert.pem'))
1066
+ ```
1067
+
1068
+ Without passing the CA certificate to the client, it would be impossible to protect against [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
1069
+
1070
+ ##### Using your system's CA cert store
1071
+
1072
+ If you want to use the CA certs from your system's default certificate store, you
1073
+ can use:
1074
+
1075
+ ```ruby
1076
+ kafka = Kafka.new(["kafka1:9092"], ssl_ca_certs_from_system: true)
1077
+ ```
1078
+
1079
+ This configures the store to look up CA certificates from the system default certificate store on an as-needed basis. The location of the store can usually be determined by:
1080
+ `OpenSSL::X509::DEFAULT_CERT_FILE`
1081
+
1082
+ ##### Client Authentication
1083
+
1084
+ In order to authenticate the client to the cluster, you need to pass in a certificate and key created for the client and trusted by the brokers.
1085
+
1086
+ **NOTE**: You can disable hostname validation by passing `ssl_verify_hostname: false`.
1087
+
1088
+ ```ruby
1089
+ kafka = Kafka.new(
1090
+ ["kafka1:9092"],
1091
+ ssl_ca_cert: File.read('my_ca_cert.pem'),
1092
+ ssl_client_cert: File.read('my_client_cert.pem'),
1093
+ ssl_client_cert_key: File.read('my_client_cert_key.pem'),
1094
+ ssl_client_cert_key_password: 'my_client_cert_key_password',
1095
+ ssl_verify_hostname: false,
1096
+ # ...
1097
+ )
1098
+ ```
1099
+
1100
+ Once client authentication is set up, it is possible to configure the Kafka cluster to [authorize client requests](http://kafka.apache.org/documentation.html#security_authz).
1101
+
1102
+ ##### Using JKS Certificates
1103
+
1104
+ Typically, Kafka certificates come in the JKS format, which isn't supported by ruby-kafka. There's [a wiki page](https://github.com/zendesk/ruby-kafka/wiki/Creating-X509-certificates-from-JKS-format) that describes how to generate valid X509 certificates from JKS certificates.
1105
+
1106
+ #### Authentication using SASL
1107
+
1108
+ Kafka has support for using SASL to authenticate clients. Currently, the GSSAPI, PLAIN, SCRAM, and OAUTHBEARER mechanisms are supported by ruby-kafka.
1109
+
1110
+ **NOTE:** When using SASL for authentication, it is highly recommended to also use SSL encryption. By default, ruby-kafka requires SSL in this case, so you need to configure encryption by passing `ssl_ca_cert` or enabling `ssl_ca_certs_from_system`. However, this strict SSL check can be disabled by setting `sasl_over_ssl` to `false` when initializing the client.
1111
+
1112
+ ##### GSSAPI
1113
+ In order to authenticate using GSSAPI, set your principal and optionally your keytab when initializing the Kafka client:
1114
+
1115
+ ```ruby
1116
+ kafka = Kafka.new(
1117
+ ["kafka1:9092"],
1118
+ sasl_gssapi_principal: 'kafka/kafka.example.com@EXAMPLE.COM',
1119
+ sasl_gssapi_keytab: '/etc/keytabs/kafka.keytab',
1120
+ # ...
1121
+ )
1122
+ ```
1123
+
1124
+ ##### PLAIN
1125
+ In order to authenticate using PLAIN, you must set your username and password when initializing the Kafka client:
1126
+
1127
+ ```ruby
1128
+ kafka = Kafka.new(
1129
+ ["kafka1:9092"],
1130
+ ssl_ca_cert: File.read('/etc/openssl/cert.pem'),
1131
+ sasl_plain_username: 'username',
1132
+ sasl_plain_password: 'password'
1133
+ # ...
1134
+ )
1135
+ ```
1136
+
1137
+ ##### SCRAM
1138
+ Since version 0.11, Kafka supports [SCRAM](https://kafka.apache.org/documentation.html#security_sasl_scram).
1139
+
1140
+ ```ruby
1141
+ kafka = Kafka.new(
1142
+ ["kafka1:9092"],
1143
+ sasl_scram_username: 'username',
1144
+ sasl_scram_password: 'password',
1145
+ sasl_scram_mechanism: 'sha256',
1146
+ # ...
1147
+ )
1148
+ ```
1149
+
1150
+ ##### OAUTHBEARER
1151
+ This mechanism is supported in Kafka >= 2.0.0, as introduced by [KIP-255](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876).
1152
+
1153
+ In order to authenticate using OAUTHBEARER, you must configure the client with an instance of a class that implements a `token` method (the interface is described in [Kafka::Sasl::OAuth](lib/kafka/sasl/oauth.rb)) which returns an ID/Access token.
1154
+
1155
+ Optionally, the client may implement an `extensions` method that returns a map of key-value pairs. These can be sent with the SASL/OAUTHBEARER initial client response; this is only supported in Kafka >= 2.1.0. A sketch of a provider that supplies extensions follows the basic example below.
1156
+
1157
+ ```ruby
1158
+ class TokenProvider
1159
+ def token
1160
+ "some_id_token"
1161
+ end
1162
+ end
1163
+ # ...
1164
+ client = Kafka.new(
1165
+ ["kafka1:9092"],
1166
+ sasl_oauth_token_provider: TokenProvider.new
1167
+ )
1168
+ ```
1169
+
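+ A token provider that also supplies extensions might look like the following sketch (the extension key and values are purely illustrative):
+
+ ```ruby
+ class TokenProviderWithExtensions
+   def token
+     "some_id_token"
+   end
+
+   # Only sent to brokers running Kafka >= 2.1.0.
+   def extensions
+     { "traceId" => "12345" }
+   end
+ end
+
+ client = Kafka.new(
+   ["kafka1:9092"],
+   sasl_oauth_token_provider: TokenProviderWithExtensions.new
+ )
+ ```
+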
1170
+ ### Topic management
1171
+
1172
+ In addition to producing and consuming messages, ruby-kafka supports managing Kafka topics and their configurations. See [the Kafka documentation](https://kafka.apache.org/documentation/#topicconfigs) for a full list of topic configuration keys.
1173
+
1174
+ #### List all topics
1175
+
1176
+ Returns an array of topic names.
1177
+
1178
+ ```ruby
1179
+ kafka = Kafka.new(["kafka:9092"])
1180
+ kafka.topics
1181
+ # => ["topic1", "topic2", "topic3"]
1182
+ ```
1183
+
1184
+ #### Create a topic
1185
+
1186
+ ```ruby
1187
+ kafka = Kafka.new(["kafka:9092"])
1188
+ kafka.create_topic("topic")
1189
+ ```
1190
+
1191
+ By default, the new topic has 1 partition and a replication factor of 1, and uses the default configs from the brokers. These settings are customizable:
1192
+
1193
+ ```ruby
1194
+ kafka = Kafka.new(["kafka:9092"])
1195
+ kafka.create_topic("topic",
1196
+ num_partitions: 3,
1197
+ replication_factor: 2,
1198
+ config: {
1199
+ "max.message.bytes" => 100000
1200
+ }
1201
+ )
1202
+ ```
1203
+
1204
+ #### Create more partitions for a topic
1205
+
1206
+ After a topic is created, you can increase the number of partitions for the topic. The new number of partitions must be greater than the current one.
1207
+
1208
+ ```ruby
1209
+ kafka = Kafka.new(["kafka:9092"])
1210
+ kafka.create_partitions_for("topic", num_partitions: 10)
1211
+ ```
1212
+
1213
+ #### Fetch configuration for a topic (alpha feature)
1214
+
1215
+ ```ruby
1216
+ kafka = Kafka.new(["kafka:9092"])
1217
+ kafka.describe_topic("topic", ["max.message.bytes", "retention.ms"])
1218
+ # => {"max.message.bytes"=>"100000", "retention.ms"=>"604800000"}
1219
+ ```
1220
+
1221
+ #### Alter a topic configuration (alpha feature)
1222
+
1223
+ Update the topic configurations.
1224
+
1225
+ **NOTE**: This feature is for advanced usage. Only use this if you know what you're doing.
1226
+
1227
+ ```ruby
1228
+ kafka = Kafka.new(["kafka:9092"])
1229
+ kafka.alter_topic("topic", "max.message.bytes" => 100000, "retention.ms" => 604800000)
1230
+ ```
1231
+
1232
+ #### Delete a topic
1233
+
1234
+ ```ruby
1235
+ kafka = Kafka.new(["kafka:9092"])
1236
+ kafka.delete_topic("topic")
1237
+ ```
1238
+
1239
+ After a topic is marked as deleted, Kafka initially only hides it from clients. It may take a while before the topic is completely deleted.
1240
+
1241
+ ## Design
1242
+
1243
+ The library has been designed as a layered system, with each layer having a clear responsibility:
1244
+
1245
+ * The **network layer** handles low-level connection tasks, such as keeping open connections to each Kafka broker, reconnecting when there's an error, etc. See [`Kafka::Connection`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Connection) for more details.
1246
+ * The **protocol layer** is responsible for encoding and decoding the Kafka protocol's various structures. See [`Kafka::Protocol`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Protocol) for more details.
1247
+ * The **operational layer** provides high-level operations, such as fetching messages from a topic, that may involve more than one API request to the Kafka cluster. Some complex operations are made available through [`Kafka::Cluster`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Cluster), which represents an entire cluster, while simpler ones are only available through [`Kafka::Broker`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Broker), which represents a single Kafka broker. In general, `Kafka::Cluster` is the high-level API, with more polish.
1248
+ * The **API layer** provides APIs to users of the library. The Consumer API is implemented in [`Kafka::Consumer`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Consumer) while the Producer API is implemented in [`Kafka::Producer`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Producer) and [`Kafka::AsyncProducer`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/AsyncProducer).
1249
+ * The **configuration layer** provides a way to set up and configure the client, as well as easy entrypoints to the various APIs. [`Kafka::Client`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Client) implements the public APIs. For convenience, the method [`Kafka.new`](http://www.rubydoc.info/gems/ruby-kafka/Kafka.new) can instantiate the class for you.
1250
+
1251
+ Note that only the API and configuration layers have any backwards compatibility guarantees – the other layers are considered internal and may change without warning. Don't use them directly.
1252
+
1253
+ ### Producer Design
1254
+
1255
+ The producer is designed with resilience and operational ease of use in mind, sometimes at the cost of raw performance. For instance, the operation is heavily instrumented, allowing operators to monitor the producer at a very granular level.
1256
+
1257
+ The producer has two main internal data structures: a list of _pending messages_ and a _message buffer_. When the user calls [`Kafka::Producer#produce`](http://www.rubydoc.info/gems/ruby-kafka/Kafka%2FProducer%3Aproduce), a message is appended to the pending message list, but no network communication takes place. This means that the call site does not have to handle the broad range of errors that can happen at the network or protocol level. Instead, those errors will only happen once [`Kafka::Producer#deliver_messages`](http://www.rubydoc.info/gems/ruby-kafka/Kafka%2FProducer%3Adeliver_messages) is called. This method will go through the pending messages one by one, making sure they're assigned a partition. This may fail for some messages, as it could require knowing the current configuration for the message's topic, necessitating API calls to Kafka. Messages that cannot be assigned a partition are kept in the list, while the others are written into the message buffer. The producer then figures out which topic partitions are led by which Kafka brokers so that messages can be sent to the right place – in Kafka, it is the responsibility of the client to do this routing. A separate _produce_ API request will be sent to each broker; the response will be inspected; and messages that were acknowledged by the broker will be removed from the message buffer. Any messages that were _not_ acknowledged will be kept in the buffer.
1258
+
1259
+ If there are any messages left in either the pending message list _or_ the message buffer after this operation, [`Kafka::DeliveryFailed`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/DeliveryFailed) will be raised. This exception must be rescued and handled by the user, possibly by calling `#deliver_messages` at a later time.
1260
+
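+ A minimal sketch of that error handling (the retry policy shown is illustrative, not a recommendation):
+
+ ```ruby
+ require "kafka"
+
+ kafka = Kafka.new(["kafka1:9092"])
+ producer = kafka.producer
+
+ producer.produce("hello", topic: "greetings")
+
+ begin
+   producer.deliver_messages
+ rescue Kafka::DeliveryFailed => e
+   # Messages that were not delivered remain buffered, so a later
+   # #deliver_messages call will try them again.
+   warn "Delivery failed: #{e.message}; will retry on the next delivery attempt"
+ end
+ ```
+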
1261
+ ### Asynchronous Producer Design
1262
+
1263
+ The synchronous producer gives the user fine-grained control over when network activity happens and when the resulting errors can occur, but it requires the user to handle those errors. The async producer provides a more hands-off approach that trades off control for ease of use and resilience.
1264
+
1265
+ Instead of writing directly into the pending message list, [`Kafka::AsyncProducer`](http://www.rubydoc.info/gems/ruby-kafka/Kafka/AsyncProducer) writes the message to an internal thread-safe queue, returning immediately. A background thread reads messages off the queue and passes them to a synchronous producer.
1266
+
1267
+ Rather than triggering message deliveries directly, users of the async producer will typically set up _automatic triggers_, such as a timer.
1268
+
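+ As a sketch, both a time-based and a size-based trigger can be configured when creating the async producer (the values are illustrative):
+
+ ```ruby
+ require "kafka"
+
+ kafka = Kafka.new(["kafka1:9092"])
+
+ producer = kafka.async_producer(
+   delivery_interval: 30,   # deliver buffered messages every 30 seconds...
+   delivery_threshold: 100, # ...or whenever 100 messages have been buffered.
+ )
+
+ producer.produce("hello", topic: "greetings")
+
+ # Deliver anything still buffered and stop the background thread when shutting down.
+ producer.shutdown
+ ```
+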
1269
+ ### Consumer Design
1270
+
1271
+ The Consumer API is designed for flexibility and stability. The first is accomplished by not dictating any high-level object model, instead opting for a simple loop-based approach. The second is accomplished by handling group membership, heartbeats, and checkpointing automatically. Messages are marked as processed as soon as they've been successfully yielded to the user-supplied processing block, minimizing the cost of processing errors.
1272
+
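+ A minimal sketch of that loop (the group and topic names are illustrative); each message is only checkpointed once the block has returned for it:
+
+ ```ruby
+ require "kafka"
+
+ kafka = Kafka.new(["kafka1:9092"])
+ consumer = kafka.consumer(group_id: "my-group")
+ consumer.subscribe("greetings")
+
+ consumer.each_message do |message|
+   # If this raises, the message is not marked as processed.
+   puts "#{message.topic}/#{message.partition}@#{message.offset}: #{message.value}"
+ end
+ ```
+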
1273
+ ## Development
1274
+
1275
+ After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
1276
+
1277
+ **Note:** the specs require a working [Docker](https://www.docker.com/) instance, but should work out of the box if you have Docker installed. Please create an issue if that's not the case.
1278
+
1279
+ If you would like to contribute to ruby-kafka, please [join our Slack team](https://ruby-kafka-slack.herokuapp.com/) and ask how best to do it.
1280
+
1281
+ [![Circle CI](https://circleci.com/gh/zendesk/ruby-kafka.svg?style=shield)](https://circleci.com/gh/zendesk/ruby-kafka/tree/master)
1282
+
1283
+ ## Support and Discussion
1284
+
1285
+ If you've discovered a bug, please file a [Github issue](https://github.com/zendesk/ruby-kafka/issues/new), and make sure to include all the relevant information, including the version of ruby-kafka and Kafka that you're using.
1286
+
1287
+ If you have other questions, or would like to discuss best practices, how to contribute to the project, or any other ruby-kafka related topic, [join our Slack team](https://ruby-kafka-slack.herokuapp.com/)!
1288
+
1289
+ ## Roadmap
1290
+
1291
+ Version 0.4 will be the last minor release with support for the Kafka 0.9 protocol. It is recommended that you pin your dependency on ruby-kafka to `~> 0.4.0` in order to receive bugfixes and security updates. New features will only target version 0.5 and up, which will be incompatible with the Kafka 0.9 protocol.
1292
+
1293
+ ### v0.4
1294
+
1295
+ Last stable release with support for the Kafka 0.9 protocol. Bug and security fixes will be released in patch updates.
1296
+
1297
+ ### v0.5
1298
+
1299
+ Latest stable release, with native support for the Kafka 0.10 protocol and eventually newer protocol versions. Kafka 0.9 is no longer supported by this release series.
1300
+
1301
+ ## Higher level libraries
1302
+
1303
+ Currently, there are three actively developed frameworks based on ruby-kafka that provide a higher-level API for working with Kafka messages, as well as two libraries for publishing messages.
1304
+
1305
+ ### Message processing frameworks
1306
+
1307
+ * [Racecar](https://github.com/zendesk/racecar) - A simple framework that integrates with Ruby on Rails to provide a seamless way to write, test, configure, and run Kafka consumers. It comes with sensible defaults and conventions.
1308
+
1309
+ * [Karafka](https://github.com/karafka/karafka) - A framework that simplifies the development of Apache Kafka-based Ruby and Rails applications. Karafka provides higher abstraction layers, including Capistrano, Docker and Heroku support.
1310
+
1311
+ * [Phobos](https://github.com/klarna/phobos) - A micro framework and library for applications dealing with Apache Kafka. It wraps common behaviors needed by consumers and producers in an easy and convenient API.
1312
+
1313
+ ### Message publishing libraries
1314
+
1315
+ * [DeliveryBoy](https://github.com/zendesk/delivery_boy) – A library that integrates with Ruby on Rails, making it easy to publish Kafka messages from any Rails application.
1316
+
1317
+ * [WaterDrop](https://github.com/karafka/waterdrop) – A library for Ruby and Ruby on Rails applications that makes it easy to publish Kafka messages both synchronously and asynchronously.
1318
+
1319
+ ## Why Create A New Library?
1320
+
1321
+ There are a few existing Kafka clients in Ruby:
1322
+
1323
+ * [Poseidon](https://github.com/bpot/poseidon) seems to work for Kafka 0.8, but the project is unmaintained and has known issues.
1324
+ * [Hermann](https://github.com/reiseburo/hermann) wraps the C library [librdkafka](https://github.com/edenhill/librdkafka) and seems to be very efficient, but its API and mode of operation are too intrusive for our needs.
1325
+ * [jruby-kafka](https://github.com/joekiller/jruby-kafka) is a great option if you're running on JRuby.
1326
+
1327
+ We needed a robust client that could be used from our existing Ruby apps, allowed our operations team to monitor it, and provided flexible error handling. No such client existed, hence this project.
1328
+
1329
+ ## Contributing
1330
+
1331
+ Bug reports and pull requests are welcome on GitHub at https://github.com/zendesk/ruby-kafka.
1332
+
1333
+
1334
+ ## Copyright and license
1335
+
1336
+ Copyright 2015 Zendesk
1337
+
1338
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
1339
+
1340
+ You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
1341
+
1342
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.