racecar 1.2.0 → 2.0.0.alpha1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 28a2c032c29aeba007b00c13eb0bfec94b2d32bf6a6076235b801c4eee28a6b1
- data.tar.gz: b2f63c4a91f2a9ecfa26d7f4ac1902dc865221a8d0ed203a13dfb59c04dc35f5
+ metadata.gz: 1004df9eda77cd501025c24124d3bd01702bcd272571cddf4db2dd4da2ae9140
+ data.tar.gz: 539fea6b9d8a836d85af104e3441d3481c58d9861843d18ec70be2e2dd842af4
  SHA512:
- metadata.gz: eabaf84b44a70f65ea8fda8ff9e9721f39d342b65c66995b580bfc99d5977d8d675100e01428c0c64b4342d3165e52951b4a08b11cdcd54c6cb93601fca9abf0
- data.tar.gz: 7155e003d823e1cfbdb80b374b8f6b14169c65c3dfaeb9eda734d72cca373a5bc9662616616cf4ee12f3002801c0cbfd0664758a8406fd410867e5aca5f45659
+ metadata.gz: f86885745b07c476fd5bc104ebe1293ad1fd7c25e28089f7c7b53badfb5ee51ace00e8259ea1d1c668e9ea27289d6d11cdeaf9b208fe64457a4bde039271ff60
+ data.tar.gz: 20cedc0b4278fdf0c391057660fa2835071f005a43c8e133e01818e2ad314ebad8b1e9c5246ddda7d4a405019d9223840ecb67f6635a30884cd5836111a30838
data/CHANGELOG.md CHANGED
@@ -1,20 +1,30 @@
  # Changelog
 
- ## Unreleased
-
- ## racecar v1.2.0
-
- * Support for `ssl_client_cert_key_password` (#173).
- * Support for `OAUTHBEARER` authentication (`sasl_oauth_token_provider`) (#178).
-
- ## racecar v1.1.0
-
- * Require ruby-kafka v1.0 or higher.
- * Add error handling for required libraries (#149).
-
- ## racecar v1.0.1
-
- * Add `--without-rails` option to boot consumer without Rails (#139).
+ ## racecar v2.0.0
+
+ * Replace `ruby-kafka` with `rdkafka-ruby`
+ * Removed config option `sasl_over_ssl`
+ * [Racecar::Consumer] Do not pause consuming partitions on exception
+ * [Racecar::Consumer] `topic`, `payload` and `key` are mandatory arguments to the `produce` method
+ * [Racecar::Consumer] `process_batch` retrieves an array of messages instead of a batch object
+ * [Racecar::Consumer] Remove `offset_retention_time`
+ * [Racecar::Consumer] Allow providing `additional_config` for subscriptions
+ * [Racecar::Consumer] Provide access to `producer` and `consumer`
+ * [Racecar::Consumer] Enforce delivering messages with method `deliver!`
+ * [Racecar::Consumer] Instead of raising when a partition EOF is reached, the result can be queried through `consumer.last_poll_read_partition_eof?`
+ * [Racecar::Config] Remove `offset_retention_time`, `connect_timeout` and `offset_commit_threshold`
+ * [Racecar::Config] Pass config to `rdkafka-ruby` via `producer` and `consumer`
+ * [Racecar::Config] Replace `max_fetch_queue_size` with `min_message_queue_size`
+ * [Racecar::Config] Add `synchronous_commits` to control blocking of `consumer.commit` (default `false`)
+ * [Racecar::Config] Add `security_protocol` to control protocol between client and broker
+ * [Racecar::Config] SSL configuration via `ssl_ca_location`, `ssl_crl_location`, `ssl_keystore_location` and `ssl_keystore_password`
+ * [Racecar::Config] SASL configuration via `sasl_mechanism`, `sasl_kerberos_service_name`, `sasl_kerberos_principal`, `sasl_kerberos_kinit_cmd`, `sasl_kerberos_keytab`, `sasl_kerberos_min_time_before_relogin`, `sasl_username` and `sasl_password`
+ * [Instrumentation] `produce_message.racecar` is sent whenever a produced message is queued. Payload includes `topic`, `key`, `value` and `create_time`.
+ * [Instrumentation] `acknowledged_message.racecar` is sent whenever a produced message was successfully received by Kafka. Payload includes `offset` and `partition`, but no message details.
+ * [Instrumentation] `rdkafka-ruby` does not yet provide instrumentation ([rdkafka-ruby#54](https://github.com/appsignal/rdkafka-ruby/issues/54))
+ * [Instrumentation] If processors define a `statistics_callback`, it will be called once every second for every subscription or producer connection. The first argument will be a Hash; for its contents, see [librdkafka STATISTICS.md](https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md)
+ * Add current directory to `$LOAD_PATH` only when `--require` option is used (#117).
+ * Remove manual heartbeat support, see the [Long-running message processing section in the README](README.md#long-running-message-processing)
 
  ## racecar v1.0.0
 
data/README.md CHANGED
@@ -1,8 +1,10 @@
+ **IMPORTANT:** The `master` branch is unstable, working towards a v2 release that breaks a lot of stuff. Use the `v1-stable` branch if you want to suggest changes.
+
  # Racecar
 
  Racecar is a friendly and easy-to-approach Kafka consumer framework. It allows you to write small applications that process messages stored in Kafka topics while optionally integrating with your Rails models.
 
- The framework is based on [ruby-kafka](https://github.com/zendesk/ruby-kafka), which, when used directly, can be a challenge: it's a flexible library with lots of knobs and options. Most users don't need that level of flexibility, though. Racecar provides a simple and intuitive way to build and configure Kafka consumers.
+ The framework is based on [rdkafka-ruby](https://github.com/appsignal/rdkafka-ruby), which, when used directly, can be a challenge: it's a flexible library with lots of knobs and options. Most users don't need that level of flexibility, though. Racecar provides a simple and intuitive way to build and configure Kafka consumers.
 
  **NOTE:** Racecar requires Kafka 0.10 or higher.
 
@@ -121,23 +123,23 @@ Note that once the consumer has started, it will commit the offsets it has proce
 
  #### Processing messages in batches
 
- If you want to process whole _batches_ of messages at a time, simply rename your `#process` method to `#process_batch`. The method will now be called with a "batch" object rather than a message:
+ If you want to process whole _batches_ of messages at a time, simply rename your `#process` method to `#process_batch`. The method will now be called with an array of message objects:
 
  ```ruby
  class ArchiveEventsConsumer < Racecar::Consumer
    subscribes_to "events"
 
-   def process_batch(batch)
+   def process_batch(messages)
      file_name = [
-       batch.topic, # the topic this batch of messages came from.
-       batch.partition, # the partition this batch of messages came from.
-       batch.first_offset, # offset of the first message in the batch.
-       batch.last_offset, # offset of the last message in the batch.
+       messages.first.topic, # the topic this batch of messages came from.
+       messages.first.partition, # the partition this batch of messages came from.
+       messages.first.offset, # offset of the first message in the batch.
+       messages.last.offset, # offset of the last message in the batch.
      ].join("-")
 
      File.open(file_name, "w") do |file|
        # the messages in the batch.
-       batch.messages.each do |message|
+       messages.each do |message|
          file << message.value
        end
      end
@@ -155,23 +157,13 @@ Any headers set on the message will be available when consuming the message:
  message.headers #=> { "Header-A" => 42, ... }
  ```
 
- #### Heartbeats
+ #### Long-running message processing
 
- In order to avoid your consumer being kicked out of its group during long-running message processing operations, it may be a good idea to periodically send so-called _heartbeats_ back to Kafka. This is done automatically for you after each message has been processed, but if the processing of a _single_ message takes a long time you may run into group stability issues.
+ In order to avoid your consumer being kicked out of its group during long-running message processing operations, you'll need to let Kafka regularly know that the consumer is still healthy. There are two mechanisms in place to ensure that:
 
- If possible, intersperse `heartbeat` calls in between long-running operations in your consumer, e.g.
-
- ```ruby
- def process(message)
-   long_running_op_one(message)
-
-   # Signals back to Kafka that we're still alive!
-   heartbeat
-
-   long_running_op_two(message)
- end
- ```
+ *Heartbeats:* They are automatically sent in the background and ensure the broker can still talk to the consumer. This will detect network splits, ungraceful shutdowns, etc.
 
+ *Message Fetch Interval:* Kafka expects the consumer to query for new messages within this time limit. This will detect situations with slow IO or the consumer being stuck in an infinite loop without making actual progress. This limit applies to a whole batch if you do batch processing. Use `max_poll_interval` to increase the default 5-minute timeout, or reduce batching with `fetch_messages`; see the sketch below.
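 
A minimal `config/racecar.rb` sketch using the two options named above (the values are illustrative only, not recommendations):

```ruby
# config/racecar.rb
Racecar.configure do |config|
  # Allow up to 10 minutes of processing per poll before the consumer
  # is kicked out of the group (the default is 5 minutes).
  config.max_poll_interval = 10 * 60

  # Consume smaller batches so each batch finishes well within the limit.
  config.fetch_messages = 100
end
```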
 
  #### Tearing down resources when stopping
 
@@ -222,11 +214,13 @@ class GeoCodingConsumer < Racecar::Consumer
 
      # The `produce` method enqueues a message to be delivered after #process
      # returns. It won't actually deliver the message.
-     produce(JSON.dump(pageview), topic: "pageviews-with-country")
+     produce(payload: JSON.dump(pageview), topic: "pageviews-with-country", key: pageview["id"])
    end
  end
  ```
 
+ The `deliver!` method can be used to block until the broker has received all queued published messages (according to the publisher ack settings). This is called automatically in the shutdown procedure of a consumer; a short sketch follows.
+
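As a hedged sketch of this flow (the consumer class and topic names are hypothetical):

```ruby
class PageviewForwarder < Racecar::Consumer
  subscribes_to "pageviews"

  def process(message)
    # Queue a copy of the message for another topic.
    produce(payload: message.value, topic: "pageviews-copy", key: message.key)

    # Block until the broker has acknowledged the queued messages;
    # as noted above, this also happens automatically at shutdown.
    deliver!
  end
end
```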
  You can set message headers by passing a `headers:` option with a Hash of headers.
 
  ### Configuration
@@ -263,8 +257,6 @@ end
  The consumers will checkpoint their positions from time to time in order to be able to recover from failures. This is called _committing offsets_, since it's done by tracking the offset reached in each partition being processed, and committing those offset numbers to the Kafka offset storage API. If you can tolerate more double-processing after a failure, you can increase the interval between commits in order to improve performance. You can also do the opposite if you prefer less chance of double-processing (see the sketch after this list).
 
  * `offset_commit_interval` – How often to save the consumer's position in Kafka. Default is every 10 seconds.
- * `offset_commit_threshold` – How many messages to process before forcing a checkpoint. Default is 0, which means there's no limit. Setting this to e.g. 100 makes the consumer stop every 100 messages to checkpoint its position.
- * `offset_retention_time` – How long committed offsets will be retained. Defaults to the broker setting.
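 
For example, a sketch that favours throughput over recovery granularity (the value is illustrative):

```ruby
Racecar.configure do |config|
  # Commit every 30 seconds instead of the default 10, accepting more
  # potential double-processing if the consumer crashes between commits.
  config.offset_commit_interval = 30
end
```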
 
  #### Timeouts & intervals
 
@@ -272,8 +264,9 @@ All timeouts are defined in number of seconds.
 
  * `session_timeout` – The idle timeout after which a consumer is kicked out of the group. Consumers must send heartbeats with at least this frequency.
  * `heartbeat_interval` – How often to send a heartbeat message to Kafka.
- * `pause_timeout` – How long to pause a partition for if the consumer raises an exception while processing a message. Default is to pause for 10 seconds. Set this to zero in order to disable automatic pausing of partitions.
- * `connect_timeout` – How long to wait when trying to connect to a Kafka broker. Default is 10 seconds.
+ * `max_poll_interval` – The maximum time between two message fetches before the consumer is kicked out of the group. Put differently, your (batch) processing must finish earlier than this.
+ * `pause_timeout` – How long to pause a partition for if the consumer raises an exception while processing a message. Default is to pause for 10 seconds. Set this to `0` in order to disable automatic pausing of partitions or to `-1` to pause indefinitely.
+ * `pause_with_exponential_backoff` – Set to `true` if you want to double the `pause_timeout` on each consecutive failure of a particular partition.
  * `socket_timeout` – How long to wait when trying to communicate with a Kafka broker. Default is 30 seconds.
  * `max_wait_time` – How long to allow the Kafka brokers to wait before returning messages. A higher number means larger batches, at the cost of higher latency. Default is 1 second.
 
@@ -283,48 +276,32 @@ Kafka is _really_ good at throwing data at consumers, so you may want to tune th
 
  Racecar uses ruby-kafka under the hood, which fetches messages from the Kafka brokers in a background thread. This thread pushes fetch responses, possibly containing messages from many partitions, into a queue that is read by the processing thread (AKA your code). The main way to control the fetcher thread is to control the size of those responses and the size of the queue.
 
- * `max_bytes` — The maximum size of message sets returned from a single fetch request.
- * `max_fetch_queue_size` — The maximum number of fetch responses to keep in the queue. Once reached, the fetcher will back off until the queue gets back down under the limit.
+ * `max_bytes` — Maximum amount of data the broker shall return for a Fetch request.
+ * `min_message_queue_size` — The minimum number of messages in the local consumer queue.
 
- The memory usage limit is roughly estimated as `max_bytes * max_fetch_queue_size`, plus whatever your application uses.
+ The memory usage limit is roughly estimated as `max_bytes * min_message_queue_size`, plus whatever your application uses.
 
  #### SSL encryption, authentication & authorization
 
- * `ssl_ca_cert` – A valid SSL certificate authority, as a string.
- * `ssl_ca_cert_file_path` – The path to a valid SSL certificate authority file.
- * `ssl_client_cert` – A valid SSL client certificate, as a string.
- * `ssl_client_cert_key` – A valid SSL client certificate key, as a string.
- * `ssl_client_cert_key_password` – The password for the client cert key, as a string (optional).
+ * `security_protocol` – Protocol used to communicate with brokers (`:ssl`)
+ * `ssl_ca_location` – File or directory path to CA certificate(s) for verifying the broker's key
+ * `ssl_crl_location` – Path to CRL for verifying broker's certificate validity
+ * `ssl_keystore_location` – Path to client's keystore (PKCS#12) used for authentication
+ * `ssl_keystore_password` – Client's keystore (PKCS#12) password
 
  #### SASL encryption, authentication & authorization
 
- Racecar has support for using SASL to authenticate clients using either the GSSAPI or PLAIN mechanism.
+ Racecar has support for using SASL to authenticate clients using either the GSSAPI or PLAIN mechanism, over either a plaintext or an SSL connection. A combined configuration sketch follows the list below.
 
- If using GSSAPI:
+ * `security_protocol` – Protocol used to communicate with brokers (`:sasl_plaintext`, `:sasl_ssl`)
+ * `sasl_mechanism` – SASL mechanism to use for authentication (`GSSAPI`, `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`)
 
- * `sasl_gssapi_principal` – The GSSAPI principal.
- * `sasl_gssapi_keytab` – Optional GSSAPI keytab.
-
- If using PLAIN:
-
- * `sasl_plain_authzid` – The authorization identity to use.
- * `sasl_plain_username` – The username used to authenticate.
- * `sasl_plain_password` – The password used to authenticate.
-
- If using SCRAM:
-
- * `sasl_scram_username` – The username used to authenticate.
- * `sasl_scram_password` – The password used to authenticate.
- * `sasl_scram_mechanism` – The SCRAM mechanism to use, either `sha256` or `sha512`.
-
- If using OAUTHBEARER:
-
- * `sasl_oauth_token_provider` – In order to authenticate using OAUTHBEARER, you must set the client with an instance of a class that implements a token method (the interface is described in Kafka::Sasl::OAuth) which returns an ID/Access token. This mechanism is supported in Kafka >= 2.0.0 as of KIP-255.
-
- See more [here](https://github.com/zendesk/ruby-kafka/tree/master#oauthbearer).
-
- NOTE: `sasl_oauth_token_provider` only works using the `config/racecar.rb` configuration file.
+ * `sasl_kerberos_principal` – This client's Kerberos principal name
+ * `sasl_kerberos_kinit_cmd` – Full kerberos kinit command string, `%{config.prop.name}` is replaced by corresponding config object value, `%{broker.name}` returns the broker's hostname
+ * `sasl_kerberos_keytab` – Path to Kerberos keytab file. Uses system default if not set
+ * `sasl_kerberos_min_time_before_relogin` – Minimum time in milliseconds between key refresh attempts
+ * `sasl_username` – SASL username for use with the PLAIN and SASL-SCRAM-.. mechanism
+ * `sasl_password` – SASL password for use with the PLAIN and SASL-SCRAM-.. mechanism
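 
A minimal sketch of a SASL/PLAIN connection over SSL, combining the SSL and SASL options above (paths and credentials are placeholders):

```ruby
Racecar.configure do |config|
  config.security_protocol = :sasl_ssl
  config.ssl_ca_location   = "/etc/ssl/certs/ca-bundle.crt"
  config.sasl_mechanism    = "PLAIN"
  config.sasl_username     = "my-service"
  config.sasl_password     = ENV.fetch("KAFKA_SASL_PASSWORD")
end
```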
 
  #### Producing messages
 
@@ -342,9 +319,6 @@ Racecar supports configuring ruby-kafka's [Datadog](https://www.datadoghq.com/)
  * `datadog_namespace` – The namespace to use for Datadog metrics.
  * `datadog_tags` – Tags that should always be set on Datadog metrics.
 
- #### Consumers Without Rails ####
-
- By default, if Rails is detected, it will be automatically started when the consumer is started. There are cases where you might not want or need Rails. You can pass the `--without-rails` option when starting the consumer and Rails won't be started.
 
  ### Testing consumers
 
data/lib/ensure_hash_compact.rb ADDED
@@ -0,0 +1,10 @@
+ # only needed when ruby < 2.4 and not using active support
+
+ unless {}.respond_to? :compact
+   # https://github.com/rails/rails/blob/fc5dd0b85189811062c85520fd70de8389b55aeb/activesupport/lib/active_support/core_ext/hash/compact.rb
+   class Hash
+     def compact
+       select { |_, value| !value.nil? }
+     end
+   end
+ end
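 
The polyfill mirrors ActiveSupport's `Hash#compact` for top-level `nil` values, e.g.:

```ruby
{ "a" => 1, "b" => nil }.compact #=> { "a" => 1 }
```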
data/lib/racecar.rb CHANGED
@@ -1,14 +1,16 @@
  require "logger"
 
  require "racecar/consumer"
+ require "racecar/consumer_set"
  require "racecar/runner"
  require "racecar/config"
+ require "ensure_hash_compact"
 
  module Racecar
    # Ignores all instrumentation events.
    class NullInstrumenter
      def self.instrument(*)
-       yield if block_given?
+       yield({}) if block_given?
      end
    end
 
@@ -22,10 +24,6 @@ module Racecar
      @config ||= Config.new
    end
 
-   def self.config=(config)
-     @config = config
-   end
-
    def self.configure
      yield config
    end
data/lib/racecar/cli.rb CHANGED
@@ -23,7 +23,7 @@ module Racecar
    def run
      $stderr.puts "=> Starting Racecar consumer #{consumer_name}..."
 
-     RailsConfigFileLoader.load! unless config.without_rails?
+     RailsConfigFileLoader.load!
 
      if File.exist?("config/racecar.rb")
        require "./config/racecar"
@@ -102,12 +102,7 @@ module Racecar
      opts.on("-r", "--require STRING", "Require a library before starting the consumer") do |lib|
        $LOAD_PATH.unshift(Dir.pwd) unless load_path_modified
        load_path_modified = true
-       begin
-         require lib
-       rescue => e
-         $stderr.puts "=> #{lib} failed to load: #{e.message}"
-         exit
-       end
+       require lib
      end
 
      opts.on("-l", "--log STRING", "Log to the specified file") do |logfile|
data/lib/racecar/config.rb CHANGED
@@ -13,17 +13,23 @@ module Racecar
    desc "How frequently to commit offset positions"
    float :offset_commit_interval, default: 10
 
-   desc "How many messages to process before forcing a checkpoint"
-   integer :offset_commit_threshold, default: 0
-
    desc "How often to send a heartbeat message to Kafka"
    float :heartbeat_interval, default: 10
 
-   desc "How long committed offsets will be retained."
-   integer :offset_retention_time
+   desc "The minimum number of messages in the local consumer queue"
+   integer :min_message_queue_size, default: 2000
+
+   desc "Kafka consumer configuration options, separated with '=' -- https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md"
+   list :consumer, default: []
+
+   desc "Kafka producer configuration options, separated with '=' -- https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md"
+   list :producer, default: []
 
-   desc "The maximum number of fetch responses to keep queued before processing"
-   integer :max_fetch_queue_size, default: 10
+   desc "The maximum number of messages that get consumed within one batch"
+   integer :fetch_messages, default: 1000
+
+   desc "Whether consumer.commit blocks until the commit has been acknowledged"
+   boolean :synchronous_commits, default: false
 
    desc "How long to pause a partition for if the consumer raises an exception while processing a message -- set to -1 to pause indefinitely"
    float :pause_timeout, default: 10
@@ -37,8 +43,8 @@ module Racecar
    desc "The idle timeout after which a consumer is kicked out of the group"
    float :session_timeout, default: 30
 
-   desc "How long to wait when trying to connect to a Kafka broker"
-   float :connect_timeout, default: 10
+   desc "The maximum time between two message fetches before the consumer is kicked out of the group (in seconds)"
+   integer :max_poll_interval, default: 5*60
 
    desc "How long to wait when trying to communicate with a Kafka broker"
    float :socket_timeout, default: 30
@@ -46,7 +52,7 @@ module Racecar
    desc "How long to allow the Kafka brokers to wait before returning messages"
    float :max_wait_time, default: 1
 
-   desc "The maximum size of message sets returned from a single fetch"
+   desc "Maximum amount of data the broker shall return for a Fetch request"
    integer :max_bytes, default: 10485760
 
    desc "A prefix used when generating consumer group names"
@@ -61,47 +67,44 @@ module Racecar
    desc "The log level for the Racecar logs"
    string :log_level, default: "info"
 
-   desc "A valid SSL certificate authority"
-   string :ssl_ca_cert
-
-   desc "The path to a valid SSL certificate authority file"
-   string :ssl_ca_cert_file_path
+   desc "Protocol used to communicate with brokers"
+   symbol :security_protocol, allowed_values: %i{plaintext ssl sasl_plaintext sasl_ssl}
 
-   desc "A valid SSL client certificate"
-   string :ssl_client_cert
+   desc "File or directory path to CA certificate(s) for verifying the broker's key"
+   string :ssl_ca_location
 
-   desc "A valid SSL client certificate key"
-   string :ssl_client_cert_key
+   desc "Path to CRL for verifying broker's certificate validity"
+   string :ssl_crl_location
 
-   desc "The password for the SSL client certificate key"
-   string :ssl_client_cert_key_password
+   desc "Path to client's keystore (PKCS#12) used for authentication"
+   string :ssl_keystore_location
 
-   desc "Support for using the CA certs installed on your system by default for SSL. More info, see: https://github.com/zendesk/ruby-kafka/pull/521"
-   boolean :ssl_ca_certs_from_system, default: false
+   desc "Client's keystore (PKCS#12) password"
+   string :ssl_keystore_password
 
-   desc "The GSSAPI principal"
-   string :sasl_gssapi_principal
+   desc "SASL mechanism to use for authentication"
+   string :sasl_mechanism, allowed_values: %w{GSSAPI PLAIN SCRAM-SHA-256 SCRAM-SHA-512}
 
-   desc "Optional GSSAPI keytab"
-   string :sasl_gssapi_keytab
+   desc "Kerberos principal name that Kafka runs as, not including /hostname@REALM"
+   string :sasl_kerberos_service_name
 
-   desc "The authorization identity to use"
-   string :sasl_plain_authzid
+   desc "This client's Kerberos principal name"
+   string :sasl_kerberos_principal
 
-   desc "The username used to authenticate"
-   string :sasl_plain_username
+   desc "Full kerberos kinit command string, %{config.prop.name} is replaced by corresponding config object value, %{broker.name} returns the broker's hostname"
+   string :sasl_kerberos_kinit_cmd
 
-   desc "The password used to authenticate"
-   string :sasl_plain_password
+   desc "Path to Kerberos keytab file. Uses system default if not set"
+   string :sasl_kerberos_keytab
 
-   desc "The username used to authenticate"
-   string :sasl_scram_username
+   desc "Minimum time in milliseconds between key refresh attempts"
+   integer :sasl_kerberos_min_time_before_relogin
 
-   desc "The password used to authenticate"
-   string :sasl_scram_password
+   desc "SASL username for use with the PLAIN and SASL-SCRAM-.. mechanism"
+   string :sasl_username
 
-   desc "The SCRAM mechanism to use, either `sha256` or `sha512`"
-   string :sasl_scram_mechanism, allowed_values: ["sha256", "sha512"]
+   desc "SASL password for use with the PLAIN and SASL-SCRAM-.. mechanism"
+   string :sasl_password
 
    desc "Whether to use SASL over SSL."
    boolean :sasl_over_ssl, default: true
@@ -113,7 +116,7 @@ module Racecar
    boolean :daemonize, default: false
 
    desc "The codec used to compress messages with"
-   symbol :producer_compression_codec
+   symbol :producer_compression_codec, allowed_values: %i{none lz4 snappy gzip}
 
    desc "Enable Datadog metrics"
    boolean :datadog_enabled, default: false
@@ -133,17 +136,11 @@ module Racecar
    desc "Whether to check the server certificate is valid for the hostname"
    boolean :ssl_verify_hostname, default: true
 
-   desc "Whether to boot Rails when starting the consumer"
-   boolean :without_rails, default: false
-
    # The error handler must be set directly on the object.
    attr_reader :error_handler
 
    attr_accessor :subscriptions, :logger
 
-   # The OAUTHBEARER token provider class.
-   attr_accessor :sasl_oauth_token_provider
-
    def initialize(env: ENV)
      super(env: env)
      @error_handler = proc {}
@@ -167,17 +164,9 @@ module Racecar
      raise ConfigError, "`socket_timeout` must be longer than `max_wait_time`"
    end
 
-   if connect_timeout <= max_wait_time
-     raise ConfigError, "`connect_timeout` must be longer than `max_wait_time`"
-   end
-
    if max_pause_timeout && !pause_with_exponential_backoff?
      raise ConfigError, "`max_pause_timeout` only makes sense when `pause_with_exponential_backoff` is enabled"
    end
-
-   if ssl_client_cert_key_password && !ssl_client_cert_key
-     raise ConfigError, "`ssl_client_cert_key_password` must be used in conjunction with `ssl_client_cert_key`"
-   end
  end
 
  def load_consumer_class(consumer_class)
@@ -193,12 +182,47 @@ module Racecar
 
    self.subscriptions = consumer_class.subscriptions
    self.max_wait_time = consumer_class.max_wait_time || self.max_wait_time
-   self.offset_retention_time = consumer_class.offset_retention_time || self.offset_retention_time
    self.pidfile ||= "#{group_id}.pid"
  end
 
  def on_error(&handler)
    @error_handler = handler
  end
+
+ def rdkafka_consumer
+   consumer_config = consumer.map do |param|
+     param.split("=", 2).map(&:strip)
+   end.to_h
+   consumer_config.merge!(rdkafka_security_config)
+   consumer_config
+ end
+
+ def rdkafka_producer
+   producer_config = producer.map do |param|
+     param.split("=", 2).map(&:strip)
+   end.to_h
+   producer_config.merge!(rdkafka_security_config)
+   producer_config
+ end
+
+ private
+
+ def rdkafka_security_config
+   {
+     "security.protocol" => security_protocol,
+     "ssl.ca.location" => ssl_ca_location,
+     "ssl.crl.location" => ssl_crl_location,
+     "ssl.keystore.location" => ssl_keystore_location,
+     "ssl.keystore.password" => ssl_keystore_password,
+     "sasl.mechanism" => sasl_mechanism,
+     "sasl.kerberos.service.name" => sasl_kerberos_service_name,
+     "sasl.kerberos.principal" => sasl_kerberos_principal,
+     "sasl.kerberos.kinit.cmd" => sasl_kerberos_kinit_cmd,
+     "sasl.kerberos.keytab" => sasl_kerberos_keytab,
+     "sasl.kerberos.min.time.before.relogin" => sasl_kerberos_min_time_before_relogin,
+     "sasl.username" => sasl_username,
+     "sasl.password" => sasl_password,
+   }.compact
+ end
  end
  end
  end