racecar 2.0.0 → 2.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: dbf41d42dfa928893b72e569837c86cdab99e180a4c182c6b7ef7194553f57f5
- data.tar.gz: 4e67b732e59c59431c34a19d32b0a5d0ffae64a31fd858e35d2673b26c3ac192
+ metadata.gz: 6ace90fc8d6ce4eca70ff85fcfbc414f97c9d00d10a9b9a6f1bac49e3741ec61
+ data.tar.gz: a92ae22281bf3a9ac797413cef79f30e4a0ea8e316afb6444abe4ab95bf33372
  SHA512:
- metadata.gz: a501912a2c4f9f8d32c943b1c9516042998b241ce743449c3db48623507e360bc14af96c70a2f3b1448550de9e5e6a45d917bf31b3a2f6270f4401eed9c1d70a
- data.tar.gz: 8e13043d7f41ff8fcaa47d7e5b1cdd4f5a2fd75eba04579172e31dbe76c7831a0d8fc9ed1dd976baec2a8d782c6ad4656f96709d7b3161f7186004aa633a5324
+ metadata.gz: fefe0c546f36549a4fe0e4f4293534389b2136c68ea4e90c19c769c0cba713abde96ae5acab9a471d0385948a6056349519b2a38547f9f56a5eae1dfe4f79f13
+ data.tar.gz: ca062ec3985f6d5c8099f32c1f70eb5d7a86edfbb12b176f5485509aa5b61468db074e34c1b77d7b327d63cab8b32be66310fcd330c14b1c6fc3474f10a4518d
data/.circleci/config.yml ADDED
@@ -0,0 +1,56 @@
+ version: 2.1
+ orbs:
+   ruby: circleci/ruby@0.1.2
+
+ jobs:
+   build:
+     docker:
+       - image: circleci/ruby:2.6.3-stretch-node
+     executor: ruby/default
+     steps:
+       - checkout
+       - run:
+           name: Which bundler?
+           command: bundle -v
+       - ruby/bundle-install
+       - run: bundle exec rspec --exclude-pattern='spec/integration/*_spec.rb'
+   integration-tests:
+     docker:
+       - image: circleci/ruby:2.6.3-stretch-node
+       - image: wurstmeister/zookeeper
+       - image: wurstmeister/kafka:2.11-2.0.0
+         environment:
+           KAFKA_ADVERTISED_HOST_NAME: localhost
+           KAFKA_ADVERTISED_PORT: 9092
+           KAFKA_PORT: 9092
+           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
+           KAFKA_DELETE_TOPIC_ENABLE: true
+       - image: wurstmeister/kafka:2.11-2.0.0
+         environment:
+           KAFKA_ADVERTISED_HOST_NAME: localhost
+           KAFKA_ADVERTISED_PORT: 9093
+           KAFKA_PORT: 9093
+           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
+           KAFKA_DELETE_TOPIC_ENABLE: true
+       - image: wurstmeister/kafka:2.11-2.0.0
+         environment:
+           KAFKA_ADVERTISED_HOST_NAME: localhost
+           KAFKA_ADVERTISED_PORT: 9094
+           KAFKA_PORT: 9094
+           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
+           KAFKA_DELETE_TOPIC_ENABLE: true
+     executor: ruby/default
+     steps:
+       - checkout
+       - run:
+           name: Which bundler?
+           command: bundle -v
+       - ruby/bundle-install
+       - run: bundle exec rspec --pattern='spec/integration/*_spec.rb'
+
+ workflows:
+   version: 2
+   test:
+     jobs:
+       - build
+       - integration-tests
data/.github/workflows/ci.yml ADDED
@@ -0,0 +1,41 @@
+ name: CI
+
+ on:
+   pull_request:
+     branches: ["master"]
+   push:
+     branches: ["master"]
+
+ jobs:
+   unit-specs:
+     runs-on: ubuntu-latest
+
+     strategy:
+       matrix:
+         ruby-version: ["2.5", "2.6", "3.0"]
+
+     steps:
+       - uses: zendesk/checkout@v2
+       - name: Set up Ruby
+         uses: zendesk/setup-ruby@v1.64.1
+         with:
+           ruby-version: ${{ matrix.ruby-version }}
+           bundler-cache: true
+       - name: Build and test with RSpec
+         run: bundle exec rspec --format documentation --require spec_helper --color --exclude-pattern='spec/integration/*_spec.rb'
+
+   integration-specs:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: zendesk/checkout@v2
+       - name: Set up Ruby
+         uses: zendesk/setup-ruby@v1.64.1
+         with:
+           ruby-version: 2.7
+           bundler-cache: true
+       - name: Bring up docker-compose stack
+         run: docker-compose up -d
+       - name: Build and test with RSpec
+         env:
+           RACECAR_BROKERS: localhost:9092
+         run: timeout --kill-after 180 150 bundle exec rspec --format documentation --require spec_helper --color spec/integration/*_spec.rb
data/.gitignore CHANGED
@@ -1,10 +1,9 @@
  /.bundle/
  /.yardoc
- /Gemfile.lock
  /_yardoc/
  /coverage/
  /doc/
  /pkg/
  /spec/reports/
  /tmp/
- /vendor/bundle/
+ /vendor/bundle/
data/CHANGELOG.md CHANGED
@@ -1,8 +1,34 @@
  # Changelog
 
+ ## Unreleased
+
+ ## racecar v2.3.0
+
+ * Add native support for Heroku (#248)
+ * [Racecar::Consumer] When messages fail to deliver, an extended error with hints is now raised: instead of `Rdkafka::RdkafkaError`, you'll get a `Racecar::MessageDeliveryError`. ([#219](https://github.com/zendesk/racecar/pull/219)) If you have set a `Racecar.config.error_handler`, it might need to be updated.
+ * [Racecar::Consumer] When message delivery times out, Racecar will reset the producer in an attempt to fix some of the potential causes for this error. ([#219](https://github.com/zendesk/racecar/pull/219))
+ * Validate the `process` and `process_batch` method signatures on consumer classes when initializing (#236); see the sketch after this list.
+ * Add Ruby 3.0 compatibility (#237)
+ * Introduce a parallel runner, which forks a number of independent consumers, allowing partitions to be processed in parallel. ([#222](https://github.com/zendesk/racecar/pull/222))
+ * [Racecar::Runner] Ensure the producer is closed, whether the runner stops cleanly or errors. ([#222](https://github.com/zendesk/racecar/pull/222))
+ * Configure `statistics_interval` directly in the config. Disable statistics when no callback is defined. ([#232](https://github.com/zendesk/racecar/pull/232))
+
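To illustrate the signature validation entry above, a minimal sketch (the exact arity rules Racecar checks are an assumption):

```ruby
class ValidConsumer < Racecar::Consumer
  subscribes_to "some-topic"

  # Single-message processing: exactly one required argument.
  def process(message)
  end

  # Alternatively, batch processing: one required argument
  # receiving an array of messages.
  # def process_batch(messages)
  # end
end
```
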
+ ## racecar v2.2.0
+
+ * [Racecar::ConsumerSet] **breaking change** `Racecar::ConsumerSet`'s functions `poll` and `batch_poll` expect the max wait values to be given in milliseconds. The defaults were using `config.max_wait_time`, which is in seconds. If you do not directly use `Racecar::ConsumerSet`, or always call its `poll` and `batch_poll` functions by specifying the max wait time (the first argument), then this breaking change does not affect you. ([#214](https://github.com/zendesk/racecar/pull/214))
+
+ ## racecar v2.1.1
+
+ * [Bugfix] Close the Rdkafka consumer in `ConsumerSet#reset_current_consumer` to prevent a memory leak (#196)
+ * [Bugfix] `poll`/`batch_poll` would not retry in edge cases and would raise immediately. They still honor the `max_wait_time` setting, but might return no messages instead and only retry on their next call. ([#177](https://github.com/zendesk/racecar/pull/177))
+
+ ## racecar v2.1.0
+
+ * Bump rdkafka to 0.8.0 (#191)
+
  ## racecar v2.0.0
 
- * Replace `ruby-kafka` with `rdkafka-ruby` as the low-level library underneath Racecar.
+ * Replace `ruby-kafka` with `rdkafka-ruby` as the low-level library underneath Racecar (#91).
  * Fix `max_wait_time` usage (#179).
  * Removed config option `sasl_over_ssl`.
  * [Racecar::Consumer] Do not pause consuming partitions on exception.
@@ -26,6 +52,8 @@
  * [Instrumentation] if processors define a `statistics_callback`, it will be called once every second for every subscription or producer connection. The first argument will be a Hash; for contents, see [librdkafka STATISTICS.md](https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md).
  * Add current directory to `$LOAD_PATH` only when `--require` option is used (#117).
  * Remove manual heartbeat support, see [Long-running message processing section in README](README.md#long-running-message-processing).
+ * Rescue exceptions at the outermost level of `exe/racecar`, logging them and passing them to `on_error`, so that exceptions raised outside `Cli.run` are not silently discarded (#186).
+ * When exceptions with a `cause` are logged, recursively log the `cause` detail, separated by `--- Caused by: ---\n`.
 
  ## racecar v1.0.0
 
data/Dockerfile ADDED
@@ -0,0 +1,9 @@
+ FROM circleci/ruby:2.7.2
+
+ RUN sudo apt-get update
+ RUN sudo apt-get install docker
+
+ WORKDIR /app
+ COPY . .
+
+ RUN bundle install
data/Gemfile CHANGED
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  source 'https://rubygems.org'
 
  # Specify your gem's dependencies in racecar.gemspec
data/Gemfile.lock ADDED
@@ -0,0 +1,69 @@
+ PATH
+   remote: .
+   specs:
+     racecar (2.3.0.alpha1)
+       king_konf (~> 1.0.0)
+       rdkafka (~> 0.8.0)
+
+ GEM
+   remote: https://rubygems.org/
+   specs:
+     activesupport (6.0.3.4)
+       concurrent-ruby (~> 1.0, >= 1.0.2)
+       i18n (>= 0.7, < 2)
+       minitest (~> 5.1)
+       tzinfo (~> 1.1)
+       zeitwerk (~> 2.2, >= 2.2.2)
+     coderay (1.1.3)
+     concurrent-ruby (1.1.7)
+     diff-lcs (1.4.4)
+     dogstatsd-ruby (4.8.2)
+     ffi (1.15.0)
+     i18n (1.8.5)
+       concurrent-ruby (~> 1.0)
+     king_konf (1.0.0)
+     method_source (1.0.0)
+     mini_portile2 (2.5.1)
+     minitest (5.14.2)
+     pry (0.13.1)
+       coderay (~> 1.1)
+       method_source (~> 1.0)
+     rake (13.0.1)
+     rdkafka (0.8.1)
+       ffi (~> 1.9)
+       mini_portile2 (~> 2.1)
+       rake (>= 12.3)
+     rspec (3.10.0)
+       rspec-core (~> 3.10.0)
+       rspec-expectations (~> 3.10.0)
+       rspec-mocks (~> 3.10.0)
+     rspec-core (3.10.1)
+       rspec-support (~> 3.10.0)
+     rspec-expectations (3.10.1)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.10.0)
+     rspec-mocks (3.10.2)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.10.0)
+     rspec-support (3.10.2)
+     thread_safe (0.3.6)
+     timecop (0.9.2)
+     tzinfo (1.2.8)
+       thread_safe (~> 0.1)
+     zeitwerk (2.4.2)
+
+ PLATFORMS
+   ruby
+
+ DEPENDENCIES
+   activesupport (>= 4.0, < 6.1)
+   bundler (>= 1.13, < 3)
+   dogstatsd-ruby (>= 4.0.0, < 5.0.0)
+   pry
+   racecar!
+   rake (> 10.0)
+   rspec (~> 3.0)
+   timecop
+
+ BUNDLED WITH
+   2.1.4
data/README.md CHANGED
@@ -10,22 +10,21 @@ The framework is based on [rdkafka-ruby](https://github.com/appsignal/rdkafka-ruby)
  1. [Installation](#installation)
  2. [Usage](#usage)
-   1. [Creating consumers](#creating-consumers)
-   2. [Running consumers](#running-consumers)
-   3. [Producing messages](#producing-messages)
-   4. [Configuration](#configuration)
-   5. [Testing consumers](#testing-consumers)
-   6. [Deploying consumers](#deploying-consumers)
-   7. [Handling errors](#handling-errors)
-   8. [Logging](#logging)
-   9. [Operations](#operations)
-   10. [Upgrading from v1 to v2](#upgrading-from-v1-to-v2)
+    1. [Creating consumers](#creating-consumers)
+    2. [Running consumers](#running-consumers)
+    3. [Producing messages](#producing-messages)
+    4. [Configuration](#configuration)
+    5. [Testing consumers](#testing-consumers)
+    6. [Deploying consumers](#deploying-consumers)
+    7. [Handling errors](#handling-errors)
+    8. [Logging](#logging)
+    9. [Operations](#operations)
+    10. [Upgrading from v1 to v2](#upgrading-from-v1-to-v2)
  3. [Development](#development)
  4. [Contributing](#contributing)
  5. [Support and Discussion](#support-and-discussion)
  6. [Copyright and license](#copyright-and-license)
 
-
  ## Installation
 
  Add this line to your application's Gemfile:
@@ -50,9 +49,7 @@ This will add a config file in `config/racecar.yml`.
 
  ## Usage
 
- Racecar is built for simplicity of development and operation. If you need more flexibility, it's quite straightforward to build your own Kafka consumer executables using [ruby-kafka](https://github.com/zendesk/ruby-kafka#consuming-messages-from-kafka) directly.
-
- First, a short introduction to the Kafka consumer concept as well as some basic background on Kafka.
+ Racecar is built for simplicity of development and operation. First, a short introduction to the Kafka consumer concept as well as some basic background on Kafka.
 
  Kafka stores messages in so-called _partitions_ which are grouped into _topics_. Within a partition, each message gets a unique offset.
 
@@ -79,12 +76,38 @@ In order to create your own consumer, run the Rails generator `racecar:consumer`
 
  $ bundle exec rails generate racecar:consumer TapDance
 
- This will create a file at `app/consumers/tap_dance_consumer.rb` which you can modify to your liking. Add one or more calls to `subscribes_to` in order to have the consumer subscribe to Kafka topics.
+ This will create a file at `app/consumers/tap_dance_consumer.rb` which you can modify to your liking. Add one or more calls to `subscribes_to` in order to have the consumer subscribe to Kafka topics.
 
  Now run your consumer with `bundle exec racecar TapDanceConsumer`.
 
  Note: if you're not using Rails, you'll have to add the file yourself. No-one will judge you for copy-pasting it.
 
+ #### Running consumers in parallel (experimental)
+
+ Warning: limited battle testing in production environments; use at your own risk!
+
+ If you want to process different partitions in parallel, and don't want to deploy a number of instances matching the total partitions of the topic, you can specify the number of workers to spin up: that number of processes will be forked, and each will register its own consumer in the group. Some things to note:
+
+ - This makes no difference on a topic with a single partition, since only one consumer would ever be assigned a partition. A couple of example configurations to process all partitions in parallel (assuming a 15-partition topic):
+   - Parallel workers set to 3, 5 separate instances / replicas running in your container orchestrator
+   - Parallel workers set to 5, 3 separate instances / replicas running in your container orchestrator
+ - Since we're forking new processes, the memory demands are a little higher
+   - From some initial testing, running 5 parallel workers requires no more than double the memory of running a Racecar consumer without parallelism.
+
+ The number of parallel workers is configured per consumer class; you may only want to take advantage of this for busier consumers:
+
+ ```ruby
+ class ParallelProcessingConsumer < Racecar::Consumer
+   subscribes_to "some-topic"
+
+   self.parallel_workers = 5
+
+   def process(message)
+     # ...
+   end
+ end
+ ```
+
  #### Initializing consumers
 
  You can optionally add an `initialize` method if you need to do any set-up work before processing messages, e.g.
@@ -160,9 +183,9 @@ message.headers #=> { "Header-A" => 42, ... }
 
  In order to avoid your consumer being kicked out of its group during long-running message processing operations, you'll need to let Kafka regularly know that the consumer is still healthy. There are two mechanisms in place to ensure that:
 
- *Heartbeats:* They are automatically sent in the background and ensure the broker can still talk to the consumer. This will detect network splits, ungraceful shutdowns, etc.
+ _Heartbeats:_ They are automatically sent in the background and ensure the broker can still talk to the consumer. This will detect network splits, ungraceful shutdowns, etc.
 
- *Message Fetch Interval:* Kafka expects the consumer to query for new messages within this time limit. This will detect situations with slow IO or the consumer being stuck in an infinite loop without making actual progress. This limit applies to a whole batch if you do batch processing. Use `max_poll_interval` to increase the default 5 minute timeout, or reduce batching with `fetch_messages`.
+ _Message Fetch Interval:_ Kafka expects the consumer to query for new messages within this time limit. This will detect situations with slow IO or the consumer being stuck in an infinite loop without making actual progress. This limit applies to a whole batch if you do batch processing. Use `max_poll_interval` to increase the default 5 minute timeout, or reduce batching with `fetch_messages`.
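
For reference, a minimal sketch of raising that limit (the `Racecar.configure` block form is assumed from the configuration section below; the value is illustrative):

```ruby
Racecar.configure do |config|
  # Allow up to 10 minutes between fetches for slow batch processing;
  # the default is 5 minutes, and all timeouts are given in seconds.
  config.max_poll_interval = 600
end
```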
 
  #### Tearing down resources when stopping
 
@@ -226,7 +249,7 @@ You can set message headers by passing a `headers:` option with a Hash of header
 
  Racecar provides a flexible way to configure your consumer in a way that feels at home in a Rails application. If you haven't already, run `bundle exec rails generate racecar:install` in order to generate a config file. You'll get a separate section for each Rails environment, with the common configuration values in a shared `common` section.
 
- **Note:** many of these configuration keys correspond directly to similarly named concepts in [ruby-kafka](https://github.com/zendesk/ruby-kafka); for more details on low-level operations, read that project's documentation.
+ **Note:** many of these configuration keys correspond directly to similarly named concepts in [rdkafka-ruby](https://github.com/appsignal/rdkafka-ruby); for more details on low-level operations, read that project's documentation.
 
  It's also possible to configure Racecar using environment variables. For any given configuration key, there should be a corresponding environment variable with the prefix `RACECAR_`, in upper case. For instance, in order to configure the client id, set `RACECAR_CLIENT_ID=some-id` in the process in which the Racecar consumer is launched. You can set `brokers` by passing a comma-separated list, e.g. `RACECAR_BROKERS=kafka1:9092,kafka2:9092,kafka3:9092`.
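
A small sketch of that naming convention (the values are hypothetical):

```ruby
# Each RACECAR_* environment variable maps to the config key of the same name:
ENV["RACECAR_CLIENT_ID"] = "some-id"                             # config.client_id
ENV["RACECAR_BROKERS"]   = "kafka1:9092,kafka2:9092,kafka3:9092" # config.brokers
```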
 
@@ -241,87 +264,91 @@ end
 
  #### Basic configuration
 
- * `brokers` – A list of Kafka brokers in the cluster that you're consuming from. Defaults to `localhost` on port 9092, the default Kafka port.
- * `client_id` – A string used to identify the client in logs and metrics.
- * `group_id` – The group id to use for a given group of consumers. Note that this _must_ be different for each consumer class. If left blank a group id is generated based on the consumer class name such that (for example) a consumer with the class name `BaconConsumer` would default to a group id of `bacon-consumer`.
- * `group_id_prefix` – A prefix used when generating consumer group names. For instance, if you set the prefix to be `kevin.` and your consumer class is named `BaconConsumer`, the resulting consumer group will be named `kevin.bacon-consumer`.
+ - `brokers` – A list of Kafka brokers in the cluster that you're consuming from. Defaults to `localhost` on port 9092, the default Kafka port.
+ - `client_id` – A string used to identify the client in logs and metrics.
+ - `group_id` – The group id to use for a given group of consumers. Note that this _must_ be different for each consumer class. If left blank a group id is generated based on the consumer class name such that (for example) a consumer with the class name `BaconConsumer` would default to a group id of `bacon-consumer`.
+ - `group_id_prefix` – A prefix used when generating consumer group names. For instance, if you set the prefix to be `kevin.` and your consumer class is named `BaconConsumer`, the resulting consumer group will be named `kevin.bacon-consumer`.
 
  #### Logging
 
- * `logfile` – A filename that log messages should be written to. Default is `nil`, which means logs will be written to standard output.
- * `log_level` – The log level for the Racecar logs, one of `debug`, `info`, `warn`, or `error`. Default is `info`.
+ - `logfile` – A filename that log messages should be written to. Default is `nil`, which means logs will be written to standard output.
+ - `log_level` – The log level for the Racecar logs, one of `debug`, `info`, `warn`, or `error`. Default is `info`.
 
  #### Consumer checkpointing
 
  The consumers will checkpoint their positions from time to time in order to be able to recover from failures. This is called _committing offsets_, since it's done by tracking the offset reached in each partition being processed, and committing those offset numbers to the Kafka offset storage API. If you can tolerate more double-processing after a failure, you can increase the interval between commits in order to improve performance. You can also do the opposite if you prefer less chance of double-processing.
 
- * `offset_commit_interval` – How often to save the consumer's position in Kafka. Default is every 10 seconds.
+ - `offset_commit_interval` – How often to save the consumer's position in Kafka. Default is every 10 seconds.
 
  #### Timeouts & intervals
 
  All timeouts are defined in number of seconds.
 
- * `session_timeout` – The idle timeout after which a consumer is kicked out of the group. Consumers must send heartbeats with at least this frequency.
- * `heartbeat_interval` – How often to send a heartbeat message to Kafka.
- * `max_poll_interval` – The maximum time between two message fetches before the consumer is kicked out of the group. Put differently, your (batch) processing must finish earlier than this.
- * `pause_timeout` – How long to pause a partition for if the consumer raises an exception while processing a message. Default is to pause for 10 seconds. Set this to `0` in order to disable automatic pausing of partitions or to `-1` to pause indefinitely.
- * `pause_with_exponential_backoff` – Set to `true` if you want to double the `pause_timeout` on each consecutive failure of a particular partition.
- * `socket_timeout` – How long to wait when trying to communicate with a Kafka broker. Default is 30 seconds.
- * `max_wait_time` – How long to allow the Kafka brokers to wait before returning messages. A higher number means larger batches, at the cost of higher latency. Default is 1 second.
+ - `session_timeout` – The idle timeout after which a consumer is kicked out of the group. Consumers must send heartbeats with at least this frequency.
+ - `heartbeat_interval` – How often to send a heartbeat message to Kafka.
+ - `max_poll_interval` – The maximum time between two message fetches before the consumer is kicked out of the group. Put differently, your (batch) processing must finish earlier than this.
+ - `pause_timeout` – How long to pause a partition for if the consumer raises an exception while processing a message. Default is to pause for 10 seconds. Set this to `0` in order to disable automatic pausing of partitions or to `-1` to pause indefinitely.
+ - `pause_with_exponential_backoff` – Set to `true` if you want to double the `pause_timeout` on each consecutive failure of a particular partition.
+ - `socket_timeout` – How long to wait when trying to communicate with a Kafka broker. Default is 30 seconds.
+ - `max_wait_time` – How long to allow the Kafka brokers to wait before returning messages. A higher number means larger batches, at the cost of higher latency. Default is 1 second.
+ - `message_timeout` – How long to try to deliver a produced message before finally giving up. Default is 5 minutes. Transient errors are automatically retried. If a message delivery fails, the current read message batch is retried.
+ - `statistics_interval` – How frequently librdkafka should publish statistics about its consumers and producers; you must also add a `statistics_callback` method to your processor, otherwise the stats are disabled. The default is 1 second; however, this can be quite memory hungry, so you may want to tune this and monitor.
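
To illustrate the last two settings, a minimal sketch (the changelog above notes that processors may define a `statistics_callback`; treating it as an instance method receiving the stats Hash is an assumption):

```ruby
class MetricsConsumer < Racecar::Consumer
  subscribes_to "some-topic"

  # Receives a Hash of librdkafka statistics every `statistics_interval`
  # seconds; see librdkafka's STATISTICS.md for the schema.
  def statistics_callback(stats)
    puts "rdkafka stats received for #{stats["name"]}"
  end

  def process(message)
    # ...
  end
end
```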
 
  #### Memory & network usage
 
  Kafka is _really_ good at throwing data at consumers, so you may want to tune these variables in order to avoid ballooning your process' memory or saturating your network capacity.
 
- Racecar uses ruby-kafka under the hood, which fetches messages from the Kafka brokers in a background thread. This thread pushes fetch responses, possible containing messages from many partitions, into a queue that is read by the processing thread (AKA your code). The main way to control the fetcher thread is to control the size of those responses and the size of the queue.
+ Racecar uses [rdkafka-ruby](https://github.com/appsignal/rdkafka-ruby) under the hood, which fetches messages from the Kafka brokers in a background thread. This thread pushes fetch responses, possibly containing messages from many partitions, into a queue that is read by the processing thread (AKA your code). The main way to control the fetcher thread is to control the size of those responses and the size of the queue.
 
- * `max_bytes` — Maximum amount of data the broker shall return for a Fetch request.
- * `min_message_queue_size` — The minimum number of messages in the local consumer queue.
+ - `max_bytes` — Maximum amount of data the broker shall return for a Fetch request.
+ - `min_message_queue_size` — The minimum number of messages in the local consumer queue.
 
  The memory usage limit is roughly estimated as `max_bytes * min_message_queue_size`, plus whatever your application uses.
 
  #### SSL encryption, authentication & authorization
 
- * `security_protocol` – Protocol used to communicate with brokers (`:ssl`)
- * `ssl_ca_location` – File or directory path to CA certificate(s) for verifying the broker's key
- * `ssl_crl_location` – Path to CRL for verifying broker's certificate validity
- * `ssl_keystore_location` – Path to client's keystore (PKCS#12) used for authentication
- * `ssl_keystore_password` – Client's keystore (PKCS#12) password
- * `ssl_certificate_location` – Path to the certificate
- * `ssl_key_location` – Path to client's certificate used for authentication
- * `ssl_key_password` – Client's certificate password
+ - `security_protocol` – Protocol used to communicate with brokers (`:ssl`)
+ - `ssl_ca_location` – File or directory path to CA certificate(s) for verifying the broker's key
+ - `ssl_crl_location` – Path to the CRL for verifying the broker's certificate validity
+ - `ssl_keystore_location` – Path to the client's keystore (PKCS#12) used for authentication
+ - `ssl_keystore_password` – The client's keystore (PKCS#12) password
+ - `ssl_certificate_location` – Path to the client's certificate
+ - `ssl_key_location` – Path to the client's private key used for authentication
+ - `ssl_key_password` – The client's private key passphrase
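
To make these concrete, a minimal sketch of an SSL setup using these keys (the paths are hypothetical, and the `Racecar.configure` block form is assumed from the configuration section above):

```ruby
Racecar.configure do |config|
  config.security_protocol        = :ssl
  config.ssl_ca_location          = "/etc/ssl/certs/kafka-ca.pem" # CA used to verify the broker
  config.ssl_certificate_location = "/etc/ssl/certs/client.pem"   # client certificate
  config.ssl_key_location         = "/etc/ssl/private/client.key" # client private key
end
```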
 
  #### SASL encryption, authentication & authorization
 
  Racecar has support for using SASL to authenticate clients using either the GSSAPI or PLAIN mechanism, over either a plaintext or an SSL connection.
 
- * `security_protocol` – Protocol used to communicate with brokers (`:sasl_plaintext` `:sasl_ssl`)
- * `sasl_mechanism` – SASL mechanism to use for authentication (`GSSAPI` `PLAIN` `SCRAM-SHA-256` `SCRAM-SHA-512`)
+ - `security_protocol` – Protocol used to communicate with brokers (`:sasl_plaintext` `:sasl_ssl`)
+ - `sasl_mechanism` – SASL mechanism to use for authentication (`GSSAPI` `PLAIN` `SCRAM-SHA-256` `SCRAM-SHA-512`)
 
- * `sasl_kerberos_principal` – This client's Kerberos principal name
- * `sasl_kerberos_kinit_cmd` – Full kerberos kinit command string, `%{config.prop.name}` is replaced by corresponding config object value, `%{broker.name}` returns the broker's hostname
- * `sasl_kerberos_keytab` – Path to Kerberos keytab file. Uses system default if not set
- * `sasl_kerberos_min_time_before_relogin` – Minimum time in milliseconds between key refresh attempts
- * `sasl_username` – SASL username for use with the PLAIN and SASL-SCRAM-.. mechanism
- * `sasl_password` – SASL password for use with the PLAIN and SASL-SCRAM-.. mechanism
+ - `sasl_kerberos_principal` – This client's Kerberos principal name
+ - `sasl_kerberos_kinit_cmd` – Full Kerberos kinit command string; `%{config.prop.name}` is replaced by the corresponding config value, and `%{broker.name}` returns the broker's hostname
+ - `sasl_kerberos_keytab` – Path to the Kerberos keytab file. Uses the system default if not set
+ - `sasl_kerberos_min_time_before_relogin` – Minimum time in milliseconds between key refresh attempts
+ - `sasl_username` – SASL username for use with the PLAIN and SASL-SCRAM-.. mechanisms
+ - `sasl_password` – SASL password for use with the PLAIN and SASL-SCRAM-.. mechanisms
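
A corresponding sketch for SASL/PLAIN over SSL (the credential values are placeholders, and the exact value types are assumptions):

```ruby
Racecar.configure do |config|
  config.security_protocol = :sasl_ssl
  config.sasl_mechanism    = "PLAIN"
  config.sasl_username     = "racecar"
  config.sasl_password     = ENV.fetch("KAFKA_SASL_PASSWORD")
end
```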
 
  #### Producing messages
 
  These settings are related to consumers that _produce messages to Kafka_.
 
- * `producer_compression_codec` – If defined, Racecar will compress messages before writing them to Kafka. The codec needs to be one of `gzip`, `lz4`, or `snappy`, either as a Symbol or a String.
+ - `producer_compression_codec` – If defined, Racecar will compress messages before writing them to Kafka. The codec needs to be one of `gzip`, `lz4`, or `snappy`, either as a Symbol or a String.
 
  #### Datadog monitoring
 
- Racecar supports configuring ruby-kafka's [Datadog](https://www.datadoghq.com/) monitoring integration. If you're running a normal Datadog agent on your host, you just need to set `datadog_enabled` to `true`, as the rest of the settings come with sane defaults.
+ Racecar supports [Datadog](https://www.datadoghq.com/) monitoring integration. If you're running a normal Datadog agent on your host, you just need to set `datadog_enabled` to `true`, as the rest of the settings come with sane defaults.
 
- * `datadog_enabled` – Whether Datadog monitoring is enabled (defaults to `false`).
- * `datadog_host` – The host running the Datadog agent.
- * `datadog_port` – The port of the Datadog agent.
- * `datadog_namespace` – The namespace to use for Datadog metrics.
- * `datadog_tags` – Tags that should always be set on Datadog metrics.
+ - `datadog_enabled` – Whether Datadog monitoring is enabled (defaults to `false`).
+ - `datadog_host` – The host running the Datadog agent.
+ - `datadog_port` – The port of the Datadog agent.
+ - `datadog_namespace` – The namespace to use for Datadog metrics.
+ - `datadog_tags` – Tags that should always be set on Datadog metrics.
 
- #### Consumers Without Rails ####
+ Furthermore, there's a [standard Datadog dashboard configuration file](https://raw.githubusercontent.com/zendesk/racecar/master/extra/datadog-dashboard.json) that you can import to get started with a Racecar dashboard for all of your consumers.
+
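As a hedged sketch of enabling the integration (the values are illustrative; the `Racecar.configure` block form is assumed as above):

```ruby
Racecar.configure do |config|
  config.datadog_enabled   = true
  config.datadog_host      = "127.0.0.1"        # host running the Datadog agent
  config.datadog_port      = 8125               # the agent's StatsD port
  config.datadog_namespace = "racecar"
  config.datadog_tags      = ["env:production"] # assumed to accept an array of tags
end
```
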
+ #### Consumers Without Rails
 
  By default, if Rails is detected, it will be automatically started when the consumer is started. There are cases where you might not want or need Rails. You can pass the `--without-rails` option when starting the consumer and Rails won't be started.
 
@@ -359,7 +386,6 @@ describe CreateContactsConsumer do
  end
  ```
 
-
  ### Deploying consumers
 
  If you're already deploying your Rails application using e.g. [Capistrano](http://capistranorb.com/), all you need to do to run your Racecar consumers in production is to have some _process supervisor_ start the processes and manage them for you.
@@ -371,7 +397,7 @@ racecar-process-payments: bundle exec racecar ProcessPaymentsConsumer
  racecar-resize-images: bundle exec racecar ResizeImagesConsumer
  ```
 
- If you've ever used Heroku you'll recognize the format – indeed, deploying to Heroku should just work if you add Racecar invocations to your Procfile.
+ If you've ever used Heroku you'll recognize the format – indeed, deploying to Heroku should just work if you add Racecar invocations to your Procfile and [enable the Heroku integration](#deploying-to-heroku).
 
  With Foreman, you can easily run these processes locally by executing `foreman run`; in production you'll want to _export_ to another process management format such as Upstart or Runit. [capistrano-foreman](https://github.com/hyperoslo/capistrano-foreman) allows you to do this with Capistrano.
 
@@ -399,20 +425,37 @@ spec:
        app: my-racecar
    spec:
      containers:
-     - name: my-racecar
-       image: my-racecar-image
-       command: ["bundle", "exec", "racecar", "MyConsumer"]
-       env: # <-- you can configure the consumer using environment variables!
-       - name: RACECAR_BROKERS
-         value: kafka1,kafka2,kafka3
-       - name: RACECAR_OFFSET_COMMIT_INTERVAL
-         value: 5
+       - name: my-racecar
+         image: my-racecar-image
+         command: ["bundle", "exec", "racecar", "MyConsumer"]
+         env: # <-- you can configure the consumer using environment variables!
+           - name: RACECAR_BROKERS
+             value: kafka1,kafka2,kafka3
+           - name: RACECAR_OFFSET_COMMIT_INTERVAL
+             value: 5
  ```
 
  The important part is the `strategy.type` value, which tells Kubernetes how to upgrade from one version of your Deployment to another. Many services use so-called _rolling updates_, where some but not all containers are replaced with the new version. This is done so that, if the new version doesn't work, the old version is still there to serve most of the requests. For Kafka consumers, this doesn't work well. The reason is that every time a consumer joins or leaves a group, every other consumer in the group needs to stop and synchronize the list of partitions assigned to each group member. So if the group is updated in a rolling fashion, this synchronization would occur over and over again, causing undesirable double-processing of messages as consumers would start only to be synchronized shortly after.
 
  Instead, the `Recreate` update strategy should be used. It completely tears down the existing containers before starting all of the new containers simultaneously, allowing for a single synchronization stage and a much faster, more stable deployment update.
 
+ #### Deploying to Heroku
+
+ If you run your applications in Heroku and/or use the Heroku Kafka add-on, your application will be provided with four environment variables that allow connecting to the cluster: `KAFKA_URL`, `KAFKA_TRUSTED_CERT`, `KAFKA_CLIENT_CERT`, and `KAFKA_CLIENT_CERT_KEY`.
+
+ Racecar has a built-in helper for configuring your application based on these variables – just add `require "racecar/heroku"` and everything should just work.
+
+ Please note that aliasing the Heroku Kafka add-on will break this integration. If you need to do that, please ask on [the discussion board](https://github.com/zendesk/racecar/discussions).
+
+ ```ruby
+ # This takes care of setting up your consumer based on the ENV
+ # variables provided by Heroku.
+ require "racecar/heroku"
+
+ class SomeConsumer < Racecar::Consumer
+   # ...
+ end
+ ```
 
  #### Running consumers in the background
 
@@ -430,7 +473,6 @@ Since the process is daemonized, you need to know the process id (PID) in order
 
  Again, the recommended approach is to manage the processes using process managers. Only do this if you have to.
 
-
  ### Handling errors
 
  When processing messages from a Kafka topic, your code may encounter an error and raise an exception. The cause is typically one of two things:
@@ -468,42 +510,48 @@
 
  It is highly recommended that you set up an error handler. Please note that the `info` object contains different keys and values depending on whether you are using `process` or `process_batch`. See the `instrumentation_payload` object in the `process` and `process_batch` methods in the `Runner` class for the complete list.
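
As a hedged illustration (the changelog above references a `Racecar.config.error_handler`; treating it as an assignable callable is an assumption, and `MyErrorTracker` is a hypothetical reporting helper):

```ruby
Racecar.config.error_handler = lambda do |exception, info|
  case exception
  when Racecar::MessageDeliveryError
    # Producer-side delivery failure; Racecar also resets the producer
    # on delivery timeouts, per the v2.3.0 changelog.
    MyErrorTracker.notify(exception, topic: info[:topic])
  else
    MyErrorTracker.notify(exception, info)
  end
end
```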
 
-
  ### Logging
 
  By default, Racecar will log to `STDOUT`. If you're using Rails, your application code will use whatever logger you've configured there.
 
  In order to make Racecar log its own operations to a log file, set the `logfile` configuration variable or pass `--log filename.log` to the `racecar` command.
 
-
  ### Operations
 
  In order to gracefully shut down a Racecar consumer process, send it the `SIGTERM` signal. Most process supervisors such as Runit and Kubernetes send this signal when shutting down a process, so using those systems will make things easier.
 
  In order to introspect the configuration of a consumer process, send it the `SIGUSR1` signal. This will make Racecar print its configuration to the standard error file descriptor associated with the consumer process, so you'll need to know where that is written to.
 
-
  ### Upgrading from v1 to v2
 
  In order to safely upgrade from Racecar v1 to v2, you need to completely shut down your consumer group before starting it up again with the v2 Racecar dependency. In general, you should avoid rolling deploys for consumer groups, so it is likely the case that this will just work for you, but it's a good idea to check first.
 
-
  ## Development
 
  After checking out the repo, run `bin/setup` to install dependencies. Then, run `rspec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
 
+ The integration tests run against a Kafka instance that is not automatically started from within `rspec`. You can set one up using the provided `docker-compose.yml` by running `docker-compose up`.
+
+ ### Running RSpec within Docker
+
+ There can be behavioural inconsistencies between running the specs on your machine and running them in the CI pipeline. For this reason, the project now includes a Dockerfile based on the CircleCI Ruby 2.7.2 image. This could easily be extended with more Dockerfiles to cover different Ruby versions if desired. In order to run the specs via Docker:
+
+ - Uncomment the `tests` service from the docker-compose.yml
+ - Bring up the stack with `docker-compose up -d`
+ - Execute the entire suite with `docker-compose run --rm tests rspec`
+ - Execute a single spec or directory with `docker-compose run --rm tests rspec spec/integration/consumer_spec.rb`
+
+ Please note: your code directory is mounted as a volume, so you can make code changes without needing to rebuild.
 
  ## Contributing
 
  Bug reports and pull requests are welcome on [GitHub](https://github.com/zendesk/racecar). Feel free to [join our Slack team](https://ruby-kafka-slack.herokuapp.com/) and ask how best to contribute!
 
-
  ## Support and Discussion
 
- If you've discovered a bug, please file a [Github issue](https://github.com/zendesk/racecar/issues/new), and make sure to include all the relevant information, including the version of Racecar, ruby-kafka, and Kafka that you're using.
-
- If you have other questions, or would like to discuss best practises, how to contribute to the project, or any other ruby-kafka related topic, [join our Slack team](https://ruby-kafka-slack.herokuapp.com/)!
+ If you've discovered a bug, please file a [Github issue](https://github.com/zendesk/racecar/issues/new), and make sure to include all the relevant information, including the version of Racecar, rdkafka-ruby, and Kafka that you're using.
 
+ If you have other questions, or would like to discuss best practices or how to contribute to the project, [join our Slack team](https://ruby-kafka-slack.herokuapp.com/)!
 
  ## Copyright and license