racecar 2.8.2 → 2.9.0.beta1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 4cbf6ca4837bdd3893c8342936f9d5ea115a5b3a585e0c843e62171e00038e72
-   data.tar.gz: d8603c46cf801d0f41f017d5b27b35e0d2b195eec5be11a1f7b6bf77413726b4
+   metadata.gz: 364239e804c99b816c0a37802b605ff4035d2d4c51abc0b161e11609268349ea
+   data.tar.gz: 4d7bd65a04b8c914640a2edc11f3c7521b2e061db8eb3c215ba29a785f031e12
  SHA512:
-   metadata.gz: 78bdedfe7bd888a3716f5a29f28810f4c4d2f03e43c33b8e47ed08607155211fca1d47f4eab6a4f0def6976bb7ca3263117e52d2e0fa7f0a721c56ec70c7ee3d
-   data.tar.gz: 2bf6e2b50980add68363b59718f54403caada58abcb404a9018ac4304519bbbb7ed920578d37bcadefad63964812e7374d367298f403577725eef79af269c604
+   metadata.gz: c60037cd0dd5477a77a5fddd42003a3d4be67c38fb1c812cfff698cdb8fb7cd62e7e65097b6508c2a19a768daad5139f3eb469e91f0ef3ff3fe9535f9fd86a8a
+   data.tar.gz: 88d3ffcfa27b6bcb89b454151db353165ab62fa0dd42586c177d05ea6091663227b8d78f14180e69c4937417d0f761fc5666657be1320d44d54eeebcc5d78601
@@ -0,0 +1,12 @@
+ name: Publish Gem
+
+ on:
+   push:
+     tags: v*
+
+ jobs:
+   call-workflow:
+     uses: zendesk/gw/.github/workflows/ruby-gem-publication.yml@main
+     secrets:
+       RUBY_GEMS_API_KEY: ${{ secrets.RUBY_GEMS_API_KEY }}
+       RUBY_GEMS_TOTP_DEVICE: ${{ secrets.RUBY_GEMS_TOTP_DEVICE }}
data/CHANGELOG.md CHANGED
@@ -1,15 +1,20 @@
  # Changelog

- ## Unreleased
+ ## v2.9.0.beta1
+
+ * Add `partitioner` producer config option to allow changing the strategy used to
+   determine which topic partition a message is written to when Racecar
+   produces a Kafka message.
+ * Add built-in liveness probe for Kubernetes deployments.

  ## v2.8.2
- * Handles ErroneousStateError, in previous versions the consumer would do several unecessary group leave/joins. The log level is also changed to WARN instead of ERROR. ([#285](https://github.com/zendesk/racecar/pull/295))
+ * Handles `ErroneousStateError`; in previous versions the consumer would do several unnecessary group leaves/joins. The log level is also changed to WARN instead of ERROR. ([#295](https://github.com/zendesk/racecar/pull/295))

  ## v2.8.1
  * Adds new ErroneousStateError to racecar in order to give more information on this new possible exception.

  ## v2.8.0
- * Update librdkafka version from 1.8.2 to 1.9.0 by upgrading from rdkafka 0.10.0 to 0.12.0. ([#283](https://github.com/zendesk/racecar/pull/293))
+ * Update librdkafka version from 1.8.2 to 1.9.0 by upgrading from rdkafka 0.10.0 to 0.12.0. ([#293](https://github.com/zendesk/racecar/pull/293))

  ## v2.7.0

data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
    remote: .
    specs:
-     racecar (2.8.2.beta)
+     racecar (2.9.0.beta1)
        king_konf (~> 1.0.0)
        rdkafka (~> 0.12.0)

@@ -20,9 +20,9 @@ GEM
      ffi (1.15.5)
      i18n (1.8.10)
        concurrent-ruby (~> 1.0)
-     king_konf (1.0.0)
+     king_konf (1.0.1)
      method_source (1.0.0)
-     mini_portile2 (2.8.0)
+     mini_portile2 (2.8.1)
      minitest (5.14.4)
      pry (0.13.1)
        coderay (~> 1.1)
@@ -47,7 +47,7 @@ GEM
      rspec-support (3.10.2)
      thread_safe (0.3.6)
      timecop (0.9.2)
-     tzinfo (1.2.9)
+     tzinfo (1.2.10)
        thread_safe (~> 0.1)

  PLATFORMS
data/README.md CHANGED
@@ -20,6 +20,7 @@ The framework is based on [rdkafka-ruby](https://github.com/appsignal/rdkafka-ru
     8. [Logging](#logging)
     9. [Operations](#operations)
     10. [Upgrading from v1 to v2](#upgrading-from-v1-to-v2)
+    11. [Compression](#compression)
  3. [Development](#development)
  4. [Contributing](#contributing)
  5. [Support and Discussion](#support-and-discussion)
@@ -245,6 +246,69 @@ The `deliver!` method can be used to block until the broker received all queued

  You can set message headers by passing a `headers:` option with a Hash of headers.

+ ### Standalone Producer
+
+ Racecar provides a standalone producer to publish messages to Kafka directly from your Rails application:
+
+ ```ruby
+ # app/controllers/comments_controller.rb
+ class CommentsController < ApplicationController
+   def create
+     @comment = Comment.create!(params)
+
+     # This will publish a JSON representation of the comment to the `comments` topic
+     # in Kafka. Make sure to create the topic first, or this may fail.
+     Racecar.produce_sync(value: @comment.to_json, topic: "comments")
+   end
+ end
+ ```
+
+ The above example will block the server process until the message has been delivered. If you want deliveries to happen in the background in order to free up your server processes more quickly, call `produce_async` instead:
+
+ ```ruby
+ # app/controllers/comments_controller.rb
+ class CommentsController < ApplicationController
+   def show
+     @comment = Comment.find(params[:id])
+
+     event = {
+       name: "comment_viewed",
+       data: {
+         comment_id: @comment.id,
+         user_id: current_user.id
+       }
+     }
+
+     # By delivering messages asynchronously you free up your server processes faster.
+     Racecar.produce_async(value: event.to_json, topic: "activity")
+   end
+ end
+ ```
+
+ In addition to improving response time, delivering messages asynchronously also protects your application against Kafka availability issues -- if messages cannot be delivered, they'll be buffered for later and retried automatically.
+
+ A third method is to produce messages first (without delivering them to Kafka yet) and deliver them synchronously later:
+
+ ```ruby
+ # app/controllers/comments_controller.rb
+ class CommentsController < ApplicationController
+   def create
+     @comment = Comment.create!(params)
+
+     event = {
+       name: "comment_created",
+       data: {
+         comment_id: @comment.id,
+         user_id: current_user.id
+       }
+     }
+
+     # This will queue the two messages in the internal buffer and block the server
+     # process until they are delivered.
+     Racecar.wait_for_delivery do
+       Racecar.produce_async(value: @comment.to_json, topic: "comments")
+       Racecar.produce_async(value: event.to_json, topic: "activity")
+     end
+   end
+ end
+ ```
+
  ### Configuration

  Racecar provides a flexible way to configure your consumer in a way that feels at home in a Rails application. If you haven't already, run `bundle exec rails generate racecar:install` in order to generate a config file. You'll get a separate section for each Rails environment, with the common configuration values in a shared `common` section.
@@ -338,6 +402,7 @@ Racecar has support for using SASL to authenticate clients using either the GSSA

  These settings are related to consumers that _produce messages to Kafka_.

+ - `partitioner` – The strategy used to determine which topic partition a message is written to when Racecar produces a value to Kafka. Must be one of `consistent`, `consistent_random`, `murmur2`, `murmur2_random`, `fnv1a`, or `fnv1a_random`, either as a Symbol or a String. Defaults to `consistent_random`. (A configuration sketch follows this list.)
  - `producer_compression_codec` – If defined, Racecar will compress messages before writing them to Kafka. The codec needs to be one of `gzip`, `lz4`, or `snappy`, either as a Symbol or a String.

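The new `partitioner` option can be set like any other Racecar setting. A minimal sketch, assuming a `config/racecar.rb` initializer (the chosen strategy here is only an example):

```ruby
# config/racecar.rb -- hypothetical initializer
Racecar.configure do |config|
  # One of: consistent, consistent_random, murmur2, murmur2_random,
  # fnv1a, fnv1a_random (Symbol or String). Default: :consistent_random.
  config.partitioner = :murmur2_random
end
```
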
  #### Datadog monitoring
@@ -443,6 +508,64 @@ The important part is the `strategy.type` value, which tells Kubernetes how to u

  Instead, the `Recreate` update strategy should be used. It completely tears down the existing containers before starting all of the new containers simultaneously, allowing for a single synchronization stage and a much faster, more stable deployment update.

+ #### Liveness Probe
+
+ Racecar comes with a built-in liveness probe, primarily for use with Kubernetes, but useful for any deployment environment where you can periodically run a process to check the health of your consumer.
+
+ To use this feature:
+ - set the `liveness_probe_enabled` config option to true.
+ - configure your Kubernetes deployment to run `racecarctl liveness_probe`.
+
+ When enabled (see config), Racecar will touch the file at `liveness_probe_file_path` each time it finishes polling Kafka and processing the messages in the batch (if any).
+
+ The modified time of this file can be observed to determine when the consumer last exhibited 'liveness'.
+
+ Running `racecarctl liveness_probe` will return a successful exit status if the last 'liveness' event happened within an acceptable time, `liveness_probe_max_interval`.
+
+ `liveness_probe_max_interval` should be long enough to account for both the Kafka polling time of `max_wait_time` and the processing time of a full message batch.
+
+ On receiving `SIGTERM`, Racecar will gracefully shut down and delete this file, causing the probe to fail immediately after exit.
+
+ You may wish to tolerate more than one failed probe run to allow for environmental variance and clock changes.
+
+ See the [Configuration section](https://github.com/zendesk/racecar#configuration) for the various ways the liveness probe can be configured, environment variables being one option (a configuration sketch follows).
+
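A minimal sketch of the Ruby-side configuration, assuming a `config/racecar.rb` initializer (the values shown are examples; the defaults come from the gem's config):

```ruby
# config/racecar.rb -- hypothetical initializer; the RACECAR_* environment
# variables shown in the Kubernetes example below work equally well.
Racecar.configure do |config|
  config.liveness_probe_enabled = true

  # File whose mtime records the last liveness event
  # (default: "#{Dir.tmpdir}/racecar-liveness").
  config.liveness_probe_file_path = "/tmp/racecar-liveness"

  # Max seconds between liveness events before `racecarctl liveness_probe`
  # exits non-zero (default: 5).
  config.liveness_probe_max_interval = 30
end
```
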
+ Here is an example Kubernetes liveness probe configuration:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ spec:
+   template:
+     spec:
+       containers:
+       - name: consumer
+
+         args:
+         - racecar
+         - SomeConsumer
+
+         env:
+         - name: RACECAR_LIVENESS_PROBE_ENABLED
+           value: "true"
+
+         livenessProbe:
+           exec:
+             command:
+             - racecarctl
+             - liveness_probe
+
+           # Allow up to 10 consecutive failures before terminating Pod:
+           failureThreshold: 10
+
+           # Wait 30 seconds before starting the probes:
+           initialDelaySeconds: 30
+
+           # Perform the check every 10 seconds:
+           periodSeconds: 10
+ ```
+
  #### Deploying to Heroku

  If you run your applications in Heroku and/or use the Heroku Kafka add-on, your application will be provided with 4 ENV variables that allow connecting to the cluster: `KAFKA_URL`, `KAFKA_TRUSTED_CERT`, `KAFKA_CLIENT_CERT`, and `KAFKA_CLIENT_CERT_KEY`.
@@ -479,7 +602,7 @@ Again, the recommended approach is to manage the processes using process manager

  ### Handling errors

- When processing messages from a Kafka topic, your code may encounter an error and raise an exception. The cause is typically one of two things:
+ #### Errors while processing messages
+
+ When processing messages from a Kafka topic, your code may encounter an error and raise an exception. The cause is typically one of two things:

  1. The message being processed is somehow malformed or doesn't conform with the assumptions made by the processing code.
  2. You're using some external resource such as a database or a network API that is temporarily unavailable.
@@ -514,6 +637,16 @@ end

  It is highly recommended that you set up an error handler. Please note that the `info` object contains different keys and values depending on whether you are using `process` or `process_batch`. See the `instrumentation_payload` object in the `process` and `process_batch` methods in the `Runner` class for the complete list.

+ #### Errors related to Compression
+
+ A sample error might look like this:
+
+ ```
+ E, [2022-10-09T11:28:29.976548 #15] ERROR -- : (try 5/10): Error for topic subscription #<struct Racecar::Consumer::Subscription topic="support.entity_incremental.views.view_ticket_ids", start_from_beginning=false, max_bytes_per_partition=104857, additional_config={}>: Local: Not implemented (not_implemented)
+ ```
+
+ Please see the [Compression](#compression) section.
+
  ### Logging

  By default, Racecar will log to `STDOUT`. If you're using Rails, your application code will use whatever logger you've configured there.
@@ -528,7 +661,15 @@ In order to introspect the configuration of a consumer process, send it the `SIG

  ### Upgrading from v1 to v2

- In order to safely upgrade from Racecar v1 to v2, you need to completely shut down your consumer group before starting it up again with the v2 Racecar dependency. In general, you should avoid rolling deploys for consumers groups, so it is likely the case that this will just work for you, but it's a good idea to check first.
+ In order to safely upgrade from Racecar v1 to v2, you need to completely shut down your consumer group before starting it up again with the v2 Racecar dependency. In general, you should avoid rolling deploys for consumer groups, so it is likely the case that this will just work for you, but it's a good idea to check first.
+
+ ### Compression
+
+ Racecar v2 needs a native compression library in order to compress messages before producing them to a topic (the command below installs `libzstd-dev`, which backs zstd compression). If it is not already present in your consumer's Docker image, install it from the consumer's Dockerfile:
+
+ ```
+ apt-get update && apt-get install -y libzstd-dev
+ ```
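
Compression itself is turned on through the `producer_compression_codec` setting described under Configuration. A minimal sketch, assuming a `config/racecar.rb` initializer (the codec choice is only an example):

```ruby
# config/racecar.rb -- hypothetical initializer
Racecar.configure do |config|
  # One of :gzip, :lz4 or :snappy (Symbol or String); produced messages are
  # then compressed before being written to Kafka.
  config.producer_compression_codec = :gzip
end
```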

  ## Development

data/Rakefile CHANGED
@@ -3,6 +3,9 @@
  require "bundler/gem_tasks"
  require "rspec/core/rake_task"

+ # Pushing to rubygems is handled by a github workflow
+ ENV["gem_push"] = "false"
+
  RSpec::Core::RakeTask.new(:spec)

  task :default => :spec
@@ -3,8 +3,8 @@
  class BatchConsumer < Racecar::Consumer
    subscribes_to "messages", start_from_beginning: false

-   def process_batch(batch)
-     batch.messages.each do |message|
+   def process_batch(messages)
+     messages.each do |message|
        puts message.value
      end
    end
data/lib/racecar/cli.rb CHANGED
@@ -5,6 +5,7 @@ require "logger"
  require "fileutils"
  require "racecar/rails_config_file_loader"
  require "racecar/daemon"
+ require "racecar/liveness_probe"

  module Racecar
    class Cli
@@ -58,6 +59,11 @@ module Racecar
          $stderr.puts "=> Ctrl-C to shutdown consumer"
        end

+       if config.liveness_probe_enabled
+         $stderr.puts "=> Liveness probe enabled"
+         config.install_liveness_probe
+       end
+
        processor = consumer_class.new
        Racecar.run(processor)
        nil
@@ -1,7 +1,12 @@
  # frozen_string_literal: true

+ require "tmpdir"
+
  require "king_konf"

+ require "racecar/liveness_probe"
+ require "racecar/instrumenter"
+
  module Racecar
    class Config < KingKonf::Config
      env_prefix :racecar
@@ -77,6 +82,9 @@ module Racecar
      desc "The log level for the Racecar logs"
      string :log_level, default: "info"

+     desc "The strategy used to determine which topic partition a message is written to when Racecar produces a value to Kafka; defaults to `consistent_random`"
+     symbol :partitioner, allowed_values: %i{consistent consistent_random murmur2 murmur2_random fnv1a fnv1a_random}, default: :consistent_random
+
      desc "Protocol used to communicate with brokers"
      symbol :security_protocol, allowed_values: %i{plaintext ssl sasl_plaintext sasl_ssl}

@@ -164,6 +172,15 @@ module Racecar
        for backward compatibility, however this can be quite memory intensive"
      integer :statistics_interval, default: 1

+     desc "Whether to enable liveness probe behavior (touch the file)"
+     boolean :liveness_probe_enabled, default: false
+
+     desc "Path to a file Racecar will touch to show liveness"
+     string :liveness_probe_file_path, default: "#{Dir.tmpdir}/racecar-liveness"
+
+     desc "Used only by the liveness probe: Max time (in seconds) between liveness events before the process is considered not healthy"
+     integer :liveness_probe_max_interval, default: 5
+
      # The error handler must be set directly on the object.
      attr_reader :error_handler

@@ -247,6 +264,35 @@ module Racecar
        producer_config
      end

+     def instrumenter
+       @instrumenter ||= begin
+         default_payload = { client_id: client_id, group_id: group_id }
+
+         if defined?(ActiveSupport::Notifications)
+           # ActiveSupport needs `concurrent-ruby` but doesn't `require` it.
+           require 'concurrent/utility/monotonic_time'
+           Instrumenter.new(backend: ActiveSupport::Notifications, default_payload: default_payload)
+         else
+           logger.warn "ActiveSupport::Notifications not available, instrumentation is disabled"
+           NullInstrumenter
+         end
+       end
+     end
+     attr_writer :instrumenter
+
+     def install_liveness_probe
+       liveness_probe.tap(&:install)
+     end
+
+     def liveness_probe
+       require "active_support/notifications"
+       @liveness_probe ||= LivenessProbe.new(
+         ActiveSupport::Notifications,
+         liveness_probe_file_path,
+         liveness_probe_max_interval
+       )
+     end
+
      private

      def rdkafka_security_config
@@ -140,7 +140,7 @@ module Racecar
        @logger.debug "No time remains for polling messages. Will try on next call."
        return nil
      elsif wait_ms >= remain_ms
-       @logger.error "Only #{remain_ms}ms left, but want to wait for #{wait_ms}ms before poll. Will retry on next call."
+       @logger.warn "Only #{remain_ms}ms left, but want to wait for #{wait_ms}ms before poll. Will retry on next call."
        @previous_retries = try
        return nil
      elsif wait_ms > 0
data/lib/racecar/ctl.rb CHANGED
@@ -32,6 +32,17 @@ module Racecar
      @command = command
    end

+   def liveness_probe(args)
+     require "racecar/liveness_probe"
+     parse_options!(args)
+
+     if ENV["RAILS_ENV"]
+       Racecar.config.load_file("config/racecar.yml", ENV["RAILS_ENV"])
+     end
+
+     Racecar.config.liveness_probe.check_liveness_within_interval!
+   end
+
    def status(args)
      parse_options!(args)

@@ -244,6 +244,50 @@ module Racecar
        # Number of messages ACK'd for the topic.
        increment("producer.ack.messages", tags: tags)
      end
+
+     def produce_async(event)
+       client = event.payload.fetch(:client_id)
+       topic = event.payload.fetch(:topic)
+       message_size = event.payload.fetch(:message_size)
+       buffer_size = event.payload.fetch(:buffer_size)
+
+       tags = {
+         client: client,
+         topic: topic,
+       }
+
+       # This gets us the write rate.
+       increment("producer.produce.messages", tags: tags.merge(topic: topic))
+
+       # Information about typical/average/95p message size.
+       histogram("producer.produce.message_size", message_size, tags: tags.merge(topic: topic))
+
+       # Aggregate message size.
+       count("producer.produce.message_size.sum", message_size, tags: tags.merge(topic: topic))
+
+       # This gets us the avg/max buffer size per producer.
+       histogram("producer.buffer.size", buffer_size, tags: tags)
+     end
+
+     def produce_sync(event)
+       client = event.payload.fetch(:client_id)
+       topic = event.payload.fetch(:topic)
+       message_size = event.payload.fetch(:message_size)
+
+       tags = {
+         client: client,
+         topic: topic,
+       }
+
+       # This gets us the write rate.
+       increment("producer.produce.messages", tags: tags.merge(topic: topic))
+
+       # Information about typical/average/95p message size.
+       histogram("producer.produce.message_size", message_size, tags: tags.merge(topic: topic))
+
+       # Aggregate message size.
+       count("producer.produce.message_size.sum", message_size, tags: tags.merge(topic: topic))
+     end

      attach_to "racecar"
    end
@@ -9,16 +9,9 @@ module Racecar
      NAMESPACE = "racecar"
      attr_reader :backend

-     def initialize(default_payload = {})
+     def initialize(backend:, default_payload: {})
+       @backend = backend
        @default_payload = default_payload
-
-       @backend = if defined?(ActiveSupport::Notifications)
-         # ActiveSupport needs `concurrent-ruby` but doesn't `require` it.
-         require 'concurrent/utility/monotonic_time'
-         ActiveSupport::Notifications
-       else
-         NullInstrumenter
-       end
      end

      def instrument(event_name, payload = {}, &block)
@@ -0,0 +1,78 @@
+ require "fileutils"
+
+ module Racecar
+   class LivenessProbe
+     def initialize(message_bus, file_path, max_interval)
+       @message_bus = message_bus
+       @file_path = file_path
+       @max_interval = max_interval
+       @subscribers = []
+     end
+
+     attr_reader :message_bus, :file_path, :max_interval, :subscribers
+     private :message_bus, :file_path, :max_interval, :subscribers
+
+     def check_liveness_within_interval!
+       unless liveness_event_within_interval?
+         $stderr.puts "Racecar healthcheck failed: No liveness within interval #{max_interval}s. Last liveness at #{last_liveness_event_at}, #{elapsed_since_liveness_event} seconds ago."
+         Process.exit(1)
+       end
+     end
+
+     def liveness_event_within_interval?
+       elapsed_since_liveness_event < max_interval
+     rescue Errno::ENOENT
+       $stderr.puts "Racecar healthcheck failed: Liveness file not found `#{file_path}`"
+       Process.exit(1)
+     end
+
+     def install
+       unless file_path && file_writeable?
+         raise(
+           "Liveness probe configuration error: `liveness_probe_file_path` must be set to a writable file path.\n" \
+           " Set `RACECAR_LIVENESS_PROBE_FILE_PATH` and `RACECAR_LIVENESS_MAX_INTERVAL` environment variables."
+         )
+       end
+
+       subscribers << message_bus.subscribe("start_main_loop.racecar") do
+         touch_liveness_file
+       end
+
+       subscribers = message_bus.subscribe("shut_down.racecar") do
+         delete_liveness_file
+       end
+
+       nil
+     end
+
+     def uninstall
+       subscribers.each { |s| message_bus.unsubscribe(s) }
+     end
+
+     private
+
+     def elapsed_since_liveness_event
+       Time.now - last_liveness_event_at
+     end
+
+     def last_liveness_event_at
+       File.mtime(file_path)
+     end
+
+     def touch_liveness_file
+       FileUtils.touch(file_path)
+     end
+
+     def delete_liveness_file
+       FileUtils.rm_rf(file_path)
+     end
+
+     def file_writeable?
+       File.write(file_path, "")
+       File.unlink(file_path)
+       true
+     rescue
+       false
+     end
+   end
+ end
@@ -0,0 +1,129 @@
+ # frozen_string_literal: true
+
+ require "racecar/message_delivery_error"
+
+ at_exit do
+   Racecar::Producer.shutdown!
+ end
+
+ module Racecar
+   class Producer
+
+     @@mutex = Mutex.new
+
+     class << self
+       def shutdown!
+         @@mutex.synchronize do
+           if !@internal_producer.nil?
+             @internal_producer.close
+           end
+         end
+       end
+     end
+
+     def initialize(config: nil, logger: nil, instrumenter: NullInstrumenter)
+       @config = config
+       @logger = logger
+       @delivery_handles = []
+       @instrumenter = instrumenter
+       @batching = false
+       @internal_producer = init_internal_producer(config)
+     end
+
+     def init_internal_producer(config)
+       @@mutex.synchronize do
+         @@init_internal_producer ||= begin
+           # https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
+           producer_config = {
+             "bootstrap.servers" => config.brokers.join(","),
+             "client.id" => config.client_id,
+             "statistics.interval.ms" => config.statistics_interval_ms,
+             "message.timeout.ms" => config.message_timeout * 1000,
+           }
+           producer_config["compression.codec"] = config.producer_compression_codec.to_s unless config.producer_compression_codec.nil?
+           producer_config.merge!(config.rdkafka_producer)
+           Rdkafka::Config.new(producer_config).producer
+         end
+       end
+     end
+
+
+     # fire and forget - you won't get any guarantees or feedback from
+     # Racecar on the status of the message and it won't halt execution
+     # of the rest of your code.
+     def produce_async(value:, topic:, **options)
+       with_instrumentation(action: "produce_async", value: value, topic: topic, **options) do
+         handle = internal_producer.produce(payload: value, topic: topic, **options)
+         @delivery_handles << handle if @batching
+       end
+
+       nil
+     end
+
+     # synchronous message production - will wait until the delivery handle succeeds, fails or times out.
+     def produce_sync(value:, topic:, **options)
+       with_instrumentation(action: "produce_sync", value: value, topic: topic, **options) do
+         handle = internal_producer.produce(payload: value, topic: topic, **options)
+         deliver_with_error_handling(handle)
+       end
+
+       nil
+     end
+
+     # Blocks until all messages that have been asynchronously produced in the block have been delivered.
+     # Usage:
+     #   messages = [
+     #     {value: "message1", topic: "topic1"},
+     #     {value: "message2", topic: "topic1"},
+     #     {value: "message3", topic: "topic2"}
+     #   ]
+     #   Racecar.wait_for_delivery {
+     #     messages.each do |msg|
+     #       Racecar.produce_async(value: msg[:value], topic: msg[:topic])
+     #     end
+     #   }
+     def wait_for_delivery
+       @batching = true
+       @delivery_handles.clear
+       yield
+       @delivery_handles.each do |handle|
+         deliver_with_error_handling(handle)
+       end
+     ensure
+       @delivery_handles.clear
+       @batching = false
+
+       nil
+     end
+
+     private
+
+     attr_reader :internal_producer
+
+     def deliver_with_error_handling(handle)
+       handle.wait
+     rescue Rdkafka::AbstractHandle::WaitTimeoutError => e
+       partition = MessageDeliveryError.partition_from_delivery_handle(handle)
+       @logger.warn "Still trying to deliver message to (partition #{partition})... (will try up to Racecar.config.message_timeout)"
+       retry
+     rescue Rdkafka::RdkafkaError => e
+       raise MessageDeliveryError.new(e, handle)
+     end
+
+     def with_instrumentation(action:, value:, topic:, **options)
+       message_size = value.respond_to?(:bytesize) ? value.bytesize : 0
+       instrumentation_payload = {
+         value: value,
+         topic: topic,
+         message_size: message_size,
+         buffer_size: @delivery_handles.size,
+         key: options.fetch(:key, nil),
+         partition: options.fetch(:partition, nil),
+         partition_key: options.fetch(:partition_key, nil)
+       }
+       @instrumenter.instrument(action, instrumentation_payload) do
+         yield
+       end
+     end
+   end
+ end
@@ -67,6 +67,8 @@ module Racecar
        loop do
          break if @stop_requested
          resume_paused_partitions
+
+         @instrumenter.instrument("start_main_loop", instrumentation_payload)
          @instrumenter.instrument("main_loop", instrumentation_payload) do
            case process_method
            when :batch then
@@ -94,6 +96,7 @@ module Racecar
      ensure
        producer.close
        Racecar::Datadog.close if Object.const_defined?("Racecar::Datadog")
+       @instrumenter.instrument("shut_down", instrumentation_payload || {})
      end

      def stop
@@ -149,7 +152,9 @@ module Racecar
        "client.id" => config.client_id,
        "statistics.interval.ms" => config.statistics_interval_ms,
        "message.timeout.ms" => config.message_timeout * 1000,
+       "partitioner" => config.partitioner.to_s,
      }
+
      producer_config["compression.codec"] = config.producer_compression_codec.to_s unless config.producer_compression_codec.nil?
      producer_config.merge!(config.rdkafka_producer)
      producer_config
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Racecar
-   VERSION = "2.8.2"
+   VERSION = "2.9.0.beta1"
  end
data/lib/racecar.rb CHANGED
@@ -8,6 +8,7 @@ require "racecar/consumer"
  require "racecar/consumer_set"
  require "racecar/runner"
  require "racecar/parallel_runner"
+ require "racecar/producer"
  require "racecar/config"
  require "racecar/version"
  require "ensure_hash_compact"
@@ -39,20 +40,33 @@ module Racecar
      config.logger = logger
    end

-   def self.instrumenter
-     @instrumenter ||= begin
-       default_payload = { client_id: config.client_id, group_id: config.group_id }
+   def self.produce_async(value:, topic:, **options)
+     producer.produce_async(value: value, topic: topic, **options)
+   end
+
+   def self.produce_sync(value:, topic:, **options)
+     producer.produce_sync(value: value, topic: topic, **options)
+   end
+
+   def self.wait_for_delivery(&block)
+     producer.wait_for_delivery(&block)
+   end

-     Instrumenter.new(default_payload).tap do |instrumenter|
-       if instrumenter.backend == NullInstrumenter
-         logger.warn "ActiveSupport::Notifications not available, instrumentation is disabled"
-       end
+   def self.producer
+     Thread.current[:racecar_producer] ||= begin
+       if config.datadog_enabled
+         require "racecar/datadog"
        end
+       Racecar::Producer.new(config: config, logger: logger, instrumenter: instrumenter)
      end
    end

+   def self.instrumenter
+     config.instrumenter
+   end
+
    def self.run(processor)
-     runner = Runner.new(processor, config: config, logger: logger, instrumenter: instrumenter)
+     runner = Runner.new(processor, config: config, logger: logger, instrumenter: config.instrumenter)

      if config.parallel_workers && config.parallel_workers > 1
        ParallelRunner.new(runner: runner, config: config, logger: logger).run
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: racecar
  version: !ruby/object:Gem::Version
-   version: 2.8.2
+   version: 2.9.0.beta1
  platform: ruby
  authors:
  - Daniel Schierbeck
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2022-07-12 00:00:00.000000000 Z
+ date: 2023-03-22 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: king_konf
@@ -165,9 +165,9 @@ executables:
  extensions: []
  extra_rdoc_files: []
  files:
- - ".circleci/config.yml"
  - ".github/dependabot.yml"
  - ".github/workflows/ci.yml"
+ - ".github/workflows/publish.yml"
  - ".gitignore"
  - ".rspec"
  - CHANGELOG.md
@@ -203,11 +203,13 @@ files:
  - lib/racecar/erroneous_state_error.rb
  - lib/racecar/heroku.rb
  - lib/racecar/instrumenter.rb
+ - lib/racecar/liveness_probe.rb
  - lib/racecar/message.rb
  - lib/racecar/message_delivery_error.rb
  - lib/racecar/null_instrumenter.rb
  - lib/racecar/parallel_runner.rb
  - lib/racecar/pause.rb
+ - lib/racecar/producer.rb
  - lib/racecar/rails_config_file_loader.rb
  - lib/racecar/runner.rb
  - lib/racecar/version.rb
@@ -227,11 +229,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
        version: '2.6'
  required_rubygems_version: !ruby/object:Gem::Requirement
    requirements:
-   - - ">="
+   - - ">"
      - !ruby/object:Gem::Version
-       version: '0'
+       version: 1.3.1
  requirements: []
- rubygems_version: 3.0.3
+ rubygems_version: 3.0.3.1
  signing_key:
  specification_version: 4
  summary: A framework for running Kafka consumers
data/.circleci/config.yml DELETED
@@ -1,56 +0,0 @@
- version: 2.1
- orbs:
-   ruby: circleci/ruby@0.1.2
-
- jobs:
-   build:
-     docker:
-       - image: circleci/ruby:2.6.3-stretch-node
-     executor: ruby/default
-     steps:
-       - checkout
-       - run:
-           name: Which bundler?
-           command: bundle -v
-       - ruby/bundle-install
-       - run: bundle exec rspec --exclude-pattern='spec/integration/*_spec.rb'
-   integration-tests:
-     docker:
-       - image: circleci/ruby:2.6.3-stretch-node
-       - image: wurstmeister/zookeeper
-       - image: wurstmeister/kafka:2.11-2.0.0
-         environment:
-           KAFKA_ADVERTISED_HOST_NAME: localhost
-           KAFKA_ADVERTISED_PORT: 9092
-           KAFKA_PORT: 9092
-           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
-           KAFKA_DELETE_TOPIC_ENABLE: true
-       - image: wurstmeister/kafka:2.11-2.0.0
-         environment:
-           KAFKA_ADVERTISED_HOST_NAME: localhost
-           KAFKA_ADVERTISED_PORT: 9093
-           KAFKA_PORT: 9093
-           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
-           KAFKA_DELETE_TOPIC_ENABLE: true
-       - image: wurstmeister/kafka:2.11-2.0.0
-         environment:
-           KAFKA_ADVERTISED_HOST_NAME: localhost
-           KAFKA_ADVERTISED_PORT: 9094
-           KAFKA_PORT: 9094
-           KAFKA_ZOOKEEPER_CONNECT: localhost:2181
-           KAFKA_DELETE_TOPIC_ENABLE: true
-     executor: ruby/default
-     steps:
-       - checkout
-       - run:
-           name: Which bundler?
-           command: bundle -v
-       - ruby/bundle-install
-       - run: bundle exec rspec --pattern='spec/integration/*_spec.rb'
-
- workflows:
-   version: 2
-   test:
-     jobs:
-       - build
-       - integration-tests