karafka 2.0.0.rc5 → 2.0.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 52fde84aac9ffef63ecbc1d367e3bfdc7209f6c2904f2a7142f51b1bd0613002
-  data.tar.gz: 4b4e7838e19505469ec20a654a3d7bb9c2588f351f9ea64fdb05e224b476d562
+  metadata.gz: 5adfb5654381a04d5b7111806bd5bbba70f75f02e7044b57af09b91b547dbd09
+  data.tar.gz: 9500da87035507a037218e9df5a11c8803e738c707b5d624c19559917939b689
 SHA512:
-  metadata.gz: 0ebdda184eca4c7dcfe39821125ba7a8645336607300395de792a7c8808cde07f54f5382278946fab58af0e95f3a85ab05be9ed6e7b292fc7692922cb7890747
-  data.tar.gz: 416cb01ed1b6dac0681969ce72505d19855a041e7ceb29e369ed9fc80675755ad1f478a20ee2d1f2c0e925036ba6f08adda727f8ecd3f4f07f39f0be347ae4a1
+  metadata.gz: 1acb20378ecdf95b87378297de714f6791f5abd3d411fbda4701231912e30033163c3b81a21f98a99d2ca247dbbe8cda72d81efc7b1d83a3aa019485dd2e8604
+  data.tar.gz: 2a8d352d68852da005c87b0494457abf3559ec3975deb1098da233e16e4e33fed517020733300b2364ad650cc3764d59deb7fff6efe63e216e4fdb057c3e4ed8
checksums.yaml.gz.sig CHANGED
Binary file
@@ -73,10 +73,6 @@ jobs:
           ruby-version: ${{matrix.ruby}}
           bundler-cache: true
 
-      - name: Ensure all needed Kafka topics are created and wait if not
-        run: |
-          bin/wait_for_kafka
-
       - name: Run all specs
         env:
           GITHUB_COVERAGE: ${{matrix.coverage}}
@@ -120,10 +116,6 @@ jobs:
           bundle config set without development
           bundle install
 
-      - name: Ensure all needed Kafka topics are created and wait if not
-        run: |
-          bin/wait_for_kafka
-
       - name: Run integration tests
         env:
           KARAFKA_PRO_LICENSE_TOKEN: ${{ secrets.KARAFKA_PRO_LICENSE_TOKEN }}
data/CHANGELOG.md CHANGED
@@ -1,5 +1,101 @@
 # Karafka framework changelog
 
+## 2.0.1 (2022-08-06)
+- Provide `Karafka::Admin` for creation and destruction of topics and fetching cluster info.
+- Update integration specs to always use one-time disposable topics.
+- Remove the no longer needed `wait_for_kafka` script.
+- Add more integration specs covering offset management upon errors.
+
+## 2.0.0 (2022-08-05)
+
+This changelog describes changes between `1.4` and `2.0`. Please refer to the appropriate release notes for changes between particular `rc` releases.
+
+Karafka 2.0 is a **major** rewrite that brings many new things to the table but also removes specific concepts that happened not to be as good as I initially thought when I created them.
+
+Please consider getting a Pro version if you want to **support** my work on the Karafka ecosystem!
+
+For anyone worried that I will start converting regular features into Pro: this will **not** happen. Anything free and fully OSS in Karafka 1.4 will **forever** remain free. Most additions and improvements to the ecosystem are to its free parts. Any feature that is introduced as a free and open one will not become paid.
+
+### Additions
+
+This section describes **new** things and concepts introduced with Karafka 2.0.
+
+Karafka 2.0:
+
+- Introduces multi-threaded support for [concurrent work](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) consumption, both across separate partitions and within a single partition via [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions).
+- Introduces an [Active Job adapter](https://github.com/karafka/karafka/wiki/Active-Job) for using Karafka as a jobs backend with Ruby on Rails Active Job.
+- Introduces a fully automatic, end-to-end integration [test suite](https://github.com/karafka/karafka/tree/master/spec/integrations) that checks any case I could imagine.
+- Introduces [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) for the ability to parallelize work on a single partition.
+- Introduces [Long-Running Jobs](https://github.com/karafka/karafka/wiki/Pro-Long-Running-Jobs) to allow for work that would otherwise exceed the `max.poll.interval.ms`.
+- Introduces the [Enhanced Scheduler](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Scheduler) that uses a non-preemptive LJF (Longest Job First) algorithm instead of a FIFO (First-In, First-Out) one.
+- Introduces the [Enhanced Active Job adapter](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job) that is optimized and allows for strong ordering of jobs and more.
+- Introduces seamless [Ruby on Rails integration](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks) via `Rails::Railtie` without the need for any extra configuration.
+- Provides a `#revoked` [method](https://github.com/karafka/karafka/wiki/Consuming-messages#shutdown-and-partition-revocation-handlers) for taking actions upon topic revocation.
+- Emits underlying async errors from `librdkafka` via the standardized `error.occurred` [monitor channel](https://github.com/karafka/karafka/wiki/Error-handling-and-back-off-policy#error-tracking).
+- Replaces `ruby-kafka` with `librdkafka` as the underlying driver.
+- Introduces official [EOL policies](https://github.com/karafka/karafka/wiki/Versions-Lifecycle-and-EOL).
+- Introduces [benchmarks](https://github.com/karafka/karafka/tree/master/spec/benchmarks) that can be used to profile Karafka.
+- Introduces a requirement that end-user code **needs** to be [thread-safe](https://github.com/karafka/karafka/wiki/FAQ#does-karafka-require-gems-to-be-thread-safe).
+- Introduces a [Pro subscription](https://github.com/karafka/karafka/wiki/Build-vs.-Buy) with a [commercial license](https://github.com/karafka/karafka/blob/master/LICENSE-COMM) to fund further ecosystem development.
+
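The Active Job adapter listed above needs only a queue adapter switch plus a routing entry. A minimal sketch, assuming a Rails app with the `karafka` gem installed; the topic name, broker address, and `WelcomeJob` are illustrative, not from this release:

```ruby
# app/jobs/application_job.rb - tell Rails to dispatch jobs through Karafka
class ApplicationJob < ActiveJob::Base
  self.queue_adapter = :karafka
end

# karafka.rb - route the jobs topic to the Active Job consumer
class KarafkaApp < Karafka::App
  setup do |config|
    # Broker address is an assumption for this sketch
    config.kafka = { 'bootstrap.servers': '127.0.0.1:9092' }
  end

  routes.draw do
    # Messages published to this topic are consumed as Active Job jobs
    active_job_topic :default
  end
end

# Enqueue as usual; the job travels through the Kafka topic:
# WelcomeJob.perform_later(user_id)
```

Jobs are then processed by the regular `karafka server` process alongside any other consumers.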
+### Deletions
+
+This section describes things that are **no longer** part of the Karafka ecosystem.
+
+Karafka 2.0:
+
+- Removes the topic mappers concept completely.
+- Removes pidfiles support.
+- Removes daemonization support.
+- Removes support for `sidekiq-backend` due to the introduction of [multi-threading](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading).
+- Removes the `Responders` concept in favour of WaterDrop producer usage.
+- Removes all the callbacks completely in favour of the finalizer method `#shutdown`.
+- Removes the single message consumption mode in favour of [documentation](https://github.com/karafka/karafka/wiki/Consuming-messages#one-at-a-time) on how to do it easily by yourself.
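For reference, the removed single-message mode is easy to approximate on top of the 2.0 batch `#messages` API. A sketch; `EventsConsumer` and `#process` are illustrative names, not part of Karafka itself:

```ruby
# One-at-a-time processing on top of the batch API: handle each message and
# commit its offset individually, so a crash never skips an unprocessed message.
class EventsConsumer < Karafka::BaseConsumer
  def consume
    messages.each do |message|
      process(message)
      # Marking after each message restores per-message delivery semantics
      mark_as_consumed(message)
    end
  end

  private

  # Hypothetical per-message handler
  def process(message)
    Karafka.logger.info("Processing #{message.offset}: #{message.raw_payload}")
  end
end
```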
54
+
55
+ ### Changes
56
+
57
+ This section describes things that were **changed** in Karafka but are still present.
58
+
59
+ Karafka 2.0:
60
+
61
+ - Uses only instrumentation that comes from Karafka. This applies also to notifications coming natively from `librdkafka`. They are now piped through Karafka prior to being dispatched.
62
+ - Integrates WaterDrop `2.x` tightly with autoconfiguration inheritance and an option to redefine it.
63
+ - Integrates with the `karafka-testing` gem for RSpec that also has been updated.
64
+ - Updates `cli info` to reflect the `2.0` details.
65
+ - Stops validating `kafka` configuration beyond minimum as the rest is handled by `librdkafka`.
66
+ - No longer uses `dry-validation`.
67
+ - No longer uses `dry-monitor`.
68
+ - No longer uses `dry-configurable`.
69
+ - Lowers general external dependencies three **heavily**.
70
+ - Renames `Karafka::Params::BatchMetadata` to `Karafka::Messages::BatchMetadata`.
71
+ - Renames `Karafka::Params::Params` to `Karafka::Messages::Message`.
72
+ - Renames `#params_batch` in consumers to `#messages`.
73
+ - Renames `Karafka::Params::Metadata` to `Karafka::Messages::Metadata`.
74
+ - Renames `Karafka::Fetcher` to `Karafka::Runner` and align notifications key names.
75
+ - Renames `StdoutListener` to `LoggerListener`.
76
+ - Reorganizes [monitoring and logging](https://github.com/karafka/karafka/wiki/Monitoring-and-logging) to match new concepts.
77
+ - Notifies on fatal worker processing errors.
78
+ - Contains updated install templates for Rails and no-non Rails.
79
+ - Changes how the routing style (`0.5`) behaves. It now builds a single consumer group instead of one per topic.
80
+ - Introduces changes that will allow me to build full web-UI in the upcoming `2.1`.
81
+ - Contains updated example apps.
82
+ - Standardizes error hooks for all error reporting (`error.occurred`).
83
+ - Changes license to `LGPL-3.0`.
84
+ - Introduces a `karafka-core` dependency that contains common code used across the ecosystem.
85
+ - Contains updated [wiki](https://github.com/karafka/karafka/wiki) on everything I could think of.
86
+
87
+ ### What's ahead
88
+
89
+ Karafka 2.0 is just the beginning.
90
+
91
+ There are several things in the plan already for 2.1 and beyond, including a web dashboard, at-rest encryption, transactions support, and more.
92
+
93
+ ## 2.0.0.rc6 (2022-08-05)
94
+ - Update licenser to use a gem based approach based on `karafka-license`.
95
+ - Do not mark intermediate jobs as consumed when Karafka runs Enhanced Active Job with Virtual Partitions.
96
+ - Improve development experience by adding fast cluster state changes refresh (#944)
97
+ - Improve the license loading.
98
+
3
99
  ## 2.0.0.rc5 (2022-08-01)
4
100
  - Improve specs stability
5
101
  - Improve forceful shutdown
@@ -53,7 +149,7 @@
53
149
  - Add more integration specs related to polling limits.
54
150
  - Remove auto-detection of re-assigned partitions upon rebalance as for too fast rebalances it could not be accurate enough. It would also mess up in case of rebalances that would happen right after a `#seek` was issued for a partition.
55
151
  - Optimize the removal of pre-buffered lost partitions data.
56
- - Always rune `#revoked` when rebalance with revocation happens.
152
+ - Always run `#revoked` when rebalance with revocation happens.
57
153
  - Evict executors upon rebalance, to prevent race-conditions.
58
154
  - Align topics names for integration specs.
59
155
 
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    karafka (2.0.0.rc5)
+    karafka (2.0.1)
       karafka-core (>= 2.0.2, < 3.0.0)
       rdkafka (>= 0.12)
       thor (>= 0.20)
data/LICENSE-COMM CHANGED
@@ -6,7 +6,7 @@ IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT ("EULA") IS A LEGAL AGREEMEN
 ------------------------------------------------------------------------------
 
-In order to use the Software under this Agreement, you must receive an application token at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.
+In order to use the Software under this Agreement, you must receive a "Source URL" to a license package at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.
 
 1. License Grant
 
@@ -22,7 +22,7 @@ In order to use the Software under this Agreement, you must receive an applicati
 
 3. Restricted Uses.
 
-3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of token verification or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique application token is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.
+3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique Source URL is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.
 
 3.2 UNDER NO CIRCUMSTANCES MAY YOU USE THE SOFTWARE AS PART OF A PRODUCT OR SERVICE THAT PROVIDES SIMILAR FUNCTIONALITY TO THE SOFTWARE ITSELF.
 
data/README.md CHANGED
@@ -4,14 +4,12 @@
 [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
 [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
 
-**Note**: All of the documentation here refers to Karafka `2.0.0.rc4` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
-
 ## About Karafka
 
 Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:
 
 - Supports parallel processing in [multiple threads](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) (also for a [single topic partition](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) work)
-- Has [ActiveJob backend](https://github.com/karafka/karafka/wiki/Active-Job) support (including ordered jobs)
+- Has [ActiveJob backend](https://github.com/karafka/karafka/wiki/Active-Job) support (including [ordered jobs](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job#ordered-jobs))
 - [Automatically integrates](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks#integrating-with-ruby-on-rails=) with Ruby on Rails
 - Supports in-development [code reloading](https://github.com/karafka/karafka/wiki/Auto-reload-of-code-changes-in-development)
 - Is powered by [librdkafka](https://github.com/edenhill/librdkafka) (the Apache Kafka C/C++ client library)
@@ -55,7 +53,7 @@ We also maintain many [integration specs](https://github.com/karafka/karafka/tree/master/spec/integrations)
 1. Add and install Karafka:
 
 ```bash
-bundle add karafka -v 2.0.0.rc4
+bundle add karafka
 
 bundle exec karafka install
 ```
data/bin/create_token CHANGED
@@ -9,16 +9,10 @@ PRIVATE_KEY_LOCATION = File.join(Dir.home, '.ssh', 'karafka-pro', 'id_rsa')
 # Name of the entity that acquires the license
 ENTITY = ARGV[0]
-# Date till which the license is valid
-EXPIRES_ON = Date.parse(ARGV[1])
 
 raise ArgumentError, 'Entity missing' if ENTITY.nil? || ENTITY.empty?
-raise ArgumentError, 'Expires on needs to be in the future' if EXPIRES_ON <= Date.today
 
-pro_token_data = {
-  entity: ENTITY,
-  expires_on: EXPIRES_ON
-}
+pro_token_data = { entity: ENTITY }
 
 # This code uses my private key to generate a new token for Karafka Pro capabilities
 private_key = OpenSSL::PKey::RSA.new(File.read(PRIVATE_KEY_LOCATION))
data/bin/integrations CHANGED
@@ -22,7 +22,7 @@ ROOT_PATH = Pathname.new(File.expand_path(File.join(File.dirname(__FILE__), '../
 CONCURRENCY = ENV.key?('CI') ? 5 : Etc.nprocessors * 2
 
 # How many bytes do we want to keep from the stdout in the buffer for when we need to print it
-MAX_BUFFER_OUTPUT = 10_240
+MAX_BUFFER_OUTPUT = 51_200
 
 # Abstraction around a single test scenario execution process
 class Scenario
data/docker-compose.yml CHANGED
@@ -16,36 +16,7 @@ services:
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
       KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
       KAFKA_CREATE_TOPICS:
-        "integrations_00_02:2:1,\
-        integrations_01_02:2:1,\
-        integrations_02_02:2:1,\
-        integrations_03_02:2:1,\
-        integrations_04_02:2:1,\
-        integrations_05_02:2:1,\
-        integrations_06_02:2:1,\
-        integrations_07_02:2:1,\
-        integrations_08_02:2:1,\
-        integrations_09_02:2:1,\
-        integrations_10_02:2:1,\
-        integrations_11_02:2:1,\
-        integrations_12_02:2:1,\
-        integrations_13_02:2:1,\
-        integrations_14_02:2:1,\
-        integrations_15_02:2:1,\
-        integrations_16_02:2:1,\
-        integrations_17_02:2:1,\
-        integrations_18_02:2:1,\
-        integrations_19_02:2:1,\
-        integrations_20_02:2:1,\
-        integrations_21_02:2:1,\
-        integrations_00_03:3:1,\
-        integrations_01_03:3:1,\
-        integrations_02_03:3:1,\
-        integrations_03_03:3:1,\
-        integrations_04_03:3:1,\
-        integrations_00_10:10:1,\
-        integrations_01_10:10:1,\
-        benchmarks_00_01:1:1,\
+        "benchmarks_00_01:1:1,\
         benchmarks_00_05:5:1,\
         benchmarks_01_05:5:1,\
         benchmarks_00_10:10:1"
@@ -0,0 +1,57 @@
+# frozen_string_literal: true
+
+module Karafka
+  # Simple admin actions that we can perform via Karafka on our Kafka cluster
+  #
+  # @note It always initializes a new admin instance as we want to ensure it is always closed.
+  #   Since admin actions are not performed that often, that should be ok.
+  #
+  # @note It always uses the primary defined cluster and does not support multi-cluster work.
+  #   If you need this, just replace the cluster info for the time you use this
+  class Admin
+    class << self
+      # Creates a Kafka topic with given settings
+      #
+      # @param name [String] topic name
+      # @param partitions [Integer] number of partitions we expect
+      # @param replication_factor [Integer] number of replicas
+      # @param topic_config [Hash] topic config details as described here:
+      #   https://kafka.apache.org/documentation/#topicconfigs
+      def create_topic(name, partitions, replication_factor, topic_config = {})
+        with_admin do |admin|
+          admin
+            .create_topic(name, partitions, replication_factor, topic_config)
+            .wait
+        end
+      end
+
+      # Deletes a given topic
+      #
+      # @param name [String] topic name
+      def delete_topic(name)
+        with_admin do |admin|
+          admin
+            .delete_topic(name)
+            .wait
+        end
+      end
+
+      # @return [Rdkafka::Metadata] cluster metadata info
+      def cluster_info
+        with_admin do |admin|
+          Rdkafka::Metadata.new(admin.instance_variable_get('@native_kafka'))
+        end
+      end
+
+      private
+
+      # Creates an admin instance and yields it. After usage it closes the admin instance
+      def with_admin
+        admin = ::Rdkafka::Config.new(Karafka::App.config.kafka).admin
+        result = yield(admin)
+        admin.close
+        result
+      end
+    end
+  end
+end
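The new `Karafka::Admin` class above can be exercised roughly like this. A sketch that assumes a configured Karafka app and a reachable Kafka cluster; the topic name and retention override are illustrative:

```ruby
require 'securerandom'

# Disposable topic, mirroring what the updated integration specs now do
topic_name = "events-#{SecureRandom.hex(6)}"

# Create a topic with 2 partitions, replication factor 1 and a retention override
Karafka::Admin.create_topic(topic_name, 2, 1, 'retention.ms' => '86400000')

# Inspect the cluster - returns Rdkafka::Metadata with brokers and topics details
info = Karafka::Admin.cluster_info
puts info.topics.map { |topic| topic[:topic_name] }

# Clean up the disposable topic when done
Karafka::Admin.delete_topic(topic_name)
```

Since each call opens and closes its own admin instance, these helpers are meant for occasional administrative work rather than hot paths.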
@@ -52,8 +52,7 @@ module Karafka
     if Karafka.pro?
       [
         'License: Commercial',
-        "License entity: #{config.license.entity}",
-        "License expires on: #{config.license.expires_on}"
+        "License entity: #{config.license.entity}"
       ]
     else
       [
@@ -345,6 +345,9 @@ module Karafka
       when :network_exception # 13
         reset
         return nil
+      when :unknown_topic_or_part
+        # This is expected and temporary until rdkafka catches up with metadata
+        return nil
       end
 
       raise if time_poll.attempts > MAX_POLL_RETRIES
@@ -364,7 +367,6 @@ module Karafka
       config = ::Rdkafka::Config.new(@subscription_group.kafka)
       config.consumer_rebalance_listener = @rebalance_manager
       consumer = config.consumer
-      consumer.subscribe(*@subscription_group.topics.map(&:name))
       @name = consumer.name
 
       # Register statistics runner for this particular type of callbacks
@@ -389,6 +391,9 @@ module Karafka
         )
       )
 
+      # Subscription needs to happen after we have assigned the rebalance callbacks, just in
+      # case of a race condition
+      consumer.subscribe(*@subscription_group.topics.map(&:name))
       consumer
     end
@@ -21,7 +21,6 @@ module Karafka
     nested(:license) do
       required(:token) { |val| [true, false].include?(val) || val.is_a?(String) }
       required(:entity) { |val| val.is_a?(String) }
-      required(:expires_on) { |val| val.is_a?(Date) }
     end
 
     required(:client_id) { |val| val.is_a?(String) && Contracts::TOPIC_REGEXP.match?(val) }
@@ -5,19 +5,19 @@ module Karafka
     # Listener that sets a proc title with a nice descriptive value
     class ProctitleListener
       # Updates proc title to an initializing one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_initializing(_event)
         setproctitle('initializing')
       end
 
       # Updates proc title to a running one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_running(_event)
         setproctitle('running')
       end
 
       # Updates proc title to a stopping one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_stopping(_event)
         setproctitle('stopping')
       end
@@ -65,7 +65,7 @@ module Karafka
 
       # Hooks up to WaterDrop instrumentation for emitted statistics
       #
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_statistics_emitted(event)
         statistics = event[:statistics]
 
@@ -76,7 +76,7 @@ module Karafka
 
       # Increases the errors count by 1
       #
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_error_occurred(event)
         extra_tags = ["type:#{event[:type]}"]
 
@@ -94,7 +94,7 @@ module Karafka
 
       # Reports how many messages we've polled and how much time we spent on it
       #
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_connection_listener_fetch_loop_received(event)
         time_taken = event[:time]
         messages_count = event[:messages_buffer].size
@@ -105,7 +105,7 @@ module Karafka
 
       # Here we report the majority of things related to processing as we have access to the
       # consumer
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_consumer_consumed(event)
         messages = event.payload[:caller].messages
         metadata = messages.metadata
@@ -124,7 +124,7 @@ module Karafka
         histogram('consumer.consumption_lag', metadata.consumption_lag, tags: tags)
       end
 
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_consumer_revoked(event)
         messages = event.payload[:caller].messages
         metadata = messages.metadata
@@ -137,7 +137,7 @@ module Karafka
         count('consumer.revoked', 1, tags: tags)
       end
 
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
      def on_consumer_shutdown(event)
         messages = event.payload[:caller].messages
         metadata = messages.metadata
@@ -151,7 +151,7 @@ module Karafka
       end
 
       # Worker related metrics
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_worker_process(event)
         jq_stats = event[:jobs_queue].statistics
 
@@ -162,7 +162,7 @@ module Karafka
 
       # We report this metric before and after processing for higher accuracy.
       # Without this, the utilization would not be fully reflected
-      # @param event [Dry::Events::Event]
+      # @param event [Karafka::Core::Monitoring::Event]
       def on_worker_processed(event)
         jq_stats = event[:jobs_queue].statistics
 
@@ -8,8 +8,34 @@ module Karafka
 
     private_constant :PUBLIC_KEY_LOCATION
 
+    # Tries to prepare the license and verifies it
+    #
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+    def prepare_and_verify(license_config)
+      prepare(license_config)
+      verify(license_config)
+    end
+
+    private
+
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+    def prepare(license_config)
+      # If there is a token, no action is needed.
+      # We support a case where someone would put the token in directly instead of using one
+      # from the license gem. That's in case there are limitations to using external package
+      # sources, etc
+      return if license_config.token
+
+      begin
+        license_config.token || require('karafka-license')
+      rescue LoadError
+        return
+      end
+
+      license_config.token = Karafka::License.token
+    end
+
     # Check license and setup license details (if needed)
-    # @param license_config [Dry::Configurable::Config] config part related to the licensing
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
     def verify(license_config)
       # If no license, it will just run LGPL components without anything extra
       return unless license_config.token
@@ -29,19 +55,10 @@ module Karafka
       details = data ? JSON.parse(data) : raise_invalid_license_token(license_config)
 
       license_config.entity = details.fetch('entity')
-      license_config.expires_on = Date.parse(details.fetch('expires_on'))
-
-      return if license_config.expires_on > Date.today
-
-      raise_expired_license_token_in_dev(license_config.expires_on)
-
-      notify_if_license_expired(license_config.expires_on)
     end
 
-    private
-
     # Raises an error with info that the used token is invalid
-    # @param license_config [Dry::Configurable::Config]
+    # @param license_config [Karafka::Core::Configurable::Node]
     def raise_invalid_license_token(license_config)
       # We set it to false so the `Karafka.pro?` method behaves as expected
       license_config.token = false
@@ -54,42 +71,5 @@ module Karafka
       MSG
       )
     end
-
-    # Raises an error for test and dev environments if running pro with an expired license.
-    # We never want to cause any non-dev problems and we should never crash anything other
-    # than tests and development envs.
-    #
-    # @param expires_on [Date] when the license expires
-    def raise_expired_license_token_in_dev(expires_on)
-      env = Karafka::App.env
-
-      return unless env.development? || env.test?
-
-      raise Errors::ExpiredLicenseTokenError.new, expired_message(expires_on)
-    end
-
-    # We do not raise an error here as we don't want to cause any problems to someone that
-    # runs Karafka on production. An error message is enough.
-    #
-    # @param expires_on [Date] when the license expires
-    def notify_if_license_expired(expires_on)
-      Karafka.logger.error(expired_message(expires_on))
-
-      Karafka.monitor.instrument(
-        'error.occurred',
-        caller: self,
-        error: Errors::ExpiredLicenseTokenError.new(expired_message(expires_on)),
-        type: 'licenser.expired'
-      )
-    end
-
-    # @param expires_on [Date] when the license expires
-    # @return [String] expired message
-    def expired_message(expires_on)
-      <<~MSG.tr("\n", ' ')
-        Your license expired on #{expires_on}.
-        Please reach us at contact@karafka.io or visit https://karafka.io to obtain a valid one.
-      MSG
-    end
   end
 end
@@ -33,6 +33,10 @@ module Karafka
           ::ActiveSupport::JSON.decode(message.raw_payload)
         )
 
+        # We cannot mark jobs as done after each one if there are virtual partitions.
+        # Otherwise this could create random markings
+        next if topic.virtual_partitioner?
+
         mark_as_consumed(message)
       end
     end
@@ -36,7 +36,7 @@ module Karafka

  class << self
  # Loads all the pro components and configures them wherever it is expected
- # @param config [Dry::Configurable::Config] whole app config that we can alter with pro
+ # @param config [Karafka::Core::Configurable::Node] app config that we can alter with pro
  # components
  def setup(config)
  COMPONENTS.each { |component| require_relative(component) }
@@ -45,7 +45,7 @@ module Karafka
  end

  # @private
- # @param event [Dry::Events::Event] event details
+ # @param event [Karafka::Core::Monitoring::Event] event details
  # Tracks time taken to process a single message of a given topic partition
  def on_consumer_consumed(event)
  consumer = event[:caller]
@@ -1,5 +1,14 @@
  # frozen_string_literal: true

+ # This Karafka component is a Pro component.
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
  module Karafka
  module Pro
  module Processing
@@ -19,7 +19,20 @@ module Karafka
  'client.id': 'karafka'
  }.freeze

- private_constant :KAFKA_DEFAULTS
+ # Contains settings that should not be used in production but make life easier in dev
+ DEV_DEFAULTS = {
+ # Will create non-existing topics automatically.
+ # Note that the broker needs to be configured with `auto.create.topics.enable=true`
+ # While it is not recommended in prod, it simplifies work in dev
+ 'allow.auto.create.topics': 'true',
+ # We refresh the cluster state often as newly created topics in dev may not be detected
+ # fast enough. Fast enough means within reasonable time to provide decent user experience
+ # While it's only a one time thing for new topics, it can still be irritating to have to
+ # restart the process.
+ 'topic.metadata.refresh.interval.ms': 5_000
+ }.freeze
+
+ private_constant :KAFKA_DEFAULTS, :DEV_DEFAULTS

  # Available settings
@@ -34,8 +47,6 @@ module Karafka
34
47
  setting :token, default: false
35
48
  # option entity [String] for whom we did issue the license
36
49
  setting :entity, default: ''
37
- # option expires_on [Date] date when the license expires
38
- setting :expires_on, default: Date.parse('2100-01-01')
39
50
  end
40
51
 
41
52
  # option client_id [String] kafka client_id - used to provide
@@ -136,8 +147,10 @@ module Karafka

  Contracts::Config.new.validate!(config.to_h)

- # Check the license presence (if needed) and
- Licenser.new.verify(config.license)
+ licenser = Licenser.new
+
+ # Tries to load our license gem and if present will try to load the correct license
+ licenser.prepare_and_verify(config.license)

  configure_components

@@ -149,13 +162,21 @@ module Karafka
  # Propagates the kafka setting defaults unless they are already present
  # This makes it easier to set some values that users usually don't change but still allows
  # them to overwrite the whole hash if they want to
- # @param config [Dry::Configurable::Config] dry config of this producer
+ # @param config [Karafka::Core::Configurable::Node] config of this producer
  def merge_kafka_defaults!(config)
  KAFKA_DEFAULTS.each do |key, value|
  next if config.kafka.key?(key)

  config.kafka[key] = value
  end
+
+ return if Karafka::App.env.production?
+
+ DEV_DEFAULTS.each do |key, value|
+ next if config.kafka.key?(key)
+
+ config.kafka[key] = value
+ end
  end

  # Sets up all the components that are based on the user configuration
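The `merge_kafka_defaults!` change above layers two sets of defaults under the user's kafka settings: the base defaults always, and the dev-only defaults unless the app runs in production, never overriding a key the user already set. A minimal standalone sketch (a plain Hash and a `production:` flag stand in for `config.kafka` and `Karafka::App.env.production?`):

```ruby
# Defaults copied from the diff above; frozen so they cannot be mutated.
KAFKA_DEFAULTS = { 'client.id': 'karafka' }.freeze

DEV_DEFAULTS = {
  'allow.auto.create.topics': 'true',
  'topic.metadata.refresh.interval.ms': 5_000
}.freeze

# Merges defaults into `kafka` in place, skipping keys the user already set.
# Dev-only defaults are applied only outside production.
def merge_kafka_defaults(kafka, production: false)
  KAFKA_DEFAULTS.each { |key, value| kafka[key] = value unless kafka.key?(key) }

  return kafka if production

  DEV_DEFAULTS.each { |key, value| kafka[key] = value unless kafka.key?(key) }

  kafka
end
```

Note that `'client.id':` is Ruby symbol-key shorthand, so lookups use `:'client.id'`, matching how librdkafka option names are written as quoted symbols in the real config.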
@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '2.0.0.rc5'
+ VERSION = '2.0.1'
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.0.0.rc5
+ version: 2.0.1
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -34,7 +34,7 @@ cert_chain:
  R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
  pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
  -----END CERTIFICATE-----
- date: 2022-08-01 00:00:00.000000000 Z
+ date: 2022-08-06 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -152,7 +152,6 @@ files:
  - bin/scenario
  - bin/stress_many
  - bin/stress_one
- - bin/wait_for_kafka
  - certs/karafka-pro.pem
  - certs/mensfeld.pem
  - config/errors.yml
@@ -166,6 +165,7 @@ files:
  - lib/karafka/active_job/job_extensions.rb
  - lib/karafka/active_job/job_options_contract.rb
  - lib/karafka/active_job/routing/extensions.rb
+ - lib/karafka/admin.rb
  - lib/karafka/app.rb
  - lib/karafka/base_consumer.rb
  - lib/karafka/cli.rb
@@ -287,9 +287,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
  version: 2.7.0
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
- - - ">"
+ - - ">="
  - !ruby/object:Gem::Version
- version: 1.3.1
+ version: '0'
  requirements: []
  rubygems_version: 3.3.7
  signing_key:
metadata.gz.sig CHANGED
Binary file
data/bin/wait_for_kafka DELETED
@@ -1,20 +0,0 @@
1
- #!/bin/bash
2
-
3
- # This script allows us to wait for Kafka docker to fully be ready
4
- # We consider it fully ready when all our topics that need to be created are created as expected
5
-
6
- KAFKA_NAME='karafka_20_kafka'
7
- ZOOKEEPER='zookeeper:2181'
8
- LIST_CMD="kafka-topics.sh --list --zookeeper $ZOOKEEPER"
9
-
10
- # Take the number of topics that we need to create prior to running anything
11
- TOPICS_COUNT=`cat docker-compose.yml | grep -E -i 'integrations_|benchmarks_' | wc -l`
12
-
13
- # And wait until all of them are created
14
- until (((`docker exec $KAFKA_NAME $LIST_CMD | wc -l`) >= $TOPICS_COUNT));
15
- do
16
- echo "Waiting for Kafka to create all the needed topics..."
17
- sleep 1
18
- done
19
-
20
- echo "All the needed topics created."