karafka 2.0.0.rc4 → 2.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 526402b906d00f844c5b25925a854bc45f5127dc8c06646f1cd1432f265dfb81
- data.tar.gz: c58f6be491c4c4e82237307d37874100588c97241439788325600bea77d4254f
+ metadata.gz: 06f96b3ba14b7910d3cb23a90bdc135927f1483d5af4d079d59b3d6940d391b9
+ data.tar.gz: 2d936d4ddac360e229004c05f6ee51d004018e3c4cf4203067a99e6df93df92e
  SHA512:
- metadata.gz: '08922eef9890af84f1b444329061283f7987258200a9e338b892944a6ca061a60abed273e02acf44f91728f0ed219ec494e96c05a1155db2321e941e24c8c328'
- data.tar.gz: 5b3de319f1887af51eeda3ffc5b42a86601a382a55ab30e8cc7abce0526d4471ff274b3627250d47ec9a70de763fd337a7c8eff79add2b9409c935133bbe3610
+ metadata.gz: caeb9bcf1f0301f31442025176f3da631aa8ff9c21164e112b9ce3112372309d4b7fe59863cf7cf511e6a6bc4448f61448f3bc2024b101d4ae2668a358e5bbe9
+ data.tar.gz: cfc422fae74512c2142ef36f9d4b5c3024bf318d79dc83456e27553aebc177cec7a3772f6378160c8ecdf5a3b2626d862ba91d823fae6e30cbb593d4198474a9
checksums.yaml.gz.sig CHANGED
@@ -1,3 +1,3 @@
(binary signature content changed)
data/CHANGELOG.md CHANGED
@@ -1,8 +1,108 @@
  # Karafka framework changelog
 
+ ## 2.0.0 (2022-08-05)
+
+ This changelog describes changes between `1.4` and `2.0`. Please refer to the appropriate release notes for changes between particular `rc` releases.
+
+ Karafka 2.0 is a **major** rewrite that brings many new things to the table but also removes concepts that turned out not to be as good as I initially thought when I created them.
+
+ Please consider getting a Pro version if you want to **support** my work on the Karafka ecosystem!
+
+ For anyone worried that I will start converting regular features into Pro: this will **not** happen. Anything free and fully OSS in Karafka 1.4 will **forever** remain free. Most additions and improvements to the ecosystem are to its free parts. Any feature that is introduced as a free and open one will not become paid.
+
+ ### Additions
+
+ This section describes **new** things and concepts introduced with Karafka 2.0.
+
+ Karafka 2.0:
+
+ - Introduces multi-threaded support for [concurrent work](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) consumption across separate partitions, as well as within a single partition via [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions).
+ - Introduces an [Active Job adapter](https://github.com/karafka/karafka/wiki/Active-Job) for using Karafka as a jobs backend with Ruby on Rails Active Job.
+ - Introduces a fully automatic, end-to-end integration [test suite](https://github.com/karafka/karafka/tree/master/spec/integrations) that checks every case I could imagine.
+ - Introduces [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) for the ability to parallelize work within a single partition.
+ - Introduces [Long-Running Jobs](https://github.com/karafka/karafka/wiki/Pro-Long-Running-Jobs) to allow for work that would otherwise exceed the `max.poll.interval.ms`.
+ - Introduces the [Enhanced Scheduler](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Scheduler) that uses a non-preemptive LJF (Longest Job First) algorithm instead of a FIFO (First-In, First-Out) one.
+ - Introduces an [Enhanced Active Job adapter](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job) that is optimized and allows for strong ordering of jobs and more.
+ - Introduces seamless [Ruby on Rails integration](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks) via `Rails::Railtie` without the need for any extra configuration.
+ - Provides a `#revoked` [method](https://github.com/karafka/karafka/wiki/Consuming-messages#shutdown-and-partition-revocation-handlers) for taking actions upon topic revocation.
+ - Emits underlying async errors from `librdkafka` via the standardized `error.occurred` [monitor channel](https://github.com/karafka/karafka/wiki/Error-handling-and-back-off-policy#error-tracking).
+ - Replaces `ruby-kafka` with `librdkafka` as the underlying driver.
+ - Introduces official [EOL policies](https://github.com/karafka/karafka/wiki/Versions-Lifecycle-and-EOL).
+ - Introduces [benchmarks](https://github.com/karafka/karafka/tree/master/spec/benchmarks) that can be used to profile Karafka.
+ - Introduces a requirement that end-user code **needs** to be [thread-safe](https://github.com/karafka/karafka/wiki/FAQ#does-karafka-require-gems-to-be-thread-safe).
+ - Introduces a [Pro subscription](https://github.com/karafka/karafka/wiki/Build-vs.-Buy) with a [commercial license](https://github.com/karafka/karafka/blob/master/LICENSE-COMM) to fund further ecosystem development.
+
+ ### Deletions
+
+ This section describes things that are **no longer** part of the Karafka ecosystem.
+
+ Karafka 2.0:
+
+ - Removes the topic mappers concept completely.
+ - Removes pidfile support.
+ - Removes daemonization support.
+ - Removes support for `sidekiq-backend` due to the introduction of [multi-threading](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading).
+ - Removes the `Responders` concept in favour of WaterDrop producer usage.
+ - Removes all callbacks in favour of the `#shutdown` finalizer method.
+ - Removes the single message consumption mode in favour of [documentation](https://github.com/karafka/karafka/wiki/Consuming-messages#one-at-a-time) on how to do it easily yourself.
+
+ ### Changes
+
+ This section describes things that were **changed** in Karafka but are still present.
+
+ Karafka 2.0:
+
+ - Uses only instrumentation that comes from Karafka. This also applies to notifications coming natively from `librdkafka`. They are now piped through Karafka prior to being dispatched.
+ - Integrates WaterDrop `2.x` tightly, with autoconfiguration inheritance and an option to redefine it.
+ - Integrates with the `karafka-testing` gem for RSpec, which has also been updated.
+ - Updates `cli info` to reflect the `2.0` details.
+ - Stops validating the `kafka` configuration beyond the minimum, as the rest is handled by `librdkafka`.
+ - No longer uses `dry-validation`.
+ - No longer uses `dry-monitor`.
+ - No longer uses `dry-configurable`.
+ - Lowers the general external dependencies tree **heavily**.
+ - Renames `Karafka::Params::BatchMetadata` to `Karafka::Messages::BatchMetadata`.
+ - Renames `Karafka::Params::Params` to `Karafka::Messages::Message`.
+ - Renames `#params_batch` in consumers to `#messages`.
+ - Renames `Karafka::Params::Metadata` to `Karafka::Messages::Metadata`.
+ - Renames `Karafka::Fetcher` to `Karafka::Runner` and aligns notification key names.
+ - Renames `StdoutListener` to `LoggerListener`.
+ - Reorganizes [monitoring and logging](https://github.com/karafka/karafka/wiki/Monitoring-and-logging) to match the new concepts.
+ - Notifies on fatal worker processing errors.
+ - Contains updated install templates for Rails and non-Rails applications.
+ - Changes how the routing style (`0.5`) behaves. It now builds a single consumer group instead of one per topic.
+ - Introduces changes that will allow me to build a full web UI in the upcoming `2.1`.
+ - Contains updated example apps.
+ - Standardizes error hooks for all error reporting (`error.occurred`).
+ - Changes the license to `LGPL-3.0`.
+ - Introduces a `karafka-core` dependency that contains common code used across the ecosystem.
+ - Contains an updated [wiki](https://github.com/karafka/karafka/wiki) covering everything I could think of.
+
+ ### What's ahead
+
+ Karafka 2.0 is just the beginning.
+
+ There are several things in the plan already for 2.1 and beyond, including a web dashboard, at-rest encryption, transactions support, and more.
+
+ ## 2.0.0.rc6 (2022-08-05)
+ - Update the licenser to use a gem-based approach built on `karafka-license`.
+ - Do not mark intermediate jobs as consumed when Karafka runs the Enhanced Active Job adapter with Virtual Partitions.
+ - Improve the development experience by adding fast cluster state changes refresh (#944).
+ - Improve the license loading.
+
+ ## 2.0.0.rc5 (2022-08-01)
+ - Improve specs stability
+ - Improve forceful shutdown
+ - Add support for debug `TTIN` backtrace printing
+ - Fix a case where the logger listener would not intercept the `warn` level
+ - Require `rdkafka` >= `0.12`
+ - Replace the statistics decorator with the one from `karafka-core`
+
  ## 2.0.0.rc4 (2022-07-28)
  - Remove `dry-monitor`
  - Use `karafka-core`
+ - Improve forceful shutdown resources finalization
+ - Cache consumer client name
 
  ## 2.0.0.rc3 (2022-07-26)
  - Fix Pro partitioner hash function may not utilize all the threads (#907).
@@ -43,7 +143,7 @@
  - Add more integration specs related to polling limits.
  - Remove auto-detection of re-assigned partitions upon rebalance as for too fast rebalances it could not be accurate enough. It would also mess up in case of rebalances that would happen right after a `#seek` was issued for a partition.
  - Optimize the removal of pre-buffered lost partitions data.
- - Always rune `#revoked` when rebalance with revocation happens.
+ - Always run `#revoked` when rebalance with revocation happens.
  - Evict executors upon rebalance, to prevent race-conditions.
  - Align topics names for integration specs.
 
@@ -69,8 +169,8 @@
  - Fix a case where consecutive CTRL+C (non-stop) would case an exception during forced shutdown
  - Add missing `consumer.prepared.error` into `LoggerListener`
  - Delegate partition resuming from the consumers to listeners threads.
- - Add support for Long Running Jobs (LRJ) for ActiveJob [PRO]
- - Add support for Long Running Jobs for consumers [PRO]
+ - Add support for Long-Running Jobs (LRJ) for ActiveJob [PRO]
+ - Add support for Long-Running Jobs for consumers [PRO]
  - Allow `active_job_topic` to accept a block for extra topic related settings
  - Remove no longer needed logger threads
  - Auto-adapt number of processes for integration specs based on the number of CPUs
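The changelog above mentions the Enhanced Scheduler's non-preemptive LJF (Longest Job First) ordering replacing FIFO. As an illustration of the idea only (not Karafka's actual scheduler code), the difference can be sketched in a few lines of Ruby, assuming each pending job carries a hypothetical estimated cost:

```ruby
# Illustrative sketch of FIFO vs non-preemptive LJF ordering.
# `estimated_cost` is an assumed per-job cost metric (e.g. a recent average
# processing time); Karafka's real scheduler internals may differ.
Job = Struct.new(:id, :estimated_cost)

# FIFO: jobs are scheduled in arrival order
def fifo_order(jobs)
  jobs.dup
end

# LJF: the most expensive jobs are scheduled first, which helps keep worker
# threads saturated when job durations vary widely
def ljf_order(jobs)
  jobs.sort_by { |job| -job.estimated_cost }
end
```

With jobs of estimated costs 10, 250 and 40 arriving in that order, LJF schedules the 250-cost job first, while FIFO keeps arrival order.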
data/Gemfile.lock CHANGED
@@ -1,11 +1,11 @@
  PATH
  remote: .
  specs:
- karafka (2.0.0.rc4)
- karafka-core (>= 2.0.0, < 3.0.0)
- rdkafka (>= 0.10)
+ karafka (2.0.0)
+ karafka-core (>= 2.0.2, < 3.0.0)
+ rdkafka (>= 0.12)
  thor (>= 0.20)
- waterdrop (>= 2.4.0, < 3.0.0)
+ waterdrop (>= 2.4.1, < 3.0.0)
  zeitwerk (~> 2.3)
 
  GEM
@@ -30,7 +30,7 @@ GEM
  activesupport (>= 5.0)
  i18n (1.12.0)
  concurrent-ruby (~> 1.0)
- karafka-core (2.0.0)
+ karafka-core (2.0.2)
  concurrent-ruby (>= 1.1)
  mini_portile2 (2.8.0)
  minitest (5.16.2)
@@ -61,8 +61,8 @@ GEM
  thor (1.2.1)
  tzinfo (2.0.5)
  concurrent-ruby (~> 1.0)
- waterdrop (2.4.0)
- karafka-core (~> 2.0)
+ waterdrop (2.4.1)
+ karafka-core (>= 2.0.2, < 3.0.0)
  rdkafka (>= 0.10)
  zeitwerk (~> 2.3)
  zeitwerk (2.6.0)
data/LICENSE-COMM CHANGED
@@ -6,7 +6,7 @@ IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT ("EULA") IS A LEGAL AGREEMEN
 
  ------------------------------------------------------------------------------
 
- In order to use the Software under this Agreement, you must receive an application token at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.
+ In order to use the Software under this Agreement, you must receive a "Source URL" to a license package at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.
 
  1. License Grant
 
@@ -22,7 +22,7 @@ In order to use the Software under this Agreement, you must receive an applicati
 
  3. Restricted Uses.
 
- 3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of token verification or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique application token is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.
+ 3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique Source URL is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.
 
  3.2 UNDER NO CIRCUMSTANCES MAY YOU USE THE SOFTWARE AS PART OF A PRODUCT OR SERVICE THAT PROVIDES SIMILAR FUNCTIONALITY TO THE SOFTWARE ITSELF.
 
data/README.md CHANGED
@@ -4,14 +4,14 @@
  [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
  [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
 
- **Note**: All of the documentation here refers to Karafka `2.0.0.rc3` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
+ **Note**: All of the documentation here refers to Karafka `2.0.0` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
 
  ## About Karafka
 
  Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:
 
  - Supports parallel processing in [multiple threads](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) (also for a [single topic partition](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) work)
- - Has [ActiveJob backend](https://github.com/karafka/karafka/wiki/Active-Job) support (including ordered jobs)
+ - Has [ActiveJob backend](https://github.com/karafka/karafka/wiki/Active-Job) support (including [ordered jobs](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job#ordered-jobs))
  - [Automatically integrates](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks#integrating-with-ruby-on-rails=) with Ruby on Rails
  - Supports in-development [code reloading](https://github.com/karafka/karafka/wiki/Auto-reload-of-code-changes-in-development)
  - Is powered by [librdkafka](https://github.com/edenhill/librdkafka) (the Apache Kafka C/C++ client library)
@@ -55,7 +55,7 @@ We also maintain many [integration specs](https://github.com/karafka/karafka/tre
  1. Add and install Karafka:
 
  ```bash
- bundle add karafka -v 2.0.0.rc3
+ bundle add karafka
 
  bundle exec karafka install
  ```
data/bin/create_token CHANGED
@@ -9,16 +9,10 @@ PRIVATE_KEY_LOCATION = File.join(Dir.home, '.ssh', 'karafka-pro', 'id_rsa')
 
  # Name of the entity that acquires the license
  ENTITY = ARGV[0]
- # Date till which license is valid
- EXPIRES_ON = Date.parse(ARGV[1])
 
  raise ArgumentError, 'Entity missing' if ENTITY.nil? || ENTITY.empty?
- raise ArgumentError, 'Expires on needs to be in the future' if EXPIRES_ON <= Date.today
 
- pro_token_data = {
- entity: ENTITY,
- expires_on: EXPIRES_ON
- }
+ pro_token_data = { entity: ENTITY }
 
  # This code uses my private key to generate a new token for Karafka Pro capabilities
  private_key = OpenSSL::PKey::RSA.new(File.read(PRIVATE_KEY_LOCATION))
data/bin/integrations CHANGED
@@ -22,7 +22,7 @@ ROOT_PATH = Pathname.new(File.expand_path(File.join(File.dirname(__FILE__), '../
  CONCURRENCY = ENV.key?('CI') ? 5 : Etc.nprocessors * 2
 
  # How many bytes do we want to keep from the stdout in the buffer for when we need to print it
- MAX_BUFFER_OUTPUT = 10_240
+ MAX_BUFFER_OUTPUT = 51_200
 
  # Abstraction around a single test scenario execution process
  class Scenario
@@ -37,7 +37,7 @@ class Scenario
  'consumption/worker_critical_error_behaviour.rb' => [0, 2].freeze,
  'shutdown/on_hanging_jobs_and_a_shutdown.rb' => [2].freeze,
  'shutdown/on_hanging_on_shutdown_job_and_a_shutdown.rb' => [2].freeze,
- 'shutdown/on_hanging_poll_and_shutdown.rb' => [2].freeze
+ 'shutdown/on_hanging_listener_and_shutdown.rb' => [2].freeze
  }.freeze
 
  private_constant :MAX_RUN_TIME, :EXIT_CODES
data/karafka.gemspec CHANGED
@@ -16,10 +16,10 @@ Gem::Specification.new do |spec|
  spec.description = 'Framework used to simplify Apache Kafka based Ruby applications development'
  spec.licenses = ['LGPL-3.0', 'Commercial']
 
- spec.add_dependency 'karafka-core', '>= 2.0.0', '< 3.0.0'
- spec.add_dependency 'rdkafka', '>= 0.10'
+ spec.add_dependency 'karafka-core', '>= 2.0.2', '< 3.0.0'
+ spec.add_dependency 'rdkafka', '>= 0.12'
  spec.add_dependency 'thor', '>= 0.20'
- spec.add_dependency 'waterdrop', '>= 2.4.0', '< 3.0.0'
+ spec.add_dependency 'waterdrop', '>= 2.4.1', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'
 
  spec.required_ruby_version = '>= 2.7.0'
@@ -52,8 +52,7 @@ module Karafka
  if Karafka.pro?
  [
  'License: Commercial',
- "License entity: #{config.license.entity}",
- "License expires on: #{config.license.expires_on}"
+ "License entity: #{config.license.entity}"
  ]
  else
  [
@@ -275,6 +275,11 @@ module Karafka
 
  # Commits the stored offsets in a sync way and closes the consumer.
  def close
+ # Once client is closed, we should not close it again
+ # This could only happen in case of a race-condition when forceful shutdown happens
+ # and triggers this from a different thread
+ return if @closed
+
  @mutex.synchronize do
  internal_commit_offsets(async: false)
 
@@ -340,6 +345,9 @@
  when :network_exception # 13
  reset
  return nil
+ when :unknown_topic_or_part
+ # This is expected and temporary until rdkafka catches up with metadata
+ return nil
  end
 
  raise if time_poll.attempts > MAX_POLL_RETRIES
@@ -359,7 +367,6 @@
  config = ::Rdkafka::Config.new(@subscription_group.kafka)
  config.consumer_rebalance_listener = @rebalance_manager
  consumer = config.consumer
- consumer.subscribe(*@subscription_group.topics.map(&:name))
  @name = consumer.name
 
  # Register statistics runner for this particular type of callbacks
@@ -384,6 +391,9 @@
  )
  )
 
+ # Subscription needs to happen after we assigned the rebalance callbacks just in case of
+ # a race condition
+ consumer.subscribe(*@subscription_group.topics.map(&:name))
  consumer
  end
 
@@ -34,6 +34,8 @@ module Karafka
  # We can do this that way because we always first schedule jobs using messages before we
  # fetch another batch.
  @messages_buffer = MessagesBuffer.new(subscription_group)
+ @mutex = Mutex.new
+ @stopped = false
  end
 
  # Runs the main listener fetch loop.
@@ -51,6 +53,25 @@
  fetch_loop
  end
 
+ # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
+ # stops kafka client.
+ #
+ # @note This method is not private despite being part of the fetch loop because in case of
+ # a forceful shutdown, it may be invoked from a separate thread
+ #
+ # @note We wrap it with a mutex exactly because of the above case of forceful shutdown
+ def shutdown
+ return if @stopped
+
+ @mutex.synchronize do
+ @stopped = true
+ @executors.clear
+ @coordinators.reset
+ @client.commit_offsets!
+ @client.stop
+ end
+ end
+
  private
 
  # Fetches the data and adds it to the jobs queue.
@@ -239,13 +260,6 @@
  @client.batch_poll until @jobs_queue.empty?(@subscription_group.id)
  end
 
- # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
- # stops kafka client.
- def shutdown
- @client.commit_offsets!
- @client.stop
- end
-
  # We can stop client without a problem, as it will reinitialize itself when running the
  # `#fetch_loop` again. We just need to remember to also reset the runner as it is a long
  # running one, so with a new connection to Kafka, we need to initialize the state of the
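The listener's new `#shutdown` above guards against double invocation when a forceful shutdown triggers it from another thread. That pattern, an idempotent, mutex-protected stop, can be shown in isolation. The class below is a simplified stand-in, not Karafka's listener; it additionally re-checks the flag inside the lock, closing the small window between the early return and lock acquisition:

```ruby
# Minimal sketch of an idempotent, thread-safe shutdown guard.
class StoppableResource
  attr_reader :stop_calls

  def initialize
    @mutex = Mutex.new
    @stopped = false
    @stop_calls = 0
  end

  def shutdown
    # Fast path: already stopped, nothing to do
    return if @stopped

    @mutex.synchronize do
      # Re-check under the lock in case another thread won the race
      next if @stopped

      @stopped = true
      @stop_calls += 1 # stands in for committing offsets and stopping the client
    end
  end
end
```

Calling `#shutdown` concurrently from many threads performs the finalization exactly once.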
@@ -21,7 +21,6 @@ module Karafka
  nested(:license) do
  required(:token) { |val| [true, false].include?(val) || val.is_a?(String) }
  required(:entity) { |val| val.is_a?(String) }
- required(:expires_on) { |val| val.is_a?(Date) }
  end
 
  required(:client_id) { |val| val.is_a?(String) && Contracts::TOPIC_REGEXP.match?(val) }
@@ -16,8 +16,7 @@ module Karafka
  @consumer_group_id = consumer_group_id
  @client_name = client_name
  @monitor = monitor
- # We decorate both Karafka and WaterDrop statistics the same way
- @statistics_decorator = ::WaterDrop::Instrumentation::Callbacks::StatisticsDecorator.new
+ @statistics_decorator = ::Karafka::Core::Monitoring::StatisticsDecorator.new
  end
 
  # Emits decorated statistics to the monitor
@@ -9,6 +9,7 @@ module Karafka
  USED_LOG_LEVELS = %i[
  debug
  info
+ warn
  error
  fatal
  ].freeze
@@ -60,11 +61,28 @@ module Karafka
  info "[#{job.id}] #{job_type} job for #{consumer} on #{topic} finished in #{time}ms"
  end
 
- # Logs info about system signals that Karafka received.
+ # Logs info about system signals that Karafka received and prints backtrace for threads in
+ # case of ttin
  #
  # @param event [Dry::Events::Event] event details including payload
  def on_process_notice_signal(event)
  info "Received #{event[:signal]} system signal"
+
+ # We print backtrace only for ttin
+ return unless event[:signal] == :SIGTTIN
+
+ # Inspired by Sidekiq
+ Thread.list.each do |thread|
+ tid = (thread.object_id ^ ::Process.pid).to_s(36)
+
+ warn "Thread TID-#{tid} #{thread['label']}"
+
+ if thread.backtrace
+ warn thread.backtrace.join("\n")
+ else
+ warn '<no backtrace available>'
+ end
+ end
  end
 
  # Logs info that we're initializing Karafka app.
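The `TTIN` handling added above walks every live thread and logs its backtrace. Extracted into a standalone helper (an illustrative adaptation writing to an IO object, not the exact listener code), the technique looks like this:

```ruby
# Dumps a backtrace for every live thread, tagging each with a short
# thread id derived the same way as in the listener above.
def dump_thread_backtraces(io = $stdout)
  Thread.list.each do |thread|
    # XOR with the PID and base-36 encode for a compact, stable-per-run tag
    tid = (thread.object_id ^ Process.pid).to_s(36)
    io.puts "Thread TID-#{tid}"

    if thread.backtrace
      io.puts thread.backtrace.join("\n")
    else
      io.puts '<no backtrace available>'
    end
  end
end
```

Hooked up to a `SIGTTIN` trap, this gives a cheap way to inspect what a live process is doing without stopping it.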
@@ -5,19 +5,19 @@ module Karafka
  # Listener that sets a proc title with a nice descriptive value
  class ProctitleListener
  # Updates proc title to an initializing one
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_initializing(_event)
  setproctitle('initializing')
  end
 
  # Updates proc title to a running one
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_running(_event)
  setproctitle('running')
  end
 
  # Updates proc title to a stopping one
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_stopping(_event)
  setproctitle('stopping')
  end
@@ -65,7 +65,7 @@ module Karafka
 
  # Hooks up to WaterDrop instrumentation for emitted statistics
  #
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_statistics_emitted(event)
  statistics = event[:statistics]
 
@@ -76,7 +76,7 @@
 
  # Increases the errors count by 1
  #
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_error_occurred(event)
  extra_tags = ["type:#{event[:type]}"]
 
@@ -94,7 +94,7 @@
 
  # Reports how many messages we've polled and how much time did we spend on it
  #
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_connection_listener_fetch_loop_received(event)
  time_taken = event[:time]
  messages_count = event[:messages_buffer].size
@@ -105,7 +105,7 @@
 
  # Here we report majority of things related to processing as we have access to the
  # consumer
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_consumer_consumed(event)
  messages = event.payload[:caller].messages
  metadata = messages.metadata
@@ -124,7 +124,7 @@
  histogram('consumer.consumption_lag', metadata.consumption_lag, tags: tags)
  end
 
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_consumer_revoked(event)
  messages = event.payload[:caller].messages
  metadata = messages.metadata
@@ -137,7 +137,7 @@
  count('consumer.revoked', 1, tags: tags)
  end
 
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_consumer_shutdown(event)
  messages = event.payload[:caller].messages
  metadata = messages.metadata
@@ -151,7 +151,7 @@
  end
 
  # Worker related metrics
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_worker_process(event)
  jq_stats = event[:jobs_queue].statistics
 
@@ -162,7 +162,7 @@
 
  # We report this metric before and after processing for higher accuracy
  # Without this, the utilization would not be fully reflected
- # @param event [Dry::Events::Event]
+ # @param event [Karafka::Core::Monitoring::Event]
  def on_worker_processed(event)
  jq_stats = event[:jobs_queue].statistics
 
@@ -8,8 +8,34 @@ module Karafka
 
  private_constant :PUBLIC_KEY_LOCATION
 
+ # Tries to prepare license and verifies it
+ #
+ # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+ def prepare_and_verify(license_config)
+ prepare(license_config)
+ verify(license_config)
+ end
+
+ private
+
+ # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+ def prepare(license_config)
+ # If there is token, no action needed
+ # We support a case where someone would put the token in instead of using one from the
+ # license. That's in case there are limitations to using external package sources, etc
+ return if license_config.token
+
+ begin
+ license_config.token || require('karafka-license')
+ rescue LoadError
+ return
+ end
+
+ license_config.token = Karafka::License.token
+ end
+
  # Check license and setup license details (if needed)
- # @param license_config [Dry::Configurable::Config] config part related to the licensing
+ # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
  def verify(license_config)
  # If no license, it will just run LGPL components without anything extra
  return unless license_config.token
@@ -29,19 +55,10 @@ module Karafka
  details = data ? JSON.parse(data) : raise_invalid_license_token(license_config)
 
  license_config.entity = details.fetch('entity')
- license_config.expires_on = Date.parse(details.fetch('expires_on'))
-
- return if license_config.expires_on > Date.today
-
- raise_expired_license_token_in_dev(license_config.expires_on)
-
- notify_if_license_expired(license_config.expires_on)
  end
 
- private
-
  # Raises an error with info, that used token is invalid
- # @param license_config [Dry::Configurable::Config]
+ # @param license_config [Karafka::Core::Configurable::Node]
  def raise_invalid_license_token(license_config)
  # We set it to false so `Karafka.pro?` method behaves as expected
  license_config.token = false
@@ -54,42 +71,5 @@ module Karafka
  MSG
  )
  end
-
- # Raises an error for test and dev environments if running pro with expired license
- # We never want to cause any non-dev problems and we should never crash anything else than
- # tests and development envs.
- #
- # @param expires_on [Date] when the license expires
- def raise_expired_license_token_in_dev(expires_on)
- env = Karafka::App.env
-
- return unless env.development? || env.test?
-
- raise Errors::ExpiredLicenseTokenError.new, expired_message(expires_on)
- end
-
- # We do not raise an error here as we don't want to cause any problems to someone that runs
- # Karafka on production. Error message is enough.
- #
- # @param expires_on [Date] when the license expires
- def notify_if_license_expired(expires_on)
- Karafka.logger.error(expired_message(expires_on))
-
- Karafka.monitor.instrument(
- 'error.occurred',
- caller: self,
- error: Errors::ExpiredLicenseTokenError.new(expired_message(expires_on)),
- type: 'licenser.expired'
- )
- end
-
- # @param expires_on [Date] when the license expires
- # @return [String] expired message
- def expired_message(expires_on)
- <<~MSG.tr("\n", ' ')
- Your license expired on #{expires_on}.
- Please reach us at contact@karafka.io or visit https://karafka.io to obtain a valid one.
- MSG
- end
  end
 end
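The new `prepare` step above treats the `karafka-license` gem as optional: a `LoadError` on require simply means "no license" rather than a failure, and an explicitly configured token always short-circuits the lookup. A minimal standalone sketch of that pattern (`FakeConfig` and `load_token` are illustrative names, not Karafka API; reading the token off the loaded gem is stubbed out):

```ruby
# Stand-in for the licensing config node
FakeConfig = Struct.new(:token)

def load_token(config, gem_name: 'karafka-license')
  # An explicitly configured token always wins and skips the gem lookup
  return config.token if config.token

  begin
    require gem_name
  rescue LoadError
    # The gem is optional; absence just means running the OSS components
    return nil
  end

  # Real code would now read the token constant the gem defines;
  # stubbed here since no such gem is loaded
  nil
end
```

The key design point is that the rescue makes gem absence a supported state instead of an error path.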
@@ -12,7 +12,7 @@ module Karafka
  # @note We need this to make sure that we allocate proper dispatched events only to
  # callback listeners that should publish them
  def name
- ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
+ @name ||= ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
  end
  end
  end
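The change above caches the FFI name lookup with `||=` so the binding is called at most once per client instance. A generic sketch of that memoization (`ExpensiveName` stands in for the Rdkafka binding, which is not loaded here):

```ruby
class ExpensiveName
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def name
    # Computed on first access only; later calls return the cached value
    @name ||= begin
      @calls += 1
      "rdkafka#consumer-#{object_id}"
    end
  end
end
```

The usual `||=` caveat applies: if the expression could legitimately return `nil` or `false`, it would be recomputed on every call; that is safe here because a client name is always a non-empty string.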
@@ -33,6 +33,10 @@ module Karafka
  ::ActiveSupport::JSON.decode(message.raw_payload)
  )
 
+ # We cannot mark jobs as done after each if there are virtual partitions. Otherwise
+ # this could create random markings
+ next if topic.virtual_partitioner?
+
  mark_as_consumed(message)
  end
  end
@@ -36,7 +36,7 @@ module Karafka
  end
  end
 
- # Runs extra logic after consumption that is related to handling long running jobs
+ # Runs extra logic after consumption that is related to handling long-running jobs
  # @note This overwrites the '#on_after_consume' from the base consumer
  def on_after_consume
  coordinator.on_finished do |first_group_message, last_group_message|
@@ -59,7 +59,7 @@ module Karafka
  # Mark as consumed only if manual offset management is not on
  mark_as_consumed(last_message) unless topic.manual_offset_management? || revoked?
 
- # If this is not a long running job there is nothing for us to do here
+ # If this is not a long-running job there is nothing for us to do here
  return unless topic.long_running_job?
 
  # Once processing is done, we move to the new offset based on commits
@@ -36,7 +36,7 @@ module Karafka
 
  class << self
  # Loads all the pro components and configures them wherever it is expected
- # @param config [Dry::Configurable::Config] whole app config that we can alter with pro
+ # @param config [Karafka::Core::Configurable::Node] app config that we can alter with pro
  # components
  def setup(config)
  COMPONENTS.each { |component| require_relative(component) }
@@ -45,7 +45,7 @@ module Karafka
  end
 
  # @private
- # @param event [Dry::Events::Event] event details
+ # @param event [Karafka::Core::Monitoring::Event] event details
  # Tracks time taken to process a single message of a given topic partition
  def on_consumer_consumed(event)
  consumer = event[:caller]
@@ -1,5 +1,14 @@
  # frozen_string_literal: true
 
+ # This Karafka component is a Pro component.
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
  module Karafka
  module Pro
  module Processing
@@ -28,7 +28,7 @@ module Karafka
  virtual_partitioner != nil
  end
 
- # @return [Boolean] is a given job on a topic a long running one
+ # @return [Boolean] is a given job on a topic a long-running one
  def long_running_job?
  @long_running_job || false
  end
@@ -9,6 +9,7 @@ module Karafka
  SIGINT
  SIGQUIT
  SIGTERM
+ SIGTTIN
  ].freeze
 
  HANDLED_SIGNALS.each do |signal|
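The `HANDLED_SIGNALS.each` loop above iterates the list to define one registration method per signal, so adding `SIGTTIN` automatically yields an `on_sigttin` hook. A simplified sketch of that metaprogramming pattern (`MiniSignalHandler` and `deliver` are illustrative, not Karafka's actual process class; real signal delivery via `Signal.trap` is replaced by an explicit dispatch method so the example stays testable):

```ruby
class MiniSignalHandler
  HANDLED_SIGNALS = %w[SIGINT SIGQUIT SIGTERM SIGTTIN].freeze

  def initialize
    # One callback list per signal name
    @callbacks = Hash.new { |h, k| h[k] = [] }
  end

  HANDLED_SIGNALS.each do |signal|
    # Defines on_sigint, on_sigquit, on_sigterm and on_sigttin
    define_method(:"on_#{signal.downcase}") do |&block|
      @callbacks[signal] << block
    end
  end

  # Simulates delivery of a signal to the registered callbacks
  def deliver(signal)
    @callbacks[signal].each(&:call)
  end
end
```

The benefit of generating the methods from the frozen list is that extending signal support stays a one-line change, exactly as in the diff above.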
@@ -104,6 +104,9 @@ module Karafka
  # We're done waiting, lets kill them!
  workers.each(&:terminate)
  listeners.each(&:terminate)
+ # We always need to shutdown clients to make sure we do not force the GC to close consumer.
+ # This can cause memory leaks and crashes.
+ listeners.each(&:shutdown)
 
  Karafka::App.producer.close
 
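The extra `listeners.each(&:shutdown)` above closes the Kafka clients explicitly instead of leaving them to be finalized by the GC. A generic sketch of that terminate-then-shutdown ordering (`FakeListener` is an illustrative stand-in, not the Karafka listener class):

```ruby
class FakeListener
  attr_reader :events

  def initialize
    @events = []
  end

  # Stops the work loop
  def terminate
    @events << :terminate
  end

  # Closes the underlying client; safe to call only after terminate
  def shutdown
    @events << :shutdown
  end
end

listeners = [FakeListener.new, FakeListener.new]
# First stop every work loop, then close every client explicitly
listeners.each(&:terminate)
listeners.each(&:shutdown)
```

Doing the two passes separately means no client is closed while a sibling listener might still be processing.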
@@ -19,7 +19,20 @@ module Karafka
  'client.id': 'karafka'
  }.freeze
 
- private_constant :KAFKA_DEFAULTS
+ # Contains settings that should not be used in production but make life easier in dev
+ DEV_DEFAULTS = {
+ # Will create non-existing topics automatically.
+ # Note that the broker needs to be configured with `auto.create.topics.enable=true`
+ # While it is not recommended in prod, it simplifies work in dev
+ 'allow.auto.create.topics': 'true',
+ # We refresh the cluster state often as newly created topics in dev may not be detected
+ # fast enough. Fast enough means within reasonable time to provide decent user experience
+ # While it's only a one time thing for new topics, it can still be irritating to have to
+ # restart the process.
+ 'topic.metadata.refresh.interval.ms': 5_000
+ }.freeze
+
+ private_constant :KAFKA_DEFAULTS, :DEV_DEFAULTS
 
  # Available settings
 
@@ -34,8 +47,6 @@ module Karafka
  setting :token, default: false
  # option entity [String] for whom we did issue the license
  setting :entity, default: ''
- # option expires_on [Date] date when the license expires
- setting :expires_on, default: Date.parse('2100-01-01')
  end
 
  # option client_id [String] kafka client_id - used to provide
@@ -136,8 +147,10 @@ module Karafka
 
  Contracts::Config.new.validate!(config.to_h)
 
- # Check the license presence (if needed) and
- Licenser.new.verify(config.license)
+ licenser = Licenser.new
+
+ # Tries to load our license gem and if present will try to load the correct license
+ licenser.prepare_and_verify(config.license)
 
  configure_components
 
@@ -149,13 +162,21 @@ module Karafka
  # Propagates the kafka setting defaults unless they are already present
  # This makes it easier to set some values that users usually don't change but still allows
  # them to overwrite the whole hash if they want to
- # @param config [Dry::Configurable::Config] dry config of this producer
+ # @param config [Karafka::Core::Configurable::Node] config of this producer
  def merge_kafka_defaults!(config)
  KAFKA_DEFAULTS.each do |key, value|
  next if config.kafka.key?(key)
 
  config.kafka[key] = value
  end
+
+ return if Karafka::App.env.production?
+
+ DEV_DEFAULTS.each do |key, value|
+ next if config.kafka.key?(key)
+
+ config.kafka[key] = value
+ end
  end
 
  # Sets up all the components that are based on the user configuration
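The `merge_kafka_defaults!` change above applies defaults only for keys the user has not set, and adds the dev-only defaults unless the app runs in production. A standalone sketch of that merge logic (the environment check is simplified to a plain keyword argument; `merge_kafka_defaults` here is illustrative, not the Karafka API):

```ruby
KAFKA_DEFAULTS = { 'client.id': 'karafka' }.freeze

# Dev-only conveniences, mirroring the diff above
DEV_DEFAULTS = {
  'allow.auto.create.topics': 'true',
  'topic.metadata.refresh.interval.ms': 5_000
}.freeze

def merge_kafka_defaults(kafka, production:)
  defaults = production ? KAFKA_DEFAULTS : KAFKA_DEFAULTS.merge(DEV_DEFAULTS)

  defaults.each do |key, value|
    # User-provided values always take precedence over defaults
    next if kafka.key?(key)

    kafka[key] = value
  end

  kafka
end
```

Note that `allow.auto.create.topics` only takes effect if the broker itself has `auto.create.topics.enable=true`, as the comment in the diff points out.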
@@ -3,5 +3,5 @@
 # Main module namespace
 module Karafka
  # Current Karafka version
- VERSION = '2.0.0.rc4'
+ VERSION = '2.0.0'
 end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.0.0.rc4
+ version: 2.0.0
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -34,7 +34,7 @@ cert_chain:
  R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
  pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
  -----END CERTIFICATE-----
- date: 2022-07-28 00:00:00.000000000 Z
+ date: 2022-08-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -42,7 +42,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.0.0
+ version: 2.0.2
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -52,7 +52,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.0.0
+ version: 2.0.2
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -62,14 +62,14 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0.10'
+ version: '0.12'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0.10'
+ version: '0.12'
  - !ruby/object:Gem::Dependency
  name: thor
  requirement: !ruby/object:Gem::Requirement
@@ -90,7 +90,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.4.0
+ version: 2.4.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -100,7 +100,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.4.0
+ version: 2.4.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -287,9 +287,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
  version: 2.7.0
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
- - - ">"
+ - - ">="
  - !ruby/object:Gem::Version
- version: 1.3.1
+ version: '0'
  requirements: []
  rubygems_version: 3.3.7
  signing_key:
metadata.gz.sig CHANGED
Binary file