karafka 2.0.0.rc3 → 2.0.0.rc6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: b2a60c5c750ef681042f6e0a0d61032c2b9d9e8d48fa73ec480157c7bffb4cc7
-  data.tar.gz: c04074859408e77d236e86dcfdbe8502ba5322f6e7d8d1e236c9aa838b3e1a29
+  metadata.gz: 13fbbd2c70987d84f768acb1a01f02f18ca59f49d8f02a2c1103b6870dbcfb15
+  data.tar.gz: 223187a692ae8e3d5da3c028c135c20a3de0c05b26b7f67cfbae11b928b02773
 SHA512:
-  metadata.gz: 487dc8f5fb131dc8f25087ae026b40710c9859e7849b878ac42053737dd665a0147462fd6348e3e03eb87e0d6657c8f0038b02506f525ee221ee1f6d806bcf84
-  data.tar.gz: 23a68b2d97cd5441088521c949c2a0ec23befde2fa81c9b9178ea6786a582aea5e342697d08f9a3dd1dfe1401792bcbc3e936b91c87eb740ef66beccd69b39bf
+  metadata.gz: 381ddd6a4d3f9695ca5d657d4960213afb3c5b92b8b4e0f51b45f75d1de92f65a570ec903abb43d5cb2cf371a557dbbb01cb78ce33eb118902e6d5e77fdb6095
+  data.tar.gz: e4675ba160b9443240497f874e29bb7614b77bf06c4fa8e768aff42060de182059ac4a26b07841d251f5bb4d257ad8f3920d77ec551c8f1487a4eef3e6a6db32
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,25 @@
 # Karafka framework changelog

+## 2.0.0.rc6 (2022-08-05)
+- Update licenser to use a gem based approach based on `karafka/pro/license`.
+- Do not mark intermediate jobs as consumed when Karafka runs Enhanced Active Job with Virtual Partitions.
+- Improve development experience by adding fast cluster state changes refresh (#944)
+- Improve the license loading.
+
+## 2.0.0.rc5 (2022-08-01)
+- Improve specs stability
+- Improve forceful shutdown
+- Add support for debug `TTIN` backtrace printing
+- Fix a case where logger listener would not intercept `warn` level
+- Require `rdkafka` >= `0.12`
+- Replace statistics decorator with the one from `karafka-core`
+
+## 2.0.0.rc4 (2022-07-28)
+- Remove `dry-monitor`
+- Use `karafka-core`
+- Improve forceful shutdown resources finalization
+- Cache consumer client name
+
 ## 2.0.0.rc3 (2022-07-26)
 - Fix Pro partitioner hash function may not utilize all the threads (#907).
 - Improve virtual partitions messages distribution.
@@ -65,8 +85,8 @@
 - Fix a case where consecutive CTRL+C (non-stop) would cause an exception during forced shutdown
 - Add missing `consumer.prepared.error` into `LoggerListener`
 - Delegate partition resuming from the consumers to listeners threads.
-- Add support for Long Running Jobs (LRJ) for ActiveJob [PRO]
-- Add support for Long Running Jobs for consumers [PRO]
+- Add support for Long-Running Jobs (LRJ) for ActiveJob [PRO]
+- Add support for Long-Running Jobs for consumers [PRO]
 - Allow `active_job_topic` to accept a block for extra topic related settings
 - Remove no longer needed logger threads
 - Auto-adapt number of processes for integration specs based on the number of CPUs
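Taken together, the rc4–rc6 entries above change what an application pulls in (`karafka-core` replaces `dry-monitor`, and `rdkafka` >= 0.12 is required). Pinning this release candidate in an application could look like the following; this Gemfile is an illustrative sketch, not taken from the repository:

```ruby
# Gemfile — illustrative pin for the 2.0.0 release-candidate line
source 'https://rubygems.org'

gem 'karafka', '2.0.0.rc6'
```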
data/CONTRIBUTING.md CHANGED
@@ -1,4 +1,4 @@
-# Contribute
+# Contributing

 ## Introduction

@@ -8,11 +8,7 @@ We welcome any type of contribution, not only code. You can help with:
 - **QA**: file bug reports, the more details you can give the better (e.g. screenshots with the console open)
 - **Marketing**: writing blog posts, howto's, printing stickers, ...
 - **Community**: presenting the project at meetups, organizing a dedicated meetup for the local community, ...
-- **Code**: take a look at the [open issues](issues). Even if you can't write code, commenting on them, showing that you care about a given issue matters. It helps us triage them.
-
-## Your First Contribution
-
-Working on your first Pull Request? You can learn how from this *free* series, [How to Contribute to an Open Source Project on GitHub](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github).
+- **Code**: take a look at the [open issues](https://github.com/karafka/karafka/issues). Even if you can't write code, commenting on them, showing that you care about a given issue matters. It helps us triage them.

 ## Submitting code

@@ -32,5 +28,5 @@ By sending a pull request to the pro components, you are agreeing to transfer th

 ## Questions

-If you have any questions, create an [issue](issue) (protip: do a quick search first to see if someone else didn't ask the same question before!).
-You can also reach us at hello@karafka.opencollective.com.
+If you have any questions, create an [issue](https://github.com/karafka/karafka/issues) (protip: do a quick search first to see if someone else didn't ask the same question before!).
+You can also reach us at contact@karafka.io.
data/Gemfile.lock CHANGED
@@ -1,11 +1,11 @@
 PATH
   remote: .
   specs:
-    karafka (2.0.0.rc3)
-      dry-monitor (~> 0.5)
-      rdkafka (>= 0.10)
+    karafka (2.0.0.rc6)
+      karafka-core (>= 2.0.2, < 3.0.0)
+      rdkafka (>= 0.12)
       thor (>= 0.20)
-      waterdrop (>= 2.3.3, < 3.0.0)
+      waterdrop (>= 2.4.1, < 3.0.0)
       zeitwerk (~> 2.3)

 GEM
@@ -23,19 +23,6 @@ GEM
     concurrent-ruby (1.1.10)
     diff-lcs (1.5.0)
     docile (1.4.0)
-    dry-configurable (0.15.0)
-      concurrent-ruby (~> 1.0)
-      dry-core (~> 0.6)
-    dry-core (0.8.0)
-      concurrent-ruby (~> 1.0)
-    dry-events (0.3.0)
-      concurrent-ruby (~> 1.0)
-      dry-core (~> 0.5, >= 0.5)
-    dry-monitor (0.6.1)
-      dry-configurable (~> 0.13, >= 0.13.0)
-      dry-core (~> 0.5, >= 0.5)
-      dry-events (~> 0.2)
-      zeitwerk (~> 2.5)
     factory_bot (6.2.1)
       activesupport (>= 5.0.0)
     ffi (1.15.5)
@@ -43,6 +30,8 @@ GEM
       activesupport (>= 5.0)
     i18n (1.12.0)
       concurrent-ruby (~> 1.0)
+    karafka-core (2.0.2)
+      concurrent-ruby (>= 1.1)
     mini_portile2 (2.8.0)
     minitest (5.16.2)
     rake (13.0.6)
@@ -70,11 +59,10 @@ GEM
     simplecov-html (0.12.3)
     simplecov_json_formatter (0.1.4)
     thor (1.2.1)
-    tzinfo (2.0.4)
+    tzinfo (2.0.5)
       concurrent-ruby (~> 1.0)
-    waterdrop (2.3.3)
-      concurrent-ruby (>= 1.1)
-      dry-monitor (~> 0.5)
+    waterdrop (2.4.1)
+      karafka-core (>= 2.0.2, < 3.0.0)
       rdkafka (>= 0.10)
       zeitwerk (~> 2.3)
     zeitwerk (2.6.0)
data/LICENSE-COMM CHANGED
@@ -6,7 +6,7 @@ IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT ("EULA") IS A LEGAL AGREEMEN

 ------------------------------------------------------------------------------

-In order to use the Software under this Agreement, you must receive an application token at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.
+In order to use the Software under this Agreement, you must receive a "Source URL" to a license package at the time of purchase, in accordance with the scope of use and other terms specified for each type of Software and as set forth in this Section 1 of this Agreement.

 1. License Grant

@@ -22,7 +22,7 @@ In order to use the Software under this Agreement, you must receive an applicati

 3. Restricted Uses.

-3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of token verification or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique application token is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.
+3.1 You shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions); (b) distribute, sell, sublicense, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement; (c) redistribute the Software or Modifications other than by including the Software or a portion thereof within your own product, which must have substantially different functionality than the Software or Modifications and must not allow any third party to use the Software or Modifications, or any portions thereof, for software development or application development purposes; (d) redistribute the Software as part of a product, "appliance" or "virtual server"; (e) redistribute the Software on any server which is not directly under your control; (f) remove any product identification, proprietary, copyright or other notices contained in the Software; (g) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by Maciej Mensfeld; (h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software; (i) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by Maciej Mensfeld in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by Maciej Mensfeld; (j) use the Software to develop a product which is competitive with any Maciej Mensfeld product offerings; or (k) use unauthorized Source URLS or keycode(s) or distribute or publish Source URLs or keycode(s), except as may be expressly permitted by Maciej Mensfeld in writing. If your unique Source URL is ever published, Maciej Mensfeld reserves the right to terminate your access without notice.

 3.2 UNDER NO CIRCUMSTANCES MAY YOU USE THE SOFTWARE AS PART OF A PRODUCT OR SERVICE THAT PROVIDES SIMILAR FUNCTIONALITY TO THE SOFTWARE ITSELF.

data/README.md CHANGED
@@ -4,18 +4,18 @@
 [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
 [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)

-**Note**: All of the documentation here refers to Karafka `2.0.0.rc2` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
+**Note**: All of the documentation here refers to Karafka `2.0.0.rc5` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).

 ## About Karafka

 Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:

-- Supports parallel processing in [multiple threads](Concurrency-and-multithreading) (also for a single topic partition work)
-- Has [ActiveJob backend](Active-Job) support (including ordered jobs)
-- [Automatically integrates](Integrating-with-Ruby-on-Rails-and-other-frameworks#integrating-with-ruby-on-rails=) with Ruby on Rails
-- Supports in-development [code reloading](Auto-reload-of-code-changes-in-development)
+- Supports parallel processing in [multiple threads](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) (also for a [single topic partition](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) work)
+- Has [ActiveJob backend](https://github.com/karafka/karafka/wiki/Active-Job) support (including [ordered jobs](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job#ordered-jobs))
+- [Automatically integrates](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks#integrating-with-ruby-on-rails=) with Ruby on Rails
+- Supports in-development [code reloading](https://github.com/karafka/karafka/wiki/Auto-reload-of-code-changes-in-development)
 - Is powered by [librdkafka](https://github.com/edenhill/librdkafka) (the Apache Kafka C/C++ client library)
-- Has an out-of-the-box [StatsD/DataDog monitoring](Monitoring-and-logging) with a dashboard template.
+- Has an out-of-the-box [StatsD/DataDog monitoring](https://github.com/karafka/karafka/wiki/Monitoring-and-logging) with a dashboard template.

 ```ruby
 # Define what topics you want to consume with which consumers in karafka.rb
@@ -42,7 +42,7 @@ Karafka **uses** threads to handle many messages simultaneously in the same proc
 If you're entirely new to the subject, you can start with our "Kafka on Rails" articles series, which will get you up and running with the terminology and basic ideas behind using Kafka:

 - [Kafka on Rails: Using Kafka with Ruby on Rails – Part 1 – Kafka basics and its advantages](https://mensfeld.pl/2017/11/kafka-on-rails-using-kafka-with-ruby-on-rails-part-1-kafka-basics-and-its-advantages/)
-- [Kafka on Rails: Using Kafka with Ruby on Rails – Part 2 – Getting started with Ruby and Kafka](https://mensfeld.pl/2018/01/kafka-on-rails-using-kafka-with-ruby-on-rails-part-2-getting-started-with-ruby-and-kafka/)
+- [Kafka on Rails: Using Kafka with Ruby on Rails – Part 2 – Getting started with Rails and Kafka](https://mensfeld.pl/2018/01/kafka-on-rails-using-kafka-with-ruby-on-rails-part-2-getting-started-with-ruby-and-kafka/)

 If you want to get started with Kafka and Karafka as fast as possible, then the best idea is to visit our [Getting started](https://github.com/karafka/karafka/wiki/Getting-started) guides and the [example apps repository](https://github.com/karafka/example-apps).

@@ -55,7 +55,7 @@ We also maintain many [integration specs](https://github.com/karafka/karafka/tre
 1. Add and install Karafka:

 ```bash
-bundle add karafka -v 2.0.0.rc2
+bundle add karafka -v 2.0.0.rc5

 bundle exec karafka install
 ```
data/bin/create_token CHANGED
@@ -9,16 +9,10 @@ PRIVATE_KEY_LOCATION = File.join(Dir.home, '.ssh', 'karafka-pro', 'id_rsa')

 # Name of the entity that acquires the license
 ENTITY = ARGV[0]
-# Date till which license is valid
-EXPIRES_ON = Date.parse(ARGV[1])

 raise ArgumentError, 'Entity missing' if ENTITY.nil? || ENTITY.empty?
-raise ArgumentError, 'Expires on needs to be in the future' if EXPIRES_ON <= Date.today

-pro_token_data = {
-  entity: ENTITY,
-  expires_on: EXPIRES_ON
-}
+pro_token_data = { entity: ENTITY }

 # This code uses my private key to generate a new token for Karafka Pro capabilities
 private_key = OpenSSL::PKey::RSA.new(File.read(PRIVATE_KEY_LOCATION))
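The signing flow this script relies on can be exercised without the real key. The sketch below generates a throwaway RSA key pair, signs the token payload, and verifies it with the public half; the entity name and payload encoding are stand-ins, not the actual Karafka Pro license format:

```ruby
require 'openssl'

# Token data matching the shape used in the script above
pro_token_data = { entity: 'Example Corp' }

# Throwaway key pair standing in for ~/.ssh/karafka-pro/id_rsa
private_key = OpenSSL::PKey::RSA.new(2048)

payload = pro_token_data.to_s
signature = private_key.sign(OpenSSL::Digest.new('SHA256'), payload)

# The verifying side only ever needs the public key
public_key = OpenSSL::PKey::RSA.new(private_key.public_key.to_pem)
puts public_key.verify(OpenSSL::Digest.new('SHA256'), signature, payload)
```

Verification fails for any tampered payload, which is what lets the gem check license authenticity without shipping the private key.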
data/bin/integrations CHANGED
@@ -22,7 +22,7 @@ ROOT_PATH = Pathname.new(File.expand_path(File.join(File.dirname(__FILE__), '../
 CONCURRENCY = ENV.key?('CI') ? 5 : Etc.nprocessors * 2

 # How many bytes do we want to keep from the stdout in the buffer for when we need to print it
-MAX_BUFFER_OUTPUT = 10_240
+MAX_BUFFER_OUTPUT = 51_200

 # Abstraction around a single test scenario execution process
 class Scenario
@@ -37,7 +37,7 @@ class Scenario
     'consumption/worker_critical_error_behaviour.rb' => [0, 2].freeze,
     'shutdown/on_hanging_jobs_and_a_shutdown.rb' => [2].freeze,
     'shutdown/on_hanging_on_shutdown_job_and_a_shutdown.rb' => [2].freeze,
-    'shutdown/on_hanging_poll_and_shutdown.rb' => [2].freeze
+    'shutdown/on_hanging_listener_and_shutdown.rb' => [2].freeze
   }.freeze

   private_constant :MAX_RUN_TIME, :EXIT_CODES
data/karafka.gemspec CHANGED
@@ -12,17 +12,17 @@ Gem::Specification.new do |spec|
   spec.authors = ['Maciej Mensfeld']
   spec.email = %w[maciej@mensfeld.pl]
   spec.homepage = 'https://karafka.io'
-  spec.summary = 'Efficient Kafka processing framework for Ruby and Rails '
+  spec.summary = 'Efficient Kafka processing framework for Ruby and Rails'
   spec.description = 'Framework used to simplify Apache Kafka based Ruby applications development'
   spec.licenses = ['LGPL-3.0', 'Commercial']

-  spec.add_dependency 'dry-monitor', '~> 0.5'
-  spec.add_dependency 'rdkafka', '>= 0.10'
+  spec.add_dependency 'karafka-core', '>= 2.0.2', '< 3.0.0'
+  spec.add_dependency 'rdkafka', '>= 0.12'
   spec.add_dependency 'thor', '>= 0.20'
-  spec.add_dependency 'waterdrop', '>= 2.3.3', '< 3.0.0'
+  spec.add_dependency 'waterdrop', '>= 2.4.1', '< 3.0.0'
   spec.add_dependency 'zeitwerk', '~> 2.3'

-  spec.required_ruby_version = '>= 2.6.0'
+  spec.required_ruby_version = '>= 2.7.0'

   if $PROGRAM_NAME.end_with?('gem')
     spec.signing_key = File.expand_path('~/.ssh/gem-private_key.pem')
@@ -52,8 +52,7 @@ module Karafka
       if Karafka.pro?
         [
           'License: Commercial',
-          "License entity: #{config.license.entity}",
-          "License expires on: #{config.license.expires_on}"
+          "License entity: #{config.license.entity}"
         ]
       else
         [
@@ -275,6 +275,11 @@ module Karafka

       # Commits the stored offsets in a sync way and closes the consumer.
       def close
+        # Once client is closed, we should not close it again
+        # This could only happen in case of a race-condition when forceful shutdown happens
+        # and triggers this from a different thread
+        return if @closed
+
         @mutex.synchronize do
           internal_commit_offsets(async: false)
@@ -340,6 +345,9 @@ module Karafka
         when :network_exception # 13
           reset
           return nil
+        when :unknown_topic_or_part
+          # This is expected and temporary until rdkafka catches up with metadata
+          return nil
         end

         raise if time_poll.attempts > MAX_POLL_RETRIES
@@ -359,7 +367,6 @@ module Karafka
         config = ::Rdkafka::Config.new(@subscription_group.kafka)
         config.consumer_rebalance_listener = @rebalance_manager
         consumer = config.consumer
-        consumer.subscribe(*@subscription_group.topics.map(&:name))
         @name = consumer.name

         # Register statistics runner for this particular type of callbacks
@@ -384,6 +391,9 @@ module Karafka
           )
         )

+        # Subscription needs to happen after we assigned the rebalance callbacks just in case of
+        # a race condition
+        consumer.subscribe(*@subscription_group.topics.map(&:name))
         consumer
       end

@@ -34,6 +34,8 @@ module Karafka
       # We can do this that way because we always first schedule jobs using messages before we
      # fetch another batch.
       @messages_buffer = MessagesBuffer.new(subscription_group)
+      @mutex = Mutex.new
+      @stopped = false
     end

     # Runs the main listener fetch loop.
@@ -51,6 +53,25 @@ module Karafka
       fetch_loop
     end

+    # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
+    # stops kafka client.
+    #
+    # @note This method is not private despite being part of the fetch loop because in case of
+    #   a forceful shutdown, it may be invoked from a separate thread
+    #
+    # @note We wrap it with a mutex exactly because of the above case of forceful shutdown
+    def shutdown
+      return if @stopped
+
+      @mutex.synchronize do
+        @stopped = true
+        @executors.clear
+        @coordinators.reset
+        @client.commit_offsets!
+        @client.stop
+      end
+    end
+
     private

     # Fetches the data and adds it to the jobs queue.
@@ -239,13 +260,6 @@ module Karafka
       @client.batch_poll until @jobs_queue.empty?(@subscription_group.id)
     end

-    # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
-    # stops kafka client.
-    def shutdown
-      @client.commit_offsets!
-      @client.stop
-    end
-
     # We can stop client without a problem, as it will reinitialize itself when running the
     # `#fetch_loop` again. We just need to remember to also reset the runner as it is a long
     # running one, so with a new connection to Kafka, we need to initialize the state of the
@@ -3,7 +3,7 @@
 module Karafka
   module Contracts
     # Base contract for all Karafka contracts
-    class Base < ::WaterDrop::Contractable::Contract
+    class Base < ::Karafka::Core::Contractable::Contract
       # @param data [Hash] data for validation
       # @return [Boolean] true if all good
       # @raise [Errors::InvalidConfigurationError] invalid configuration error
@@ -21,7 +21,6 @@ module Karafka
       nested(:license) do
         required(:token) { |val| [true, false].include?(val) || val.is_a?(String) }
         required(:entity) { |val| val.is_a?(String) }
-        required(:expires_on) { |val| val.is_a?(Date) }
       end

       required(:client_id) { |val| val.is_a?(String) && Contracts::TOPIC_REGEXP.match?(val) }
@@ -24,9 +24,6 @@ module Karafka
     # Raised when we try to use Karafka CLI commands (except install) without a boot file
     MissingBootFileError = Class.new(BaseError)

-    # Raised when we want to hook up to an event that is not registered and supported
-    UnregisteredMonitorEventError = Class.new(BaseError)
-
     # Raised when we've waited enough for shutting down a non-responsive process
     ForcefulShutdownError = Class.new(BaseError)

@@ -16,8 +16,7 @@ module Karafka
       @consumer_group_id = consumer_group_id
       @client_name = client_name
       @monitor = monitor
-      # We decorate both Karafka and WaterDrop statistics the same way
-      @statistics_decorator = ::WaterDrop::Instrumentation::Callbacks::StatisticsDecorator.new
+      @statistics_decorator = ::Karafka::Core::Monitoring::StatisticsDecorator.new
     end

     # Emits decorated statistics to the monitor
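To make the decorator swap above concrete, here is an illustrative sketch of what a statistics decorator can do (this is not the `karafka-core` implementation): enrich each numeric metric in a raw librdkafka-style snapshot with a delta against the previously emitted snapshot.

```ruby
# Toy statistics decorator: adds a "<key>_d" delta for every numeric value,
# computed against the previous emission.
class DeltaDecorator
  def initialize
    @previous = {}
  end

  # @param statistics [Hash] raw statistics snapshot
  # @return [Hash] snapshot extended with per-key deltas
  def call(statistics)
    decorated = statistics.dup

    statistics.each do |key, value|
      next unless value.is_a?(Numeric)

      decorated["#{key}_d"] = value - (@previous[key] || 0)
    end

    @previous = statistics
    decorated
  end
end

decorator = DeltaDecorator.new
decorator.call('messages' => 10)
puts decorator.call('messages' => 25)['messages_d'] # => 15
```

Keeping this logic in one shared place is presumably why both Karafka and WaterDrop now delegate it to `karafka-core` instead of each carrying their own copy.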
@@ -9,6 +9,7 @@ module Karafka
       USED_LOG_LEVELS = %i[
         debug
         info
+        warn
         error
         fatal
       ].freeze
@@ -60,11 +61,28 @@ module Karafka
         info "[#{job.id}] #{job_type} job for #{consumer} on #{topic} finished in #{time}ms"
       end

-      # Logs info about system signals that Karafka received.
+      # Logs info about system signals that Karafka received and prints backtrace for threads in
+      # case of ttin
       #
       # @param event [Dry::Events::Event] event details including payload
       def on_process_notice_signal(event)
         info "Received #{event[:signal]} system signal"
+
+        # We print backtrace only for ttin
+        return unless event[:signal] == :SIGTTIN
+
+        # Inspired by Sidekiq
+        Thread.list.each do |thread|
+          tid = (thread.object_id ^ ::Process.pid).to_s(36)
+
+          warn "Thread TID-#{tid} #{thread['label']}"
+
+          if thread.backtrace
+            warn thread.backtrace.join("\n")
+          else
+            warn '<no backtrace available>'
+          end
+        end
       end

       # Logs info that we're initializing Karafka app.
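The `TTIN` thread-dump logic added above is easy to try in isolation. This extract keeps the same Sidekiq-inspired TID scheme but swaps the logger's `warn` for `puts` so it runs standalone:

```ruby
# Dump a short identifier and backtrace for every live thread,
# mirroring the handler shown in the diff above.
Thread.list.each do |thread|
  # Compact per-thread identifier (object_id XOR pid, base 36)
  tid = (thread.object_id ^ Process.pid).to_s(36)

  puts "Thread TID-#{tid}"

  if thread.backtrace
    puts thread.backtrace.join("\n")
  else
    puts '<no backtrace available>'
  end
end
```

Per the handler above, in a running Karafka process the same dump is triggered by sending the signal, e.g. `kill -TTIN <pid>`.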
@@ -1,66 +1,21 @@
 # frozen_string_literal: true

 module Karafka
-  # Namespace for all the things related with Karafka instrumentation process
   module Instrumentation
-    # Monitor is used to hookup external monitoring services to monitor how Karafka works
-    # It provides a standardized API for checking incoming messages/enqueueing etc
-    # Since it is a pub-sub based on dry-monitor, you can use as many subscribers/loggers at the
-    # same time, which means that you might have for example file logging and NewRelic at the same
-    # time
-    # @note This class acts as a singleton because we are only permitted to have single monitor
-    #   per running process (just as logger)
-    class Monitor < Dry::Monitor::Notifications
-      # List of events that we support in the system and to which a monitor client can hook up
-      # @note The non-error once support timestamp benchmarking
-      # @note Depending on Karafka extensions and additional engines, this might not be the
-      #   complete list of all the events. Please use the #available_events on fully loaded
-      #   Karafka system to determine all of the events you can use.
-      BASE_EVENTS = %w[
-        app.initialized
-        app.running
-        app.stopping
-        app.stopped
-
-        consumer.consumed
-        consumer.revoked
-        consumer.shutdown
-
-        process.notice_signal
-
-        connection.listener.before_fetch_loop
-        connection.listener.fetch_loop
-        connection.listener.fetch_loop.received
-
-        worker.process
-        worker.processed
-
-        statistics.emitted
-
-        error.occurred
-      ].freeze
-
-      private_constant :BASE_EVENTS
-
-      # @return [Karafka::Instrumentation::Monitor] monitor instance for system instrumentation
-      def initialize
-        super(:karafka)
-        BASE_EVENTS.each(&method(:register_event))
-      end
-
-      # Allows us to subscribe to events with a code that will be yielded upon events
-      # @param event_name_or_listener [String, Object] name of the event we want to subscribe to
-      #   or a listener if we decide to go with object listener
-      def subscribe(event_name_or_listener)
-        return super unless event_name_or_listener.is_a?(String)
-        return super if available_events.include?(event_name_or_listener)
-
-        raise Errors::UnregisteredMonitorEventError, event_name_or_listener
-      end
-
-      # @return [Array<String>] names of available events to which we can subscribe
-      def available_events
-        __bus__.events.keys
+    # Karafka instrumentation monitor that we use to publish events
+    # By default uses our internal notifications bus but can be used with
+    # `ActiveSupport::Notifications` as well
+    class Monitor < ::Karafka::Core::Monitoring::Monitor
+      attr_reader :notifications_bus
+
+      # @param notifications_bus [Object] either our internal notifications bus or
+      #   `ActiveSupport::Notifications`
+      # @param namespace [String, nil] namespace for events or nil if no namespace
+      def initialize(
+        notifications_bus = ::Karafka::Instrumentation::Notifications.new,
+        namespace = nil
+      )
+        super(notifications_bus, namespace)
       end
     end
   end
@@ -0,0 +1,52 @@
+# frozen_string_literal: true
+
+module Karafka
+  # Namespace for all the things related with Karafka instrumentation process
+  module Instrumentation
+    # Monitor is used to hookup external monitoring services to monitor how Karafka works
+    # It provides a standardized API for checking incoming messages/enqueueing etc
+    # Since it is a pub-sub based on dry-monitor, you can use as many subscribers/loggers at the
+    # same time, which means that you might have for example file logging and NewRelic at the same
+    # time
+    # @note This class acts as a singleton because we are only permitted to have single monitor
+    #   per running process (just as logger)
+    class Notifications < Karafka::Core::Monitoring::Notifications
+      # List of events that we support in the system and to which a monitor client can hook up
+      # @note The non-error once support timestamp benchmarking
+      # @note Depending on Karafka extensions and additional engines, this might not be the
+      #   complete list of all the events. Please use the #available_events on fully loaded
+      #   Karafka system to determine all of the events you can use.
+      EVENTS = %w[
+        app.initialized
+        app.running
+        app.stopping
+        app.stopped
+
+        consumer.consumed
+        consumer.revoked
+        consumer.shutdown
+
+        process.notice_signal
+
+        connection.listener.before_fetch_loop
+        connection.listener.fetch_loop
+        connection.listener.fetch_loop.received
+
+        worker.process
+        worker.processed
+
+        statistics.emitted
+
+        error.occurred
+      ].freeze
+
+      private_constant :EVENTS
+
+      # @return [Karafka::Instrumentation::Monitor] monitor instance for system instrumentation
+      def initialize
+        super
+        EVENTS.each { |event| register_event(event) }
+      end
+    end
+  end
+end
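For readers unfamiliar with the pub-sub contract this new `Notifications` class provides, it can be sketched from scratch. The `TinyNotifications` class below is illustrative only (the real implementation lives in `karafka-core` and its API may differ): events must be registered before anyone can subscribe, and publishing yields the payload to each subscriber.

```ruby
# Minimal pub-sub bus: register events up front, subscribe with blocks,
# publish payloads to every subscriber of an event.
class TinyNotifications
  def initialize
    @listeners = {}
  end

  # Makes an event name known to the bus
  def register_event(name)
    @listeners[name] ||= []
  end

  # @raise [ArgumentError] when subscribing to an unknown event
  def subscribe(name, &block)
    raise ArgumentError, "unregistered event: #{name}" unless @listeners.key?(name)

    @listeners[name] << block
  end

  # Publishes the payload to all subscribers of the given event
  def instrument(name, payload = {})
    @listeners.fetch(name).each { |listener| listener.call(payload) }
  end
end

bus = TinyNotifications.new
bus.register_event('app.running')
received = []
bus.subscribe('app.running') { |payload| received << payload[:pid] }
bus.instrument('app.running', pid: 123)
puts received.inspect # => [123]
```

Registering events up front is what lets the bus fail fast on typos in event names instead of silently publishing to nobody.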
@@ -5,19 +5,19 @@ module Karafka
     # Listener that sets a proc title with a nice descriptive value
     class ProctitleListener
       # Updates proc title to an initializing one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_initializing(_event)
         setproctitle('initializing')
       end

       # Updates proc title to a running one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_running(_event)
         setproctitle('running')
       end

       # Updates proc title to a stopping one
-      # @param _event [Dry::Events::Event] event details including payload
+      # @param _event [Karafka::Core::Monitoring::Event] event details including payload
       def on_app_stopping(_event)
         setproctitle('stopping')
       end
@@ -11,7 +11,7 @@ module Karafka
11
11
  #
12
12
  # @note You need to setup the `dogstatsd-ruby` client and assign it
13
13
  class Listener
14
- include WaterDrop::Configurable
14
+ include ::Karafka::Core::Configurable
15
15
  extend Forwardable
16
16
 
17
17
  def_delegators :config, :client, :rd_kafka_metrics, :namespace, :default_tags
@@ -65,7 +65,7 @@ module Karafka
 
         # Hooks up to WaterDrop instrumentation for emitted statistics
         #
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_statistics_emitted(event)
           statistics = event[:statistics]
 
@@ -76,7 +76,7 @@ module Karafka
 
         # Increases the errors count by 1
         #
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_error_occurred(event)
           extra_tags = ["type:#{event[:type]}"]
 
@@ -94,7 +94,7 @@ module Karafka
 
         # Reports how many messages we've polled and how much time did we spend on it
         #
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_connection_listener_fetch_loop_received(event)
           time_taken = event[:time]
           messages_count = event[:messages_buffer].size
@@ -105,7 +105,7 @@ module Karafka
 
         # Here we report majority of things related to processing as we have access to the
         # consumer
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_consumer_consumed(event)
           messages = event.payload[:caller].messages
           metadata = messages.metadata
@@ -124,7 +124,7 @@ module Karafka
           histogram('consumer.consumption_lag', metadata.consumption_lag, tags: tags)
         end
 
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
        def on_consumer_revoked(event)
           messages = event.payload[:caller].messages
           metadata = messages.metadata
@@ -137,7 +137,7 @@ module Karafka
           count('consumer.revoked', 1, tags: tags)
         end
 
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_consumer_shutdown(event)
           messages = event.payload[:caller].messages
           metadata = messages.metadata
@@ -151,7 +151,7 @@ module Karafka
         end
 
         # Worker related metrics
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
         def on_worker_process(event)
           jq_stats = event[:jobs_queue].statistics
 
@@ -162,7 +162,7 @@ module Karafka
 
         # We report this metric before and after processing for higher accuracy
         # Without this, the utilization would not be fully reflected
-        # @param event [Dry::Events::Event]
+        # @param event [Karafka::Core::Monitoring::Event]
        def on_worker_processed(event)
           jq_stats = event[:jobs_queue].statistics
 
@@ -8,8 +8,34 @@ module Karafka
 
     private_constant :PUBLIC_KEY_LOCATION
 
+    # Tries to prepare license and verifies it
+    #
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+    def prepare_and_verify(license_config)
+      prepare(license_config)
+      verify(license_config)
+    end
+
+    private
+
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
+    def prepare(license_config)
+      # If there is token, no action needed
+      # We support a case where someone would put the token in instead of using one from the
+      # license. That's in case there are limitations to using external package sources, etc
+      return if license_config.token
+
+      begin
+        license_config.token || require('karafka-license')
+      rescue LoadError
+        return
+      end
+
+      license_config.token = Karafka::License.token
+    end
+
     # Check license and setup license details (if needed)
-    # @param license_config [Dry::Configurable::Config] config part related to the licensing
+    # @param license_config [Karafka::Core::Configurable::Node] config related to the licensing
     def verify(license_config)
       # If no license, it will just run LGPL components without anything extra
       return unless license_config.token
@@ -29,19 +55,10 @@ module Karafka
       details = data ? JSON.parse(data) : raise_invalid_license_token(license_config)
 
       license_config.entity = details.fetch('entity')
-      license_config.expires_on = Date.parse(details.fetch('expires_on'))
-
-      return if license_config.expires_on > Date.today
-
-      raise_expired_license_token_in_dev(license_config.expires_on)
-
-      notify_if_license_expired(license_config.expires_on)
     end
 
-    private
-
     # Raises an error with info, that used token is invalid
-    # @param license_config [Dry::Configurable::Config]
+    # @param license_config [Karafka::Core::Configurable::Node]
     def raise_invalid_license_token(license_config)
       # We set it to false so `Karafka.pro?` method behaves as expected
       license_config.token = false
@@ -54,42 +71,5 @@ module Karafka
         MSG
       )
     end
-
-    # Raises an error for test and dev environments if running pro with expired license
-    # We never want to cause any non-dev problems and we should never crash anything else than
-    # tests and development envs.
-    #
-    # @param expires_on [Date] when the license expires
-    def raise_expired_license_token_in_dev(expires_on)
-      env = Karafka::App.env
-
-      return unless env.development? || env.test?
-
-      raise Errors::ExpiredLicenseTokenError.new, expired_message(expires_on)
-    end
-
-    # We do not raise an error here as we don't want to cause any problems to someone that runs
-    # Karafka on production. Error message is enough.
-    #
-    # @param expires_on [Date] when the license expires
-    def notify_if_license_expired(expires_on)
-      Karafka.logger.error(expired_message(expires_on))
-
-      Karafka.monitor.instrument(
-        'error.occurred',
-        caller: self,
-        error: Errors::ExpiredLicenseTokenError.new(expired_message(expires_on)),
-        type: 'licenser.expired'
-      )
-    end
-
-    # @param expires_on [Date] when the license expires
-    # @return [String] expired message
-    def expired_message(expires_on)
-      <<~MSG.tr("\n", ' ')
-        Your license expired on #{expires_on}.
-        Please reach us at contact@karafka.io or visit https://karafka.io to obtain a valid one.
-      MSG
-    end
   end
 end
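The new `prepare` step relies on the standard Ruby pattern for optional dependencies: attempt to `require` an add-on gem and fall back silently when it is not installed. A minimal sketch of just that pattern (`nonexistent_example_gem` and `load_optional_token` are placeholder names for illustration, not part of Karafka):

```ruby
# Sketch of optional-gem loading: return a value from the add-on gem when it
# is installed, nil otherwise, without crashing the host application.
def load_optional_token
  require 'nonexistent_example_gem' # placeholder gem name; not installed here
  'token-from-gem'
rescue LoadError
  # The gem is simply absent; treat the feature as unavailable
  nil
end
```

Because `LoadError` is rescued locally, the host process keeps running with only the freely licensed components when the license gem is missing.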
@@ -12,7 +12,7 @@ module Karafka
       # @note We need this to make sure that we allocate proper dispatched events only to
       #   callback listeners that should publish them
       def name
-        ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
+        @name ||= ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
       end
     end
   end
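The change above memoizes the client name with `||=` so the FFI call into librdkafka happens once instead of on every lookup. A small sketch of the same idiom on a plain object (`NameCache` is a hypothetical stand-in for the client wrapper):

```ruby
# Sketch of ||= memoization: the expensive fetcher runs once,
# later reads reuse the cached instance variable.
class NameCache
  def initialize(fetcher)
    @fetcher = fetcher
  end

  def name
    @name ||= @fetcher.call
  end
end

calls = 0
cache = NameCache.new(-> { calls += 1; 'rdkafka#consumer-1' })
cache.name
cache.name
```

One caveat of `||=`: it re-runs the fetcher if the cached value is `nil` or `false`, which is fine here since rdkafka names are always non-empty strings.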
@@ -33,6 +33,10 @@ module Karafka
           ::ActiveSupport::JSON.decode(message.raw_payload)
         )
 
+        # We cannot mark jobs as done after each if there are virtual partitions. Otherwise
+        # this could create random markings
+        next if topic.virtual_partitioner?
+
         mark_as_consumed(message)
       end
     end
@@ -36,7 +36,7 @@ module Karafka
       end
     end
 
-    # Runs extra logic after consumption that is related to handling long running jobs
+    # Runs extra logic after consumption that is related to handling long-running jobs
     # @note This overwrites the '#on_after_consume' from the base consumer
     def on_after_consume
       coordinator.on_finished do |first_group_message, last_group_message|
@@ -59,7 +59,7 @@ module Karafka
       # Mark as consumed only if manual offset management is not on
       mark_as_consumed(last_message) unless topic.manual_offset_management? || revoked?
 
-      # If this is not a long running job there is nothing for us to do here
+      # If this is not a long-running job there is nothing for us to do here
       return unless topic.long_running_job?
 
       # Once processing is done, we move to the new offset based on commits
@@ -36,7 +36,7 @@ module Karafka
 
     class << self
       # Loads all the pro components and configures them wherever it is expected
-      # @param config [Dry::Configurable::Config] whole app config that we can alter with pro
+      # @param config [Karafka::Core::Configurable::Node] app config that we can alter with pro
       #   components
       def setup(config)
         COMPONENTS.each { |component| require_relative(component) }
@@ -45,7 +45,7 @@ module Karafka
     end
 
     # @private
-    # @param event [Dry::Events::Event] event details
+    # @param event [Karafka::Core::Monitoring::Event] event details
     # Tracks time taken to process a single message of a given topic partition
     def on_consumer_consumed(event)
       consumer = event[:caller]
@@ -1,5 +1,14 @@
 # frozen_string_literal: true
 
+# This Karafka component is a Pro component.
+# All of the commercial components are present in the lib/karafka/pro directory of this
+# repository and their usage requires commercial license agreement.
+#
+# Karafka has also commercial-friendly license, commercial support and commercial components.
+#
+# By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+# your code to Maciej Mensfeld.
+
 module Karafka
   module Pro
     module Processing
@@ -28,7 +28,7 @@ module Karafka
         virtual_partitioner != nil
       end
 
-      # @return [Boolean] is a given job on a topic a long running one
+      # @return [Boolean] is a given job on a topic a long-running one
       def long_running_job?
         @long_running_job || false
       end
@@ -9,6 +9,7 @@ module Karafka
       SIGINT
       SIGQUIT
       SIGTERM
+      SIGTTIN
     ].freeze
 
     HANDLED_SIGNALS.each do |signal|
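Adding `SIGTTIN` here backs the `TTIN` debug feature from the rc5 changelog: on that signal, a process can dump backtraces of all its threads. A sketch of what such a handler can do (the wiring shown in the comment is illustrative, not the actual Karafka handler; `TTIN` is not available on Windows):

```ruby
# Build a report with every live thread's backtrace, useful for
# diagnosing a stuck process without killing it.
def backtrace_report
  Thread.list.map do |thread|
    # Array() guards against nil backtraces of freshly spawned threads
    ["Thread #{thread.object_id}", *Array(thread.backtrace)].join("\n")
  end.join("\n\n")
end

# Hypothetical wiring, roughly what a TTIN handler does:
# Signal.trap('TTIN') { $stdout.puts backtrace_report }
```

Running `kill -TTIN <pid>` against a process with such a trap then prints the report without interrupting normal work.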
@@ -104,6 +104,9 @@ module Karafka
       # We're done waiting, lets kill them!
       workers.each(&:terminate)
       listeners.each(&:terminate)
+      # We always need to shutdown clients to make sure we do not force the GC to close consumer.
+      # This can cause memory leaks and crashes.
+      listeners.each(&:shutdown)
 
       Karafka::App.producer.close
@@ -12,14 +12,27 @@ module Karafka
     #   enough and will still keep the code simple
     # @see Karafka::Setup::Configurators::Base for more details about configurators api
     class Config
-      extend ::WaterDrop::Configurable
+      extend ::Karafka::Core::Configurable
 
       # Defaults for kafka settings, that will be overwritten only if not present already
       KAFKA_DEFAULTS = {
         'client.id': 'karafka'
       }.freeze
 
-      private_constant :KAFKA_DEFAULTS
+      # Contains settings that should not be used in production but make life easier in dev
+      DEV_DEFAULTS = {
+        # Will create non-existing topics automatically.
+        # Note that the broker needs to be configured with `auto.create.topics.enable=true`
+        # While it is not recommended in prod, it simplifies work in dev
+        'allow.auto.create.topics': 'true',
+        # We refresh the cluster state often as newly created topics in dev may not be detected
+        # fast enough. Fast enough means within reasonable time to provide decent user experience
+        # While it's only a one time thing for new topics, it can still be irritating to have to
+        # restart the process.
+        'topic.metadata.refresh.interval.ms': 5_000
+      }.freeze
+
+      private_constant :KAFKA_DEFAULTS, :DEV_DEFAULTS
 
       # Available settings
@@ -34,8 +47,6 @@ module Karafka
         setting :token, default: false
         # option entity [String] for whom we did issue the license
         setting :entity, default: ''
-        # option expires_on [Date] date when the license expires
-        setting :expires_on, default: Date.parse('2100-01-01')
       end
 
       # option client_id [String] kafka client_id - used to provide
@@ -136,8 +147,10 @@ module Karafka
 
           Contracts::Config.new.validate!(config.to_h)
 
-          # Check the license presence (if needed) and
-          Licenser.new.verify(config.license)
+          licenser = Licenser.new
+
+          # Tries to load our license gem and if present will try to load the correct license
+          licenser.prepare_and_verify(config.license)
 
           configure_components
@@ -149,13 +162,21 @@ module Karafka
         # Propagates the kafka setting defaults unless they are already present
         # This makes it easier to set some values that users usually don't change but still allows
         # them to overwrite the whole hash if they want to
-        # @param config [Dry::Configurable::Config] dry config of this producer
+        # @param config [Karafka::Core::Configurable::Node] config of this producer
         def merge_kafka_defaults!(config)
          KAFKA_DEFAULTS.each do |key, value|
             next if config.kafka.key?(key)
 
             config.kafka[key] = value
           end
+
+          return if Karafka::App.env.production?
+
+          DEV_DEFAULTS.each do |key, value|
+            next if config.kafka.key?(key)
+
+            config.kafka[key] = value
+          end
         end
 
         # Sets up all the components that are based on the user configuration
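The merge above applies defaults non-destructively: a key the user already set is never overwritten, and the dev-only defaults are skipped entirely in production. The same logic on a plain `Hash` (a simplified sketch; the real code operates on the config node and checks `Karafka::App.env`):

```ruby
# Simplified stand-ins for the real default maps
KAFKA_DEFAULTS = { 'client.id': 'karafka' }.freeze
DEV_DEFAULTS = { 'allow.auto.create.topics': 'true' }.freeze

# Fill in defaults only for keys the user has not set themselves;
# dev conveniences are applied only outside production.
def merge_defaults(kafka, production:)
  KAFKA_DEFAULTS.each { |key, value| kafka[key] = value unless kafka.key?(key) }
  return kafka if production

  DEV_DEFAULTS.each { |key, value| kafka[key] = value unless kafka.key?(key) }
  kafka
end
```

For example, a user-provided `'client.id'` survives the merge untouched, while a missing one is filled with `'karafka'`.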
@@ -3,5 +3,5 @@
 # Main module namespace
 module Karafka
   # Current Karafka version
-  VERSION = '2.0.0.rc3'
+  VERSION = '2.0.0.rc6'
 end
data/lib/karafka.rb CHANGED
@@ -1,6 +1,7 @@
 # frozen_string_literal: true
 
 %w[
+  karafka-core
   delegate
   English
   rdkafka
@@ -12,8 +13,6 @@
   openssl
   base64
   date
-  dry/events/publisher
-  dry/monitor/notifications
   zeitwerk
 ].each(&method(:require))
 
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: karafka
 version: !ruby/object:Gem::Version
-  version: 2.0.0.rc3
+  version: 2.0.0.rc6
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -34,36 +34,42 @@ cert_chain:
   R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
   pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
   -----END CERTIFICATE-----
-date: 2022-07-26 00:00:00.000000000 Z
+date: 2022-08-05 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
-  name: dry-monitor
+  name: karafka-core
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "~>"
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '0.5'
+        version: 2.0.2
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: 3.0.0
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "~>"
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '0.5'
+        version: 2.0.2
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: 3.0.0
 - !ruby/object:Gem::Dependency
   name: rdkafka
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: '0.10'
+        version: '0.12'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: '0.10'
+        version: '0.12'
 - !ruby/object:Gem::Dependency
   name: thor
   requirement: !ruby/object:Gem::Requirement
@@ -84,7 +90,7 @@ dependencies:
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.3.3
+      version: 2.4.1
   - - "<"
     - !ruby/object:Gem::Version
       version: 3.0.0
@@ -94,7 +100,7 @@ dependencies:
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.3.3
+      version: 2.4.1
   - - "<"
     - !ruby/object:Gem::Version
       version: 3.0.0
@@ -192,6 +198,7 @@ files:
 - lib/karafka/instrumentation/logger.rb
 - lib/karafka/instrumentation/logger_listener.rb
 - lib/karafka/instrumentation/monitor.rb
+- lib/karafka/instrumentation/notifications.rb
 - lib/karafka/instrumentation/proctitle_listener.rb
 - lib/karafka/instrumentation/vendors/datadog/dashboard.json
 - lib/karafka/instrumentation/vendors/datadog/listener.rb
@@ -277,7 +284,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.6.0
+      version: 2.7.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">"
metadata.gz.sig CHANGED
Binary file