karafka 2.0.0 → 2.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 06f96b3ba14b7910d3cb23a90bdc135927f1483d5af4d079d59b3d6940d391b9
-  data.tar.gz: 2d936d4ddac360e229004c05f6ee51d004018e3c4cf4203067a99e6df93df92e
+  metadata.gz: 5adfb5654381a04d5b7111806bd5bbba70f75f02e7044b57af09b91b547dbd09
+  data.tar.gz: 9500da87035507a037218e9df5a11c8803e738c707b5d624c19559917939b689
 SHA512:
-  metadata.gz: caeb9bcf1f0301f31442025176f3da631aa8ff9c21164e112b9ce3112372309d4b7fe59863cf7cf511e6a6bc4448f61448f3bc2024b101d4ae2668a358e5bbe9
-  data.tar.gz: cfc422fae74512c2142ef36f9d4b5c3024bf318d79dc83456e27553aebc177cec7a3772f6378160c8ecdf5a3b2626d862ba91d823fae6e30cbb593d4198474a9
+  metadata.gz: 1acb20378ecdf95b87378297de714f6791f5abd3d411fbda4701231912e30033163c3b81a21f98a99d2ca247dbbe8cda72d81efc7b1d83a3aa019485dd2e8604
+  data.tar.gz: 2a8d352d68852da005c87b0494457abf3559ec3975deb1098da233e16e4e33fed517020733300b2364ad650cc3764d59deb7fff6efe63e216e4fdb057c3e4ed8
checksums.yaml.gz.sig CHANGED
Binary file
@@ -73,10 +73,6 @@ jobs:
           ruby-version: ${{matrix.ruby}}
           bundler-cache: true
 
-      - name: Ensure all needed Kafka topics are created and wait if not
-        run: |
-          bin/wait_for_kafka
-
       - name: Run all specs
         env:
           GITHUB_COVERAGE: ${{matrix.coverage}}
@@ -120,10 +116,6 @@ jobs:
           bundle config set without development
           bundle install
 
-      - name: Ensure all needed Kafka topics are created and wait if not
-        run: |
-          bin/wait_for_kafka
-
       - name: Run integration tests
         env:
           KARAFKA_PRO_LICENSE_TOKEN: ${{ secrets.KARAFKA_PRO_LICENSE_TOKEN }}
data/CHANGELOG.md CHANGED
@@ -1,6 +1,12 @@
 # Karafka framework changelog
 
-## 2.0.0 (2022-08-5)
+## 2.0.1 (2022-08-06)
+- Provide `Karafka::Admin` for creation and destruction of topics and fetching cluster info.
+- Update integration specs to always use one-time disposable topics.
+- Remove no longer needed `wait_for_kafka` script.
+- Add more integration specs to cover offset management upon errors.
+
+## 2.0.0 (2022-08-05)
 
 This changelog describes changes between `1.4` and `2.0`. Please refer to appropriate release notes for changes between particular `rc` releases.
 
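The one-time disposable topics mentioned in the changelog presumably work by giving each spec a uniquely named topic that is created before and deleted after the run, so specs never collide on shared, pre-created topics. A minimal sketch of such unique-name generation (the `it` prefix and name format are illustrative assumptions, not Karafka's actual convention):

```ruby
require 'securerandom'

# Builds a unique, disposable topic name so parallel or repeated spec runs
# never collide on a shared topic (the naming scheme here is an assumption).
def disposable_topic_name(prefix = 'it')
  "#{prefix}-#{SecureRandom.hex(6)}"
end

a = disposable_topic_name
b = disposable_topic_name
# a and b differ, e.g. "it-3f9a1c0d22ab"
```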
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    karafka (2.0.0)
+    karafka (2.0.1)
       karafka-core (>= 2.0.2, < 3.0.0)
       rdkafka (>= 0.12)
       thor (>= 0.20)
data/README.md CHANGED
@@ -4,8 +4,6 @@
 [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
 [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
 
-**Note**: All of the documentation here refers to Karafka `2.0.0` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
-
 ## About Karafka
 
 Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:
data/docker-compose.yml CHANGED
@@ -16,36 +16,7 @@ services:
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
       KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
       KAFKA_CREATE_TOPICS:
-        "integrations_00_02:2:1,\
-        integrations_01_02:2:1,\
-        integrations_02_02:2:1,\
-        integrations_03_02:2:1,\
-        integrations_04_02:2:1,\
-        integrations_05_02:2:1,\
-        integrations_06_02:2:1,\
-        integrations_07_02:2:1,\
-        integrations_08_02:2:1,\
-        integrations_09_02:2:1,\
-        integrations_10_02:2:1,\
-        integrations_11_02:2:1,\
-        integrations_12_02:2:1,\
-        integrations_13_02:2:1,\
-        integrations_14_02:2:1,\
-        integrations_15_02:2:1,\
-        integrations_16_02:2:1,\
-        integrations_17_02:2:1,\
-        integrations_18_02:2:1,\
-        integrations_19_02:2:1,\
-        integrations_20_02:2:1,\
-        integrations_21_02:2:1,\
-        integrations_00_03:3:1,\
-        integrations_01_03:3:1,\
-        integrations_02_03:3:1,\
-        integrations_03_03:3:1,\
-        integrations_04_03:3:1,\
-        integrations_00_10:10:1,\
-        integrations_01_10:10:1,\
-        benchmarks_00_01:1:1,\
+        "benchmarks_00_01:1:1,\
         benchmarks_00_05:5:1,\
         benchmarks_01_05:5:1,\
         benchmarks_00_10:10:1"
data/lib/karafka/admin.rb ADDED
@@ -0,0 +1,57 @@
+# frozen_string_literal: true
+
+module Karafka
+  # Simple admin actions that we can perform via Karafka on our Kafka cluster
+  #
+  # @note It always initializes a new admin instance as we want to ensure it is always closed
+  #   Since admin actions are not performed that often, that should be ok.
+  #
+  # @note It always uses the primary defined cluster and does not support multi-cluster work.
+  #   If you need this, just replace the cluster info for the time you use this
+  class Admin
+    class << self
+      # Creates Kafka topic with given settings
+      #
+      # @param name [String] topic name
+      # @param partitions [Integer] number of partitions we expect
+      # @param replication_factor [Integer] number of replicas
+      # @param topic_config [Hash] topic config details as described here:
+      #   https://kafka.apache.org/documentation/#topicconfigs
+      def create_topic(name, partitions, replication_factor, topic_config = {})
+        with_admin do |admin|
+          admin
+            .create_topic(name, partitions, replication_factor, topic_config)
+            .wait
+        end
+      end
+
+      # Deletes a given topic
+      #
+      # @param name [String] topic name
+      def delete_topic(name)
+        with_admin do |admin|
+          admin
+            .delete_topic(name)
+            .wait
+        end
+      end
+
+      # @return [Rdkafka::Metadata] cluster metadata info
+      def cluster_info
+        with_admin do |admin|
+          Rdkafka::Metadata.new(admin.instance_variable_get('@native_kafka'))
+        end
+      end
+
+      private
+
+      # Creates admin instance and yields it. After usage it closes the admin instance
+      def with_admin
+        admin = ::Rdkafka::Config.new(Karafka::App.config.kafka).admin
+        result = yield(admin)
+        admin.close
+        result
+      end
+    end
+  end
+end
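The `with_admin` helper above builds a throwaway admin client, yields it to the block, closes it, and returns the block's result. A minimal sketch of that acquire/yield/close lifecycle, using a hypothetical `StubAdmin` in place of the rdkafka admin handle (no Kafka cluster assumed):

```ruby
# Stub standing in for the real rdkafka admin handle.
class StubAdmin
  attr_reader :closed

  def initialize
    @closed = false
  end

  def close
    @closed = true
  end
end

# Mirrors the lifecycle of Karafka::Admin#with_admin: build, yield,
# close, then return whatever the block produced.
def with_admin
  admin = StubAdmin.new # in Karafka: ::Rdkafka::Config.new(...).admin
  result = yield(admin)
  admin.close
  result
end

captured = nil
result = with_admin { |a| captured = a; :done }
# captured.closed is now true; result is :done
```

Note that because there is no `ensure`, the client is not closed if the block raises; wrapping the yield in `begin … ensure admin.close end` would make the cleanup exception-safe.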
data/lib/karafka/version.rb CHANGED
@@ -3,5 +3,5 @@
 # Main module namespace
 module Karafka
   # Current Karafka version
-  VERSION = '2.0.0'
+  VERSION = '2.0.1'
 end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: karafka
 version: !ruby/object:Gem::Version
-  version: 2.0.0
+  version: 2.0.1
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -34,7 +34,7 @@ cert_chain:
   R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
   pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
   -----END CERTIFICATE-----
-date: 2022-08-05 00:00:00.000000000 Z
+date: 2022-08-06 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: karafka-core
@@ -152,7 +152,6 @@ files:
 - bin/scenario
 - bin/stress_many
 - bin/stress_one
-- bin/wait_for_kafka
 - certs/karafka-pro.pem
 - certs/mensfeld.pem
 - config/errors.yml
@@ -166,6 +165,7 @@ files:
 - lib/karafka/active_job/job_extensions.rb
 - lib/karafka/active_job/job_options_contract.rb
 - lib/karafka/active_job/routing/extensions.rb
+- lib/karafka/admin.rb
 - lib/karafka/app.rb
 - lib/karafka/base_consumer.rb
 - lib/karafka/cli.rb
metadata.gz.sig CHANGED
Binary file
data/bin/wait_for_kafka DELETED
@@ -1,20 +0,0 @@
-#!/bin/bash
-
-# This script allows us to wait for Kafka docker to fully be ready
-# We consider it fully ready when all our topics that need to be created are created as expected
-
-KAFKA_NAME='karafka_20_kafka'
-ZOOKEEPER='zookeeper:2181'
-LIST_CMD="kafka-topics.sh --list --zookeeper $ZOOKEEPER"
-
-# Take the number of topics that we need to create prior to running anything
-TOPICS_COUNT=`cat docker-compose.yml | grep -E -i 'integrations_|benchmarks_' | wc -l`
-
-# And wait until all of them are created
-until (((`docker exec $KAFKA_NAME $LIST_CMD | wc -l`) >= $TOPICS_COUNT));
-do
-  echo "Waiting for Kafka to create all the needed topics..."
-  sleep 1
-done
-
-echo "All the needed topics created."
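The deleted script looped until the broker reported the expected number of topics. Its generic poll-until-ready pattern can be sketched in Ruby, with the block standing in for the real `kafka-topics.sh` count check (timeout and interval values are arbitrary):

```ruby
# Polls the given block until it returns truthy, raising if the
# deadline passes first; the block replaces the script's topic-count check.
def wait_until(timeout: 5, interval: 0.01)
  deadline = Time.now + timeout
  until yield
    raise 'timed out waiting for condition' if Time.now > deadline
    sleep interval
  end
end

attempts = 0
wait_until { (attempts += 1) >= 3 }
# the condition first holds on the third check, so attempts == 3
```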