karafka 2.2.13 → 2.3.0.alpha1

Files changed (125)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +0 -0
  3. data/.github/workflows/ci.yml +38 -12
  4. data/.ruby-version +1 -1
  5. data/CHANGELOG.md +161 -125
  6. data/Gemfile.lock +12 -12
  7. data/README.md +0 -2
  8. data/SECURITY.md +23 -0
  9. data/config/locales/errors.yml +7 -1
  10. data/config/locales/pro_errors.yml +22 -0
  11. data/docker-compose.yml +3 -1
  12. data/karafka.gemspec +2 -2
  13. data/lib/karafka/admin/acl.rb +287 -0
  14. data/lib/karafka/admin.rb +118 -16
  15. data/lib/karafka/app.rb +12 -3
  16. data/lib/karafka/base_consumer.rb +32 -31
  17. data/lib/karafka/cli/base.rb +1 -1
  18. data/lib/karafka/connection/client.rb +94 -84
  19. data/lib/karafka/connection/conductor.rb +28 -0
  20. data/lib/karafka/connection/listener.rb +165 -46
  21. data/lib/karafka/connection/listeners_batch.rb +5 -11
  22. data/lib/karafka/connection/manager.rb +72 -0
  23. data/lib/karafka/connection/messages_buffer.rb +12 -0
  24. data/lib/karafka/connection/proxy.rb +17 -0
  25. data/lib/karafka/connection/status.rb +75 -0
  26. data/lib/karafka/contracts/config.rb +14 -10
  27. data/lib/karafka/contracts/consumer_group.rb +9 -1
  28. data/lib/karafka/contracts/topic.rb +3 -1
  29. data/lib/karafka/errors.rb +13 -0
  30. data/lib/karafka/instrumentation/assignments_tracker.rb +96 -0
  31. data/lib/karafka/instrumentation/callbacks/rebalance.rb +10 -7
  32. data/lib/karafka/instrumentation/logger_listener.rb +3 -9
  33. data/lib/karafka/instrumentation/notifications.rb +19 -9
  34. data/lib/karafka/instrumentation/vendors/appsignal/metrics_listener.rb +31 -28
  35. data/lib/karafka/instrumentation/vendors/datadog/logger_listener.rb +22 -3
  36. data/lib/karafka/instrumentation/vendors/datadog/metrics_listener.rb +15 -12
  37. data/lib/karafka/instrumentation/vendors/kubernetes/liveness_listener.rb +39 -36
  38. data/lib/karafka/pro/base_consumer.rb +47 -0
  39. data/lib/karafka/pro/connection/manager.rb +300 -0
  40. data/lib/karafka/pro/connection/multiplexing/listener.rb +40 -0
  41. data/lib/karafka/pro/instrumentation/performance_tracker.rb +85 -0
  42. data/lib/karafka/pro/iterator/tpl_builder.rb +1 -1
  43. data/lib/karafka/pro/iterator.rb +1 -6
  44. data/lib/karafka/pro/loader.rb +16 -2
  45. data/lib/karafka/pro/processing/coordinator.rb +2 -1
  46. data/lib/karafka/pro/processing/executor.rb +37 -0
  47. data/lib/karafka/pro/processing/expansions_selector.rb +32 -0
  48. data/lib/karafka/pro/processing/jobs/periodic.rb +41 -0
  49. data/lib/karafka/pro/processing/jobs/periodic_non_blocking.rb +32 -0
  50. data/lib/karafka/pro/processing/jobs_builder.rb +14 -3
  51. data/lib/karafka/pro/processing/offset_metadata/consumer.rb +44 -0
  52. data/lib/karafka/pro/processing/offset_metadata/fetcher.rb +131 -0
  53. data/lib/karafka/pro/processing/offset_metadata/listener.rb +46 -0
  54. data/lib/karafka/pro/processing/schedulers/base.rb +143 -0
  55. data/lib/karafka/pro/processing/schedulers/default.rb +107 -0
  56. data/lib/karafka/pro/processing/strategies/aj/lrj_mom_vp.rb +1 -1
  57. data/lib/karafka/pro/processing/strategies/default.rb +136 -3
  58. data/lib/karafka/pro/processing/strategies/dlq/default.rb +35 -0
  59. data/lib/karafka/pro/processing/strategies/lrj/default.rb +1 -1
  60. data/lib/karafka/pro/processing/strategies/lrj/mom.rb +1 -1
  61. data/lib/karafka/pro/processing/strategies/vp/default.rb +60 -26
  62. data/lib/karafka/pro/processing/virtual_offset_manager.rb +41 -11
  63. data/lib/karafka/pro/routing/features/long_running_job/topic.rb +2 -0
  64. data/lib/karafka/pro/routing/features/multiplexing/config.rb +38 -0
  65. data/lib/karafka/pro/routing/features/multiplexing/contracts/topic.rb +114 -0
  66. data/lib/karafka/pro/routing/features/multiplexing/patches/contracts/consumer_group.rb +42 -0
  67. data/lib/karafka/pro/routing/features/multiplexing/proxy.rb +38 -0
  68. data/lib/karafka/pro/routing/features/multiplexing/subscription_group.rb +42 -0
  69. data/lib/karafka/pro/routing/features/multiplexing/subscription_groups_builder.rb +40 -0
  70. data/lib/karafka/pro/routing/features/multiplexing.rb +59 -0
  71. data/lib/karafka/pro/routing/features/non_blocking_job/topic.rb +32 -0
  72. data/lib/karafka/pro/routing/features/non_blocking_job.rb +37 -0
  73. data/lib/karafka/pro/routing/features/offset_metadata/config.rb +33 -0
  74. data/lib/karafka/pro/routing/features/offset_metadata/contracts/topic.rb +42 -0
  75. data/lib/karafka/pro/routing/features/offset_metadata/topic.rb +65 -0
  76. data/lib/karafka/pro/routing/features/offset_metadata.rb +40 -0
  77. data/lib/karafka/pro/routing/features/patterns/contracts/consumer_group.rb +4 -0
  78. data/lib/karafka/pro/routing/features/patterns/detector.rb +18 -10
  79. data/lib/karafka/pro/routing/features/periodic_job/config.rb +37 -0
  80. data/lib/karafka/pro/routing/features/periodic_job/contracts/topic.rb +44 -0
  81. data/lib/karafka/pro/routing/features/periodic_job/topic.rb +94 -0
  82. data/lib/karafka/pro/routing/features/periodic_job.rb +27 -0
  83. data/lib/karafka/pro/routing/features/virtual_partitions/config.rb +1 -0
  84. data/lib/karafka/pro/routing/features/virtual_partitions/contracts/topic.rb +1 -0
  85. data/lib/karafka/pro/routing/features/virtual_partitions/topic.rb +7 -2
  86. data/lib/karafka/process.rb +5 -3
  87. data/lib/karafka/processing/coordinator.rb +5 -1
  88. data/lib/karafka/processing/executor.rb +43 -13
  89. data/lib/karafka/processing/executors_buffer.rb +22 -7
  90. data/lib/karafka/processing/jobs/base.rb +19 -2
  91. data/lib/karafka/processing/jobs/consume.rb +3 -3
  92. data/lib/karafka/processing/jobs/idle.rb +5 -0
  93. data/lib/karafka/processing/jobs/revoked.rb +5 -0
  94. data/lib/karafka/processing/jobs/shutdown.rb +5 -0
  95. data/lib/karafka/processing/jobs_queue.rb +19 -8
  96. data/lib/karafka/processing/schedulers/default.rb +42 -0
  97. data/lib/karafka/processing/strategies/base.rb +13 -4
  98. data/lib/karafka/processing/strategies/default.rb +23 -7
  99. data/lib/karafka/processing/strategies/dlq.rb +36 -0
  100. data/lib/karafka/processing/worker.rb +4 -1
  101. data/lib/karafka/routing/builder.rb +12 -2
  102. data/lib/karafka/routing/consumer_group.rb +5 -5
  103. data/lib/karafka/routing/features/base.rb +44 -8
  104. data/lib/karafka/routing/features/dead_letter_queue/config.rb +6 -1
  105. data/lib/karafka/routing/features/dead_letter_queue/contracts/topic.rb +1 -0
  106. data/lib/karafka/routing/features/dead_letter_queue/topic.rb +9 -2
  107. data/lib/karafka/routing/proxy.rb +4 -3
  108. data/lib/karafka/routing/subscription_group.rb +2 -2
  109. data/lib/karafka/routing/subscription_groups_builder.rb +11 -2
  110. data/lib/karafka/routing/topic.rb +8 -10
  111. data/lib/karafka/routing/topics.rb +1 -1
  112. data/lib/karafka/runner.rb +13 -3
  113. data/lib/karafka/server.rb +5 -9
  114. data/lib/karafka/setup/config.rb +21 -1
  115. data/lib/karafka/status.rb +23 -14
  116. data/lib/karafka/templates/karafka.rb.erb +7 -0
  117. data/lib/karafka/time_trackers/partition_usage.rb +56 -0
  118. data/lib/karafka/version.rb +1 -1
  119. data.tar.gz.sig +0 -0
  120. metadata +47 -13
  121. metadata.gz.sig +0 -0
  122. data/lib/karafka/connection/consumer_group_coordinator.rb +0 -48
  123. data/lib/karafka/pro/performance_tracker.rb +0 -84
  124. data/lib/karafka/pro/processing/scheduler.rb +0 -74
  125. data/lib/karafka/processing/scheduler.rb +0 -38
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 4056d72f0d37ac46c52597ebcfed87de031f9f250d57a64ec5c665d3423a3087
- data.tar.gz: 95aeab42e351043873d548a5289e8355fe48fa7b7f27aaf1549a220c76eac9c1
+ metadata.gz: de7ea23762cefa19d5f3620e92a39f0030cd8ff78f318c92f30c494d79b78163
+ data.tar.gz: 775cfbd40d181036004dcf72dbcb84394dc8367bed0f6d69812f2324dc179d6f
  SHA512:
- metadata.gz: 8e41da4dff00dc3cb9749874568a275cdad81b7a762182cee7ea497bfe373dd1b3f777dd40638d0c30ff13f50c5913cdcad175edcc8b9b36a3e26fb5658fc986
- data.tar.gz: 738352dea20404d42a80340c2fc27359d54185565e8069f8245662e02d33c8630ce7922c3938b06b07e5587bd007342c65439229484ed529ae050e356872f150
+ metadata.gz: d68a4122a35afad517e4280b94f6f3d7cb3cab94fb37c11729e5e5c7a7aca082a7a272a52ff09a86e2f55ad0e078e234c88be79ce3b730527a2f6e7629ef259c
+ data.tar.gz: aa2ddb108cc39caa8ad5c95a86d07006b5be374647e703414a7761ffd5c333010d7e53b9fd2c42216780e35f00639d8cf80126c291a59d0585424077840cc6b5
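These checksum entries are reproducible: a `.gem` file is a tar archive holding `metadata.gz` and `data.tar.gz`, and `checksums.yaml` records their SHA256 and SHA512 hex digests (64 and 128 characters respectively). A minimal Ruby sketch of the digest shapes, using a stand-in payload rather than the real gem contents:

```ruby
require 'digest'

# Stand-in payload for illustration; in a real verification you would untar the
# .gem and hash the extracted metadata.gz / data.tar.gz files instead.
payload = 'stand-in payload'

Digest::SHA256.hexdigest(payload).length # => 64, like the SHA256 entries above
Digest::SHA512.hexdigest(payload).length # => 128, like the SHA512 entries above
```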
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/workflows/ci.yml CHANGED
@@ -27,7 +27,7 @@ jobs:
  - name: Set up Ruby
  uses: ruby/setup-ruby@v1
  with:
- ruby-version: 3.2
+ ruby-version: 3.3
  bundler-cache: true
 
  - name: Install Diffend plugin
@@ -73,7 +73,7 @@ jobs:
  fail-fast: false
  matrix:
  ruby:
- - '3.3.0-preview2'
+ - '3.3'
  - '3.2'
  # We run it against the oldest and the newest of a given major to make sure, that there
  # are no syntax-sugars that we would use that were introduced down the road
@@ -82,9 +82,8 @@ jobs:
  - '3.0'
  - '3.0.0'
  - '2.7'
- - '2.7.0'
  include:
- - ruby: '3.2'
+ - ruby: '3.3'
  coverage: 'true'
  steps:
  - uses: actions/checkout@v4
@@ -100,6 +99,7 @@ jobs:
  with:
  ruby-version: ${{matrix.ruby}}
  bundler-cache: true
+ bundler: 'latest'
 
  - name: Wait for Kafka
  run: |
@@ -118,7 +118,7 @@ jobs:
  fail-fast: false
  matrix:
  ruby:
- - '3.3.0-preview2'
+ - '3.3'
  - '3.2'
  - '3.1'
  - '3.0'
@@ -143,17 +143,30 @@ jobs:
  #
  # We also want to check that librdkafka is compiling as expected on all versions of Ruby
  ruby-version: ${{matrix.ruby}}
+ bundler: 'latest'
 
  - name: Install latest Bundler
  run: |
- gem install bundler --no-document
- gem update --system --no-document
+ if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
+ gem install bundler -v 2.4.22 --no-document
+ bundle config set version 2.4.22
+ gem update --system 3.4.22 --no-document
+ else
+ gem install bundler --no-document
+ gem update --system --no-document
+ fi
+
  bundle config set without 'tools benchmarks docs'
 
  - name: Bundle install
  run: |
  bundle config set without development
- bundle install
+
+ if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
+ BUNDLER_VERSION=2.4.22 bundle install --jobs 4 --retry 3
+ else
+ bundle install --jobs 4 --retry 3
+ fi
 
  - name: Wait for Kafka
  run: |
@@ -170,7 +183,7 @@ jobs:
  fail-fast: false
  matrix:
  ruby:
- - '3.3.0-preview2'
+ - '3.3'
  - '3.2'
  - '3.1'
  - '3.0'
@@ -188,17 +201,30 @@ jobs:
  uses: ruby/setup-ruby@v1
  with:
  ruby-version: ${{matrix.ruby}}
+ bundler: 'latest'
 
  - name: Install latest Bundler
  run: |
- gem install bundler --no-document
- gem update --system --no-document
+ if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
+ gem install bundler -v 2.4.22 --no-document
+ bundle config set version 2.4.22
+ gem update --system 3.4.22 --no-document
+ else
+ gem install bundler --no-document
+ gem update --system --no-document
+ fi
+
  bundle config set without 'tools benchmarks docs'
 
  - name: Bundle install
  run: |
  bundle config set without development
- bundle install
+
+ if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
+ BUNDLER_VERSION=2.4.22 bundle install --jobs 4 --retry 3
+ else
+ bundle install --jobs 4 --retry 3
+ fi
 
  - name: Wait for Kafka
  run: |
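The Bundler pinning added in this workflow follows one rule: Ruby 2.7.8 jobs stay on Bundler 2.4.22 (the last line compatible with Ruby 2.7, since Bundler 2.5 requires Ruby 3.0) while newer Rubies take the latest release. A Ruby sketch of that gate (the `pick_bundler` helper is illustrative, not part of the workflow, which implements the same check in shell):

```ruby
# Illustrative port of the workflow's shell gate. The `== 2.7.8*` glob in the
# workflow matches any patch-level string such as "2.7.8p225".
def pick_bundler(ruby_version)
  ruby_version.start_with?('2.7.8') ? '2.4.22' : 'latest'
end

pick_bundler('2.7.8p225') # => "2.4.22" (old Ruby: pinned Bundler)
pick_bundler('3.3.0')     # => "latest" (modern Ruby: latest Bundler)
```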
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.2.2
+ 3.3.0
data/CHANGELOG.md CHANGED
@@ -1,26 +1,62 @@
  # Karafka framework changelog
 
+ ## 2.3.0 (Unreleased)
+ - **[Feature]** Provide ability to multiplex subscription groups (Pro)
+ - **[Feature]** Provide `Karafka::Admin::Acl` for Kafka ACL management via the Admin APIs.
+ - **[Feature]** Periodic Jobs (Pro)
+ - **[Feature]** Offset Metadata storage (Pro)
+ - **[Feature]** Provide low-level listeners management API for dynamic resources scaling (Pro)
+ - [Enhancement] Improve shutdown process by allowing for parallel connections shutdown.
+ - [Enhancement] Introduce `non_blocking` routing API that aliases LRJ to indicate a different use-case for LRJ flow approach.
+ - [Enhancement] Allow to reset offset when seeking backwards by using the `reset_offset` keyword attribute set to `true`.
+ - [Enhancement] Alias producer operations in consumer to skip `#producer` reference.
+ - [Enhancement] Provide an `:independent` configuration to DLQ allowing to reset pause count track on each marking as consumed when retrying.
+ - [Enhancement] Remove no longer needed shutdown patches for `librdkafka` improving multi-sg shutdown times for `cooperative-sticky`.
+ - [Enhancement] Allow for parallel closing of connections from independent consumer groups.
+ - [Change] Make `Kubernetes::LivenessListener` not start until Karafka app starts running.
+ - [Change] Remove the legacy "inside of topics" way of defining subscription groups names
+ - [Change] Update supported instrumentation to report on `#tick`.
+ - [Refactor] Replace `define_method` with `class_eval` in some locations.
+ - [Fix] Fix a case where internal Idle job scheduling would go via the consumption flow.
+ - [Fix] Make the Iterator `#stop_partition` work with karafka-rdkafka `0.14.6`.
+ - [Fix] Ensure Pro components are not loaded during OSS specs execution (not affecting usage).
+ - [Fix] Fix invalid action label for consumers in DataDog logger instrumentation.
+ - [Ignore] option --include-consumer-groups not working as intended after removal of "thor"
+
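Among the 2.3.0 entries above, `Karafka::Admin::Acl` wraps Kafka's ACL model. As a hedged sketch (attribute names follow Karafka's Admin ACL documentation; verify against your installed version), an ACL reduces to a handful of resource, principal, and permission attributes. The actual `Karafka::Admin::Acl` calls are left in comments since they need Karafka loaded and a reachable Kafka cluster:

```ruby
# Attributes an ACL needs; names mirror the Kafka ACL model
# (resource + pattern type + principal + host + operation + permission).
acl_attributes = {
  resource_type: :topic,
  resource_name: 'orders_states',      # hypothetical topic name
  resource_pattern_type: :literal,
  principal: 'User:my_app',            # hypothetical principal
  host: '*',
  operation: :read,
  permission_type: :allow
}

# With Karafka >= 2.3 loaded and a cluster reachable, this sketch would become:
#   acl = Karafka::Admin::Acl.new(**acl_attributes)
#   Karafka::Admin::Acl.create(acl)
#   Karafka::Admin::Acl.all # lists ACLs, including the one above
acl_attributes.fetch(:operation) # => :read
```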
+ ## 2.2.14 (2023-12-07)
27
+ - **[Feature]** Provide `Karafka::Admin#delete_consumer_group` and `Karafka::Admin#seek_consumer_group`.
28
+ - **[Feature]** Provide `Karafka::App.assignments` that will return real-time assignments tracking.
29
+ - [Enhancement] Make sure that the Scheduling API is thread-safe by default and allow for lock-less schedulers when schedulers are stateless.
30
+ - [Enhancement] "Blockless" topics with defaults
31
+ - [Enhancement] Provide a `finished?` method to the jobs for advanced reference based job schedulers.
32
+ - [Enhancement] Provide `client.reset` notification event.
33
+ - [Enhancement] Remove all usage of concurrent-ruby from Karafka
34
+ - [Change] Replace single #before_schedule with appropriate methods and events for scheduling various types of work. This is needed as we may run different framework logic on those and, second, for accurate job tracking with advanced schedulers.
35
+ - [Change] Rename `before_enqueue` to `before_schedule` to reflect what it does and when (internal).
36
+ - [Change] Remove not needed error catchers for strategies code. This code if errors, should be considered critical and should not be silenced.
37
+ - [Change] Remove not used notifications events.
38
+
3
39
  ## 2.2.13 (2023-11-17)
4
40
  - **[Feature]** Introduce low-level extended Scheduling API for granular control of schedulers and jobs execution [Pro].
5
- - [Improvement] Use separate lock for user-facing synchronization.
6
- - [Improvement] Instrument `consumer.before_enqueue`.
7
- - [Improvement] Limit usage of `concurrent-ruby` (plan to remove it as a dependency fully)
8
- - [Improvement] Provide `#synchronize` API same as in VPs for LRJs to allow for lifecycle events and consumption synchronization.
41
+ - [Enhancement] Use separate lock for user-facing synchronization.
42
+ - [Enhancement] Instrument `consumer.before_enqueue`.
43
+ - [Enhancement] Limit usage of `concurrent-ruby` (plan to remove it as a dependency fully)
44
+ - [Enhancement] Provide `#synchronize` API same as in VPs for LRJs to allow for lifecycle events and consumption synchronization.
9
45
 
10
46
  ## 2.2.12 (2023-11-09)
11
- - [Improvement] Rewrite the polling engine to update statistics and error callbacks despite longer non LRJ processing or long `max_wait_time` setups. This change provides stability to the statistics and background error emitting making them time-reliable.
12
- - [Improvement] Auto-update Inline Insights if new insights are present for all consumers and not only LRJ (OSS and Pro).
13
- - [Improvement] Alias `#insights` with `#inline_insights` and `#insights?` with `#inline_insights?`
47
+ - [Enhancement] Rewrite the polling engine to update statistics and error callbacks despite longer non LRJ processing or long `max_wait_time` setups. This change provides stability to the statistics and background error emitting making them time-reliable.
48
+ - [Enhancement] Auto-update Inline Insights if new insights are present for all consumers and not only LRJ (OSS and Pro).
49
+ - [Enhancement] Alias `#insights` with `#inline_insights` and `#insights?` with `#inline_insights?`
14
50
 
15
51
  ## 2.2.11 (2023-11-03)
16
- - [Improvement] Allow marking as consumed in the user `#synchronize` block.
17
- - [Improvement] Make whole Pro VP marking as consumed concurrency safe for both async and sync scenarios.
18
- - [Improvement] Provide new alias to `karafka server`, that is: `karafka consumer`.
52
+ - [Enhancement] Allow marking as consumed in the user `#synchronize` block.
53
+ - [Enhancement] Make whole Pro VP marking as consumed concurrency safe for both async and sync scenarios.
54
+ - [Enhancement] Provide new alias to `karafka server`, that is: `karafka consumer`.
19
55
 
20
56
  ## 2.2.10 (2023-11-02)
21
- - [Improvement] Allow for running `#pause` without specifying the offset (provide offset or `:consecutive`). This allows for pausing on the consecutive message (last received + 1), so after resume we will get last message received + 1 effectively not using `#seek` and not purging `librdafka` buffer preserving on networking. Please be mindful that this uses notion of last message passed from **librdkafka**, and not the last one available in the consumer (`messages.last`). While for regular cases they will be the same, when using things like DLQ, LRJs, VPs or Filtering API, those may not be the same.
22
- - [Improvement] **Drastically** improve network efficiency of operating with LRJ by using the `:consecutive` offset as default strategy for running LRJs without moving the offset in place and purging the data.
23
- - [Improvement] Do not "seek in place". When pausing and/or seeking to the same location as the current position, do nothing not to purge buffers and not to move to the same place where we are.
57
+ - [Enhancement] Allow for running `#pause` without specifying the offset (provide offset or `:consecutive`). This allows for pausing on the consecutive message (last received + 1), so after resume we will get last message received + 1 effectively not using `#seek` and not purging `librdafka` buffer preserving on networking. Please be mindful that this uses notion of last message passed from **librdkafka**, and not the last one available in the consumer (`messages.last`). While for regular cases they will be the same, when using things like DLQ, LRJs, VPs or Filtering API, those may not be the same.
58
+ - [Enhancement] **Drastically** improve network efficiency of operating with LRJ by using the `:consecutive` offset as default strategy for running LRJs without moving the offset in place and purging the data.
59
+ - [Enhancement] Do not "seek in place". When pausing and/or seeking to the same location as the current position, do nothing not to purge buffers and not to move to the same place where we are.
24
60
  - [Fix] Pattern regexps should not be part of declaratives even when configured.
25
61
 
26
62
  ### Upgrade Notes
@@ -28,13 +64,13 @@
28
64
  In the latest Karafka release, there are no breaking changes. However, please note the updates to #pause and #seek. If you spot any issues, please report them immediately. Your feedback is crucial.
29
65
 
30
66
  ## 2.2.9 (2023-10-24)
31
- - [Improvement] Allow using negative offset references in `Karafka::Admin#read_topic`.
67
+ - [Enhancement] Allow using negative offset references in `Karafka::Admin#read_topic`.
32
68
  - [Change] Make sure that WaterDrop `2.6.10` or higher is used with this release to support transactions fully and the Web-UI.
33
69
 
34
70
  ## 2.2.8 (2023-10-20)
35
71
  - **[Feature]** Introduce Appsignal integration for errors and metrics tracking.
36
- - [Improvement] Expose `#synchronize` for VPs to allow for locks when cross-VP consumers work is needed.
37
- - [Improvement] Provide `#collapse_until!` direct consumer API to allow for collapsed virtual partitions consumer operations together with the Filtering API for advanced use-cases.
72
+ - [Enhancement] Expose `#synchronize` for VPs to allow for locks when cross-VP consumers work is needed.
73
+ - [Enhancement] Provide `#collapse_until!` direct consumer API to allow for collapsed virtual partitions consumer operations together with the Filtering API for advanced use-cases.
38
74
  - [Refactor] Reorganize how rebalance events are propagated from `librdkafka` to Karafka. Replace `connection.client.rebalance_callback` with `rebalance.partitions_assigned` and `rebalance.partitions_revoked`. Introduce two extra events: `rebalance.partitions_assign` and `rebalance.partitions_revoke` to handle pre-rebalance future work.
39
75
  - [Refactor] Remove `thor` as a CLI layer and rely on Ruby `OptParser`
40
76
 
@@ -136,31 +172,31 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
136
172
 
137
173
  ## 2.1.9 (2023-08-06)
138
174
  - **[Feature]** Introduce ability to customize pause strategy on a per topic basis (Pro).
139
- - [Improvement] Disable the extensive messages logging in the default `karafka.rb` template.
175
+ - [Enhancement] Disable the extensive messages logging in the default `karafka.rb` template.
140
176
  - [Change] Require `waterdrop` `>= 2.6.6` due to extra `LoggerListener` API.
141
177
 
142
178
  ## 2.1.8 (2023-07-29)
143
- - [Improvement] Introduce `Karafka::BaseConsumer#used?` method to indicate, that at least one invocation of `#consume` took or will take place. This can be used as a replacement to the non-direct `messages.count` check for shutdown and revocation to ensure, that the consumption took place or is taking place (in case of running LRJ).
144
- - [Improvement] Make `messages#to_a` return copy of the underlying array to prevent scenarios, where the mutation impacts offset management.
145
- - [Improvement] Mitigate a librdkafka `cooperative-sticky` rebalance crash issue.
146
- - [Improvement] Provide ability to overwrite `consumer_persistence` per subscribed topic. This is mostly useful for plugins and extensions developers.
179
+ - [Enhancement] Introduce `Karafka::BaseConsumer#used?` method to indicate, that at least one invocation of `#consume` took or will take place. This can be used as a replacement to the non-direct `messages.count` check for shutdown and revocation to ensure, that the consumption took place or is taking place (in case of running LRJ).
180
+ - [Enhancement] Make `messages#to_a` return copy of the underlying array to prevent scenarios, where the mutation impacts offset management.
181
+ - [Enhancement] Mitigate a librdkafka `cooperative-sticky` rebalance crash issue.
182
+ - [Enhancement] Provide ability to overwrite `consumer_persistence` per subscribed topic. This is mostly useful for plugins and extensions developers.
147
183
  - [Fix] Fix a case where the performance tracker would crash in case of mutation of messages to an empty state.
148
184
 
149
185
  ## 2.1.7 (2023-07-22)
150
- - [Improvement] Always query for watermarks in the Iterator to improve the initial response time.
151
- - [Improvement] Add `max_wait_time` option to the Iterator.
186
+ - [Enhancement] Always query for watermarks in the Iterator to improve the initial response time.
187
+ - [Enhancement] Add `max_wait_time` option to the Iterator.
152
188
  - [Fix] Fix a case where `Admin#read_topic` would wait for poll interval on non-existing messages instead of early exit.
153
189
  - [Fix] Fix a case where Iterator with per partition offsets with negative lookups would go below the number of available messages.
154
190
  - [Fix] Remove unused constant from Admin module.
155
191
  - [Fix] Add missing `connection.client.rebalance_callback.error` to the `LoggerListener` instrumentation hook.
156
192
 
157
193
  ## 2.1.6 (2023-06-29)
158
- - [Improvement] Provide time support for iterator
159
- - [Improvement] Provide time support for admin `#read_topic`
160
- - [Improvement] Provide time support for consumer `#seek`.
161
- - [Improvement] Remove no longer needed locks for client operations.
162
- - [Improvement] Raise `Karafka::Errors::TopicNotFoundError` when trying to iterate over non-existing topic.
163
- - [Improvement] Ensure that Kafka multi-command operations run under mutex together.
194
+ - [Enhancement] Provide time support for iterator
195
+ - [Enhancement] Provide time support for admin `#read_topic`
196
+ - [Enhancement] Provide time support for consumer `#seek`.
197
+ - [Enhancement] Remove no longer needed locks for client operations.
198
+ - [Enhancement] Raise `Karafka::Errors::TopicNotFoundError` when trying to iterate over non-existing topic.
199
+ - [Enhancement] Ensure that Kafka multi-command operations run under mutex together.
164
200
  - [Change] Require `waterdrop` `>= 2.6.2`
165
201
  - [Change] Require `karafka-core` `>= 2.1.1`
166
202
  - [Refactor] Clean-up iterator code.
@@ -172,13 +208,13 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
172
208
  - [Fix] Make sure, that `#pause` and `#resume` with one underlying connection do not race-condition.
173
209
 
174
210
  ## 2.1.5 (2023-06-19)
175
- - [Improvement] Drastically improve `#revoked?` response quality by checking the real time assignment lost state on librdkafka.
176
- - [Improvement] Improve eviction of saturated jobs that would run on already revoked assignments.
177
- - [Improvement] Expose `#commit_offsets` and `#commit_offsets!` methods in the consumer to provide ability to commit offsets directly to Kafka without having to mark new messages as consumed.
178
- - [Improvement] No longer skip offset commit when no messages marked as consumed as `librdkafka` has fixed the crashes there.
179
- - [Improvement] Remove no longer needed patches.
180
- - [Improvement] Ensure, that the coordinator revocation status is switched upon revocation detection when using `#revoked?`
181
- - [Improvement] Add benchmarks for marking as consumed (sync and async).
211
+ - [Enhancement] Drastically improve `#revoked?` response quality by checking the real time assignment lost state on librdkafka.
212
+ - [Enhancement] Improve eviction of saturated jobs that would run on already revoked assignments.
213
+ - [Enhancement] Expose `#commit_offsets` and `#commit_offsets!` methods in the consumer to provide ability to commit offsets directly to Kafka without having to mark new messages as consumed.
214
+ - [Enhancement] No longer skip offset commit when no messages marked as consumed as `librdkafka` has fixed the crashes there.
215
+ - [Enhancement] Remove no longer needed patches.
216
+ - [Enhancement] Ensure, that the coordinator revocation status is switched upon revocation detection when using `#revoked?`
217
+ - [Enhancement] Add benchmarks for marking as consumed (sync and async).
182
218
  - [Change] Require `karafka-core` `>= 2.1.0`
183
219
  - [Change] Require `waterdrop` `>= 2.6.1`
184
220
 
@@ -202,12 +238,12 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
202
238
  - **[Feature]** Provide ability to use CurrentAttributes with ActiveJob's Karafka adapter (federicomoretti).
203
239
  - **[Feature]** Introduce collective Virtual Partitions offset management.
204
240
  - **[Feature]** Use virtual offsets to filter out messages that would be re-processed upon retries.
205
- - [Improvement] No longer break processing on failing parallel virtual partitions in ActiveJob because it is compensated by virtual marking.
206
- - [Improvement] Always use Virtual offset management for Pro ActiveJobs.
207
- - [Improvement] Do not attempt to mark offsets on already revoked partitions.
208
- - [Improvement] Make sure, that VP components are not injected into non VP strategies.
209
- - [Improvement] Improve complex strategies inheritance flow.
210
- - [Improvement] Optimize offset management for DLQ + MoM feature combinations.
241
+ - [Enhancement] No longer break processing on failing parallel virtual partitions in ActiveJob because it is compensated by virtual marking.
242
+ - [Enhancement] Always use Virtual offset management for Pro ActiveJobs.
243
+ - [Enhancement] Do not attempt to mark offsets on already revoked partitions.
244
+ - [Enhancement] Make sure, that VP components are not injected into non VP strategies.
245
+ - [Enhancement] Improve complex strategies inheritance flow.
246
+ - [Enhancement] Optimize offset management for DLQ + MoM feature combinations.
211
247
  - [Change] Removed `Karafka::Pro::BaseConsumer` in favor of `Karafka::BaseConsumer`. (#1345)
212
248
  - [Fix] Fix for `max_messages` and `max_wait_time` not having reference in errors.yml (#1443)
213
249
 
@@ -219,16 +255,16 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
219
255
 
220
256
  ## 2.0.41 (2023-04-19)
221
257
  - **[Feature]** Provide `Karafka::Pro::Iterator` for anonymous topic/partitions iterations and messages lookups (#1389 and #1427).
222
- - [Improvement] Optimize topic lookup for `read_topic` admin method usage.
223
- - [Improvement] Report via `LoggerListener` information about the partition on which a given job has started and finished.
224
- - [Improvement] Slightly normalize the `LoggerListener` format. Always report partition related operations as followed: `TOPIC_NAME/PARTITION`.
225
- - [Improvement] Do not retry recovery from `unknown_topic_or_part` when Karafka is shutting down as there is no point and no risk of any data losses.
226
- - [Improvement] Report `client.software.name` and `client.software.version` according to `librdkafka` recommendation.
227
- - [Improvement] Report ten longest integration specs after the suite execution.
228
- - [Improvement] Prevent user originating errors related to statistics processing after listener loop crash from potentially crashing the listener loop and hanging Karafka process.
258
+ - [Enhancement] Optimize topic lookup for `read_topic` admin method usage.
259
+ - [Enhancement] Report via `LoggerListener` information about the partition on which a given job has started and finished.
260
+ - [Enhancement] Slightly normalize the `LoggerListener` format. Always report partition related operations as followed: `TOPIC_NAME/PARTITION`.
261
+ - [Enhancement] Do not retry recovery from `unknown_topic_or_part` when Karafka is shutting down as there is no point and no risk of any data losses.
262
+ - [Enhancement] Report `client.software.name` and `client.software.version` according to `librdkafka` recommendation.
263
+ - [Enhancement] Report ten longest integration specs after the suite execution.
264
+ - [Enhancement] Prevent user originating errors related to statistics processing after listener loop crash from potentially crashing the listener loop and hanging Karafka process.
229
265
 
230
266
  ## 2.0.40 (2023-04-13)
- - [Improvement] Introduce `Karafka::Messages::Messages#empty?` method to handle Idle related cases where shutdown or revocation would be called on an empty messages set. This method allows for checking if there are any messages in the messages batch.
+ - [Enhancement] Introduce `Karafka::Messages::Messages#empty?` method to handle Idle related cases where shutdown or revocation would be called on an empty messages set. This method allows for checking if there are any messages in the messages batch.
  - [Refactor] Require messages builder to accept partition and do not fetch it from messages.
  - [Refactor] Use empty messages set for internal APIs (Idle) (so there always is `Karafka::Messages::Messages`)
  - [Refactor] Allow for empty messages set initialization with -1001 and -1 on metadata (similar to `librdkafka`)
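A sketch of how the new `#empty?` guard can be used at shutdown — the consumer class and its bookkeeping below are hypothetical, not from the Karafka sources:

```ruby
# Hypothetical consumer illustrating the #empty? guard
class EventsConsumer < Karafka::BaseConsumer
  def consume
    messages.each { |message| puts message.payload }
  end

  def shutdown
    # With Idle flows, shutdown can run on an empty messages set,
    # so guard any last-batch bookkeeping with #empty?
    return if messages.empty?

    mark_as_consumed(messages.last)
  end
end
```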
@@ -238,17 +274,17 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
  - **[Feature]** Provide Delayed Topics (#1000)
  - **[Feature]** Provide ability to expire messages (expiring topics)
  - **[Feature]** Provide ability to apply filters after messages are polled and before enqueued. This is a generic filter API for any usage.
- - [Improvement] When using ActiveJob with Virtual Partitions, Karafka will stop if collectively VPs are failing. This minimizes the number of jobs that will be collectively re-processed.
- - [Improvement] `#retrying?` method has been added to consumers to provide the ability to check that we're reprocessing data after a failure. This is useful for branching out processing based on errors.
- - [Improvement] Track active_job_id in instrumentation (#1372)
- - [Improvement] Introduce new housekeeping job type called `Idle` for non-consumption execution flows.
- - [Improvement] Change how manual offset management works with Long-Running Jobs. Use the last message offset to move forward instead of relying on the last message marked as consumed for a scenario where no message is marked.
- - [Improvement] Prioritize in Pro non-consumption jobs execution over consumption despite LJF. This will ensure that housekeeping as well as other non-consumption events are not saturated when running a lot of work.
- - [Improvement] Normalize the DLQ behaviour with MoM. Always pause on dispatch for all the strategies.
- - [Improvement] Improve the manual offset management and DLQ behaviour when no markings occur for OSS.
- - [Improvement] Do not early stop ActiveJob work running under virtual partitions to prevent extensive reprocessing.
- - [Improvement] Drastically increase the number of scenarios covered by integration specs (OSS and Pro).
- - [Improvement] Introduce a `Coordinator#synchronize` lock for cross virtual partitions operations.
+ - [Enhancement] When using ActiveJob with Virtual Partitions, Karafka will stop if collectively VPs are failing. This minimizes the number of jobs that will be collectively re-processed.
+ - [Enhancement] `#retrying?` method has been added to consumers to provide the ability to check that we're reprocessing data after a failure. This is useful for branching out processing based on errors.
+ - [Enhancement] Track active_job_id in instrumentation (#1372)
+ - [Enhancement] Introduce new housekeeping job type called `Idle` for non-consumption execution flows.
+ - [Enhancement] Change how manual offset management works with Long-Running Jobs. Use the last message offset to move forward instead of relying on the last message marked as consumed for a scenario where no message is marked.
+ - [Enhancement] Prioritize in Pro non-consumption jobs execution over consumption despite LJF. This will ensure that housekeeping as well as other non-consumption events are not saturated when running a lot of work.
+ - [Enhancement] Normalize the DLQ behaviour with MoM. Always pause on dispatch for all the strategies.
+ - [Enhancement] Improve the manual offset management and DLQ behaviour when no markings occur for OSS.
+ - [Enhancement] Do not early stop ActiveJob work running under virtual partitions to prevent extensive reprocessing.
+ - [Enhancement] Drastically increase the number of scenarios covered by integration specs (OSS and Pro).
+ - [Enhancement] Introduce a `Coordinator#synchronize` lock for cross virtual partitions operations.
  - [Fix] Do not resume partition that is not paused.
  - [Fix] Fix `LoggerListener` cases where logs would not include caller id (when available)
  - [Fix] Fix not working benchmark tests.
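The `#retrying?` branching mentioned above could look roughly like this — the consumer class and helper methods are hypothetical:

```ruby
# Hypothetical consumer branching on the retry state
class PaymentsConsumer < Karafka::BaseConsumer
  def consume
    messages.each do |message|
      # Skip non-idempotent side effects when reprocessing after a failure
      notify_billing(message) unless retrying?

      persist(message) # assumed idempotent persistence helper
    end
  end
end
```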
@@ -262,10 +298,10 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
  - [Refactor] Move `#mark_as_consumed` and `#mark_as_consumed!` into `Strategies::Default` to be able to introduce marking for virtual partitions.
 
  ## 2.0.38 (2023-03-27)
- - [Improvement] Introduce `Karafka::Admin#read_watermark_offsets` to get low and high watermark offsets values.
- - [Improvement] Track active_job_id in instrumentation (#1372)
- - [Improvement] Improve `#read_topic` reading in case of a compacted partition where the offset is below the low watermark offset. This should optimize reading and should not go beyond the low watermark offset.
- - [Improvement] Allow `#read_topic` to accept instance settings to overwrite any settings needed to customize reading behaviours.
+ - [Enhancement] Introduce `Karafka::Admin#read_watermark_offsets` to get low and high watermark offsets values.
+ - [Enhancement] Track active_job_id in instrumentation (#1372)
+ - [Enhancement] Improve `#read_topic` reading in case of a compacted partition where the offset is below the low watermark offset. This should optimize reading and should not go beyond the low watermark offset.
+ - [Enhancement] Allow `#read_topic` to accept instance settings to overwrite any settings needed to customize reading behaviours.
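A minimal usage sketch of the admin APIs above, assuming a reachable Kafka cluster and a hypothetical `events` topic:

```ruby
# Low and high watermark offsets for partition 0 of the topic
low, high = Karafka::Admin.read_watermark_offsets('events', 0)

# Read at most the last 10 available messages without subscribing;
# on a compacted partition this stays within the watermark range
messages = Karafka::Admin.read_topic('events', 0, [high - low, 10].min)
messages.each { |message| puts message.raw_payload }
```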
 
270
306
  ## 2.0.37 (2023-03-20)
271
307
  - [Fix] Declarative topics execution on a secondary cluster run topics creation on the primary one (#1365)
@@ -280,7 +316,7 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
  - **[Feature]** Allow for full topics reset and topics repartitioning via the CLI.
 
  ## 2.0.34 (2023-03-04)
- - [Improvement] Attach an `embedded` tag to Karafka processes started using the embedded API.
+ - [Enhancement] Attach an `embedded` tag to Karafka processes started using the embedded API.
  - [Change] Renamed `Datadog::Listener` to `Datadog::MetricsListener` for consistency. (#1124)
 
  ### Upgrade Notes
@@ -291,10 +327,10 @@ If you want to maintain the `2.1` behavior, that is `karafka_admin` admin group,
  - **[Feature]** Support `perform_all_later` in ActiveJob adapter for Rails `7.1+`
  - **[Feature]** Introduce ability to assign and re-assign tags in consumer instances. This can be used for extra instrumentation that is context aware.
  - **[Feature]** Introduce ability to assign and reassign tags to the `Karafka::Process`.
- - [Improvement] When using `ActiveJob` adapter, automatically tag jobs with the name of the `ActiveJob` class that is running inside of the `ActiveJob` consumer.
- - [Improvement] Make `::Karafka::Instrumentation::Notifications::EVENTS` list public for anyone wanting to re-bind those into a different notification bus.
- - [Improvement] Set `fetch.message.max.bytes` for `Karafka::Admin` to `5MB` to make sure that all data is fetched correctly for Web UI under heavy load (many consumers).
- - [Improvement] Introduce a `strict_topics_namespacing` config option to enable/disable the strict topics naming validations. This can be useful when working with pre-existing topics which we cannot or do not want to rename.
+ - [Enhancement] When using `ActiveJob` adapter, automatically tag jobs with the name of the `ActiveJob` class that is running inside of the `ActiveJob` consumer.
+ - [Enhancement] Make `::Karafka::Instrumentation::Notifications::EVENTS` list public for anyone wanting to re-bind those into a different notification bus.
+ - [Enhancement] Set `fetch.message.max.bytes` for `Karafka::Admin` to `5MB` to make sure that all data is fetched correctly for Web UI under heavy load (many consumers).
+ - [Enhancement] Introduce a `strict_topics_namespacing` config option to enable/disable the strict topics naming validations. This can be useful when working with pre-existing topics which we cannot or do not want to rename.
  - [Fix] Karafka monitor is prematurely cached (#1314)
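Disabling the new `strict_topics_namespacing` option would look like this in the app setup (a sketch — the app class name and remaining settings are assumed):

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    config.client_id = 'my_app'
    # Skip strict topic naming validations for pre-existing topics
    # that cannot or should not be renamed
    config.strict_topics_namespacing = false
  end
end
```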
 
  ### Upgrade Notes
@@ -325,39 +361,39 @@ end
 
  ## 2.0.32 (2023-02-13)
  - [Fix] Many non-existing topic subscriptions propagate poll errors beyond client
- - [Improvement] Ignore `unknown_topic_or_part` errors in dev when `allow.auto.create.topics` is on.
- - [Improvement] Optimize temporary errors handling in polling for a better backoff policy
+ - [Enhancement] Ignore `unknown_topic_or_part` errors in dev when `allow.auto.create.topics` is on.
+ - [Enhancement] Optimize temporary errors handling in polling for a better backoff policy
 
  ## 2.0.31 (2023-02-12)
  - [Feature] Allow for adding partitions via `Admin#create_partitions` API.
  - [Fix] Do not ignore admin errors upon invalid configuration (#1254)
  - [Fix] Topic name validation (#1300) - CandyFet
- - [Improvement] Increase the `max_wait_timeout` on admin operations to five minutes to make sure there is no timeout on heavily loaded clusters.
+ - [Enhancement] Increase the `max_wait_timeout` on admin operations to five minutes to make sure there is no timeout on heavily loaded clusters.
  - [Maintenance] Require `karafka-core` >= `2.0.11` and switch to shared RSpec locator.
  - [Maintenance] Require `karafka-rdkafka` >= `0.12.1`
 
  ## 2.0.30 (2023-01-31)
- - [Improvement] Alias `--consumer-groups` with `--include-consumer-groups`
- - [Improvement] Alias `--subscription-groups` with `--include-subscription-groups`
- - [Improvement] Alias `--topics` with `--include-topics`
- - [Improvement] Introduce `--exclude-consumer-groups` for the ability to exclude certain consumer groups from running
- - [Improvement] Introduce `--exclude-subscription-groups` for the ability to exclude certain subscription groups from running
- - [Improvement] Introduce `--exclude-topics` for the ability to exclude certain topics from running
+ - [Enhancement] Alias `--consumer-groups` with `--include-consumer-groups`
+ - [Enhancement] Alias `--subscription-groups` with `--include-subscription-groups`
+ - [Enhancement] Alias `--topics` with `--include-topics`
+ - [Enhancement] Introduce `--exclude-consumer-groups` for the ability to exclude certain consumer groups from running
+ - [Enhancement] Introduce `--exclude-subscription-groups` for the ability to exclude certain subscription groups from running
+ - [Enhancement] Introduce `--exclude-topics` for the ability to exclude certain topics from running
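Together with the aliases, the exclusion flags allow narrowing a server process to a subset of the routing, e.g. (group and topic names are hypothetical):

```shell
# Run a single consumer group while skipping one noisy topic
karafka server \
  --include-consumer-groups batched_group \
  --exclude-topics audit_logs
```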
 
  ## 2.0.29 (2023-01-30)
- - [Improvement] Make sure that the `Karafka#producer` instance has the `LoggerListener` enabled in the install template, so Karafka by default prints both consumer and producer info.
- - [Improvement] Extract the code loading capabilities of Karafka console from the executable, so web can use it to provide CLI commands.
+ - [Enhancement] Make sure that the `Karafka#producer` instance has the `LoggerListener` enabled in the install template, so Karafka by default prints both consumer and producer info.
+ - [Enhancement] Extract the code loading capabilities of Karafka console from the executable, so web can use it to provide CLI commands.
  - [Fix] Fix for: running karafka console results in NameError with Rails (#1280)
  - [Fix] Make sure that the `caller` for async errors is being published.
  - [Change] Make sure that WaterDrop `2.4.10` or higher is used with this release to support Web-UI.
 
  ## 2.0.28 (2023-01-25)
  - **[Feature]** Provide the ability to use Dead Letter Queue with Virtual Partitions.
- - [Improvement] Collapse Virtual Partitions upon retryable error to a single partition. This allows the dead letter queue to operate and mitigate issues arising from work virtualization. This removes uncertainties upon errors that can be retried and processed. Affects given topic partition virtualization only for multi-topic and multi-partition parallelization. It also minimizes potential "flickering" where a given data set has many corrupted messages. The collapse will last until all the messages from the collective corrupted batch are processed. After that, virtualization will resume.
- - [Improvement] Introduce `#collapsed?` consumer method available for consumers using Virtual Partitions.
- - [Improvement] Allow for customization of DLQ dispatched message details in Pro (#1266) via the `#enhance_dlq_message` consumer method.
- - [Improvement] Include `original_consumer_group` in the DLQ dispatched messages in Pro.
- - [Improvement] Use Karafka `client_id` as kafka `client.id` value by default
+ - [Enhancement] Collapse Virtual Partitions upon retryable error to a single partition. This allows the dead letter queue to operate and mitigate issues arising from work virtualization. This removes uncertainties upon errors that can be retried and processed. Affects given topic partition virtualization only for multi-topic and multi-partition parallelization. It also minimizes potential "flickering" where a given data set has many corrupted messages. The collapse will last until all the messages from the collective corrupted batch are processed. After that, virtualization will resume.
+ - [Enhancement] Introduce `#collapsed?` consumer method available for consumers using Virtual Partitions.
+ - [Enhancement] Allow for customization of DLQ dispatched message details in Pro (#1266) via the `#enhance_dlq_message` consumer method.
+ - [Enhancement] Include `original_consumer_group` in the DLQ dispatched messages in Pro.
+ - [Enhancement] Use Karafka `client_id` as kafka `client.id` value by default
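Branching on the collapse state inside a Virtual Partitions consumer could be sketched as follows (the class and processing helpers are hypothetical):

```ruby
# Hypothetical Pro consumer using Virtual Partitions
class OrdersConsumer < Karafka::BaseConsumer
  def consume
    if collapsed?
      # After a retryable error VPs run as a single partition,
      # so process conservatively until the corrupted batch clears
      messages.each { |message| process_carefully(message) }
    else
      messages.each { |message| process(message) }
    end
  end
end
```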
 
  ### Upgrade Notes
 
@@ -378,14 +414,14 @@ class KarafkaApp < Karafka::App
 
  ## 2.0.26 (2023-01-10)
  - **[Feature]** Allow for disabling given topics by setting `active` to false. It will exclude them from consumption but will still allow having their definitions for using admin APIs, etc.
- - [Improvement] Early terminate on `read_topic` when reaching the last offset available at the request time.
- - [Improvement] Introduce a `quiet` state that indicates that Karafka is not only moving to quiet mode but actually that it reached it and no work will happen anymore in any of the consumer groups.
- - [Improvement] Use Karafka defined routes topics when possible for `read_topic` admin API.
- - [Improvement] Introduce `client.pause` and `client.resume` instrumentation hooks for tracking client topic partition pausing and resuming. This is alongside `consumer.consuming.pause` that can be used to track both manual and automatic pausing with more granular consumer related details. The `client.*` should be used for low level tracking.
- - [Improvement] Replace `LoggerListener` pause notification with one based on `client.pause` instead of `consumer.consuming.pause`.
- - [Improvement] Expand `LoggerListener` with `client.resume` notification.
- - [Improvement] Replace random anonymous subscription group ids with stable ones.
- - [Improvement] Add `consumer.consume`, `consumer.revoke` and `consumer.shutting_down` notification events and move the revocation logic calling to strategies.
+ - [Enhancement] Early terminate on `read_topic` when reaching the last offset available at the request time.
+ - [Enhancement] Introduce a `quiet` state that indicates that Karafka is not only moving to quiet mode but actually that it reached it and no work will happen anymore in any of the consumer groups.
+ - [Enhancement] Use Karafka defined routes topics when possible for `read_topic` admin API.
+ - [Enhancement] Introduce `client.pause` and `client.resume` instrumentation hooks for tracking client topic partition pausing and resuming. This is alongside `consumer.consuming.pause` that can be used to track both manual and automatic pausing with more granular consumer related details. The `client.*` should be used for low level tracking.
+ - [Enhancement] Replace `LoggerListener` pause notification with one based on `client.pause` instead of `consumer.consuming.pause`.
+ - [Enhancement] Expand `LoggerListener` with `client.resume` notification.
+ - [Enhancement] Replace random anonymous subscription group ids with stable ones.
+ - [Enhancement] Add `consumer.consume`, `consumer.revoke` and `consumer.shutting_down` notification events and move the revocation logic calling to strategies.
  - [Change] Rename job queue statistics `processing` key to `busy`. No changes needed because naming in the DataDog listener stays the same.
  - [Fix] Fix proctitle listener state changes reporting on new states.
  - [Fix] Make sure all file descriptors are closed in the integration specs.
@@ -398,17 +434,17 @@ class KarafkaApp < Karafka::App
 
  ## 2.0.24 (2022-12-19)
  - **[Feature]** Provide out of the box encryption support for Pro.
- - [Improvement] Add instrumentation upon `#pause`.
- - [Improvement] Add instrumentation upon retries.
- - [Improvement] Assign `#id` to consumers similar to other entities for ease of debugging.
- - [Improvement] Add retries and pausing to the default `LoggerListener`.
- - [Improvement] Introduce a new final `terminated` state that will kick in prior to exit but after all the instrumentation and other things are done.
- - [Improvement] Ensure that state transitions are thread-safe and ensure state transitions can occur in one direction.
- - [Improvement] Optimize status methods proxying to `Karafka::App`.
- - [Improvement] Allow for easier state usage by introducing explicit `#to_s` for reporting.
- - [Improvement] Change auto-generated id from `SecureRandom#uuid` to `SecureRandom#hex(6)`
- - [Improvement] Emit statistics every 5 seconds by default.
- - [Improvement] Introduce general messages parser that can be swapped when needed.
+ - [Enhancement] Add instrumentation upon `#pause`.
+ - [Enhancement] Add instrumentation upon retries.
+ - [Enhancement] Assign `#id` to consumers similar to other entities for ease of debugging.
+ - [Enhancement] Add retries and pausing to the default `LoggerListener`.
+ - [Enhancement] Introduce a new final `terminated` state that will kick in prior to exit but after all the instrumentation and other things are done.
+ - [Enhancement] Ensure that state transitions are thread-safe and ensure state transitions can occur in one direction.
+ - [Enhancement] Optimize status methods proxying to `Karafka::App`.
+ - [Enhancement] Allow for easier state usage by introducing explicit `#to_s` for reporting.
+ - [Enhancement] Change auto-generated id from `SecureRandom#uuid` to `SecureRandom#hex(6)`
+ - [Enhancement] Emit statistics every 5 seconds by default.
+ - [Enhancement] Introduce general messages parser that can be swapped when needed.
  - [Fix] Do not trigger code reloading when `consumer_persistence` is enabled.
  - [Fix] Shutdown producer after all the consumer components are down and the status is stopped. This will ensure that any instrumentation related Kafka messaging can still operate.
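The id change swaps a 36-character UUID for a 12-character hex string; plain Ruby shows the size difference:

```ruby
require 'securerandom'

# Previous default: UUID v4, 36 characters including dashes
puts SecureRandom.uuid.length # => 36

# New default: 6 random bytes, hex-encoded into 12 characters
puts SecureRandom.hex(6).length # => 12
```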
 
@@ -429,17 +465,17 @@ end
 
  ## 2.0.23 (2022-12-07)
  - [Maintenance] Align with `waterdrop` and `karafka-core`
- - [Improvement] Provide `Admin#read_topic` API to get topic data without subscribing.
- - [Improvement] Upon an end user `#pause`, do not commit the offset in automatic offset management mode. This will prevent a scenario where pause is needed but during it a rebalance occurs and a different assigned process starts not from the pause location but from the automatic offset that may be different. This still allows for using the `#mark_as_consumed`.
+ - [Enhancement] Provide `Admin#read_topic` API to get topic data without subscribing.
+ - [Enhancement] Upon an end user `#pause`, do not commit the offset in automatic offset management mode. This will prevent a scenario where pause is needed but during it a rebalance occurs and a different assigned process starts not from the pause location but from the automatic offset that may be different. This still allows for using the `#mark_as_consumed`.
  - [Fix] Fix a scenario where manual `#pause` would be overwritten by a resume initiated by the strategy.
  - [Fix] Fix a scenario where manual `#pause` in LRJ would cause infinite pause.
 
  ## 2.0.22 (2022-12-02)
- - [Improvement] Load Pro components upon Karafka require so they can be altered prior to setup.
- - [Improvement] Do not run LRJ jobs that were added to the jobs queue but were revoked meanwhile.
- - [Improvement] Allow running particular named subscription groups similar to consumer groups.
- - [Improvement] Allow running particular topics similar to consumer groups.
- - [Improvement] Raise configuration error when trying to run Karafka with options leading to no subscriptions.
+ - [Enhancement] Load Pro components upon Karafka require so they can be altered prior to setup.
+ - [Enhancement] Do not run LRJ jobs that were added to the jobs queue but were revoked meanwhile.
+ - [Enhancement] Allow running particular named subscription groups similar to consumer groups.
+ - [Enhancement] Allow running particular topics similar to consumer groups.
+ - [Enhancement] Raise configuration error when trying to run Karafka with options leading to no subscriptions.
  - [Fix] Fix `karafka info` subscription groups count reporting as it was misleading.
  - [Fix] Allow for defining subscription groups with symbols similar to consumer groups and topics to align the API.
  - [Fix] Do not allow for an explicit `nil` as a `subscription_group` block argument.
@@ -449,23 +485,23 @@ end
  - [Fix] Duplicated logs in development environment for Rails when logger set to `$stdout`.
 
  ## 2.0.21 (2022-11-25)
- - [Improvement] Make revocation jobs for LRJ topics non-blocking to prevent blocking polling when someone uses non-revocation aware LRJ jobs and revocation happens.
+ - [Enhancement] Make revocation jobs for LRJ topics non-blocking to prevent blocking polling when someone uses non-revocation aware LRJ jobs and revocation happens.
 
  ## 2.0.20 (2022-11-24)
- - [Improvement] Support `group.instance.id` assignment (static group membership) for a case where a single consumer group has multiple subscription groups (#1173).
+ - [Enhancement] Support `group.instance.id` assignment (static group membership) for a case where a single consumer group has multiple subscription groups (#1173).
 
  ## 2.0.19 (2022-11-20)
  - **[Feature]** Provide ability to skip failing messages without dispatching them to an alternative topic (DLQ).
- - [Improvement] Improve the integration with Ruby on Rails by preventing double-require of components.
- - [Improvement] Improve stability of the shutdown process upon critical errors.
- - [Improvement] Improve stability of the integrations spec suite.
+ - [Enhancement] Improve the integration with Ruby on Rails by preventing double-require of components.
+ - [Enhancement] Improve stability of the shutdown process upon critical errors.
+ - [Enhancement] Improve stability of the integrations spec suite.
  - [Fix] Fix an issue where upon fast startup of multiple subscription groups from the same consumer group, a ghost queue would be created due to problems in `Concurrent::Hash`.
 
  ## 2.0.18 (2022-11-18)
  - **[Feature]** Support quiet mode via `TSTP` signal. When used, Karafka will finish processing current messages, run `shutdown` jobs, and switch to a quiet mode where no new work is being accepted. At the same time, it will keep the consumer group quiet, and thus no rebalance will be triggered. This can be particularly useful during deployments.
- - [Improvement] Trigger `#revoked` for jobs in case revocation would happen during shutdown when jobs are still running. This should ensure we get a notion of revocation for Pro LRJ jobs even when revocation happens upon shutdown (#1150).
- - [Improvement] Stabilize the shutdown procedure for consumer groups with many subscription groups that have non-aligned processing cost per batch.
- - [Improvement] Remove double loading of Karafka via Rails railtie.
+ - [Enhancement] Trigger `#revoked` for jobs in case revocation would happen during shutdown when jobs are still running. This should ensure we get a notion of revocation for Pro LRJ jobs even when revocation happens upon shutdown (#1150).
+ - [Enhancement] Stabilize the shutdown procedure for consumer groups with many subscription groups that have non-aligned processing cost per batch.
+ - [Enhancement] Remove double loading of Karafka via Rails railtie.
  - [Fix] Fix invalid class references in YARD docs.
  - [Fix] Prevent parallel closing of many clients.
  - [Fix] Fix a case where information about revocation for a combination of LRJ + VP would not be dispatched until all VP work is done.
@@ -494,11 +530,11 @@ end
  ## 2.0.16 (2022-11-09)
  - **[Breaking]** Disable the root `manual_offset_management` setting and require it to be configured per topic. This is part of "topic features" configuration extraction for better code organization.
  - **[Feature]** Introduce **Dead Letter Queue** feature and Pro **Enhanced Dead Letter Queue** feature
- - [Improvement] Align attributes available in the instrumentation bus for listener related events.
- - [Improvement] Include consumer group id in consumption related events (#1093)
- - [Improvement] Delegate pro components loading to Zeitwerk
- - [Improvement] Include `Datadog::LoggerListener` for tracking logger data with DataDog (@bruno-b-martins)
- - [Improvement] Include `seek_offset` in the `consumer.consume.error` event payload (#1113)
+ - [Enhancement] Align attributes available in the instrumentation bus for listener related events.
+ - [Enhancement] Include consumer group id in consumption related events (#1093)
+ - [Enhancement] Delegate pro components loading to Zeitwerk
+ - [Enhancement] Include `Datadog::LoggerListener` for tracking logger data with DataDog (@bruno-b-martins)
+ - [Enhancement] Include `seek_offset` in the `consumer.consume.error` event payload (#1113)
  - [Refactor] Remove unused logger listener event handler.
  - [Refactor] Internal refactoring of routing validations flow.
  - [Refactor] Reorganize how routing related features are represented internally to simplify features management.