karafka 1.4.4 → 2.1.10

Files changed (315)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +0 -0
  3. data/.github/FUNDING.yml +1 -3
  4. data/.github/workflows/ci.yml +117 -36
  5. data/.rspec +4 -0
  6. data/.ruby-version +1 -1
  7. data/CHANGELOG.md +611 -578
  8. data/CONTRIBUTING.md +10 -19
  9. data/Gemfile +7 -0
  10. data/Gemfile.lock +59 -100
  11. data/LICENSE +17 -0
  12. data/LICENSE-COMM +89 -0
  13. data/LICENSE-LGPL +165 -0
  14. data/README.md +64 -66
  15. data/bin/benchmarks +85 -0
  16. data/bin/create_token +22 -0
  17. data/bin/integrations +297 -0
  18. data/bin/karafka +4 -12
  19. data/bin/rspecs +6 -0
  20. data/bin/scenario +29 -0
  21. data/bin/stress_many +13 -0
  22. data/bin/stress_one +13 -0
  23. data/bin/verify_license_integrity +37 -0
  24. data/certs/cert_chain.pem +26 -0
  25. data/certs/karafka-pro.pem +11 -0
  26. data/config/locales/errors.yml +84 -0
  27. data/config/locales/pro_errors.yml +39 -0
  28. data/docker-compose.yml +13 -3
  29. data/karafka.gemspec +27 -22
  30. data/lib/active_job/karafka.rb +17 -0
  31. data/lib/active_job/queue_adapters/karafka_adapter.rb +32 -0
  32. data/lib/karafka/active_job/consumer.rb +49 -0
  33. data/lib/karafka/active_job/current_attributes/loading.rb +36 -0
  34. data/lib/karafka/active_job/current_attributes/persistence.rb +28 -0
  35. data/lib/karafka/active_job/current_attributes.rb +42 -0
  36. data/lib/karafka/active_job/dispatcher.rb +69 -0
  37. data/lib/karafka/active_job/job_extensions.rb +34 -0
  38. data/lib/karafka/active_job/job_options_contract.rb +32 -0
  39. data/lib/karafka/admin.rb +286 -0
  40. data/lib/karafka/app.rb +47 -23
  41. data/lib/karafka/base_consumer.rb +247 -29
  42. data/lib/karafka/cli/base.rb +24 -4
  43. data/lib/karafka/cli/console.rb +13 -8
  44. data/lib/karafka/cli/info.rb +45 -10
  45. data/lib/karafka/cli/install.rb +22 -12
  46. data/lib/karafka/cli/server.rb +63 -41
  47. data/lib/karafka/cli/topics.rb +146 -0
  48. data/lib/karafka/cli.rb +4 -11
  49. data/lib/karafka/connection/client.rb +502 -89
  50. data/lib/karafka/connection/consumer_group_coordinator.rb +48 -0
  51. data/lib/karafka/connection/listener.rb +294 -38
  52. data/lib/karafka/connection/listeners_batch.rb +40 -0
  53. data/lib/karafka/connection/messages_buffer.rb +84 -0
  54. data/lib/karafka/connection/pauses_manager.rb +46 -0
  55. data/lib/karafka/connection/proxy.rb +92 -0
  56. data/lib/karafka/connection/raw_messages_buffer.rb +101 -0
  57. data/lib/karafka/connection/rebalance_manager.rb +90 -0
  58. data/lib/karafka/contracts/base.rb +17 -0
  59. data/lib/karafka/contracts/config.rb +88 -11
  60. data/lib/karafka/contracts/consumer_group.rb +32 -187
  61. data/lib/karafka/contracts/server_cli_options.rb +80 -19
  62. data/lib/karafka/contracts/topic.rb +65 -0
  63. data/lib/karafka/contracts.rb +1 -1
  64. data/lib/karafka/embedded.rb +36 -0
  65. data/lib/karafka/env.rb +46 -0
  66. data/lib/karafka/errors.rb +26 -21
  67. data/lib/karafka/helpers/async.rb +33 -0
  68. data/lib/karafka/helpers/colorize.rb +26 -0
  69. data/lib/karafka/helpers/multi_delegator.rb +2 -2
  70. data/lib/karafka/instrumentation/callbacks/error.rb +39 -0
  71. data/lib/karafka/instrumentation/callbacks/statistics.rb +51 -0
  72. data/lib/karafka/instrumentation/logger.rb +5 -9
  73. data/lib/karafka/instrumentation/logger_listener.rb +299 -0
  74. data/lib/karafka/instrumentation/monitor.rb +13 -61
  75. data/lib/karafka/instrumentation/notifications.rb +75 -0
  76. data/lib/karafka/instrumentation/proctitle_listener.rb +7 -16
  77. data/lib/karafka/instrumentation/vendors/datadog/dashboard.json +1 -0
  78. data/lib/karafka/instrumentation/vendors/datadog/logger_listener.rb +153 -0
  79. data/lib/karafka/instrumentation/vendors/datadog/metrics_listener.rb +264 -0
  80. data/lib/karafka/instrumentation/vendors/kubernetes/liveness_listener.rb +176 -0
  81. data/lib/karafka/licenser.rb +78 -0
  82. data/lib/karafka/messages/batch_metadata.rb +52 -0
  83. data/lib/karafka/messages/builders/batch_metadata.rb +40 -0
  84. data/lib/karafka/messages/builders/message.rb +36 -0
  85. data/lib/karafka/messages/builders/messages.rb +36 -0
  86. data/lib/karafka/{params/params.rb → messages/message.rb} +20 -13
  87. data/lib/karafka/messages/messages.rb +71 -0
  88. data/lib/karafka/{params → messages}/metadata.rb +4 -6
  89. data/lib/karafka/messages/parser.rb +14 -0
  90. data/lib/karafka/messages/seek.rb +12 -0
  91. data/lib/karafka/patches/rdkafka/bindings.rb +139 -0
  92. data/lib/karafka/pro/active_job/consumer.rb +47 -0
  93. data/lib/karafka/pro/active_job/dispatcher.rb +86 -0
  94. data/lib/karafka/pro/active_job/job_options_contract.rb +45 -0
  95. data/lib/karafka/pro/encryption/cipher.rb +58 -0
  96. data/lib/karafka/pro/encryption/contracts/config.rb +79 -0
  97. data/lib/karafka/pro/encryption/errors.rb +24 -0
  98. data/lib/karafka/pro/encryption/messages/middleware.rb +46 -0
  99. data/lib/karafka/pro/encryption/messages/parser.rb +56 -0
  100. data/lib/karafka/pro/encryption/setup/config.rb +48 -0
  101. data/lib/karafka/pro/encryption.rb +47 -0
  102. data/lib/karafka/pro/iterator/expander.rb +95 -0
  103. data/lib/karafka/pro/iterator/tpl_builder.rb +155 -0
  104. data/lib/karafka/pro/iterator.rb +170 -0
  105. data/lib/karafka/pro/loader.rb +102 -0
  106. data/lib/karafka/pro/performance_tracker.rb +84 -0
  107. data/lib/karafka/pro/processing/collapser.rb +62 -0
  108. data/lib/karafka/pro/processing/coordinator.rb +148 -0
  109. data/lib/karafka/pro/processing/filters/base.rb +61 -0
  110. data/lib/karafka/pro/processing/filters/delayer.rb +70 -0
  111. data/lib/karafka/pro/processing/filters/expirer.rb +51 -0
  112. data/lib/karafka/pro/processing/filters/throttler.rb +84 -0
  113. data/lib/karafka/pro/processing/filters/virtual_limiter.rb +52 -0
  114. data/lib/karafka/pro/processing/filters_applier.rb +105 -0
  115. data/lib/karafka/pro/processing/jobs/consume_non_blocking.rb +39 -0
  116. data/lib/karafka/pro/processing/jobs/revoked_non_blocking.rb +37 -0
  117. data/lib/karafka/pro/processing/jobs_builder.rb +50 -0
  118. data/lib/karafka/pro/processing/partitioner.rb +69 -0
  119. data/lib/karafka/pro/processing/scheduler.rb +75 -0
  120. data/lib/karafka/pro/processing/strategies/aj/dlq_ftr_lrj_mom.rb +70 -0
  121. data/lib/karafka/pro/processing/strategies/aj/dlq_ftr_lrj_mom_vp.rb +76 -0
  122. data/lib/karafka/pro/processing/strategies/aj/dlq_ftr_mom.rb +72 -0
  123. data/lib/karafka/pro/processing/strategies/aj/dlq_ftr_mom_vp.rb +76 -0
  124. data/lib/karafka/pro/processing/strategies/aj/dlq_lrj_mom.rb +66 -0
  125. data/lib/karafka/pro/processing/strategies/aj/dlq_lrj_mom_vp.rb +70 -0
  126. data/lib/karafka/pro/processing/strategies/aj/dlq_mom.rb +64 -0
  127. data/lib/karafka/pro/processing/strategies/aj/dlq_mom_vp.rb +69 -0
  128. data/lib/karafka/pro/processing/strategies/aj/ftr_lrj_mom.rb +38 -0
  129. data/lib/karafka/pro/processing/strategies/aj/ftr_lrj_mom_vp.rb +66 -0
  130. data/lib/karafka/pro/processing/strategies/aj/ftr_mom.rb +38 -0
  131. data/lib/karafka/pro/processing/strategies/aj/ftr_mom_vp.rb +58 -0
  132. data/lib/karafka/pro/processing/strategies/aj/lrj_mom.rb +37 -0
  133. data/lib/karafka/pro/processing/strategies/aj/lrj_mom_vp.rb +82 -0
  134. data/lib/karafka/pro/processing/strategies/aj/mom.rb +36 -0
  135. data/lib/karafka/pro/processing/strategies/aj/mom_vp.rb +52 -0
  136. data/lib/karafka/pro/processing/strategies/base.rb +26 -0
  137. data/lib/karafka/pro/processing/strategies/default.rb +105 -0
  138. data/lib/karafka/pro/processing/strategies/dlq/default.rb +131 -0
  139. data/lib/karafka/pro/processing/strategies/dlq/ftr.rb +61 -0
  140. data/lib/karafka/pro/processing/strategies/dlq/ftr_lrj.rb +75 -0
  141. data/lib/karafka/pro/processing/strategies/dlq/ftr_lrj_mom.rb +71 -0
  142. data/lib/karafka/pro/processing/strategies/dlq/ftr_lrj_mom_vp.rb +43 -0
  143. data/lib/karafka/pro/processing/strategies/dlq/ftr_lrj_vp.rb +41 -0
  144. data/lib/karafka/pro/processing/strategies/dlq/ftr_mom.rb +69 -0
  145. data/lib/karafka/pro/processing/strategies/dlq/ftr_mom_vp.rb +41 -0
  146. data/lib/karafka/pro/processing/strategies/dlq/ftr_vp.rb +40 -0
  147. data/lib/karafka/pro/processing/strategies/dlq/lrj.rb +64 -0
  148. data/lib/karafka/pro/processing/strategies/dlq/lrj_mom.rb +65 -0
  149. data/lib/karafka/pro/processing/strategies/dlq/lrj_mom_vp.rb +36 -0
  150. data/lib/karafka/pro/processing/strategies/dlq/lrj_vp.rb +39 -0
  151. data/lib/karafka/pro/processing/strategies/dlq/mom.rb +68 -0
  152. data/lib/karafka/pro/processing/strategies/dlq/mom_vp.rb +37 -0
  153. data/lib/karafka/pro/processing/strategies/dlq/vp.rb +40 -0
  154. data/lib/karafka/pro/processing/strategies/ftr/default.rb +111 -0
  155. data/lib/karafka/pro/processing/strategies/ftr/vp.rb +40 -0
  156. data/lib/karafka/pro/processing/strategies/lrj/default.rb +87 -0
  157. data/lib/karafka/pro/processing/strategies/lrj/ftr.rb +69 -0
  158. data/lib/karafka/pro/processing/strategies/lrj/ftr_mom.rb +67 -0
  159. data/lib/karafka/pro/processing/strategies/lrj/ftr_mom_vp.rb +40 -0
  160. data/lib/karafka/pro/processing/strategies/lrj/ftr_vp.rb +39 -0
  161. data/lib/karafka/pro/processing/strategies/lrj/mom.rb +82 -0
  162. data/lib/karafka/pro/processing/strategies/lrj/mom_vp.rb +38 -0
  163. data/lib/karafka/pro/processing/strategies/lrj/vp.rb +36 -0
  164. data/lib/karafka/pro/processing/strategies/mom/default.rb +46 -0
  165. data/lib/karafka/pro/processing/strategies/mom/ftr.rb +53 -0
  166. data/lib/karafka/pro/processing/strategies/mom/ftr_vp.rb +37 -0
  167. data/lib/karafka/pro/processing/strategies/mom/vp.rb +35 -0
  168. data/lib/karafka/pro/processing/strategies/vp/default.rb +104 -0
  169. data/lib/karafka/pro/processing/strategies.rb +22 -0
  170. data/lib/karafka/pro/processing/strategy_selector.rb +84 -0
  171. data/lib/karafka/pro/processing/virtual_offset_manager.rb +147 -0
  172. data/lib/karafka/pro/routing/features/base.rb +24 -0
  173. data/lib/karafka/pro/routing/features/dead_letter_queue/contract.rb +50 -0
  174. data/lib/karafka/pro/routing/features/dead_letter_queue.rb +27 -0
  175. data/lib/karafka/pro/routing/features/delaying/config.rb +27 -0
  176. data/lib/karafka/pro/routing/features/delaying/contract.rb +38 -0
  177. data/lib/karafka/pro/routing/features/delaying/topic.rb +59 -0
  178. data/lib/karafka/pro/routing/features/delaying.rb +29 -0
  179. data/lib/karafka/pro/routing/features/expiring/config.rb +27 -0
  180. data/lib/karafka/pro/routing/features/expiring/contract.rb +38 -0
  181. data/lib/karafka/pro/routing/features/expiring/topic.rb +59 -0
  182. data/lib/karafka/pro/routing/features/expiring.rb +27 -0
  183. data/lib/karafka/pro/routing/features/filtering/config.rb +40 -0
  184. data/lib/karafka/pro/routing/features/filtering/contract.rb +41 -0
  185. data/lib/karafka/pro/routing/features/filtering/topic.rb +51 -0
  186. data/lib/karafka/pro/routing/features/filtering.rb +27 -0
  187. data/lib/karafka/pro/routing/features/long_running_job/config.rb +28 -0
  188. data/lib/karafka/pro/routing/features/long_running_job/contract.rb +37 -0
  189. data/lib/karafka/pro/routing/features/long_running_job/topic.rb +42 -0
  190. data/lib/karafka/pro/routing/features/long_running_job.rb +28 -0
  191. data/lib/karafka/pro/routing/features/pausing/contract.rb +48 -0
  192. data/lib/karafka/pro/routing/features/pausing/topic.rb +44 -0
  193. data/lib/karafka/pro/routing/features/pausing.rb +25 -0
  194. data/lib/karafka/pro/routing/features/throttling/config.rb +32 -0
  195. data/lib/karafka/pro/routing/features/throttling/contract.rb +41 -0
  196. data/lib/karafka/pro/routing/features/throttling/topic.rb +69 -0
  197. data/lib/karafka/pro/routing/features/throttling.rb +30 -0
  198. data/lib/karafka/pro/routing/features/virtual_partitions/config.rb +30 -0
  199. data/lib/karafka/pro/routing/features/virtual_partitions/contract.rb +52 -0
  200. data/lib/karafka/pro/routing/features/virtual_partitions/topic.rb +56 -0
  201. data/lib/karafka/pro/routing/features/virtual_partitions.rb +27 -0
  202. data/lib/karafka/pro.rb +13 -0
  203. data/lib/karafka/process.rb +24 -8
  204. data/lib/karafka/processing/coordinator.rb +181 -0
  205. data/lib/karafka/processing/coordinators_buffer.rb +62 -0
  206. data/lib/karafka/processing/executor.rb +148 -0
  207. data/lib/karafka/processing/executors_buffer.rb +72 -0
  208. data/lib/karafka/processing/jobs/base.rb +55 -0
  209. data/lib/karafka/processing/jobs/consume.rb +45 -0
  210. data/lib/karafka/processing/jobs/idle.rb +24 -0
  211. data/lib/karafka/processing/jobs/revoked.rb +22 -0
  212. data/lib/karafka/processing/jobs/shutdown.rb +23 -0
  213. data/lib/karafka/processing/jobs_builder.rb +28 -0
  214. data/lib/karafka/processing/jobs_queue.rb +150 -0
  215. data/lib/karafka/processing/partitioner.rb +24 -0
  216. data/lib/karafka/processing/result.rb +42 -0
  217. data/lib/karafka/processing/scheduler.rb +22 -0
  218. data/lib/karafka/processing/strategies/aj_dlq_mom.rb +44 -0
  219. data/lib/karafka/processing/strategies/aj_mom.rb +21 -0
  220. data/lib/karafka/processing/strategies/base.rb +52 -0
  221. data/lib/karafka/processing/strategies/default.rb +158 -0
  222. data/lib/karafka/processing/strategies/dlq.rb +88 -0
  223. data/lib/karafka/processing/strategies/dlq_mom.rb +49 -0
  224. data/lib/karafka/processing/strategies/mom.rb +29 -0
  225. data/lib/karafka/processing/strategy_selector.rb +47 -0
  226. data/lib/karafka/processing/worker.rb +93 -0
  227. data/lib/karafka/processing/workers_batch.rb +27 -0
  228. data/lib/karafka/railtie.rb +125 -0
  229. data/lib/karafka/routing/activity_manager.rb +84 -0
  230. data/lib/karafka/routing/builder.rb +34 -23
  231. data/lib/karafka/routing/consumer_group.rb +47 -21
  232. data/lib/karafka/routing/consumer_mapper.rb +1 -12
  233. data/lib/karafka/routing/features/active_job/builder.rb +33 -0
  234. data/lib/karafka/routing/features/active_job/config.rb +15 -0
  235. data/lib/karafka/routing/features/active_job/contract.rb +41 -0
  236. data/lib/karafka/routing/features/active_job/topic.rb +33 -0
  237. data/lib/karafka/routing/features/active_job.rb +13 -0
  238. data/lib/karafka/routing/features/base/expander.rb +53 -0
  239. data/lib/karafka/routing/features/base.rb +34 -0
  240. data/lib/karafka/routing/features/dead_letter_queue/config.rb +19 -0
  241. data/lib/karafka/routing/features/dead_letter_queue/contract.rb +42 -0
  242. data/lib/karafka/routing/features/dead_letter_queue/topic.rb +41 -0
  243. data/lib/karafka/routing/features/dead_letter_queue.rb +16 -0
  244. data/lib/karafka/routing/features/declaratives/config.rb +18 -0
  245. data/lib/karafka/routing/features/declaratives/contract.rb +30 -0
  246. data/lib/karafka/routing/features/declaratives/topic.rb +44 -0
  247. data/lib/karafka/routing/features/declaratives.rb +14 -0
  248. data/lib/karafka/routing/features/manual_offset_management/config.rb +15 -0
  249. data/lib/karafka/routing/features/manual_offset_management/contract.rb +24 -0
  250. data/lib/karafka/routing/features/manual_offset_management/topic.rb +35 -0
  251. data/lib/karafka/routing/features/manual_offset_management.rb +18 -0
  252. data/lib/karafka/routing/proxy.rb +18 -20
  253. data/lib/karafka/routing/router.rb +28 -3
  254. data/lib/karafka/routing/subscription_group.rb +91 -0
  255. data/lib/karafka/routing/subscription_groups_builder.rb +58 -0
  256. data/lib/karafka/routing/topic.rb +77 -24
  257. data/lib/karafka/routing/topics.rb +46 -0
  258. data/lib/karafka/runner.rb +52 -0
  259. data/lib/karafka/serialization/json/deserializer.rb +7 -15
  260. data/lib/karafka/server.rb +108 -37
  261. data/lib/karafka/setup/attributes_map.rb +347 -0
  262. data/lib/karafka/setup/config.rb +183 -179
  263. data/lib/karafka/status.rb +54 -7
  264. data/lib/karafka/templates/example_consumer.rb.erb +16 -0
  265. data/lib/karafka/templates/karafka.rb.erb +34 -56
  266. data/lib/karafka/time_trackers/base.rb +14 -0
  267. data/lib/karafka/time_trackers/pause.rb +122 -0
  268. data/lib/karafka/time_trackers/poll.rb +69 -0
  269. data/lib/karafka/version.rb +1 -1
  270. data/lib/karafka.rb +90 -16
  271. data/renovate.json +6 -0
  272. data.tar.gz.sig +0 -0
  273. metadata +290 -172
  274. metadata.gz.sig +0 -0
  275. data/MIT-LICENCE +0 -18
  276. data/certs/mensfeld.pem +0 -25
  277. data/config/errors.yml +0 -41
  278. data/lib/karafka/assignment_strategies/round_robin.rb +0 -13
  279. data/lib/karafka/attributes_map.rb +0 -63
  280. data/lib/karafka/backends/inline.rb +0 -16
  281. data/lib/karafka/base_responder.rb +0 -226
  282. data/lib/karafka/cli/flow.rb +0 -48
  283. data/lib/karafka/cli/missingno.rb +0 -19
  284. data/lib/karafka/code_reloader.rb +0 -67
  285. data/lib/karafka/connection/api_adapter.rb +0 -159
  286. data/lib/karafka/connection/batch_delegator.rb +0 -55
  287. data/lib/karafka/connection/builder.rb +0 -23
  288. data/lib/karafka/connection/message_delegator.rb +0 -36
  289. data/lib/karafka/consumers/batch_metadata.rb +0 -10
  290. data/lib/karafka/consumers/callbacks.rb +0 -71
  291. data/lib/karafka/consumers/includer.rb +0 -64
  292. data/lib/karafka/consumers/responders.rb +0 -24
  293. data/lib/karafka/consumers/single_params.rb +0 -15
  294. data/lib/karafka/contracts/consumer_group_topic.rb +0 -19
  295. data/lib/karafka/contracts/responder_usage.rb +0 -54
  296. data/lib/karafka/fetcher.rb +0 -42
  297. data/lib/karafka/helpers/class_matcher.rb +0 -88
  298. data/lib/karafka/helpers/config_retriever.rb +0 -46
  299. data/lib/karafka/helpers/inflector.rb +0 -26
  300. data/lib/karafka/instrumentation/stdout_listener.rb +0 -140
  301. data/lib/karafka/params/batch_metadata.rb +0 -26
  302. data/lib/karafka/params/builders/batch_metadata.rb +0 -30
  303. data/lib/karafka/params/builders/params.rb +0 -38
  304. data/lib/karafka/params/builders/params_batch.rb +0 -25
  305. data/lib/karafka/params/params_batch.rb +0 -60
  306. data/lib/karafka/patches/ruby_kafka.rb +0 -47
  307. data/lib/karafka/persistence/client.rb +0 -29
  308. data/lib/karafka/persistence/consumers.rb +0 -45
  309. data/lib/karafka/persistence/topics.rb +0 -48
  310. data/lib/karafka/responders/builder.rb +0 -36
  311. data/lib/karafka/responders/topic.rb +0 -55
  312. data/lib/karafka/routing/topic_mapper.rb +0 -53
  313. data/lib/karafka/serialization/json/serializer.rb +0 -31
  314. data/lib/karafka/setup/configurators/water_drop.rb +0 -36
  315. data/lib/karafka/templates/application_responder.rb.erb +0 -11
data/CHANGELOG.md CHANGED
@@ -1,580 +1,613 @@
1
1
  # Karafka framework changelog
2
2
 
3
- ## 1.4.4 (2021-04-19)
4
- - Remove Ruby 2.5 support and update minimum Ruby requirement to 2.6
5
- - Remove rake dependency
6
-
7
- ## 1.4.3 (2021-03-24)
8
- - Fixes for Ruby 3.0 compatibility
9
-
10
- ## 1.4.2 (2021-02-16)
11
- - Rescue Errno::EROFS in ensure_dir_exists (unasuke)
12
-
13
- ## 1.4.1 (2020-12-04)
14
- - Return non-zero exit code when printing usage
15
- - Add support for :assignment_strategy for consumers
16
-
17
- ## 1.4.0 (2020-09-05)
18
- - Rename `Karafka::Params::Metadata` to `Karafka::Params::BatchMetadata`
19
- ` Rename consumer `#metadata` to `#batch_metadata`
20
- - Separate metadata (including Karafka native metadata) from the root of params (backwards compatibility preserved thanks to rabotyaga)
21
- - Remove metadata hash dependency
22
- - Remove params dependency on a hash in favour of PORO
23
- - Remove batch metadata dependency on a hash
24
- - Remove MultiJson in favour of JSON in the default deserializer
25
- - allow accessing all the metadata without accessing the payload
26
- - freeze params and underlying elements except for the mutable payload
27
- - provide access to raw payload after serialization
28
- - fixes a bug where a non-deserializable (error) params would be marked as deserialized after first unsuccessful deserialization attempt
29
- - fixes bug where karafka would mutate internal ruby-kafka state
30
- - fixes bug where topic name in metadata would not be mapped using topic mappers
31
- - simplifies the params and params batch API, before `#payload` usage, it won't be deserialized
32
- - removes the `#[]` API from params to prevent from accessing raw data in a different way than #raw_payload
33
- - makes the params batch operations consistent as params payload is deserialized only when accessed explicitly
34
-
35
- ## 1.3.7 (2020-08-11)
36
- - #599 - Allow metadata access without deserialization attempt (rabotyaga)
37
- - Sync with ruby-kafka `1.2.0` api
38
-
39
- ## 1.3.6 (2020-04-24)
40
- - #583 - Use Karafka.logger for CLI messages (prikha)
41
- - #582 - Cannot only define seed brokers in consumer groups
42
-
43
- ## 1.3.5 (2020-04-02)
44
- - #578 - ThreadError: can't be called from trap context patch
45
-
46
- ## 1.3.4 (2020-02-17)
47
- - `dry-configurable` upgrade (solnic)
48
- - Remove temporary `thor` patches that are no longer needed
49
-
50
- ## 1.3.3 (2019-12-23)
51
- - Require `delegate` to fix missing dependency in `ruby-kafka`
52
-
53
- ## 1.3.2 (2019-12-23)
54
- - #561 - Allow `thor` 1.0.x usage in Karafka
55
- - #567 - Ruby 2.7.0 support + unfreeze of a frozen string fix
56
-
57
- ## 1.3.1 (2019-11-11)
58
- - #545 - Makes sure the log directory exists when is possible (robertomiranda)
59
- - Ruby 2.6.5 support
60
- - #551 - add support for DSA keys
61
- - #549 - Missing directories after `karafka install` (nijikon)
62
-
63
- ## 1.3.0 (2019-09-09)
64
- - Drop support for Ruby 2.4
65
- - YARD docs tags cleanup
66
-
67
- ## 1.3.0.rc1 (2019-07-31)
68
- - Drop support for Kafka 0.10 in favor of native support for Kafka 0.11.
69
- - Update ruby-kafka to the 0.7 version
70
- - Support messages headers receiving
71
- - Message bus unification
72
- - Parser available in metadata
73
- - Cleanup towards moving to a non-global state app management
74
- - Drop Ruby 2.3 support
75
- - Support for Ruby 2.6.3
76
- - `Karafka::Loader` has been removed in favor of Zeitwerk
77
- - Schemas are now contracts
78
- - #393 - Reorganize responders - removed `multiple_usage` constrain
79
- - #388 - ssl_client_cert_chain sync
80
- - #300 - Store value in a value key and replace its content with parsed version - without root merge
81
- - #331 - Disallow building groups without topics
82
- - #340 - Instrumentation unification. Better and more consistent naming
83
- - #340 - Procline instrumentation for a nicer process name
84
- - #342 - Change default for `fetcher_max_queue_size` from `100` to `10` to lower max memory usage
85
- - #345 - Cleanup exceptions names
86
- - #341 - Split connection delegator into batch delegator and single_delegator
87
- - #351 - Rename `#retrieve!` to `#parse!` on params and `#parsed` to `parse!` on params batch.
88
- - #351 - Adds '#first' for params_batch that returns parsed first element from the params_batch object.
89
- - #360 - Single params consuming mode automatically parses data specs
90
- - #359 - Divide mark_as_consumed into mark_as_consumed and mark_as_consumed!
91
- - #356 - Provide a `#values` for params_batch to extract only values of objects from the params_batch
92
- - #363 - Too shallow ruby-kafka version lock
93
- - #354 - Expose consumer heartbeat
94
- - #377 - Remove the persistent setup in favor of persistence
95
- - #375 - Sidekiq Backend parser mismatch
96
- - #369 - Single consumer can support more than one topic
97
- - #288 - Drop dependency on `activesupport` gem
98
- - #371 - SASL over SSL
99
- - #392 - Move params redundant data to metadata
100
- - #335 - Metadata access from within the consumer
101
- - #402 - Delayed reconnection upon critical failures
102
- - #405 - `reconnect_timeout` value is now being validated
103
- - #437 - Specs ensuring that the `#437` won't occur in the `1.3` release
104
- - #426 - ssl client cert key password
105
- - #444 - add certificate and private key validation
106
- - #460 - Decouple responder "parser" (generator?) from topic.parser (benissimo)
107
- - #463 - Split parsers into serializers / deserializers
108
- - #473 - Support SASL OAuthBearer Authentication
109
- - #475 - Disallow subscribing to the same topic with multiple consumers
110
- - #485 - Setting shutdown_timeout to nil kills the app without waiting for anything
111
- - #487 - Make listeners as instances
112
- - #29 - Consumer class names must have the word "Consumer" in it in order to work (Sidekiq backend)
113
- - #491 - irb is missing for console to work
114
- - #502 - Karafka process hangs when sending multiple sigkills
115
- - #506 - ssl_verify_hostname sync
116
- - #483 - Upgrade dry-validation before releasing 1.3
117
- - #492 - Use Zeitwerk for code reload in development
118
- - #508 - Reset the consumers instances upon reconnecting to a cluster
119
- - [#530](https://github.com/karafka/karafka/pull/530) - expose ruby and ruby-kafka version
120
- - [534](https://github.com/karafka/karafka/pull/534) - Allow to use headers in the deserializer object
121
- - [#319](https://github.com/karafka/karafka/pull/328) - Support for exponential backoff in pause
122
-
123
- ## 1.2.11
124
- - [#470](https://github.com/karafka/karafka/issues/470) Karafka not working with dry-configurable 0.8
125
-
126
- ## 1.2.10
127
- - [#453](https://github.com/karafka/karafka/pull/453) require `Forwardable` module
128
-
129
- ## 1.2.9
130
- - Critical exceptions now will cause consumer to stop instead of retrying without a break
131
- - #412 - Fix dry-inflector dependency lock in gemspec
132
- - #414 - Backport to 1.2 the delayed retry upon failure
133
- - #437 - Raw message is no longer added to params after ParserError raised
134
-
135
- ## 1.2.8
136
- - #408 - Responder Topic Lookup Bug on Heroku
137
-
138
- ## 1.2.7
139
- - Unlock Ruby-kafka version with a warning
140
-
141
- ## 1.2.6
142
- - Lock WaterDrop to 1.2.3
143
- - Lock Ruby-Kafka to 0.6.x (support for 0.7 will be added in Karafka 1.3)
144
- - #382 - Full logging with AR, etc for development mode when there is Rails integration
145
-
146
- ## 1.2.5
147
- - #354 - Expose consumer heartbeat
148
- - #373 - Async producer not working properly with responders
149
-
150
- ## 1.2.4
151
- - #332 - Fetcher for max queue size
152
-
153
- ## 1.2.3
154
- - #313 - support PLAINTEXT and SSL for scheme
155
- - #288 - drop activesupport callbacks in favor of notifications
156
- - #320 - Pausing indefinitely with nil pause timeout doesn't work
157
- - #318 - Partition pausing doesn't work with custom topic mappers
158
- - Rename ConfigAdapter to ApiAdapter to better reflect what it does
159
- - #317 - Manual offset committing doesn't work with custom topic mappers
160
-
161
- ## 1.2.2
162
- - #312 - Broken for ActiveSupport 5.2.0
163
-
164
- ## 1.2.1
165
- - #304 - Unification of error instrumentation event details
166
- - #306 - Using file logger from within a trap context upon shutdown is impossible
167
-
168
- ## 1.2.0
169
- - Spec improvements
170
- - #260 - Specs missing randomization
171
- - #251 - Shutdown upon non responding (unreachable) cluster is not possible
172
- - #258 - Investigate lowering requirements on activesupport
173
- - #246 - Alias consumer#mark_as_consumed on controller
174
- - #259 - Allow forcing key/partition key on responders
175
- - #267 - Styling inconsistency
176
- - #242 - Support setting the max bytes to fetch per request
177
- - #247 - Support SCRAM once released
178
- - #271 - Provide an after_init option to pass a configuration block
179
- - #262 - Error in the monitor code for NewRelic
180
- - #241 - Performance metrics
181
- - #274 - Rename controllers to consumers
182
- - #184 - Seek to
183
- - #284 - Dynamic Params parent class
184
- - #275 - ssl_ca_certs_from_system
185
- - #296 - Instrument forceful exit with an error
186
- - Replaced some of the activesupport parts with dry-inflector
187
- - Lower ActiveSupport dependency
188
- - Remove configurators in favor of the after_init block configurator
189
- - Ruby 2.5.0 support
190
- - Renamed Karafka::Connection::Processor to Karafka::Connection::Delegator to match incoming naming conventions
191
- - Renamed Karafka::Connection::Consumer to Karafka::Connection::Client due to #274
192
- - Removed HashWithIndifferentAccess in favor of a regular hash
193
- - JSON parsing defaults now to string keys
194
- - Lower memory usage due to less params data internal details
195
- - Support multiple ```after_init``` blocks in favor of a single one
196
- - Renamed ```received_at``` to ```receive_time``` to follow ruby-kafka and WaterDrop conventions
197
- - Adjust internal setup to easier map Ruby-Kafka config changes
198
- - System callbacks reorganization
199
- - Added ```before_fetch_loop``` configuration block for early client usage (```#seek```, etc)
200
- - Renamed ```after_fetched``` to ```after_fetch``` to normalize the naming convention
201
- - Instrumentation on a connection delegator level
202
- - Added ```params_batch#last``` method to retrieve last element after unparsing
203
- - All params keys are now strings
204
-
205
- ## 1.1.2
206
- - #256 - Default kafka.seed_brokers configuration is created in invalid format
207
-
208
- ## 1.1.1
209
- - #253 - Allow providing a global per app parser in config settings
210
-
211
- ## 1.1.0
212
- - Gem bump
213
- - Switch from Celluloid to native Thread management
214
- - Improved shutdown process
215
- - Introduced optional fetch callbacks and moved current the ```after_received``` there as well
216
- - Karafka will raise Errors::InvalidPauseTimeout exception when trying to pause but timeout set to 0
217
- - Allow float for timeouts and other time based second settings
218
- - Renamed MessagesProcessor to Processor and MessagesConsumer to Consumer - we don't process and don't consumer anything else so it was pointless to keep this "namespace"
219
- - #232 - Remove unused ActiveSupport require
220
- - #214 - Expose consumer on a controller layer
221
- - #193 - Process shutdown callbacks
222
- - Fixed accessibility of ```#params_batch``` from the outside of the controller
223
- - connection_pool config options are no longer required
224
- - celluloid config options are no longer required
225
- - ```#perform``` is now renamed to ```#consume``` with warning level on using the old one (deprecated)
226
- - #235 - Rename perform to consume
227
- - Upgrade to ruby-kafka 0.5
228
- - Due to redesign of Waterdrop concurrency setting is no longer needed
229
- - #236 - Manual offset management
230
- - WaterDrop 1.0.0 support with async
231
- - Renamed ```batch_consuming``` option to ```batch_fetching``` as it is not a consumption (with processing) but a process of fetching messages from Kafka. The messages is considered consumed, when it is processed.
232
- - Renamed ```batch_processing``` to ```batch_consuming``` to resemble Kafka concept of consuming messages.
233
- - Renamed ```after_received``` to ```after_fetched``` to normalize the naming conventions.
234
- - Responders support the per topic ```async``` option.
235
-
- ## 1.0.1
- - #210 - LoadError: cannot load such file -- [...]/karafka.rb
- - Ruby 2.4.2 as a default (+travis integration)
- - JRuby upgrade
- - Expanded persistence layer (moved to a namespace for easier future development)
- - #213 - Misleading error when non-existing dependency is required
- - #212 - Make params react to #topic, #partition, #offset
- - #215 - Consumer group route dynamic options are ignored
- - #217 - Check RUBY_ENGINE constant if RUBY_VERSION is missing
- - #218 - Add configuration setting to control Celluloid's shutdown timeout
- - Renamed Karafka::Routing::Mapper to Karafka::Routing::TopicMapper to match naming conventions
- - #219 - Allow explicit consumer group names, without prefixes
- - Fix too early removed pid upon shutdown of daemonized process
- - max_wait_time updated to match https://github.com/zendesk/ruby-kafka/issues/433
- - #230 - Better URI validation for seed brokers (incompatibility, as the kafka:// or kafka+ssl:// scheme is required)
- - Small internal docs fixes
- - Dry::Validation::MissingMessageError: message for broker_schema? was not found
- - #238 - warning: already initialized constant Karafka::Schemas::URI_SCHEMES
-
- ## 1.0.0
-
- ### Closed issues:
-
- - #103 - Env for logger is loaded too early (on gem load, not on app init)
- - #142 - Possibility to better control Kafka consumers (consumer groups management)
- - #150 - Add support for start_from_beginning on a per topic basis
- - #154 - Support for min_bytes and max_wait_time on messages consuming
- - #160 - Reorganize settings to better resemble ruby-kafka requirements
- - #164 - If we decide to have configuration per topic, topic uniqueness should be removed
- - #165 - Router validator
- - #166 - Params and route reorganization (new API)
- - #167 - Remove Sidekiq UI from Karafka
- - #168 - Introduce unique IDs of routes
- - #171 - Add kafka message metadata to params
- - #176 - Transform Karafka::Connection::Consumer into a module
- - #177 - Monitor not reacting when kafka killed with -9
- - #175 - Allow single consumer to subscribe to multiple topics
- - #178 - Remove parsing failover when cannot unparse data
- - #174 - Extended config validation
- - ~~#180 - Switch from JSON parser to yajl-ruby~~
- - #181 - When a responder is defined and not used due to ```respond_with``` not being triggered in the perform, it won't raise an exception.
- - #188 - Rename name in config to client id
- - #186 - Support ruby-kafka ```ssl_ca_cert_file_path``` config
- - #189 - karafka console does not preserve history on exit
- - #191 - Karafka 0.6.0rc1 does not work with jruby / now it does :-)
- - Switch to multi json so everyone can use their favourite JSON parser
- - Added jruby support in general and in Travis
- - #196 - Topic mapper does not map topics when subscribing thanks to @webandtech
- - #96 - Karafka server - possibility to run it only for certain topics
- - ~~karafka worker cli option is removed (please use sidekiq directly)~~ - restored, bad idea
- - (optional) pausing upon processing failures ```pause_timeout```
- - Karafka console main process no longer intercepts irb errors
- - Wiki updates
- - #204 - Long running controllers
- - Better internal API to handle multiple usage cases using ```Karafka::Controllers::Includer```
- - #207 - Rename before_enqueued to after_received
- - #147 - De-attach Karafka from Sidekiq by extracting Sidekiq backend
-
- ### New features and improvements
-
- - batch processing thanks to ```#batch_consuming``` flag and ```#params_batch``` on controllers
- - ```#topic``` method on a controller instance to make a clear distinction between params and route details
- - Changed routing model (still compatible with 0.5) to allow better resources management
- - Lower memory requirements due to object creation limitation (2-3 times fewer objects on each new message)
- - Introduced the ```#batch_consuming``` config flag (config for #126) that can be set per each consumer_group
- - Added support for partition, offset and partition key in the params hash
- - ```name``` option in config renamed to ```client_id```
- - Long running controllers with ```persistent``` flag on a topic config level, to make controller instances persistent between messages batches (single controller instance per topic per partition, not per messages batch) - turned on by default
-
- ### Incompatibilities
-
- - Default boot file is renamed from app.rb to karafka.rb
- - Removed worker glass as dependency (now an independent gem)
- - ```kafka.hosts``` option renamed to ```kafka.seed_brokers``` - you don't need to provide all the hosts to work with Kafka
- - ```start_from_beginning``` moved into kafka scope (```kafka.start_from_beginning```)
- - Router no longer checks for route uniqueness - now you can define the same routes for multiple kafkas and do a lot of crazy stuff, so it's your responsibility to check uniqueness
- - Change in the way we identify topics between Karafka and Sidekiq workers. If you upgrade, please make sure all the jobs scheduled in Sidekiq are finished before the upgrade.
- - ```batch_mode``` renamed to ```batch_fetching```
- - Renamed content to value to better resemble ruby-kafka internal messages naming convention
- - When having a responder with ```required``` topics and not using ```#respond_with``` at all, it will raise an exception
- - Renamed ```inline_mode``` to ```inline_processing``` to resemble other settings conventions
- - Renamed ```inline_processing``` to ```backend``` to reach 1.0 future compatibility
- - Single controller **needs** to be used for a single topic consumption
- - Renamed ```before_enqueue``` to ```after_received``` to better resemble internal logic, since for the inline backend there is no enqueue.
- - Due to the level on which topic and controller are related (class level), the dynamic worker selection is no longer available.
- - Renamed params #retrieve to params #retrieve! to better reflect what it does
-
- ### Other changes
- - PolishGeeksDevTools removed (in favour of Coditsu)
- - Waaaaaay better code quality thanks to switching from dev tools to Coditsu
- - Gem bump
- - Cleaner internal API
- - SRP
- - Better settings proxying and management between ruby-kafka and karafka
- - All internal validations are now powered by dry-validation
- - Better naming conventions to reflect Kafka reality
- - Removed Karafka::Connection::Message in favour of direct message details extraction from Kafka::FetchedMessage
-
- ## 0.5.0.3
- - #132 - When Kafka is gone, should reconnect after a time period
- - #136 - new ruby-kafka version + other gem bumps
- - ruby-kafka update
- - #135 - NonMatchingRouteError - better error description in the code
- - #140 - Move Capistrano Karafka to a different specific gem
- - #110 - Add call method on a responder class to alias instance build and call
- - #76 - Configs validator
- - #138 - Possibility to have no worker class defined if inline_mode is being used
- - #145 - Topic Mapper
- - Ruby update to 2.4.1
- - Gem bump x2
- - #158 - Update docs section on heroku usage
- - #150 - Add support for start_from_beginning on a per topic basis
- - #148 - Lower Karafka Sidekiq dependency
- - Allow karafka root to be specified from ENV
- - Handle SIGTERM as a shutdown command for kafka server to support Heroku deployment
-
- ## 0.5.0.2
- - Gems update x3
- - Default Ruby set to 2.3.3
- - ~~Default Ruby set to 2.4.0~~
- - Readme updates to match bug fixes and resolved issues
- - #95 - Allow options into responder
- - #98 - Use parser when responding on a topic
- - #114 - Option to configure waterdrop connection pool timeout and concurrency
- - #118 - Added dot in topic validation format
- - #119 - Add support for authentication using SSL
- - #121 - JSON as a default for standalone responders usage
- - #122 - Allow capistrano role customization
- - #125 - Add support to batch incoming messages
- - #130 - start_from_beginning flag on routes and default
- - #128 - Monitor caller_label not working with super on inheritance
- - Renamed *inline* to *inline_mode* to stay consistent with flags that change the way karafka works (#125)
- - Dry-configurable bump to 0.5 with fixed proc value evaluation on retrieve patch (internal change)
-
- ## 0.5.0.1
- - Fixed inconsistency in responders non-required topic definition. Now only required: false available
- - #101 - Responders fail when multiple_usage true and required false
- - Fix error on startup from waterdrop (#102)
- - Waterdrop 0.3.2.1 with kafka.hosts instead of kafka_hosts
- - #105 - Karafka::Monitor#caller_label not working with inherited monitors
- - #99 - Standalone mode (without Sidekiq)
- - #97 - Buffer responders single topics before send (pre-validation)
- - Better control over consumer thanks to additional config options
- - #111 - Dynamic worker assignment based on the incoming params
- - Long shutdown time fix
-
- ## 0.5.0
- - Removed Zookeeper totally as dependency
- - Better group and partition rebalancing
- - Automatic thread management (no need for tuning) - each topic is a separate actor/thread
- - Moved from Poseidon into Ruby-Kafka
- - No more max_concurrency setting
- - After you define your App class and routes (and everything else) you need to execute App.boot!
- - Manual consuming is no longer available (no more karafka consume command)
- - Karafka topics CLI is no longer available. No Zookeeper - no global topic discovery
- - Dropped ZK as dependency
- - karafka info command no longer prints details about Zookeeper
- - Better shutdown
- - No more autodiscovery via Zookeeper - instead, the whole cluster will be discovered directly from Kafka
- - No more support for Kafka 0.8
- - Support for Kafka 0.9
- - No more need for ActorCluster, since now we have a single thread (and Kafka connection) per topic
- - Ruby 2.2.* support dropped
- - Using App name as a Kafka client_id
- - Automatic Capistrano integration
- - Responders support for handling better responses pipe-lining and better responses flow description and design (see README for more details)
- - Gem bump
- - Readme updates
- - karafka flow CLI command for printing the application flow
- - Some internal refactoring
-
- ## 0.4.2
- - #87 - Re-consume mode with cron for better Rails/Rack integration
- - Moved Karafka server related stuff into separate Karafka::Server class
- - Renamed Karafka::Runner into Karafka::Fetcher
- - Gem bump
- - Added chroot option to Zookeeper options
- - Moved BROKERS_PATH into config from constant
- - Added Karafka consume CLI action for a short running single consumption round
- - Small fixes to close broken connections
- - Readme updates
-
- ## 0.4.1
- - Explicit throw(:abort) required to halt before_enqueue (like in Rails 5)
- - #61 - Autodiscovery of Kafka brokers based on Zookeeper data
- - #63 - Graceful shutdown with current offset state during data processing
- - #65 - Example of NewRelic monitor is outdated
- - #71 - Setup should be executed after user code is loaded
- - Gem bump x3
- - Rubocop remarks
- - worker_timeout config option has been removed. It now needs to be defined manually by the framework user because WorkerGlass::Timeout can be disabled and we cannot use Karafka settings on a class level to initialize user code stuff
- - Moved setup logic under setup/Setup namespace
- - Better defaults handling
- - #75 - Kafka and Zookeeper options as a hash
- - #82 - Karafka autodiscovery fails upon caching of configs
- - #81 - Switch config management to dry configurable
- - Version fix
- - Dropped support for Ruby 2.1.*
- - Ruby bump to 2.3.1
-
- ## 0.4.0
- - Added WaterDrop gem with default configuration
- - Refactoring of config logic to simplify adding new dependencies that need to be configured based on #setup data
- - Gem bump
- - Readme updates
- - Renamed cluster to actor_cluster for method names
- - Replaced SidekiqGlass with generic WorkerGlass lib
- - Application bootstrap in app.rb no longer required
- - Karafka.boot needs to be executed after all the application files are loaded (template updated)
- - Small loader refactor (no API changes)
- - Ruby 2.3.0 support (default)
- - No more rake tasks
- - Karafka CLI instead of rake tasks
- - Worker cli command allows passing additional options directly to Sidekiq
- - Renamed concurrency to max_concurrency - it better describes what happens - Karafka will use this number of threads only when required
- - Added wait_timeout that allows us to tune how long we should wait on a single socket connection (single topic) for new messages before going to the next one (this applies to each thread separately)
- - Rubocop remarks
- - Removed Sinatra and Puma dependencies
- - Karafka Cli internal reorganization
- - Karafka Cli routes task
- - #37 - warn log for failed parsing of a message
- - #43 - wrong constant name
- - #44 - Method name conflict
- - #48 - Cannot load such file -- celluloid/current
- - #46 - Loading application
- - #45 - Set up monitor in config
- - #47 - rake karafka:run uses app.rb only
- - #53 - README update with Sinatra/Rails integration description
- - #41 - New Routing engine
- - #54 - Move Karafka::Workers::BaseWorker to Karafka::BaseWorker
- - #55 - ApplicationController and ApplicationWorker
-
- ## 0.3.2
- - Karafka::Params::Params lazy load merge keys with string/symbol names priorities fix
-
- ## 0.3.1
- - Renamed Karafka::Monitor to Karafka::Process to represent a Karafka process wrapper
- - Added Karafka::Monitoring that allows adding custom logging and monitoring with external libraries and systems
- - Moved logging functionality into Karafka::Monitoring default monitoring
- - Added possibility to provide own monitoring as long as it responds to #notice and #notice_error
- - Standardized logging format for all logs
-
- ## 0.3.0
- - Switched from custom ParserError for each parser to general catching of Karafka::Errors::ParseError and its descendants
- - Gem bump
- - Fixed #32 - now when using custom workers that do not inherit from Karafka::BaseWorker, the perform method is not required. Using custom workers means that the logic that would normally lie under #perform needs to be executed directly from the worker.
- - Fixed #31 - Technically didn't fix because this is how Sidekiq is meant to work, but provided the possibility to assign custom interchangers that allow bypassing JSON encoding issues by converting data that goes to Redis to a required format (and parsing it back when it is fetched)
- - Added full parameters lazy load - content is no longer loaded during #perform_async if params are not used in before_enqueue
- - No more namespaces for Redis by default (use separate DBs)
-
- ## 0.1.21
- - Sidekiq 4.0.1 bump
- - Gem bump
- - Added direct celluloid requirement to Karafka (removed from Sidekiq)
-
- ## 0.1.19
- - Internal call - schedule naming change
- - Enqueue to perform_async naming in controller to follow Sidekiq naming convention
- - Gem bump
-
- ## 0.1.18
- - Changed Redis configuration options into a single hash that is directly passed to Redis setup for Sidekiq
- - Added config.ru to provide a Sidekiq web UI (see README for more details)
-
- ## 0.1.17
- - Changed Karafka::Connection::Cluster to Karafka::Connection::ActorCluster to distinguish between a single thread actor cluster for multiple topic connection and a future feature that will allow process clusterization.
- - Add an ability to use user-defined parsers for messages
- - Lazy load params for before callbacks
- - Automatic loading/initializing all worker classes during startup (so Sidekiq won't fail with an unknown workers exception)
- - Params are now private to the controller
- - Added bootstrap method to app.rb
-
- ## 0.1.16
- - Cluster level error catching for all exceptions so the actor is not killed
- - Cluster level error logging
- - Listener refactoring (QueueConsumer extracted)
- - Karafka::Connection::QueueConsumer to wrap around fetching logic - technically we could replace Kafka with any other messaging engine as long as we preserve the same API
- - Added debug env for debugging purposes in applications
-
- ## 0.1.15
- - Fixed max_wait_ms vs socket_timeout_ms issue
- - Fixed closing queue connection after Poseidon::Errors::ProtocolError failure
- - Fixed wrong logging file selection based on env
- - Extracted Karafka::Connection::QueueConsumer object to wrap around queue connection
-
- ## 0.1.14
- - Rake tasks for listing all the topics on Kafka server (rake kafka:topics)
-
- ## 0.1.13
- - Ability to assign custom workers and use them bypassing Karafka::BaseWorker (or its descendants)
- - Gem bump
-
- ## 0.1.12
- - All internal errors went to Karafka::Errors namespace
-
- ## 0.1.11
- - Rescuing all the "before Sidekiq" processing so errors won't affect other incoming messages
- - Fixed dying actors after connection error
- - Added a new app status - "initializing"
- - Karafka::Status model cleanup
-
- ## 0.1.10
- - Added possibility to specify redis namespace in configuration (falls back to the app name)
- - Renamed redis_host to redis_url in configuration
-
- ## 0.1.9
- - Added worker logger
-
- ## 0.1.8
- - Dropped local env support in favour of [Envlogic](https://github.com/karafka/envlogic) - no changes in API
-
- ## 0.1.7
- - Karafka option for Redis hosts (not localhost only)
-
- ## 0.1.6
- - Added better concurrency by clusterization of listeners
- - Added graceful shutdown
- - Added concurrency that allows handling bigger applications with celluloid
- - Karafka controllers no longer require group to be defined (created based on the topic and app name)
- - Karafka controllers no longer require topic to be defined (created based on the controller name)
- - Readme updates
-
- ## 0.1.5
- - Celluloid support for listeners
- - Multi target logging (STDOUT and file)
-
- ## 0.1.4
- - Renamed events to messages to follow Apache Kafka naming convention
-
- ## 0.1.3
- - Karafka::App.logger moved to Karafka.logger
- - README updates (Usage section was added)
-
- ## 0.1.2
- - Logging to log/environment.log
- - Karafka::Runner
-
- ## 0.1.1
- - README updates
- - Rake tasks updates
- - Rake installation task
- - Changelog file added
-
- ## 0.1.0
- - Initial framework code
+ ## 2.1.10 (2023-08-21)
+ - [Enhancement] Introduce `connection.client.rebalance_callback` event for instrumentation of rebalances.
+ - [Refactor] Introduce a low level commands proxy to handle deviation between how we want to run certain commands and how rdkafka-ruby runs them by design.
+ - [Fix] Do not report lags in the DD listener for cases where the assignment is not workable.
+ - [Fix] Do not report negative lags in the DD listener.
+ - [Fix] Extremely fast shutdown after boot in specs can cause the process not to stop.
+ - [Fix] Disable `allow.auto.create.topics` for admin by default to prevent accidental topics creation on topics metadata lookups.
+ - [Fix] Improve the `query_watermark_offsets` operations by increasing a too low timeout.
+ - [Fix] Increase `TplBuilder` timeouts to compensate for remote clusters.
+ - [Fix] Always try to unsubscribe short-lived consumers used throughout the system, especially in the admin APIs.
+ - [Fix] Add missing `connection.client.poll.error` error type reference.
+
+ ## 2.1.9 (2023-08-06)
+ - **[Feature]** Introduce ability to customize pause strategy on a per topic basis (Pro).
+ - [Improvement] Disable the extensive messages logging in the default `karafka.rb` template.
+ - [Change] Require `waterdrop` `>= 2.6.6` due to the extra `LoggerListener` API.
+
+ ## 2.1.8 (2023-07-29)
+ - [Improvement] Introduce the `Karafka::BaseConsumer#used?` method to indicate that at least one invocation of `#consume` took or will take place. This can be used as a replacement for the non-direct `messages.count` check for shutdown and revocation to ensure that the consumption took place or is taking place (in case of running LRJ).
+ - [Improvement] Make `messages#to_a` return a copy of the underlying array to prevent scenarios where the mutation impacts offset management.
+ - [Improvement] Mitigate a librdkafka `cooperative-sticky` rebalance crash issue.
+ - [Improvement] Provide ability to overwrite `consumer_persistence` per subscribed topic. This is mostly useful for plugins and extensions developers.
+ - [Fix] Fix a case where the performance tracker would crash in case of mutation of messages to an empty state.
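The `messages#to_a` change above follows a common defensive-copy pattern. A minimal plain-Ruby sketch of why returning a `dup` protects internal state (the `Batch` class below is a hypothetical stand-in, not Karafka's implementation):

```ruby
# Illustration: returning a shallow copy means callers can mutate the
# returned array without corrupting the buffer that internal bookkeeping
# (such as offset management) still relies on.
class Batch
  def initialize(items)
    @items = items
  end

  # Defensive copy: each caller gets its own array
  def to_a
    @items.dup
  end
end

batch = Batch.new([10, 11, 12])
view = batch.to_a
view.clear # mutates only the copy
puts batch.to_a.inspect # => [10, 11, 12]
```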
+
+ ## 2.1.7 (2023-07-22)
+ - [Improvement] Always query for watermarks in the Iterator to improve the initial response time.
+ - [Improvement] Add `max_wait_time` option to the Iterator.
+ - [Fix] Fix a case where `Admin#read_topic` would wait for the poll interval on non-existing messages instead of exiting early.
+ - [Fix] Fix a case where an Iterator with per partition offsets with negative lookups would go below the number of available messages.
+ - [Fix] Remove unused constant from the Admin module.
+ - [Fix] Add missing `connection.client.rebalance_callback.error` to the `LoggerListener` instrumentation hook.
+
+ ## 2.1.6 (2023-06-29)
+ - [Improvement] Provide time support for the Iterator.
+ - [Improvement] Provide time support for admin `#read_topic`.
+ - [Improvement] Provide time support for consumer `#seek`.
+ - [Improvement] Remove no longer needed locks for client operations.
+ - [Improvement] Raise `Karafka::Errors::TopicNotFoundError` when trying to iterate over a non-existing topic.
+ - [Improvement] Ensure that Kafka multi-command operations run together under a mutex.
+ - [Change] Require `waterdrop` `>= 2.6.2`
+ - [Change] Require `karafka-core` `>= 2.1.1`
+ - [Refactor] Clean up the iterator code.
+ - [Fix] Improve performance in the dev environment for a Rails app (juike)
+ - [Fix] Rename `InvalidRealOffsetUsage` to `InvalidRealOffsetUsageError` to align with the naming of other errors.
+ - [Fix] Fix an unstable spec.
+ - [Fix] Fix a case where automatic `#seek` would overwrite a manual seek of a user when running LRJ.
+ - [Fix] Make sure that user direct `#seek` and `#pause` operations take precedence over system actions.
+ - [Fix] Make sure that `#pause` and `#resume` with one underlying connection do not race-condition.
+
+ ## 2.1.5 (2023-06-19)
+ - [Improvement] Drastically improve `#revoked?` response quality by checking the real-time assignment lost state on librdkafka.
+ - [Improvement] Improve eviction of saturated jobs that would run on already revoked assignments.
+ - [Improvement] Expose `#commit_offsets` and `#commit_offsets!` methods in the consumer to provide the ability to commit offsets directly to Kafka without having to mark new messages as consumed.
+ - [Improvement] No longer skip offset commit when no messages are marked as consumed, as `librdkafka` has fixed the crashes there.
+ - [Improvement] Remove no longer needed patches.
+ - [Improvement] Ensure that the coordinator revocation status is switched upon revocation detection when using `#revoked?`.
+ - [Improvement] Add benchmarks for marking as consumed (sync and async).
+ - [Change] Require `karafka-core` `>= 2.1.0`
+ - [Change] Require `waterdrop` `>= 2.6.1`
+
+ ## 2.1.4 (2023-06-06)
+ - [Fix] `processing_lag` and `consumption_lag` on an empty batch fail on shutdown usage (#1475)
+
+ ## 2.1.3 (2023-05-29)
+ - [Maintenance] Add a linter to ensure that all integration specs end with `_spec.rb`.
+ - [Fix] Fix `#retrying?` helper result value (Aerdayne).
+ - [Fix] Fix `mark_as_consumed!` raising an error instead of returning `false` on `unknown_member_id` (#1461).
+ - [Fix] Enable phantom tests.
+
+ ## 2.1.2 (2023-05-26)
+ - Set minimum `karafka-core` to `2.0.13` to make sure the correct version of `karafka-rdkafka` is used.
+ - Set minimum `waterdrop` to `2.5.3` to make sure the correct version of `waterdrop` is used.
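Version floors like the ones above are typically enforced through the application's Gemfile rather than pinned directly; a hedged sketch (the exact constraints you need depend on your setup):

```ruby
# Gemfile sketch: requiring karafka >= 2.1.2 lets Bundler's dependency
# resolution pull in karafka-core >= 2.0.13 and waterdrop >= 2.5.3
# as described in the changelog entry above.
source 'https://rubygems.org'

gem 'karafka', '>= 2.1.2'
```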
+
+ ## 2.1.1 (2023-05-24)
+ - [Fix] Liveness Probe Doesn't Meet HTTP 1.1 Criteria - Causing Kubernetes Restarts (#1450)
+
+ ## 2.1.0 (2023-05-22)
+ - **[Feature]** Provide ability to use CurrentAttributes with ActiveJob's Karafka adapter (federicomoretti).
+ - **[Feature]** Introduce collective Virtual Partitions offset management.
+ - **[Feature]** Use virtual offsets to filter out messages that would be re-processed upon retries.
+ - [Improvement] No longer break processing on failing parallel virtual partitions in ActiveJob because it is compensated by virtual marking.
+ - [Improvement] Always use Virtual offset management for Pro ActiveJobs.
+ - [Improvement] Do not attempt to mark offsets on already revoked partitions.
+ - [Improvement] Make sure that VP components are not injected into non VP strategies.
+ - [Improvement] Improve complex strategies inheritance flow.
+ - [Improvement] Optimize offset management for DLQ + MoM feature combinations.
+ - [Change] Removed `Karafka::Pro::BaseConsumer` in favor of `Karafka::BaseConsumer`. (#1345)
+ - [Fix] Fix for `max_messages` and `max_wait_time` not having reference in errors.yml (#1443)
+
+ ### Upgrade notes
+
+ 1. Upgrade to Karafka `2.0.41` prior to upgrading to `2.1.0`.
+ 2. Replace `Karafka::Pro::BaseConsumer` references with `Karafka::BaseConsumer`.
+ 3. Replace `Karafka::Instrumentation::Vendors::Datadog:Listener` with `Karafka::Instrumentation::Vendors::Datadog::MetricsListener`.
+
+ ## 2.0.41 (2023-04-19)
+ - **[Feature]** Provide `Karafka::Pro::Iterator` for anonymous topic/partitions iterations and messages lookups (#1389 and #1427).
+ - [Improvement] Optimize topic lookup for `read_topic` admin method usage.
+ - [Improvement] Report via `LoggerListener` information about the partition on which a given job has started and finished.
+ - [Improvement] Slightly normalize the `LoggerListener` format. Always report partition related operations as follows: `TOPIC_NAME/PARTITION`.
+ - [Improvement] Do not retry recovery from `unknown_topic_or_part` when Karafka is shutting down as there is no point and no risk of any data loss.
+ - [Improvement] Report `client.software.name` and `client.software.version` according to the `librdkafka` recommendation.
+ - [Improvement] Report the ten longest integration specs after the suite execution.
+ - [Improvement] Prevent user originating errors related to statistics processing after a listener loop crash from potentially crashing the listener loop and hanging the Karafka process.
+
+ ## 2.0.40 (2023-04-13)
+ - [Improvement] Introduce the `Karafka::Messages::Messages#empty?` method to handle Idle related cases where shutdown or revocation would be called on an empty messages set. This method allows for checking if there are any messages in the messages batch.
+ - [Refactor] Require messages builder to accept partition and do not fetch it from messages.
+ - [Refactor] Use an empty messages set for internal APIs (Idle) (so there always is a `Karafka::Messages::Messages`)
+ - [Refactor] Allow for empty messages set initialization with -1001 and -1 on metadata (similar to `librdkafka`)
+
+ ## 2.0.39 (2023-04-11)
+ - **[Feature]** Provide ability to throttle/limit number of messages processed in a time unit (#1203)
+ - **[Feature]** Provide Delayed Topics (#1000)
+ - **[Feature]** Provide ability to expire messages (expiring topics)
+ - **[Feature]** Provide ability to apply filters after messages are polled and before enqueued. This is a generic filter API for any usage.
+ - [Improvement] When using ActiveJob with Virtual Partitions, Karafka will stop if collectively VPs are failing. This minimizes the number of jobs that will be collectively re-processed.
+ - [Improvement] The `#retrying?` method has been added to consumers to provide the ability to check that we're reprocessing data after a failure. This is useful for branching out processing based on errors.
+ - [Improvement] Track active_job_id in instrumentation (#1372)
+ - [Improvement] Introduce a new housekeeping job type called `Idle` for non-consumption execution flows.
+ - [Improvement] Change how manual offset management works with Long-Running Jobs. Use the last message offset to move forward instead of relying on the last message marked as consumed for a scenario where no message is marked.
+ - [Improvement] Prioritize in Pro non-consumption jobs execution over consumption despite LJF. This will ensure that housekeeping as well as other non-consumption events are not saturated when running a lot of work.
+ - [Improvement] Normalize the DLQ behaviour with MoM. Always pause on dispatch for all the strategies.
+ - [Improvement] Improve the manual offset management and DLQ behaviour when no markings occur for OSS.
+ - [Improvement] Do not stop ActiveJob work running under virtual partitions early, to prevent extensive reprocessing.
+ - [Improvement] Drastically increase the number of scenarios covered by integration specs (OSS and Pro).
+ - [Improvement] Introduce a `Coordinator#synchronize` lock for cross virtual partitions operations.
+ - [Fix] Do not resume a partition that is not paused.
+ - [Fix] Fix `LoggerListener` cases where logs would not include caller id (when available)
+ - [Fix] Fix not working benchmark tests.
+ - [Fix] Fix a case where using manual offset management with a user pause would ignore the pause and seek to the next message.
+ - [Fix] Fix a case where the dead letter queue would go into an infinite loop on a message with the first ever offset if the first ever offset would not recover.
+ - [Fix] Make sure to always resume for all LRJ strategies on revocation.
+ - [Refactor] Make sure that the coordinator is topic aware. Needed for throttling, delayed processing and expired jobs.
+ - [Refactor] Put Pro strategies into namespaces to better organize multiple combinations.
+ - [Refactor] Do not rely on messages metadata for internal topic and partition operations like `#seek` so they can run independently from the consumption flow.
+ - [Refactor] Hold a single topic/partition reference on a coordinator instead of in executor, coordinator and consumer.
+ - [Refactor] Move `#mark_as_consumed` and `#mark_as_consumed!` into `Strategies::Default` to be able to introduce marking for virtual partitions.
+
+ ## 2.0.38 (2023-03-27)
143
+ - [Improvement] Introduce `Karafka::Admin#read_watermark_offsets` to get low and high watermark offsets values.
144
+ - [Improvement] Track active_job_id in instrumentation (#1372)
145
+ - [Improvement] Improve `#read_topic` reading in case of a compacted partition where the offset is below the low watermark offset. This should optimize reading and should not go beyond the low watermark offset.
146
+ - [Improvement] Allow `#read_topic` to accept instance settings to overwrite any settings needed to customize reading behaviours.

## 2.0.37 (2023-03-20)
- [Fix] Declarative topics execution on a secondary cluster runs topics creation on the primary one (#1365)
- [Fix] Admin read operations commit offset when not needed (#1369)

## 2.0.36 (2023-03-17)
- [Refactor] Rename internal naming of `Structurable` to `Declaratives` for the declarative topics feature.
- [Fix] AJ + DLQ + MOM + LRJ is pausing indefinitely after the first job (#1362)

## 2.0.35 (2023-03-13)
- **[Feature]** Allow for defining topics config via the DSL and its automatic creation via a CLI command.
- **[Feature]** Allow for full topics reset and topics repartitioning via the CLI.
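A sketch of how the declarative topics DSL looks in routing (hypothetical topic and consumer names; partition and replication values are illustrative only):

```ruby
class KarafkaApp < Karafka::App
  routes.draw do
    topic :events do
      consumer EventsConsumer
      # Declarative topic definition that the CLI commands (creation, reset,
      # repartitioning) operate on
      config(partitions: 6, replication_factor: 2)
    end
  end
end
```

With such definitions in place, the `karafka topics` CLI sub-commands mentioned above can align the cluster state with the routing.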

## 2.0.34 (2023-03-04)
- [Improvement] Attach an `embedded` tag to Karafka processes started using the embedded API.
- [Change] Renamed `Datadog::Listener` to `Datadog::MetricsListener` for consistency. (#1124)

### Upgrade notes

1. Replace `Datadog::Listener` references with `Datadog::MetricsListener`.

## 2.0.33 (2023-02-24)
- **[Feature]** Support `perform_all_later` in ActiveJob adapter for Rails `7.1+`
- **[Feature]** Introduce the ability to assign and re-assign tags in consumer instances. This can be used for extra instrumentation that is context aware.
- **[Feature]** Introduce the ability to assign and reassign tags to the `Karafka::Process`.
- [Improvement] When using the `ActiveJob` adapter, automatically tag jobs with the name of the `ActiveJob` class that is running inside of the `ActiveJob` consumer.
- [Improvement] Make the `::Karafka::Instrumentation::Notifications::EVENTS` list public for anyone wanting to re-bind those into a different notification bus.
- [Improvement] Set `fetch.message.max.bytes` for `Karafka::Admin` to `5MB` to make sure that all data is fetched correctly for the Web UI under heavy load (many consumers).
- [Improvement] Introduce a `strict_topics_namespacing` config option to enable/disable the strict topics naming validations. This can be useful when working with pre-existing topics which we cannot or do not want to rename.
- [Fix] Karafka monitor is prematurely cached (#1314)

### Upgrade notes

Since `#tags` were introduced on consumers, the `#tags` method is now part of the consumers API.

This means that in case you were using a method called `#tags` in your consumers, you will have to rename it:

```ruby
class EventsConsumer < ApplicationConsumer
  def consume
    messages.each do |message|
      tags << message.payload.tag
    end

    tags.each { |tag| puts tag }
  end

  private

  # This will collide with the tagging API
  # This NEEDS to be renamed not to collide with the `#tags` method provided by the consumers API.
  def tags
    @tags ||= Set.new
  end
end
```
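The rename itself can be illustrated framework-free (a minimal sketch with a hypothetical `EventsAggregator` class standing in for the consumer; only the private helper name changes):

```ruby
require 'set'

class EventsAggregator
  # Mimics the consumer flow from the example above, but the private helper
  # is renamed to `collected_tags`, so it cannot collide with the `#tags`
  # method provided by the consumers API.
  def consume(payloads)
    payloads.each { |payload| collected_tags << payload[:tag] }
    collected_tags.to_a
  end

  private

  # Renamed from `#tags`
  def collected_tags
    @collected_tags ||= Set.new
  end
end

EventsAggregator.new.consume([{ tag: 'a' }, { tag: 'b' }, { tag: 'a' }])
# => ["a", "b"]
```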

## 2.0.32 (2023-02-13)
- [Fix] Many non-existing topic subscriptions propagate poll errors beyond the client
- [Improvement] Ignore `unknown_topic_or_part` errors in dev when `allow.auto.create.topics` is on.
- [Improvement] Optimize temporary errors handling in polling for a better backoff policy

## 2.0.31 (2023-02-12)
- [Feature] Allow for adding partitions via the `Admin#create_partitions` API.
- [Fix] Do not ignore admin errors upon invalid configuration (#1254)
- [Fix] Topic name validation (#1300) - CandyFet
- [Improvement] Increase the `max_wait_timeout` on admin operations to five minutes to make sure no timeout occurs on heavily loaded clusters.
- [Maintenance] Require `karafka-core` >= `2.0.11` and switch to the shared RSpec locator.
- [Maintenance] Require `karafka-rdkafka` >= `0.12.1`

## 2.0.30 (2023-01-31)
- [Improvement] Alias `--consumer-groups` with `--include-consumer-groups`
- [Improvement] Alias `--subscription-groups` with `--include-subscription-groups`
- [Improvement] Alias `--topics` with `--include-topics`
- [Improvement] Introduce `--exclude-consumer-groups` for the ability to exclude certain consumer groups from running
- [Improvement] Introduce `--exclude-subscription-groups` for the ability to exclude certain subscription groups from running
- [Improvement] Introduce `--exclude-topics` for the ability to exclude certain topics from running
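Example invocations (the group and topic names are hypothetical; the old `--consumer-groups` style flags keep working as aliases of the `--include-*` ones):

```shell
# Run only the listed topics / groups
bundle exec karafka server --include-topics orders,payments
bundle exec karafka server --include-consumer-groups group_a

# Run everything except the listed ones
bundle exec karafka server --exclude-topics audit_logs
bundle exec karafka server --exclude-subscription-groups reports_sg
```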

## 2.0.29 (2023-01-30)
- [Improvement] Make sure that the `Karafka#producer` instance has the `LoggerListener` enabled in the install template, so Karafka by default prints both consumer and producer info.
- [Improvement] Extract the code loading capabilities of the Karafka console from the executable, so the web can use it to provide CLI commands.
- [Fix] Fix for: running karafka console results in NameError with Rails (#1280)
- [Fix] Make sure that the `caller` for async errors is being published.
- [Change] Make sure that WaterDrop `2.4.10` or higher is used with this release to support the Web-UI.

## 2.0.28 (2023-01-25)
- **[Feature]** Provide the ability to use the Dead Letter Queue with Virtual Partitions.
- [Improvement] Collapse Virtual Partitions upon a retryable error to a single partition. This allows the dead letter queue to operate and mitigate issues arising from work virtualization. This removes uncertainties upon errors that can be retried and processed. Affects the given topic partition virtualization only for multi-topic and multi-partition parallelization. It also minimizes potential "flickering" where a given data set has potentially many corrupted messages. The collapse will last until all the messages from the collective corrupted batch are processed. After that, virtualization will resume.
- [Improvement] Introduce a `#collapsed?` consumer method available for consumers using Virtual Partitions.
- [Improvement] Allow for customization of DLQ dispatched message details in Pro (#1266) via the `#enhance_dlq_message` consumer method.
- [Improvement] Include `original_consumer_group` in the DLQ dispatched messages in Pro.
- [Improvement] Use the Karafka `client_id` as the kafka `client.id` value by default

### Upgrade notes

If you want to continue to use `karafka` as the default for the kafka `client.id`, assign it manually:

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # Other settings...
    config.kafka = {
      'client.id': 'karafka'
    }
  end
end
```

## 2.0.27 (2023-01-11)
- Do not lock the Ruby version in Karafka in favour of `karafka-core`.
- Make sure the `karafka-core` version is at least `2.0.9` to make sure we run `karafka-rdkafka`.

## 2.0.26 (2023-01-10)
- **[Feature]** Allow for disabling given topics by setting `active` to false. It will exclude them from consumption but will still allow having their definitions for using admin APIs, etc.
- [Improvement] Early terminate on `read_topic` when reaching the last offset available at the request time.
- [Improvement] Introduce a `quiet` state that indicates that Karafka is not only moving to quiet mode but that it actually reached it and no work will happen anymore in any of the consumer groups.
- [Improvement] Use Karafka defined routes topics when possible for the `read_topic` admin API.
- [Improvement] Introduce `client.pause` and `client.resume` instrumentation hooks for tracking client topic partition pausing and resuming. This is alongside `consumer.consuming.pause` that can be used to track both manual and automatic pausing with more granular consumer related details. The `client.*` should be used for low level tracking.
- [Improvement] Replace the `LoggerListener` pause notification with one based on `client.pause` instead of `consumer.consuming.pause`.
- [Improvement] Expand `LoggerListener` with the `client.resume` notification.
- [Improvement] Replace random anonymous subscription groups ids with stable ones.
- [Improvement] Add `consumer.consume`, `consumer.revoke` and `consumer.shutting_down` notification events and move the revocation logic calling to strategies.
- [Change] Rename the job queue statistics `processing` key to `busy`. No changes needed because the naming in the DataDog listener stays the same.
- [Fix] Fix proctitle listener state changes reporting on new states.
- [Fix] Make sure all file descriptors are closed in the integration specs.
- [Fix] Fix a case where empty subscription groups could leak into the execution flow.
- [Fix] Fix `LoggerListener` reporting so it does not end with `.`.
- [Fix] Run previously defined (if any) signal traps created prior to Karafka signal traps.
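A routing sketch of the `active false` flag (hypothetical topic name): the topic stays defined for admin APIs while being excluded from consumption.

```ruby
class KarafkaApp < Karafka::App
  routes.draw do
    topic :audit_logs do
      # Keep the definition around for admin APIs, but do not consume it
      active false
    end
  end
end
```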

## 2.0.25 (2023-01-10)
- Release yanked due to accidental release with local changes.

## 2.0.24 (2022-12-19)
- **[Feature]** Provide out of the box encryption support for Pro.
- [Improvement] Add instrumentation upon `#pause`.
- [Improvement] Add instrumentation upon retries.
- [Improvement] Assign `#id` to consumers similar to other entities for ease of debugging.
- [Improvement] Add retries and pausing to the default `LoggerListener`.
- [Improvement] Introduce a new final `terminated` state that will kick in prior to exit but after all the instrumentation and other things are done.
- [Improvement] Ensure that state transitions are thread-safe and ensure state transitions can occur in one direction.
- [Improvement] Optimize status methods proxying to `Karafka::App`.
- [Improvement] Allow for easier state usage by introducing explicit `#to_s` for reporting.
- [Improvement] Change auto-generated id from `SecureRandom#uuid` to `SecureRandom#hex(6)`
- [Improvement] Emit statistics every 5 seconds by default.
- [Improvement] Introduce a general messages parser that can be swapped when needed.
- [Fix] Do not trigger code reloading when `consumer_persistence` is enabled.
- [Fix] Shutdown the producer after all the consumer components are down and the status is stopped. This will ensure that any instrumentation related Kafka messaging can still operate.

### Upgrade notes

If you want to disable `librdkafka` statistics because you do not use them at all, update the `kafka` `statistics.interval.ms` setting and set it to `0`:

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # Other settings...
    config.kafka = {
      'statistics.interval.ms': 0
    }
  end
end
```

## 2.0.23 (2022-12-07)
- [Maintenance] Align with `waterdrop` and `karafka-core`
- [Improvement] Provide `Admin#read_topic` API to get topic data without subscribing.
- [Improvement] Upon an end user `#pause`, do not commit the offset in automatic offset management mode. This will prevent a scenario where a pause is needed but during it a rebalance occurs and a different assigned process starts not from the pause location but from the automatic offset that may be different. This still allows for using `#mark_as_consumed`.
- [Fix] Fix a scenario where a manual `#pause` would be overwritten by a resume initiated by the strategy.
- [Fix] Fix a scenario where a manual `#pause` in LRJ would cause an infinite pause.

## 2.0.22 (2022-12-02)
- [Improvement] Load Pro components upon Karafka require so they can be altered prior to setup.
- [Improvement] Do not run LRJ jobs that were added to the jobs queue but were revoked meanwhile.
- [Improvement] Allow running particular named subscription groups similar to consumer groups.
- [Improvement] Allow running particular topics similar to consumer groups.
- [Improvement] Raise a configuration error when trying to run Karafka with options leading to no subscriptions.
- [Fix] Fix `karafka info` subscription groups count reporting as it was misleading.
- [Fix] Allow for defining subscription groups with symbols similar to consumer groups and topics to align the API.
- [Fix] Do not allow for an explicit `nil` as a `subscription_group` block argument.
- [Fix] Fix instability in subscription groups static members ids when using the `--consumer_groups` CLI flag.
- [Fix] Fix a case in routing where an anonymous subscription group could not be used inside of a consumer group.
- [Fix] Fix a case where shutdown prior to listeners build would crash the server initialization.
- [Fix] Duplicated logs in development environment for Rails when logger set to `$stdout`.

## 2.0.21 (2022-11-25)
- [Improvement] Make revocation jobs for LRJ topics non-blocking to prevent blocking polling when someone uses non-revocation aware LRJ jobs and revocation happens.

## 2.0.20 (2022-11-24)
- [Improvement] Support `group.instance.id` assignment (static group membership) for a case where a single consumer group has multiple subscription groups (#1173).
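A configuration sketch (hypothetical id value): the base `group.instance.id` is set once in the kafka scope; the assumption, based on the entry above, is that Karafka derives non-colliding ids even when one consumer group spans multiple subscription groups.

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    config.kafka = {
      'bootstrap.servers': '127.0.0.1:9092',
      # Base value for static group membership; per the entry above, a single
      # consumer group with many subscription groups is supported
      'group.instance.id': 'shipping-node-1'
    }
  end
end
```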

## 2.0.19 (2022-11-20)
- **[Feature]** Provide the ability to skip failing messages without dispatching them to an alternative topic (DLQ).
- [Improvement] Improve the integration with Ruby on Rails by preventing double-require of components.
- [Improvement] Improve stability of the shutdown process upon critical errors.
- [Improvement] Improve stability of the integrations spec suite.
- [Fix] Fix an issue where upon fast startup of multiple subscription groups from the same consumer group, a ghost queue would be created due to problems in `Concurrent::Hash`.

## 2.0.18 (2022-11-18)
- **[Feature]** Support quiet mode via the `TSTP` signal. When used, Karafka will finish processing current messages, run `shutdown` jobs, and switch to a quiet mode where no new work is being accepted. At the same time, it will keep the consumer group quiet, and thus no rebalance will be triggered. This can be particularly useful during deployments.
- [Improvement] Trigger `#revoked` for jobs in case revocation would happen during shutdown when jobs are still running. This should ensure we get a notion of revocation for Pro LRJ jobs even when revocation happens upon shutdown (#1150).
- [Improvement] Stabilize the shutdown procedure for consumer groups with many subscription groups that have non-aligned processing cost per batch.
- [Improvement] Remove double loading of Karafka via the Rails railtie.
- [Fix] Fix invalid class references in YARD docs.
- [Fix] Prevent parallel closing of many clients.
- [Fix] Fix a case where information about revocation for a combination of LRJ + VP would not be dispatched until all VP work is done.

## 2.0.17 (2022-11-10)
- [Fix] Fix a few typos around DLQ and Pro DLQ Dispatch original metadata naming.
- [Fix] Narrow the components lookup to the appropriate scope (#1114)

### Upgrade notes

1. Replace `original-*` references from DLQ dispatched metadata with `original_*`

```ruby
# DLQ topic consumption
def consume
  messages.each do |broken_message|
    topic = broken_message.metadata['original_topic'] # was original-topic
    partition = broken_message.metadata['original_partition'] # was original-partition
    offset = broken_message.metadata['original_offset'] # was original-offset

    Rails.logger.error "This message is broken: #{topic}/#{partition}/#{offset}"
  end
end
```

## 2.0.16 (2022-11-09)
- **[Breaking]** Disable the root `manual_offset_management` setting and require it to be configured per topic. This is part of the "topic features" configuration extraction for better code organization.
- **[Feature]** Introduce the **Dead Letter Queue** feature and the Pro **Enhanced Dead Letter Queue** feature
- [Improvement] Align attributes available in the instrumentation bus for listener related events.
- [Improvement] Include the consumer group id in consumption related events (#1093)
- [Improvement] Delegate Pro components loading to Zeitwerk
- [Improvement] Include `Datadog::LoggerListener` for tracking logger data with DataDog (@bruno-b-martins)
- [Improvement] Include `seek_offset` in the `consumer.consume.error` event payload (#1113)
- [Refactor] Remove the unused logger listener event handler.
- [Refactor] Internal refactoring of the routing validations flow.
- [Refactor] Reorganize how routing related features are represented internally to simplify features management.
- [Refactor] Extract supported features combinations processing flow into separate strategies.
- [Refactor] Auto-create topics in the integration specs based on the defined routing
- [Refactor] Auto-inject Pro components via composition instead of requiring the use of `Karafka::Pro::BaseConsumer` (#1116)
- [Fix] Fix a case where routing tags would not be injected when a given routing definition would not be used with a block
- [Fix] Fix a case where using `#active_job_topic` without extra block options would cause `manual_offset_management` to stay false.
- [Fix] Fix a case when upon Pro ActiveJob usage with Virtual Partitions, the correct offset would not be stored
- [Fix] Fix a case where upon Virtual Partitions usage, the same underlying real partition would be resumed several times.
- [Fix] Fix LRJ enqueuing pause increases the coordinator counter (#115)
- [Fix] Release the `ActiveRecord` connection to the pool after the work in non-dev envs (#1130)
- [Fix] Fix a case where post-initialization shutdown would not initiate shutdown procedures.
- [Fix] Prevent Karafka from committing offsets twice upon shutdown.
- [Fix] Fix for a case where fast consecutive stop signaling could hang the stopping listeners.
- [Specs] Split specs into regular and pro to simplify how resources are loaded
- [Specs] Add specs to ensure that all the Pro components have a proper per-file license (#1099)

### Upgrade notes

1. Remove the `manual_offset_management` setting from the main config if you use it:

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # ...

    # This line needs to be removed:
    config.manual_offset_management = true
  end
end
```

2. Set the `manual_offset_management` feature flag per each topic where you want to use it in the routing. Don't set it for topics where you want the default offset management strategy to be used.

```ruby
class KarafkaApp < Karafka::App
  routes.draw do
    consumer_group :group_name do
      topic :example do
        consumer ExampleConsumer
        manual_offset_management true
      end

      topic :example2 do
        consumer ExampleConsumer2
        manual_offset_management true
      end
    end
  end
end
```

3. If you were using code to restart dead connections similar to this:

```ruby
class ActiveRecordConnectionsCleaner
  def on_error_occurred(event)
    return unless event[:error].is_a?(ActiveRecord::StatementInvalid)

    ::ActiveRecord::Base.clear_active_connections!
  end
end

Karafka.monitor.subscribe(ActiveRecordConnectionsCleaner.new)
```

It **should** be removed. This code is **no longer needed**.

## 2.0.15 (2022-10-20)
- Sanitize admin config prior to any admin action.
- Make the messages partitioner outcome for virtual partitions consistently distributed with regard to concurrency.
- Improve DataDog/StatsD metrics reporting by reporting per topic partition lags and trends.
- Replace synchronous offset commit with async on resuming paused partition (#1087).

## 2.0.14 (2022-10-16)
- Prevent consecutive stop signals from starting multiple supervision shutdowns.
- Provide `Karafka::Embedded` to simplify the start/stop process when running Karafka from within another process (Puma, Sidekiq, etc).
- Fix a race condition where un-pausing a long-running-job exactly upon listener resuming would crash the listener loop (#1072).
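A sketch of embedding Karafka in Puma via `Karafka::Embedded` (assumes Puma's worker lifecycle hooks; other supervisors like Sidekiq expose equivalent hooks):

```ruby
# config/puma.rb
on_worker_boot do
  # Start Karafka consumption inside the Puma worker process
  Karafka::Embedded.start
end

on_worker_shutdown do
  # Stop consumption gracefully before the worker exits
  Karafka::Embedded.stop
end
```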

## 2.0.13 (2022-10-14)
- Early exit upon attempts to commit the current or an earlier offset twice.
- Add more integration specs covering edge cases.
- Strip non producer related config when the default producer is initialized (#776)

## 2.0.12 (2022-10-06)
- Commit stored offsets upon the rebalance revocation event to reduce the number of messages that are re-processed.
- Support the cooperative-sticky rebalance strategy.
- Replace the offset commit after each batch with a per-rebalance commit.
- Use instrumentation to publish internal rebalance errors.

## 2.0.11 (2022-09-29)
- Report early on errors related to network and on max poll interval being exceeded to indicate critical problems that will be retried but may mean some underlying problems in the system.
- Fix support of Ruby 2.7.0 to 2.7.2 (#1045)

## 2.0.10 (2022-09-23)
- Improve error recovery by delegating the recovery to the existing `librdkafka` instance.

## 2.0.9 (2022-09-22)
- Fix Singleton not visible when used in PORO (#1034)
- Divide pristine specs into pristine and poro. Pristine will still have helpers loaded, poro will have nothing.
- Fix a case where the `manual_offset_management` offset upon error is not reverted to the first message in a case where there were no markings as consumed at all for multiple batches.
- Implement small reliability improvements around marking as consumed.
- Introduce a config sanity check to make sure Virtual Partitions are not used with manual offset management.
- Fix a possibility of using `active_job_topic` with Virtual Partitions and manual offset management (ActiveJob can still use it due to the atomicity of jobs).
- Move seek offset ownership to the coordinator to allow further Virtual Partitions development.
- Improve client shutdown in specs.
- Do not reset the client on network issues and rely on `librdkafka` to do so.
- Allow for nameless (anonymous) subscription groups (#1033)

## 2.0.8 (2022-09-19)
- [Breaking change] Rename Virtual Partitions `concurrency` to `max_partitions` to avoid confusion (#1023).
- Allow for block based subscription groups management (#1030).

## 2.0.7 (2022-09-05)
- [Breaking change] Redefine the Virtual Partitions routing DSL to accept concurrency
- Allow for the `concurrency` setting in Virtual Partitions to extend or limit the number of jobs per regular partition. This makes sure we do not use all the threads on virtual partitions jobs
- Allow for creation of as many Virtual Partitions as needed, without taking the global `concurrency` into consideration

## 2.0.6 (2022-09-02)
- Improve client closing.
- Fix for: Multiple LRJ topics fetched concurrently block ability for LRJ to kick in (#1002)
- Introduce a pre-enqueue sync execution layer to prevent starvation cases for LRJ
- Close admin upon critical errors to prevent segmentation faults
- Add support for manual subscription group management (#852)

## 2.0.5 (2022-08-23)
- Fix unnecessary double new line in the `karafka.rb` template for Ruby on Rails
- Fix a case where a manually paused partition would not be processed after rebalance (#988)
- Increase specs stability.
- Lower concurrency of execution of specs in GitHub CI.

## 2.0.4 (2022-08-19)
- Fix hanging topic creation (#964)
- Fix conflict with other Rails loading libraries like `gruf` (#974)

## 2.0.3 (2022-08-09)
- Update boot info on server startup.
- Update `karafka info` with more descriptive Ruby version info.
- Fix an issue where when used with Rails in development, the log would be too verbose.
- Fix an issue where Zeitwerk with Rails would not load Pro components despite the license being present.

## 2.0.2 (2022-08-07)
- Bypass issue with Rails reload in development by releasing the connection (https://github.com/rails/rails/issues/44183).

## 2.0.1 (2022-08-06)
- Provide `Karafka::Admin` for creation and destruction of topics and fetching cluster info.
- Update integration specs to always use one-time disposable topics.
- Remove the no longer needed `wait_for_kafka` script.
- Add more integration specs covering offset management upon errors.

## 2.0.0 (2022-08-05)

This changelog describes changes between `1.4` and `2.0`. Please refer to appropriate release notes for changes between particular `rc` releases.

Karafka 2.0 is a **major** rewrite that brings many new things to the table but also removes specific concepts that happened not to be as good as I initially thought when I created them.

Please consider getting a Pro version if you want to **support** my work on the Karafka ecosystem!

For anyone worried that I will start converting regular features into Pro: This will **not** happen. Anything free and fully OSS in Karafka 1.4 will **forever** remain free. Most additions and improvements to the ecosystem are to its free parts. Any feature that is introduced as a free and open one will not become paid.

### Additions

This section describes **new** things and concepts introduced with Karafka 2.0.

Karafka 2.0:

- Introduces multi-threaded support for [concurrent work](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading) consumption for separate partitions as well as for single partition work via [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions).
- Introduces an [Active Job adapter](https://github.com/karafka/karafka/wiki/Active-Job) for using Karafka as a jobs backend with Ruby on Rails Active Job.
- Introduces a fully automatic integration end-to-end [test suite](https://github.com/karafka/karafka/tree/master/spec/integrations) that checks any case I could imagine.
- Introduces [Virtual Partitions](https://github.com/karafka/karafka/wiki/Pro-Virtual-Partitions) for the ability to parallelize work of a single partition.
- Introduces [Long-Running Jobs](https://github.com/karafka/karafka/wiki/Pro-Long-Running-Jobs) to allow for work that would otherwise exceed the `max.poll.interval.ms`.
- Introduces the [Enhanced Scheduler](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Scheduler) that uses a non-preemptive LJF (Longest Job First) algorithm instead of a FIFO (First-In, First-Out) one.
- Introduces an [Enhanced Active Job adapter](https://github.com/karafka/karafka/wiki/Pro-Enhanced-Active-Job) that is optimized and allows for strong ordering of jobs and more.
- Introduces seamless [Ruby on Rails integration](https://github.com/karafka/karafka/wiki/Integrating-with-Ruby-on-Rails-and-other-frameworks) via `Rails::Railtie` without the need for any extra configuration.
- Provides a `#revoked` [method](https://github.com/karafka/karafka/wiki/Consuming-messages#shutdown-and-partition-revocation-handlers) for taking actions upon topic revocation.
- Emits underlying async errors emitted from `librdkafka` via the standardized `error.occurred` [monitor channel](https://github.com/karafka/karafka/wiki/Error-handling-and-back-off-policy#error-tracking).
- Replaces `ruby-kafka` with `librdkafka` as the underlying driver.
- Introduces official [EOL policies](https://github.com/karafka/karafka/wiki/Versions-Lifecycle-and-EOL).
- Introduces [benchmarks](https://github.com/karafka/karafka/tree/master/spec/benchmarks) that can be used to profile Karafka.
- Introduces a requirement that the end user code **needs** to be [thread-safe](https://github.com/karafka/karafka/wiki/FAQ#does-karafka-require-gems-to-be-thread-safe).
- Introduces a [Pro subscription](https://github.com/karafka/karafka/wiki/Build-vs-Buy) with a [commercial license](https://github.com/karafka/karafka/blob/master/LICENSE-COMM) to fund further ecosystem development.

### Deletions

This section describes things that are **no longer** part of the Karafka ecosystem.

Karafka 2.0:

- Removes the topics mappers concept completely.
- Removes pidfiles support.
- Removes daemonization support.
- Removes support for using `sidekiq-backend` due to the introduction of [multi-threading](https://github.com/karafka/karafka/wiki/Concurrency-and-multithreading).
- Removes the `Responders` concept in favour of WaterDrop producer usage.
- Removes completely all the callbacks in favour of the finalizer method `#shutdown`.
- Removes the single message consumption mode in favour of [documentation](https://github.com/karafka/karafka/wiki/Consuming-messages#one-at-a-time) on how to do it easily by yourself.

### Changes

This section describes things that were **changed** in Karafka but are still present.

Karafka 2.0:

- Uses only instrumentation that comes from Karafka. This also applies to notifications coming natively from `librdkafka`. They are now piped through Karafka prior to being dispatched.
- Integrates WaterDrop `2.x` tightly with autoconfiguration inheritance and an option to redefine it.
- Integrates with the `karafka-testing` gem for RSpec that also has been updated.
- Updates `cli info` to reflect the `2.0` details.
- Stops validating `kafka` configuration beyond the minimum as the rest is handled by `librdkafka`.
- No longer uses `dry-validation`.
- No longer uses `dry-monitor`.
- No longer uses `dry-configurable`.
- Lowers the general external dependencies tree **heavily**.
- Renames `Karafka::Params::BatchMetadata` to `Karafka::Messages::BatchMetadata`.
- Renames `Karafka::Params::Params` to `Karafka::Messages::Message`.
- Renames `#params_batch` in consumers to `#messages`.
- Renames `Karafka::Params::Metadata` to `Karafka::Messages::Metadata`.
- Renames `Karafka::Fetcher` to `Karafka::Runner` and aligns the notification key names.
- Renames `StdoutListener` to `LoggerListener`.
- Reorganizes [monitoring and logging](https://github.com/karafka/karafka/wiki/Monitoring-and-logging) to match new concepts.
- Notifies on fatal worker processing errors.
- Contains updated install templates for Rails and non-Rails setups.
- Changes how the routing style (`0.5`) behaves. It now builds a single consumer group instead of one per topic.
- Introduces changes that will allow me to build a full web-UI in the upcoming `2.1`.
- Contains updated example apps.
- Standardizes error hooks for all error reporting (`error.occurred`).
- Changes the license to `LGPL-3.0`.
- Introduces a `karafka-core` dependency that contains common code used across the ecosystem.
- Contains an updated [wiki](https://github.com/karafka/karafka/wiki) on everything I could think of.

## Older releases

This changelog tracks Karafka `2.0` and higher changes.

If you are looking for changes in the unsupported releases, we recommend checking the [`1.4`](https://github.com/karafka/karafka/blob/1.4/CHANGELOG.md) branch of the Karafka repository.