@point3/node-rdkafka 3.6.0-1

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between package versions as they appear in their public registries.
Files changed (707)
  1. package/LICENSE.txt +20 -0
  2. package/README.md +636 -0
  3. package/binding.gyp +154 -0
  4. package/deps/librdkafka/.clang-format +136 -0
  5. package/deps/librdkafka/.clang-format-cpp +103 -0
  6. package/deps/librdkafka/.dir-locals.el +10 -0
  7. package/deps/librdkafka/.formatignore +33 -0
  8. package/deps/librdkafka/.gdbmacros +19 -0
  9. package/deps/librdkafka/.github/CODEOWNERS +1 -0
  10. package/deps/librdkafka/.github/ISSUE_TEMPLATE +34 -0
  11. package/deps/librdkafka/.semaphore/run-all-tests.yml +77 -0
  12. package/deps/librdkafka/.semaphore/semaphore-integration.yml +250 -0
  13. package/deps/librdkafka/.semaphore/semaphore.yml +378 -0
  14. package/deps/librdkafka/.semaphore/verify-linux-packages.yml +41 -0
  15. package/deps/librdkafka/CHANGELOG.md +2208 -0
  16. package/deps/librdkafka/CMakeLists.txt +291 -0
  17. package/deps/librdkafka/CODE_OF_CONDUCT.md +46 -0
  18. package/deps/librdkafka/CONFIGURATION.md +209 -0
  19. package/deps/librdkafka/CONTRIBUTING.md +431 -0
  20. package/deps/librdkafka/Doxyfile +2375 -0
  21. package/deps/librdkafka/INTRODUCTION.md +2481 -0
  22. package/deps/librdkafka/LICENSE +26 -0
  23. package/deps/librdkafka/LICENSE.cjson +22 -0
  24. package/deps/librdkafka/LICENSE.crc32c +28 -0
  25. package/deps/librdkafka/LICENSE.fnv1a +18 -0
  26. package/deps/librdkafka/LICENSE.hdrhistogram +27 -0
  27. package/deps/librdkafka/LICENSE.lz4 +26 -0
  28. package/deps/librdkafka/LICENSE.murmur2 +25 -0
  29. package/deps/librdkafka/LICENSE.nanopb +22 -0
  30. package/deps/librdkafka/LICENSE.opentelemetry +203 -0
  31. package/deps/librdkafka/LICENSE.pycrc +23 -0
  32. package/deps/librdkafka/LICENSE.queue +31 -0
  33. package/deps/librdkafka/LICENSE.regexp +5 -0
  34. package/deps/librdkafka/LICENSE.snappy +36 -0
  35. package/deps/librdkafka/LICENSE.tinycthread +26 -0
  36. package/deps/librdkafka/LICENSE.wingetopt +49 -0
  37. package/deps/librdkafka/LICENSES.txt +625 -0
  38. package/deps/librdkafka/Makefile +125 -0
  39. package/deps/librdkafka/README.md +199 -0
  40. package/deps/librdkafka/README.win32 +26 -0
  41. package/deps/librdkafka/STATISTICS.md +624 -0
  42. package/deps/librdkafka/configure +214 -0
  43. package/deps/librdkafka/configure.self +331 -0
  44. package/deps/librdkafka/debian/changelog +111 -0
  45. package/deps/librdkafka/debian/compat +1 -0
  46. package/deps/librdkafka/debian/control +71 -0
  47. package/deps/librdkafka/debian/copyright +99 -0
  48. package/deps/librdkafka/debian/gbp.conf +9 -0
  49. package/deps/librdkafka/debian/librdkafka++1.install +1 -0
  50. package/deps/librdkafka/debian/librdkafka-dev.examples +2 -0
  51. package/deps/librdkafka/debian/librdkafka-dev.install +9 -0
  52. package/deps/librdkafka/debian/librdkafka1.docs +5 -0
  53. package/deps/librdkafka/debian/librdkafka1.install +1 -0
  54. package/deps/librdkafka/debian/librdkafka1.symbols +135 -0
  55. package/deps/librdkafka/debian/rules +19 -0
  56. package/deps/librdkafka/debian/source/format +1 -0
  57. package/deps/librdkafka/debian/watch +2 -0
  58. package/deps/librdkafka/dev-conf.sh +123 -0
  59. package/deps/librdkafka/examples/CMakeLists.txt +79 -0
  60. package/deps/librdkafka/examples/Makefile +167 -0
  61. package/deps/librdkafka/examples/README.md +42 -0
  62. package/deps/librdkafka/examples/alter_consumer_group_offsets.c +338 -0
  63. package/deps/librdkafka/examples/consumer.c +271 -0
  64. package/deps/librdkafka/examples/delete_records.c +233 -0
  65. package/deps/librdkafka/examples/describe_cluster.c +322 -0
  66. package/deps/librdkafka/examples/describe_consumer_groups.c +455 -0
  67. package/deps/librdkafka/examples/describe_topics.c +427 -0
  68. package/deps/librdkafka/examples/elect_leaders.c +317 -0
  69. package/deps/librdkafka/examples/globals.json +11 -0
  70. package/deps/librdkafka/examples/idempotent_producer.c +344 -0
  71. package/deps/librdkafka/examples/incremental_alter_configs.c +347 -0
  72. package/deps/librdkafka/examples/kafkatest_verifiable_client.cpp +945 -0
  73. package/deps/librdkafka/examples/list_consumer_group_offsets.c +359 -0
  74. package/deps/librdkafka/examples/list_consumer_groups.c +365 -0
  75. package/deps/librdkafka/examples/list_offsets.c +327 -0
  76. package/deps/librdkafka/examples/misc.c +287 -0
  77. package/deps/librdkafka/examples/openssl_engine_example.cpp +248 -0
  78. package/deps/librdkafka/examples/producer.c +251 -0
  79. package/deps/librdkafka/examples/producer.cpp +228 -0
  80. package/deps/librdkafka/examples/rdkafka_complex_consumer_example.c +617 -0
  81. package/deps/librdkafka/examples/rdkafka_complex_consumer_example.cpp +467 -0
  82. package/deps/librdkafka/examples/rdkafka_consume_batch.cpp +264 -0
  83. package/deps/librdkafka/examples/rdkafka_example.c +853 -0
  84. package/deps/librdkafka/examples/rdkafka_example.cpp +679 -0
  85. package/deps/librdkafka/examples/rdkafka_performance.c +1781 -0
  86. package/deps/librdkafka/examples/transactions-older-broker.c +668 -0
  87. package/deps/librdkafka/examples/transactions.c +665 -0
  88. package/deps/librdkafka/examples/user_scram.c +491 -0
  89. package/deps/librdkafka/examples/win_ssl_cert_store.cpp +396 -0
  90. package/deps/librdkafka/lds-gen.py +73 -0
  91. package/deps/librdkafka/mainpage.doxy +40 -0
  92. package/deps/librdkafka/mklove/Makefile.base +329 -0
  93. package/deps/librdkafka/mklove/modules/configure.atomics +144 -0
  94. package/deps/librdkafka/mklove/modules/configure.base +2484 -0
  95. package/deps/librdkafka/mklove/modules/configure.builtin +70 -0
  96. package/deps/librdkafka/mklove/modules/configure.cc +186 -0
  97. package/deps/librdkafka/mklove/modules/configure.cxx +8 -0
  98. package/deps/librdkafka/mklove/modules/configure.fileversion +65 -0
  99. package/deps/librdkafka/mklove/modules/configure.gitversion +29 -0
  100. package/deps/librdkafka/mklove/modules/configure.good_cflags +18 -0
  101. package/deps/librdkafka/mklove/modules/configure.host +132 -0
  102. package/deps/librdkafka/mklove/modules/configure.lib +49 -0
  103. package/deps/librdkafka/mklove/modules/configure.libcurl +99 -0
  104. package/deps/librdkafka/mklove/modules/configure.libsasl2 +36 -0
  105. package/deps/librdkafka/mklove/modules/configure.libssl +147 -0
  106. package/deps/librdkafka/mklove/modules/configure.libzstd +58 -0
  107. package/deps/librdkafka/mklove/modules/configure.parseversion +95 -0
  108. package/deps/librdkafka/mklove/modules/configure.pic +16 -0
  109. package/deps/librdkafka/mklove/modules/configure.socket +20 -0
  110. package/deps/librdkafka/mklove/modules/configure.zlib +61 -0
  111. package/deps/librdkafka/mklove/modules/patches/README.md +8 -0
  112. package/deps/librdkafka/mklove/modules/patches/libcurl.0000-no-runtime-linking-check.patch +11 -0
  113. package/deps/librdkafka/mklove/modules/patches/libssl.0000-osx-rand-include-fix-OpenSSL-PR16409.patch +56 -0
  114. package/deps/librdkafka/packaging/RELEASE.md +319 -0
  115. package/deps/librdkafka/packaging/alpine/build-alpine.sh +38 -0
  116. package/deps/librdkafka/packaging/archlinux/PKGBUILD +30 -0
  117. package/deps/librdkafka/packaging/cmake/Config.cmake.in +37 -0
  118. package/deps/librdkafka/packaging/cmake/Modules/FindLZ4.cmake +38 -0
  119. package/deps/librdkafka/packaging/cmake/Modules/FindZSTD.cmake +27 -0
  120. package/deps/librdkafka/packaging/cmake/Modules/LICENSE.FindZstd +178 -0
  121. package/deps/librdkafka/packaging/cmake/README.md +38 -0
  122. package/deps/librdkafka/packaging/cmake/config.h.in +52 -0
  123. package/deps/librdkafka/packaging/cmake/parseversion.cmake +60 -0
  124. package/deps/librdkafka/packaging/cmake/rdkafka.pc.in +12 -0
  125. package/deps/librdkafka/packaging/cmake/try_compile/atomic_32_test.c +8 -0
  126. package/deps/librdkafka/packaging/cmake/try_compile/atomic_64_test.c +8 -0
  127. package/deps/librdkafka/packaging/cmake/try_compile/c11threads_test.c +14 -0
  128. package/deps/librdkafka/packaging/cmake/try_compile/crc32c_hw_test.c +27 -0
  129. package/deps/librdkafka/packaging/cmake/try_compile/dlopen_test.c +11 -0
  130. package/deps/librdkafka/packaging/cmake/try_compile/libsasl2_test.c +7 -0
  131. package/deps/librdkafka/packaging/cmake/try_compile/pthread_setname_darwin_test.c +6 -0
  132. package/deps/librdkafka/packaging/cmake/try_compile/pthread_setname_freebsd_test.c +7 -0
  133. package/deps/librdkafka/packaging/cmake/try_compile/pthread_setname_gnu_test.c +5 -0
  134. package/deps/librdkafka/packaging/cmake/try_compile/rand_r_test.c +7 -0
  135. package/deps/librdkafka/packaging/cmake/try_compile/rdkafka_setup.cmake +122 -0
  136. package/deps/librdkafka/packaging/cmake/try_compile/regex_test.c +10 -0
  137. package/deps/librdkafka/packaging/cmake/try_compile/strndup_test.c +5 -0
  138. package/deps/librdkafka/packaging/cmake/try_compile/sync_32_test.c +8 -0
  139. package/deps/librdkafka/packaging/cmake/try_compile/sync_64_test.c +8 -0
  140. package/deps/librdkafka/packaging/cp/README.md +16 -0
  141. package/deps/librdkafka/packaging/cp/check_features.c +72 -0
  142. package/deps/librdkafka/packaging/cp/verify-deb.sh +33 -0
  143. package/deps/librdkafka/packaging/cp/verify-packages.sh +69 -0
  144. package/deps/librdkafka/packaging/cp/verify-rpm.sh +32 -0
  145. package/deps/librdkafka/packaging/debian/changelog +66 -0
  146. package/deps/librdkafka/packaging/debian/compat +1 -0
  147. package/deps/librdkafka/packaging/debian/control +49 -0
  148. package/deps/librdkafka/packaging/debian/copyright +84 -0
  149. package/deps/librdkafka/packaging/debian/docs +5 -0
  150. package/deps/librdkafka/packaging/debian/gbp.conf +9 -0
  151. package/deps/librdkafka/packaging/debian/librdkafka-dev.dirs +2 -0
  152. package/deps/librdkafka/packaging/debian/librdkafka-dev.examples +2 -0
  153. package/deps/librdkafka/packaging/debian/librdkafka-dev.install +6 -0
  154. package/deps/librdkafka/packaging/debian/librdkafka-dev.substvars +1 -0
  155. package/deps/librdkafka/packaging/debian/librdkafka.dsc +16 -0
  156. package/deps/librdkafka/packaging/debian/librdkafka1-dbg.substvars +1 -0
  157. package/deps/librdkafka/packaging/debian/librdkafka1.dirs +1 -0
  158. package/deps/librdkafka/packaging/debian/librdkafka1.install +2 -0
  159. package/deps/librdkafka/packaging/debian/librdkafka1.postinst.debhelper +5 -0
  160. package/deps/librdkafka/packaging/debian/librdkafka1.postrm.debhelper +5 -0
  161. package/deps/librdkafka/packaging/debian/librdkafka1.symbols +64 -0
  162. package/deps/librdkafka/packaging/debian/rules +19 -0
  163. package/deps/librdkafka/packaging/debian/source/format +1 -0
  164. package/deps/librdkafka/packaging/debian/watch +2 -0
  165. package/deps/librdkafka/packaging/get_version.py +21 -0
  166. package/deps/librdkafka/packaging/homebrew/README.md +15 -0
  167. package/deps/librdkafka/packaging/homebrew/brew-update-pr.sh +31 -0
  168. package/deps/librdkafka/packaging/mingw-w64/configure-build-msys2-mingw-static.sh +52 -0
  169. package/deps/librdkafka/packaging/mingw-w64/configure-build-msys2-mingw.sh +21 -0
  170. package/deps/librdkafka/packaging/mingw-w64/export-variables.sh +13 -0
  171. package/deps/librdkafka/packaging/mingw-w64/run-tests.sh +6 -0
  172. package/deps/librdkafka/packaging/mingw-w64/semaphoreci-build.sh +38 -0
  173. package/deps/librdkafka/packaging/nuget/README.md +84 -0
  174. package/deps/librdkafka/packaging/nuget/artifact.py +177 -0
  175. package/deps/librdkafka/packaging/nuget/cleanup-s3.py +143 -0
  176. package/deps/librdkafka/packaging/nuget/common/p-common__plat-windows__arch-win32__bldtype-Release/msvcr120.zip +0 -0
  177. package/deps/librdkafka/packaging/nuget/common/p-common__plat-windows__arch-win32__bldtype-Release/msvcr140.zip +0 -0
  178. package/deps/librdkafka/packaging/nuget/common/p-common__plat-windows__arch-x64__bldtype-Release/msvcr120.zip +0 -0
  179. package/deps/librdkafka/packaging/nuget/common/p-common__plat-windows__arch-x64__bldtype-Release/msvcr140.zip +0 -0
  180. package/deps/librdkafka/packaging/nuget/nuget.sh +21 -0
  181. package/deps/librdkafka/packaging/nuget/nugetpackage.py +278 -0
  182. package/deps/librdkafka/packaging/nuget/packaging.py +448 -0
  183. package/deps/librdkafka/packaging/nuget/push-to-nuget.sh +21 -0
  184. package/deps/librdkafka/packaging/nuget/release.py +167 -0
  185. package/deps/librdkafka/packaging/nuget/requirements.txt +3 -0
  186. package/deps/librdkafka/packaging/nuget/staticpackage.py +178 -0
  187. package/deps/librdkafka/packaging/nuget/templates/librdkafka.redist.nuspec +21 -0
  188. package/deps/librdkafka/packaging/nuget/templates/librdkafka.redist.props +18 -0
  189. package/deps/librdkafka/packaging/nuget/templates/librdkafka.redist.targets +19 -0
  190. package/deps/librdkafka/packaging/nuget/zfile/__init__.py +0 -0
  191. package/deps/librdkafka/packaging/nuget/zfile/zfile.py +98 -0
  192. package/deps/librdkafka/packaging/rpm/Makefile +92 -0
  193. package/deps/librdkafka/packaging/rpm/README.md +23 -0
  194. package/deps/librdkafka/packaging/rpm/el7-x86_64.cfg +40 -0
  195. package/deps/librdkafka/packaging/rpm/librdkafka.spec +118 -0
  196. package/deps/librdkafka/packaging/rpm/mock-on-docker.sh +96 -0
  197. package/deps/librdkafka/packaging/rpm/tests/Makefile +25 -0
  198. package/deps/librdkafka/packaging/rpm/tests/README.md +8 -0
  199. package/deps/librdkafka/packaging/rpm/tests/run-test.sh +42 -0
  200. package/deps/librdkafka/packaging/rpm/tests/test-on-docker.sh +56 -0
  201. package/deps/librdkafka/packaging/rpm/tests/test.c +77 -0
  202. package/deps/librdkafka/packaging/rpm/tests/test.cpp +34 -0
  203. package/deps/librdkafka/packaging/tools/Dockerfile +31 -0
  204. package/deps/librdkafka/packaging/tools/build-configurations-checks.sh +12 -0
  205. package/deps/librdkafka/packaging/tools/build-deb-package.sh +64 -0
  206. package/deps/librdkafka/packaging/tools/build-debian.sh +65 -0
  207. package/deps/librdkafka/packaging/tools/build-manylinux.sh +68 -0
  208. package/deps/librdkafka/packaging/tools/build-release-artifacts.sh +139 -0
  209. package/deps/librdkafka/packaging/tools/distro-build.sh +38 -0
  210. package/deps/librdkafka/packaging/tools/gh-release-checksums.py +39 -0
  211. package/deps/librdkafka/packaging/tools/rdutcoverage.sh +25 -0
  212. package/deps/librdkafka/packaging/tools/requirements.txt +2 -0
  213. package/deps/librdkafka/packaging/tools/run-in-docker.sh +28 -0
  214. package/deps/librdkafka/packaging/tools/run-integration-tests.sh +31 -0
  215. package/deps/librdkafka/packaging/tools/run-style-check.sh +4 -0
  216. package/deps/librdkafka/packaging/tools/style-format.sh +149 -0
  217. package/deps/librdkafka/packaging/tools/update_rpcs_max_versions.py +100 -0
  218. package/deps/librdkafka/service.yml +172 -0
  219. package/deps/librdkafka/src/CMakeLists.txt +374 -0
  220. package/deps/librdkafka/src/Makefile +103 -0
  221. package/deps/librdkafka/src/README.lz4.md +30 -0
  222. package/deps/librdkafka/src/cJSON.c +2834 -0
  223. package/deps/librdkafka/src/cJSON.h +398 -0
  224. package/deps/librdkafka/src/crc32c.c +430 -0
  225. package/deps/librdkafka/src/crc32c.h +38 -0
  226. package/deps/librdkafka/src/generate_proto.sh +66 -0
  227. package/deps/librdkafka/src/librdkafka_cgrp_synch.png +0 -0
  228. package/deps/librdkafka/src/lz4.c +2727 -0
  229. package/deps/librdkafka/src/lz4.h +842 -0
  230. package/deps/librdkafka/src/lz4frame.c +2078 -0
  231. package/deps/librdkafka/src/lz4frame.h +692 -0
  232. package/deps/librdkafka/src/lz4frame_static.h +47 -0
  233. package/deps/librdkafka/src/lz4hc.c +1631 -0
  234. package/deps/librdkafka/src/lz4hc.h +413 -0
  235. package/deps/librdkafka/src/nanopb/pb.h +917 -0
  236. package/deps/librdkafka/src/nanopb/pb_common.c +388 -0
  237. package/deps/librdkafka/src/nanopb/pb_common.h +49 -0
  238. package/deps/librdkafka/src/nanopb/pb_decode.c +1727 -0
  239. package/deps/librdkafka/src/nanopb/pb_decode.h +193 -0
  240. package/deps/librdkafka/src/nanopb/pb_encode.c +1000 -0
  241. package/deps/librdkafka/src/nanopb/pb_encode.h +185 -0
  242. package/deps/librdkafka/src/opentelemetry/common.pb.c +32 -0
  243. package/deps/librdkafka/src/opentelemetry/common.pb.h +170 -0
  244. package/deps/librdkafka/src/opentelemetry/metrics.options +2 -0
  245. package/deps/librdkafka/src/opentelemetry/metrics.pb.c +67 -0
  246. package/deps/librdkafka/src/opentelemetry/metrics.pb.h +966 -0
  247. package/deps/librdkafka/src/opentelemetry/resource.pb.c +12 -0
  248. package/deps/librdkafka/src/opentelemetry/resource.pb.h +58 -0
  249. package/deps/librdkafka/src/queue.h +850 -0
  250. package/deps/librdkafka/src/rd.h +584 -0
  251. package/deps/librdkafka/src/rdaddr.c +255 -0
  252. package/deps/librdkafka/src/rdaddr.h +202 -0
  253. package/deps/librdkafka/src/rdatomic.h +230 -0
  254. package/deps/librdkafka/src/rdavg.h +260 -0
  255. package/deps/librdkafka/src/rdavl.c +210 -0
  256. package/deps/librdkafka/src/rdavl.h +250 -0
  257. package/deps/librdkafka/src/rdbase64.c +200 -0
  258. package/deps/librdkafka/src/rdbase64.h +43 -0
  259. package/deps/librdkafka/src/rdbuf.c +1884 -0
  260. package/deps/librdkafka/src/rdbuf.h +375 -0
  261. package/deps/librdkafka/src/rdcrc32.c +114 -0
  262. package/deps/librdkafka/src/rdcrc32.h +170 -0
  263. package/deps/librdkafka/src/rddl.c +179 -0
  264. package/deps/librdkafka/src/rddl.h +43 -0
  265. package/deps/librdkafka/src/rdendian.h +175 -0
  266. package/deps/librdkafka/src/rdfloat.h +67 -0
  267. package/deps/librdkafka/src/rdfnv1a.c +113 -0
  268. package/deps/librdkafka/src/rdfnv1a.h +35 -0
  269. package/deps/librdkafka/src/rdgz.c +120 -0
  270. package/deps/librdkafka/src/rdgz.h +46 -0
  271. package/deps/librdkafka/src/rdhdrhistogram.c +721 -0
  272. package/deps/librdkafka/src/rdhdrhistogram.h +87 -0
  273. package/deps/librdkafka/src/rdhttp.c +830 -0
  274. package/deps/librdkafka/src/rdhttp.h +101 -0
  275. package/deps/librdkafka/src/rdinterval.h +177 -0
  276. package/deps/librdkafka/src/rdkafka.c +5505 -0
  277. package/deps/librdkafka/src/rdkafka.h +10686 -0
  278. package/deps/librdkafka/src/rdkafka_admin.c +9794 -0
  279. package/deps/librdkafka/src/rdkafka_admin.h +661 -0
  280. package/deps/librdkafka/src/rdkafka_assignment.c +1010 -0
  281. package/deps/librdkafka/src/rdkafka_assignment.h +73 -0
  282. package/deps/librdkafka/src/rdkafka_assignor.c +1786 -0
  283. package/deps/librdkafka/src/rdkafka_assignor.h +402 -0
  284. package/deps/librdkafka/src/rdkafka_aux.c +409 -0
  285. package/deps/librdkafka/src/rdkafka_aux.h +174 -0
  286. package/deps/librdkafka/src/rdkafka_background.c +221 -0
  287. package/deps/librdkafka/src/rdkafka_broker.c +6337 -0
  288. package/deps/librdkafka/src/rdkafka_broker.h +744 -0
  289. package/deps/librdkafka/src/rdkafka_buf.c +543 -0
  290. package/deps/librdkafka/src/rdkafka_buf.h +1525 -0
  291. package/deps/librdkafka/src/rdkafka_cert.c +576 -0
  292. package/deps/librdkafka/src/rdkafka_cert.h +62 -0
  293. package/deps/librdkafka/src/rdkafka_cgrp.c +7587 -0
  294. package/deps/librdkafka/src/rdkafka_cgrp.h +477 -0
  295. package/deps/librdkafka/src/rdkafka_conf.c +4880 -0
  296. package/deps/librdkafka/src/rdkafka_conf.h +732 -0
  297. package/deps/librdkafka/src/rdkafka_confval.h +97 -0
  298. package/deps/librdkafka/src/rdkafka_coord.c +623 -0
  299. package/deps/librdkafka/src/rdkafka_coord.h +132 -0
  300. package/deps/librdkafka/src/rdkafka_error.c +228 -0
  301. package/deps/librdkafka/src/rdkafka_error.h +80 -0
  302. package/deps/librdkafka/src/rdkafka_event.c +502 -0
  303. package/deps/librdkafka/src/rdkafka_event.h +126 -0
  304. package/deps/librdkafka/src/rdkafka_feature.c +898 -0
  305. package/deps/librdkafka/src/rdkafka_feature.h +104 -0
  306. package/deps/librdkafka/src/rdkafka_fetcher.c +1422 -0
  307. package/deps/librdkafka/src/rdkafka_fetcher.h +44 -0
  308. package/deps/librdkafka/src/rdkafka_header.c +220 -0
  309. package/deps/librdkafka/src/rdkafka_header.h +76 -0
  310. package/deps/librdkafka/src/rdkafka_idempotence.c +807 -0
  311. package/deps/librdkafka/src/rdkafka_idempotence.h +144 -0
  312. package/deps/librdkafka/src/rdkafka_int.h +1260 -0
  313. package/deps/librdkafka/src/rdkafka_interceptor.c +819 -0
  314. package/deps/librdkafka/src/rdkafka_interceptor.h +104 -0
  315. package/deps/librdkafka/src/rdkafka_lz4.c +450 -0
  316. package/deps/librdkafka/src/rdkafka_lz4.h +49 -0
  317. package/deps/librdkafka/src/rdkafka_metadata.c +2209 -0
  318. package/deps/librdkafka/src/rdkafka_metadata.h +345 -0
  319. package/deps/librdkafka/src/rdkafka_metadata_cache.c +1183 -0
  320. package/deps/librdkafka/src/rdkafka_mock.c +3661 -0
  321. package/deps/librdkafka/src/rdkafka_mock.h +610 -0
  322. package/deps/librdkafka/src/rdkafka_mock_cgrp.c +1876 -0
  323. package/deps/librdkafka/src/rdkafka_mock_handlers.c +3113 -0
  324. package/deps/librdkafka/src/rdkafka_mock_int.h +710 -0
  325. package/deps/librdkafka/src/rdkafka_msg.c +2589 -0
  326. package/deps/librdkafka/src/rdkafka_msg.h +614 -0
  327. package/deps/librdkafka/src/rdkafka_msgbatch.h +62 -0
  328. package/deps/librdkafka/src/rdkafka_msgset.h +98 -0
  329. package/deps/librdkafka/src/rdkafka_msgset_reader.c +1806 -0
  330. package/deps/librdkafka/src/rdkafka_msgset_writer.c +1474 -0
  331. package/deps/librdkafka/src/rdkafka_offset.c +1565 -0
  332. package/deps/librdkafka/src/rdkafka_offset.h +150 -0
  333. package/deps/librdkafka/src/rdkafka_op.c +997 -0
  334. package/deps/librdkafka/src/rdkafka_op.h +858 -0
  335. package/deps/librdkafka/src/rdkafka_partition.c +4896 -0
  336. package/deps/librdkafka/src/rdkafka_partition.h +1182 -0
  337. package/deps/librdkafka/src/rdkafka_pattern.c +228 -0
  338. package/deps/librdkafka/src/rdkafka_pattern.h +70 -0
  339. package/deps/librdkafka/src/rdkafka_plugin.c +213 -0
  340. package/deps/librdkafka/src/rdkafka_plugin.h +41 -0
  341. package/deps/librdkafka/src/rdkafka_proto.h +736 -0
  342. package/deps/librdkafka/src/rdkafka_protocol.h +128 -0
  343. package/deps/librdkafka/src/rdkafka_queue.c +1230 -0
  344. package/deps/librdkafka/src/rdkafka_queue.h +1220 -0
  345. package/deps/librdkafka/src/rdkafka_range_assignor.c +1748 -0
  346. package/deps/librdkafka/src/rdkafka_request.c +7089 -0
  347. package/deps/librdkafka/src/rdkafka_request.h +732 -0
  348. package/deps/librdkafka/src/rdkafka_roundrobin_assignor.c +123 -0
  349. package/deps/librdkafka/src/rdkafka_sasl.c +530 -0
  350. package/deps/librdkafka/src/rdkafka_sasl.h +63 -0
  351. package/deps/librdkafka/src/rdkafka_sasl_cyrus.c +722 -0
  352. package/deps/librdkafka/src/rdkafka_sasl_int.h +89 -0
  353. package/deps/librdkafka/src/rdkafka_sasl_oauthbearer.c +1833 -0
  354. package/deps/librdkafka/src/rdkafka_sasl_oauthbearer.h +52 -0
  355. package/deps/librdkafka/src/rdkafka_sasl_oauthbearer_oidc.c +1666 -0
  356. package/deps/librdkafka/src/rdkafka_sasl_oauthbearer_oidc.h +47 -0
  357. package/deps/librdkafka/src/rdkafka_sasl_plain.c +142 -0
  358. package/deps/librdkafka/src/rdkafka_sasl_scram.c +858 -0
  359. package/deps/librdkafka/src/rdkafka_sasl_win32.c +550 -0
  360. package/deps/librdkafka/src/rdkafka_ssl.c +2129 -0
  361. package/deps/librdkafka/src/rdkafka_ssl.h +86 -0
  362. package/deps/librdkafka/src/rdkafka_sticky_assignor.c +4785 -0
  363. package/deps/librdkafka/src/rdkafka_subscription.c +278 -0
  364. package/deps/librdkafka/src/rdkafka_telemetry.c +760 -0
  365. package/deps/librdkafka/src/rdkafka_telemetry.h +52 -0
  366. package/deps/librdkafka/src/rdkafka_telemetry_decode.c +1053 -0
  367. package/deps/librdkafka/src/rdkafka_telemetry_decode.h +59 -0
  368. package/deps/librdkafka/src/rdkafka_telemetry_encode.c +997 -0
  369. package/deps/librdkafka/src/rdkafka_telemetry_encode.h +301 -0
  370. package/deps/librdkafka/src/rdkafka_timer.c +402 -0
  371. package/deps/librdkafka/src/rdkafka_timer.h +117 -0
  372. package/deps/librdkafka/src/rdkafka_topic.c +2161 -0
  373. package/deps/librdkafka/src/rdkafka_topic.h +334 -0
  374. package/deps/librdkafka/src/rdkafka_transport.c +1309 -0
  375. package/deps/librdkafka/src/rdkafka_transport.h +99 -0
  376. package/deps/librdkafka/src/rdkafka_transport_int.h +100 -0
  377. package/deps/librdkafka/src/rdkafka_txnmgr.c +3256 -0
  378. package/deps/librdkafka/src/rdkafka_txnmgr.h +171 -0
  379. package/deps/librdkafka/src/rdkafka_zstd.c +226 -0
  380. package/deps/librdkafka/src/rdkafka_zstd.h +57 -0
  381. package/deps/librdkafka/src/rdlist.c +576 -0
  382. package/deps/librdkafka/src/rdlist.h +434 -0
  383. package/deps/librdkafka/src/rdlog.c +89 -0
  384. package/deps/librdkafka/src/rdlog.h +41 -0
  385. package/deps/librdkafka/src/rdmap.c +508 -0
  386. package/deps/librdkafka/src/rdmap.h +492 -0
  387. package/deps/librdkafka/src/rdmurmur2.c +167 -0
  388. package/deps/librdkafka/src/rdmurmur2.h +35 -0
  389. package/deps/librdkafka/src/rdports.c +61 -0
  390. package/deps/librdkafka/src/rdports.h +38 -0
  391. package/deps/librdkafka/src/rdposix.h +250 -0
  392. package/deps/librdkafka/src/rdrand.c +80 -0
  393. package/deps/librdkafka/src/rdrand.h +43 -0
  394. package/deps/librdkafka/src/rdregex.c +156 -0
  395. package/deps/librdkafka/src/rdregex.h +43 -0
  396. package/deps/librdkafka/src/rdsignal.h +57 -0
  397. package/deps/librdkafka/src/rdstring.c +645 -0
  398. package/deps/librdkafka/src/rdstring.h +98 -0
  399. package/deps/librdkafka/src/rdsysqueue.h +404 -0
  400. package/deps/librdkafka/src/rdtime.h +356 -0
  401. package/deps/librdkafka/src/rdtypes.h +86 -0
  402. package/deps/librdkafka/src/rdunittest.c +549 -0
  403. package/deps/librdkafka/src/rdunittest.h +232 -0
  404. package/deps/librdkafka/src/rdvarint.c +134 -0
  405. package/deps/librdkafka/src/rdvarint.h +165 -0
  406. package/deps/librdkafka/src/rdwin32.h +382 -0
  407. package/deps/librdkafka/src/rdxxhash.c +1030 -0
  408. package/deps/librdkafka/src/rdxxhash.h +328 -0
  409. package/deps/librdkafka/src/regexp.c +1352 -0
  410. package/deps/librdkafka/src/regexp.h +41 -0
  411. package/deps/librdkafka/src/snappy.c +1866 -0
  412. package/deps/librdkafka/src/snappy.h +62 -0
  413. package/deps/librdkafka/src/snappy_compat.h +138 -0
  414. package/deps/librdkafka/src/statistics_schema.json +444 -0
  415. package/deps/librdkafka/src/tinycthread.c +932 -0
  416. package/deps/librdkafka/src/tinycthread.h +503 -0
  417. package/deps/librdkafka/src/tinycthread_extra.c +199 -0
  418. package/deps/librdkafka/src/tinycthread_extra.h +212 -0
  419. package/deps/librdkafka/src/win32_config.h +58 -0
  420. package/deps/librdkafka/src-cpp/CMakeLists.txt +90 -0
  421. package/deps/librdkafka/src-cpp/ConfImpl.cpp +84 -0
  422. package/deps/librdkafka/src-cpp/ConsumerImpl.cpp +244 -0
  423. package/deps/librdkafka/src-cpp/HandleImpl.cpp +436 -0
  424. package/deps/librdkafka/src-cpp/HeadersImpl.cpp +48 -0
  425. package/deps/librdkafka/src-cpp/KafkaConsumerImpl.cpp +296 -0
  426. package/deps/librdkafka/src-cpp/Makefile +55 -0
  427. package/deps/librdkafka/src-cpp/MessageImpl.cpp +38 -0
  428. package/deps/librdkafka/src-cpp/MetadataImpl.cpp +170 -0
  429. package/deps/librdkafka/src-cpp/ProducerImpl.cpp +197 -0
  430. package/deps/librdkafka/src-cpp/QueueImpl.cpp +70 -0
  431. package/deps/librdkafka/src-cpp/README.md +16 -0
  432. package/deps/librdkafka/src-cpp/RdKafka.cpp +59 -0
  433. package/deps/librdkafka/src-cpp/TopicImpl.cpp +124 -0
  434. package/deps/librdkafka/src-cpp/TopicPartitionImpl.cpp +57 -0
  435. package/deps/librdkafka/src-cpp/rdkafkacpp.h +3797 -0
  436. package/deps/librdkafka/src-cpp/rdkafkacpp_int.h +1641 -0
  437. package/deps/librdkafka/tests/0000-unittests.c +72 -0
  438. package/deps/librdkafka/tests/0001-multiobj.c +102 -0
  439. package/deps/librdkafka/tests/0002-unkpart.c +244 -0
  440. package/deps/librdkafka/tests/0003-msgmaxsize.c +173 -0
  441. package/deps/librdkafka/tests/0004-conf.c +934 -0
  442. package/deps/librdkafka/tests/0005-order.c +133 -0
  443. package/deps/librdkafka/tests/0006-symbols.c +163 -0
  444. package/deps/librdkafka/tests/0007-autotopic.c +136 -0
  445. package/deps/librdkafka/tests/0008-reqacks.c +179 -0
  446. package/deps/librdkafka/tests/0009-mock_cluster.c +97 -0
  447. package/deps/librdkafka/tests/0011-produce_batch.c +753 -0
  448. package/deps/librdkafka/tests/0012-produce_consume.c +537 -0
  449. package/deps/librdkafka/tests/0013-null-msgs.c +473 -0
  450. package/deps/librdkafka/tests/0014-reconsume-191.c +512 -0
  451. package/deps/librdkafka/tests/0015-offset_seeks.c +172 -0
  452. package/deps/librdkafka/tests/0016-client_swname.c +181 -0
  453. package/deps/librdkafka/tests/0017-compression.c +140 -0
  454. package/deps/librdkafka/tests/0018-cgrp_term.c +338 -0
  455. package/deps/librdkafka/tests/0019-list_groups.c +289 -0
  456. package/deps/librdkafka/tests/0020-destroy_hang.c +162 -0
  457. package/deps/librdkafka/tests/0021-rkt_destroy.c +72 -0
  458. package/deps/librdkafka/tests/0022-consume_batch.c +279 -0
  459. package/deps/librdkafka/tests/0025-timers.c +147 -0
  460. package/deps/librdkafka/tests/0026-consume_pause.c +547 -0
  461. package/deps/librdkafka/tests/0028-long_topicnames.c +79 -0
  462. package/deps/librdkafka/tests/0029-assign_offset.c +202 -0
  463. package/deps/librdkafka/tests/0030-offset_commit.c +589 -0
  464. package/deps/librdkafka/tests/0031-get_offsets.c +235 -0
  465. package/deps/librdkafka/tests/0033-regex_subscribe.c +536 -0
  466. package/deps/librdkafka/tests/0034-offset_reset.c +398 -0
  467. package/deps/librdkafka/tests/0035-api_version.c +73 -0
  468. package/deps/librdkafka/tests/0036-partial_fetch.c +87 -0
  469. package/deps/librdkafka/tests/0037-destroy_hang_local.c +85 -0
  470. package/deps/librdkafka/tests/0038-performance.c +121 -0
  471. package/deps/librdkafka/tests/0039-event.c +284 -0
  472. package/deps/librdkafka/tests/0040-io_event.c +257 -0
  473. package/deps/librdkafka/tests/0041-fetch_max_bytes.c +97 -0
  474. package/deps/librdkafka/tests/0042-many_topics.c +252 -0
  475. package/deps/librdkafka/tests/0043-no_connection.c +77 -0
  476. package/deps/librdkafka/tests/0044-partition_cnt.c +94 -0
  477. package/deps/librdkafka/tests/0045-subscribe_update.c +1010 -0
  478. package/deps/librdkafka/tests/0046-rkt_cache.c +65 -0
  479. package/deps/librdkafka/tests/0047-partial_buf_tmout.c +98 -0
  480. package/deps/librdkafka/tests/0048-partitioner.c +283 -0
  481. package/deps/librdkafka/tests/0049-consume_conn_close.c +162 -0
  482. package/deps/librdkafka/tests/0050-subscribe_adds.c +145 -0
  483. package/deps/librdkafka/tests/0051-assign_adds.c +126 -0
  484. package/deps/librdkafka/tests/0052-msg_timestamps.c +238 -0
  485. package/deps/librdkafka/tests/0053-stats_cb.cpp +527 -0
  486. package/deps/librdkafka/tests/0054-offset_time.cpp +236 -0
  487. package/deps/librdkafka/tests/0055-producer_latency.c +539 -0
  488. package/deps/librdkafka/tests/0056-balanced_group_mt.c +315 -0
  489. package/deps/librdkafka/tests/0057-invalid_topic.cpp +112 -0
  490. package/deps/librdkafka/tests/0058-log.cpp +123 -0
  491. package/deps/librdkafka/tests/0059-bsearch.cpp +241 -0
  492. package/deps/librdkafka/tests/0060-op_prio.cpp +163 -0
  493. package/deps/librdkafka/tests/0061-consumer_lag.cpp +295 -0
  494. package/deps/librdkafka/tests/0062-stats_event.c +126 -0
  495. package/deps/librdkafka/tests/0063-clusterid.cpp +180 -0
  496. package/deps/librdkafka/tests/0064-interceptors.c +481 -0
  497. package/deps/librdkafka/tests/0065-yield.cpp +140 -0
  498. package/deps/librdkafka/tests/0066-plugins.cpp +129 -0
  499. package/deps/librdkafka/tests/0067-empty_topic.cpp +151 -0
  500. package/deps/librdkafka/tests/0068-produce_timeout.c +136 -0
  501. package/deps/librdkafka/tests/0069-consumer_add_parts.c +119 -0
  502. package/deps/librdkafka/tests/0070-null_empty.cpp +197 -0
  503. package/deps/librdkafka/tests/0072-headers_ut.c +448 -0
  504. package/deps/librdkafka/tests/0073-headers.c +381 -0
  505. package/deps/librdkafka/tests/0074-producev.c +87 -0
  506. package/deps/librdkafka/tests/0075-retry.c +290 -0
  507. package/deps/librdkafka/tests/0076-produce_retry.c +452 -0
  508. package/deps/librdkafka/tests/0077-compaction.c +363 -0
  509. package/deps/librdkafka/tests/0078-c_from_cpp.cpp +96 -0
  510. package/deps/librdkafka/tests/0079-fork.c +93 -0
  511. package/deps/librdkafka/tests/0080-admin_ut.c +3095 -0
  512. package/deps/librdkafka/tests/0081-admin.c +5633 -0
  513. package/deps/librdkafka/tests/0082-fetch_max_bytes.cpp +137 -0
  514. package/deps/librdkafka/tests/0083-cb_event.c +233 -0
  515. package/deps/librdkafka/tests/0084-destroy_flags.c +208 -0
  516. package/deps/librdkafka/tests/0085-headers.cpp +392 -0
  517. package/deps/librdkafka/tests/0086-purge.c +368 -0
  518. package/deps/librdkafka/tests/0088-produce_metadata_timeout.c +162 -0
  519. package/deps/librdkafka/tests/0089-max_poll_interval.c +511 -0
  520. package/deps/librdkafka/tests/0090-idempotence.c +171 -0
  521. package/deps/librdkafka/tests/0091-max_poll_interval_timeout.c +295 -0
  522. package/deps/librdkafka/tests/0092-mixed_msgver.c +103 -0
  523. package/deps/librdkafka/tests/0093-holb.c +200 -0
  524. package/deps/librdkafka/tests/0094-idempotence_msg_timeout.c +231 -0
  525. package/deps/librdkafka/tests/0095-all_brokers_down.cpp +122 -0
  526. package/deps/librdkafka/tests/0097-ssl_verify.cpp +658 -0
  527. package/deps/librdkafka/tests/0098-consumer-txn.cpp +1218 -0
  528. package/deps/librdkafka/tests/0099-commit_metadata.c +194 -0
  529. package/deps/librdkafka/tests/0100-thread_interceptors.cpp +195 -0
  530. package/deps/librdkafka/tests/0101-fetch-from-follower.cpp +446 -0
  531. package/deps/librdkafka/tests/0102-static_group_rebalance.c +836 -0
  532. package/deps/librdkafka/tests/0103-transactions.c +1383 -0
  533. package/deps/librdkafka/tests/0104-fetch_from_follower_mock.c +625 -0
  534. package/deps/librdkafka/tests/0105-transactions_mock.c +3930 -0
  535. package/deps/librdkafka/tests/0106-cgrp_sess_timeout.c +318 -0
  536. package/deps/librdkafka/tests/0107-topic_recreate.c +259 -0
  537. package/deps/librdkafka/tests/0109-auto_create_topics.cpp +278 -0
  538. package/deps/librdkafka/tests/0110-batch_size.cpp +182 -0
  539. package/deps/librdkafka/tests/0111-delay_create_topics.cpp +127 -0
  540. package/deps/librdkafka/tests/0112-assign_unknown_part.c +87 -0
  541. package/deps/librdkafka/tests/0113-cooperative_rebalance.cpp +3473 -0
  542. package/deps/librdkafka/tests/0114-sticky_partitioning.cpp +176 -0
  543. package/deps/librdkafka/tests/0115-producer_auth.cpp +182 -0
  544. package/deps/librdkafka/tests/0116-kafkaconsumer_close.cpp +216 -0
  545. package/deps/librdkafka/tests/0117-mock_errors.c +331 -0
  546. package/deps/librdkafka/tests/0118-commit_rebalance.c +154 -0
  547. package/deps/librdkafka/tests/0119-consumer_auth.cpp +167 -0
  548. package/deps/librdkafka/tests/0120-asymmetric_subscription.c +185 -0
  549. package/deps/librdkafka/tests/0121-clusterid.c +115 -0
  550. package/deps/librdkafka/tests/0122-buffer_cleaning_after_rebalance.c +227 -0
  551. package/deps/librdkafka/tests/0123-connections_max_idle.c +98 -0
  552. package/deps/librdkafka/tests/0124-openssl_invalid_engine.c +69 -0
  553. package/deps/librdkafka/tests/0125-immediate_flush.c +144 -0
  554. package/deps/librdkafka/tests/0126-oauthbearer_oidc.c +528 -0
  555. package/deps/librdkafka/tests/0127-fetch_queue_backoff.cpp +165 -0
  556. package/deps/librdkafka/tests/0128-sasl_callback_queue.cpp +125 -0
  557. package/deps/librdkafka/tests/0129-fetch_aborted_msgs.c +79 -0
  558. package/deps/librdkafka/tests/0130-store_offsets.c +178 -0
  559. package/deps/librdkafka/tests/0131-connect_timeout.c +81 -0
  560. package/deps/librdkafka/tests/0132-strategy_ordering.c +179 -0
  561. package/deps/librdkafka/tests/0133-ssl_keys.c +150 -0
  562. package/deps/librdkafka/tests/0134-ssl_provider.c +92 -0
  563. package/deps/librdkafka/tests/0135-sasl_credentials.cpp +143 -0
  564. package/deps/librdkafka/tests/0136-resolve_cb.c +181 -0
  565. package/deps/librdkafka/tests/0137-barrier_batch_consume.c +619 -0
  566. package/deps/librdkafka/tests/0138-admin_mock.c +281 -0
  567. package/deps/librdkafka/tests/0139-offset_validation_mock.c +950 -0
  568. package/deps/librdkafka/tests/0140-commit_metadata.cpp +108 -0
  569. package/deps/librdkafka/tests/0142-reauthentication.c +515 -0
  570. package/deps/librdkafka/tests/0143-exponential_backoff_mock.c +552 -0
  571. package/deps/librdkafka/tests/0144-idempotence_mock.c +373 -0
  572. package/deps/librdkafka/tests/0145-pause_resume_mock.c +119 -0
  573. package/deps/librdkafka/tests/0146-metadata_mock.c +505 -0
  574. package/deps/librdkafka/tests/0147-consumer_group_consumer_mock.c +952 -0
  575. package/deps/librdkafka/tests/0148-offset_fetch_commit_error_mock.c +563 -0
  576. package/deps/librdkafka/tests/0149-broker-same-host-port.c +140 -0
  577. package/deps/librdkafka/tests/0150-telemetry_mock.c +651 -0
  578. package/deps/librdkafka/tests/0151-purge-brokers.c +566 -0
  579. package/deps/librdkafka/tests/0152-rebootstrap.c +59 -0
  580. package/deps/librdkafka/tests/0153-memberid.c +128 -0
  581. package/deps/librdkafka/tests/1000-unktopic.c +164 -0
  582. package/deps/librdkafka/tests/8000-idle.cpp +60 -0
  583. package/deps/librdkafka/tests/8001-fetch_from_follower_mock_manual.c +113 -0
  584. package/deps/librdkafka/tests/CMakeLists.txt +170 -0
  585. package/deps/librdkafka/tests/LibrdkafkaTestApp.py +291 -0
  586. package/deps/librdkafka/tests/Makefile +182 -0
  587. package/deps/librdkafka/tests/README.md +509 -0
  588. package/deps/librdkafka/tests/autotest.sh +33 -0
  589. package/deps/librdkafka/tests/backtrace.gdb +30 -0
  590. package/deps/librdkafka/tests/broker_version_tests.py +315 -0
  591. package/deps/librdkafka/tests/buildbox.sh +17 -0
  592. package/deps/librdkafka/tests/cleanup-checker-tests.sh +20 -0
  593. package/deps/librdkafka/tests/cluster_testing.py +191 -0
  594. package/deps/librdkafka/tests/delete-test-topics.sh +56 -0
  595. package/deps/librdkafka/tests/fixtures/oauthbearer/jwt_assertion_template.json +10 -0
  596. package/deps/librdkafka/tests/fixtures/ssl/Makefile +8 -0
  597. package/deps/librdkafka/tests/fixtures/ssl/README.md +13 -0
  598. package/deps/librdkafka/tests/fixtures/ssl/client.keystore.intermediate.p12 +0 -0
  599. package/deps/librdkafka/tests/fixtures/ssl/client.keystore.p12 +0 -0
  600. package/deps/librdkafka/tests/fixtures/ssl/client2.certificate.intermediate.pem +72 -0
  601. package/deps/librdkafka/tests/fixtures/ssl/client2.certificate.pem +50 -0
  602. package/deps/librdkafka/tests/fixtures/ssl/client2.intermediate.key +46 -0
  603. package/deps/librdkafka/tests/fixtures/ssl/client2.key +46 -0
  604. package/deps/librdkafka/tests/fixtures/ssl/create_keys.sh +168 -0
  605. package/deps/librdkafka/tests/fuzzers/Makefile +12 -0
  606. package/deps/librdkafka/tests/fuzzers/README.md +31 -0
  607. package/deps/librdkafka/tests/fuzzers/fuzz_regex.c +74 -0
  608. package/deps/librdkafka/tests/fuzzers/helpers.h +90 -0
  609. package/deps/librdkafka/tests/gen-ssl-certs.sh +165 -0
  610. package/deps/librdkafka/tests/interactive_broker_version.py +170 -0
  611. package/deps/librdkafka/tests/interceptor_test/CMakeLists.txt +16 -0
  612. package/deps/librdkafka/tests/interceptor_test/Makefile +22 -0
  613. package/deps/librdkafka/tests/interceptor_test/interceptor_test.c +314 -0
  614. package/deps/librdkafka/tests/interceptor_test/interceptor_test.h +54 -0
  615. package/deps/librdkafka/tests/java/IncrementalRebalanceCli.java +97 -0
  616. package/deps/librdkafka/tests/java/Makefile +13 -0
  617. package/deps/librdkafka/tests/java/Murmur2Cli.java +46 -0
  618. package/deps/librdkafka/tests/java/README.md +14 -0
  619. package/deps/librdkafka/tests/java/TransactionProducerCli.java +162 -0
  620. package/deps/librdkafka/tests/java/run-class.sh +11 -0
  621. package/deps/librdkafka/tests/librdkafka.suppressions +483 -0
  622. package/deps/librdkafka/tests/lz4_manual_test.sh +59 -0
  623. package/deps/librdkafka/tests/multi-broker-version-test.sh +50 -0
  624. package/deps/librdkafka/tests/parse-refcnt.sh +43 -0
  625. package/deps/librdkafka/tests/performance_plot.py +115 -0
  626. package/deps/librdkafka/tests/plugin_test/Makefile +19 -0
  627. package/deps/librdkafka/tests/plugin_test/plugin_test.c +58 -0
  628. package/deps/librdkafka/tests/requirements.txt +2 -0
  629. package/deps/librdkafka/tests/run-all-tests.sh +79 -0
  630. package/deps/librdkafka/tests/run-consumer-tests.sh +16 -0
  631. package/deps/librdkafka/tests/run-producer-tests.sh +16 -0
  632. package/deps/librdkafka/tests/run-test-batches.py +157 -0
  633. package/deps/librdkafka/tests/run-test.sh +140 -0
  634. package/deps/librdkafka/tests/rusage.c +249 -0
  635. package/deps/librdkafka/tests/sasl_test.py +289 -0
  636. package/deps/librdkafka/tests/scenarios/README.md +6 -0
  637. package/deps/librdkafka/tests/scenarios/ak23.json +6 -0
  638. package/deps/librdkafka/tests/scenarios/default.json +5 -0
  639. package/deps/librdkafka/tests/scenarios/noautocreate.json +5 -0
  640. package/deps/librdkafka/tests/sockem.c +801 -0
  641. package/deps/librdkafka/tests/sockem.h +85 -0
  642. package/deps/librdkafka/tests/sockem_ctrl.c +145 -0
  643. package/deps/librdkafka/tests/sockem_ctrl.h +61 -0
  644. package/deps/librdkafka/tests/test.c +7778 -0
  645. package/deps/librdkafka/tests/test.conf.example +27 -0
  646. package/deps/librdkafka/tests/test.h +1028 -0
  647. package/deps/librdkafka/tests/testcpp.cpp +131 -0
  648. package/deps/librdkafka/tests/testcpp.h +388 -0
  649. package/deps/librdkafka/tests/testshared.h +416 -0
  650. package/deps/librdkafka/tests/tools/README.md +4 -0
  651. package/deps/librdkafka/tests/tools/stats/README.md +21 -0
  652. package/deps/librdkafka/tests/tools/stats/filter.jq +42 -0
  653. package/deps/librdkafka/tests/tools/stats/graph.py +150 -0
  654. package/deps/librdkafka/tests/tools/stats/requirements.txt +3 -0
  655. package/deps/librdkafka/tests/tools/stats/to_csv.py +124 -0
  656. package/deps/librdkafka/tests/trivup/trivup-0.14.0.tar.gz +0 -0
  657. package/deps/librdkafka/tests/until-fail.sh +87 -0
  658. package/deps/librdkafka/tests/xxxx-assign_partition.c +122 -0
  659. package/deps/librdkafka/tests/xxxx-metadata.cpp +159 -0
  660. package/deps/librdkafka/vcpkg.json +23 -0
  661. package/deps/librdkafka/win32/README.md +5 -0
  662. package/deps/librdkafka/win32/build-package.bat +3 -0
  663. package/deps/librdkafka/win32/build.bat +19 -0
  664. package/deps/librdkafka/win32/common.vcxproj +84 -0
  665. package/deps/librdkafka/win32/interceptor_test/interceptor_test.vcxproj +87 -0
  666. package/deps/librdkafka/win32/librdkafka.autopkg.template +54 -0
  667. package/deps/librdkafka/win32/librdkafka.master.testing.targets +13 -0
  668. package/deps/librdkafka/win32/librdkafka.sln +226 -0
  669. package/deps/librdkafka/win32/librdkafka.vcxproj +276 -0
  670. package/deps/librdkafka/win32/librdkafkacpp/librdkafkacpp.vcxproj +104 -0
  671. package/deps/librdkafka/win32/msbuild.ps1 +15 -0
  672. package/deps/librdkafka/win32/openssl_engine_example/openssl_engine_example.vcxproj +132 -0
  673. package/deps/librdkafka/win32/package-zip.ps1 +46 -0
  674. package/deps/librdkafka/win32/packages/repositories.config +4 -0
  675. package/deps/librdkafka/win32/push-package.bat +4 -0
  676. package/deps/librdkafka/win32/rdkafka_complex_consumer_example_cpp/rdkafka_complex_consumer_example_cpp.vcxproj +67 -0
  677. package/deps/librdkafka/win32/rdkafka_example/rdkafka_example.vcxproj +97 -0
  678. package/deps/librdkafka/win32/rdkafka_performance/rdkafka_performance.vcxproj +97 -0
  679. package/deps/librdkafka/win32/setup-msys2.ps1 +47 -0
  680. package/deps/librdkafka/win32/setup-vcpkg.ps1 +34 -0
  681. package/deps/librdkafka/win32/tests/test.conf.example +25 -0
  682. package/deps/librdkafka/win32/tests/tests.vcxproj +253 -0
  683. package/deps/librdkafka/win32/win_ssl_cert_store/win_ssl_cert_store.vcxproj +132 -0
  684. package/deps/librdkafka/win32/wingetopt.c +564 -0
  685. package/deps/librdkafka/win32/wingetopt.h +101 -0
  686. package/deps/librdkafka/win32/wintime.h +33 -0
  687. package/deps/librdkafka.gyp +62 -0
  688. package/lib/admin.js +233 -0
  689. package/lib/client.js +573 -0
  690. package/lib/error.js +500 -0
  691. package/lib/index.js +34 -0
  692. package/lib/kafka-consumer-stream.js +397 -0
  693. package/lib/kafka-consumer.js +698 -0
  694. package/lib/producer/high-level-producer.js +323 -0
  695. package/lib/producer-stream.js +307 -0
  696. package/lib/producer.js +375 -0
  697. package/lib/tools/ref-counter.js +52 -0
  698. package/lib/topic-partition.js +88 -0
  699. package/lib/topic.js +42 -0
  700. package/lib/util.js +29 -0
  701. package/package.json +61 -0
  702. package/prebuilds/darwin-arm64/@point3+node-rdkafka.node +0 -0
  703. package/prebuilds/linux-x64/@point3+node-rdkafka.node +0 -0
  704. package/util/configure.js +30 -0
  705. package/util/get-env.js +6 -0
  706. package/util/test-compile.js +11 -0
  707. package/util/test-producer-delivery.js +100 -0
@@ -0,0 +1,2481 @@
+ <a name="introduction-to-librdkafka---the-apache-kafka-cc-client-library"></a>
+ # Introduction to librdkafka - the Apache Kafka C/C++ client library
+
+
+ librdkafka is a high performance C implementation of the Apache
+ Kafka client, providing a reliable and performant client for production use.
+ librdkafka also provides a native C++ interface.
+
+ <!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
+ **Table of Contents**
+
+ - [Introduction to librdkafka - the Apache Kafka C/C++ client library](#introduction-to-librdkafka---the-apache-kafka-cc-client-library)
+ - [Performance](#performance)
+ - [High throughput](#high-throughput)
+ - [Low latency](#low-latency)
+ - [Latency measurement](#latency-measurement)
+ - [Compression](#compression)
+ - [Message reliability](#message-reliability)
+ - [Producer message delivery success](#producer-message-delivery-success)
+ - [Producer message delivery failure](#producer-message-delivery-failure)
+ - [Error: Timed out in transmission queue](#error-timed-out-in-transmission-queue)
+ - [Error: Timed out in flight to/from broker](#error-timed-out-in-flight-tofrom-broker)
+ - [Error: Temporary broker-side error](#error-temporary-broker-side-error)
+ - [Error: Temporary errors due to stale metadata](#error-temporary-errors-due-to-stale-metadata)
+ - [Error: Local time out](#error-local-time-out)
+ - [Error: Permanent errors](#error-permanent-errors)
+ - [Producer retries](#producer-retries)
+ - [Reordering](#reordering)
+ - [Idempotent Producer](#idempotent-producer)
+ - [Guarantees](#guarantees)
+ - [Ordering and message sequence numbers](#ordering-and-message-sequence-numbers)
+ - [Partitioner considerations](#partitioner-considerations)
+ - [Message timeout considerations](#message-timeout-considerations)
+ - [Leader change](#leader-change)
+ - [Error handling](#error-handling)
+ - [RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER](#rd-kafka-resp-err-out-of-order-sequence-number)
+ - [RD_KAFKA_RESP_ERR_DUPLICATE_SEQUENCE_NUMBER](#rd-kafka-resp-err-duplicate-sequence-number)
+ - [RD_KAFKA_RESP_ERR_UNKNOWN_PRODUCER_ID](#rd-kafka-resp-err-unknown-producer-id)
+ - [Standard errors](#standard-errors)
+ - [Message persistence status](#message-persistence-status)
+ - [Transactional Producer](#transactional-producer)
+ - [Error handling](#error-handling-1)
+ - [Old producer fencing](#old-producer-fencing)
+ - [Configuration considerations](#configuration-considerations)
+ - [Exactly Once Semantics (EOS) and transactions](#exactly-once-semantics-eos-and-transactions)
+ - [Usage](#usage)
+ - [Documentation](#documentation)
+ - [Initialization](#initialization)
+ - [Configuration](#configuration)
+ - [Example](#example)
+ - [Termination](#termination)
+ - [High-level KafkaConsumer](#high-level-kafkaconsumer)
+ - [Producer](#producer)
+ - [Admin API client](#admin-api-client)
+ - [Speeding up termination](#speeding-up-termination)
+ - [Threads and callbacks](#threads-and-callbacks)
+ - [Brokers](#brokers)
+ - [SSL](#ssl)
+ - [OAUTHBEARER with support for OIDC](#oauthbearer-with-support-for-oidc)
+ - [JWT bearer grant type (KIP-1139)](#jwt-bearer-grant-type-kip-1139)
+ - [Metadata based authentication](#metadata-based-authentication)
+ - [Azure IMDS](#azure-imds)
+ - [Sparse connections](#sparse-connections)
+ - [Random broker selection](#random-broker-selection)
+ - [Persistent broker connections](#persistent-broker-connections)
+ - [Connection close](#connection-close)
+ - [Fetch From Follower](#fetch-from-follower)
+ - [Logging](#logging)
+ - [Debug contexts](#debug-contexts)
+ - [Feature discovery](#feature-discovery)
+ - [Producer API](#producer-api)
+ - [Simple Consumer API (legacy)](#simple-consumer-api-legacy)
+ - [Offset management](#offset-management)
+ - [Auto offset commit](#auto-offset-commit)
+ - [At-least-once processing](#at-least-once-processing)
+ - [Auto offset reset](#auto-offset-reset)
+ - [Consumer groups](#consumer-groups)
+ - [Static consumer groups](#static-consumer-groups)
+ - [Next Generation Consumer Group Protocol (KIP-848)](#next-generation-consumer-group-protocol-kip-848)
+ - [Overview](#overview)
+ - [Available Features](#available-features)
+ - [Contract Changes](#contract-changes)
+ - [Client Configuration changes](#client-configuration-changes)
+ - [Rebalance Callback Changes](#rebalance-callback-changes)
+ - [Static Group Membership](#static-group-membership)
+ - [Session Timeout \& Fetching](#session-timeout--fetching)
+ - [Closing / Auto-Commit](#closing--auto-commit)
+ - [Error Handling Changes](#error-handling-changes)
+ - [Summary of Key Differences (Classic vs Next-Gen)](#summary-of-key-differences-classic-vs-next-gen)
+ - [Minimal Example Config](#minimal-example-config)
+ - [Classic Protocol](#classic-protocol)
+ - [Next-Gen Protocol / KIP-848](#next-gen-protocol--kip-848)
+ - [Rebalance Callback Migration](#rebalance-callback-migration)
+ - [Range Assignor (Classic)](#range-assignor-classic)
+ - [Incremental Assignor (Including Range in Consumer / KIP-848, Any Protocol)](#incremental-assignor-including-range-in-consumer--kip-848-any-protocol)
+ - [Upgrade and Downgrade](#upgrade-and-downgrade)
+ - [Migration Checklist (Next-Gen Protocol / KIP-848)](#migration-checklist-next-gen-protocol--kip-848)
+ - [Note on Batch consume APIs](#note-on-batch-consume-apis)
+ - [Topics](#topics)
+ - [Unknown or unauthorized topics](#unknown-or-unauthorized-topics)
+ - [Topic metadata propagation for newly created topics](#topic-metadata-propagation-for-newly-created-topics)
+ - [Topic auto creation](#topic-auto-creation)
+ - [Metadata](#metadata)
+ - [\< 0.9.3](#lt093)
+ - [\> 0.9.3](#gt093-1)
+ - [Query reasons](#query-reasons)
+ - [Caching strategy](#caching-strategy)
+ - [Fatal errors](#fatal-errors)
+ - [Fatal producer errors](#fatal-producer-errors)
+ - [Fatal consumer errors](#fatal-consumer-errors)
+ - [Compatibility](#compatibility)
+ - [Broker version compatibility](#broker-version-compatibility)
+ - [Broker version \>= 0.10.0.0 (or trunk)](#broker-version--01000-or-trunk)
+ - [Broker versions 0.9.0.x](#broker-versions-090x)
+ - [Broker versions 0.8.x.y](#broker-versions-08xy)
+ - [Detailed description](#detailed-description)
+ - [Supported KIPs](#supported-kips)
+ - [Supported protocol versions](#supported-protocol-versions)
+ - [Recommendations for language binding developers](#recommendations-for-language-binding-developers)
+ - [Expose the configuration interface pass-thru](#expose-the-configuration-interface-pass-thru)
+ - [Error constants](#error-constants)
+ - [Reporting client software name and version to broker](#reporting-client-software-name-and-version-to-broker)
+ - [Documentation reuse](#documentation-reuse)
+ - [Community support](#community-support)
+
+ <!-- markdown-toc end -->
+
+
+ <a name="performance"></a>
+ ## Performance
+
+ librdkafka is a multi-threaded library designed for use on modern hardware and
+ it attempts to keep memory copying to a minimum. The payload of produced or
+ consumed messages may pass through without any copying
+ (if so desired by the application), putting no limit on message sizes.
+
+ librdkafka allows you to decide if high throughput is the name of the game,
+ or if a low latency service is required, or a balance between the two, all
+ through the configuration property interface.
+
+ The single most important configuration property for performance tuning is
+ `linger.ms` - how long to wait for `batch.num.messages` or `batch.size` to
+ fill up in the local per-partition queue before sending the batch of messages
+ to the broker.
+
+ In low throughput scenarios, a lower value improves latency.
+ As throughput increases, the cost of each broker request becomes significant,
+ impacting both maximum throughput and latency. For higher throughput
+ applications, latency will typically be lower with a higher `linger.ms`, as
+ larger batches result in fewer requests and decreased per-message load on
+ the broker. A good general purpose setting is 5ms.
+ For applications seeking maximum throughput, the recommended value is >= 50ms.
+
+
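Tuning between these modes happens through the configuration property interface mentioned above. A minimal sketch, a fragment rather than a complete program, assuming a standard librdkafka installation (the header path and the chosen values are illustrative):

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

rd_kafka_conf_t *conf = rd_kafka_conf_new();
char errstr[512];

/* Throughput-oriented: wait up to 50 ms for batches to fill. */
if (rd_kafka_conf_set(conf, "linger.ms", "50",
                      errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
    fprintf(stderr, "linger.ms: %s\n", errstr);

/* Latency-oriented alternative: send as soon as possible. */
/* rd_kafka_conf_set(conf, "linger.ms", "0", errstr, sizeof(errstr)); */
```

The same `rd_kafka_conf_set()` call applies to every property discussed in this document.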
+ <a name="high-throughput"></a>
+ ### High throughput
+
+ The key to high throughput is message batching - waiting for a certain number
+ of messages to accumulate in the local queue before sending them off in
+ one large message set or batch to the peer. This amortizes the messaging
+ overhead and eliminates the adverse effect of the round trip time (rtt).
+
+ `linger.ms` (also called `queue.buffering.max.ms`) allows librdkafka to
+ wait up to the specified amount of time to accumulate up to
+ `batch.num.messages` or `batch.size` in a single batch (MessageSet) before
+ sending to the broker. The larger the batch, the higher the throughput.
+ Enabling `msg` debugging (set the `debug` property to `msg`) will emit log
+ messages for the accumulation process, which lets you see what batch sizes
+ are being produced.
+
+ Example using `linger.ms=1`:
+
+ ```
+ ... test [0]: MessageSet with 1514 message(s) delivered
+ ... test [3]: MessageSet with 1690 message(s) delivered
+ ... test [0]: MessageSet with 1720 message(s) delivered
+ ... test [3]: MessageSet with 2 message(s) delivered
+ ... test [3]: MessageSet with 4 message(s) delivered
+ ... test [0]: MessageSet with 4 message(s) delivered
+ ... test [3]: MessageSet with 11 message(s) delivered
+ ```
+
+ Example using `linger.ms=1000`:
+ ```
+ ... test [0]: MessageSet with 10000 message(s) delivered
+ ... test [0]: MessageSet with 10000 message(s) delivered
+ ... test [0]: MessageSet with 4667 message(s) delivered
+ ... test [3]: MessageSet with 10000 message(s) delivered
+ ... test [3]: MessageSet with 10000 message(s) delivered
+ ... test [3]: MessageSet with 4476 message(s) delivered
+ ```
+
+
+ The default setting of `linger.ms=5` is not suitable for
+ high throughput; it is recommended to set this value to >50ms, with
+ throughput leveling out somewhere around 100-1000ms depending on
+ message produce pattern and sizes.
+
+ These settings are set globally (`rd_kafka_conf_t`) but apply on a
+ per topic+partition basis.
+
+
+ <a name="low-latency"></a>
+ ### Low latency
+
+ When low latency messaging is required, `linger.ms` should be
+ tuned to the maximum permitted producer-side latency.
+ Setting `linger.ms` to 0 or 0.1 will make sure messages are sent as
+ soon as possible.
+ Lower buffering time leads to smaller batches and larger per-message overheads,
+ increasing network, memory and CPU usage for producers, brokers and consumers.
+
+ See [How to decrease message latency](https://github.com/confluentinc/librdkafka/wiki/How-to-decrease-message-latency) for more info.
+
+
+ <a name="latency-measurement"></a>
+ #### Latency measurement
+
+ End-to-end latency is preferably measured by synchronizing clocks on producers
+ and consumers and using the message timestamp on the consumer to calculate
+ the full latency. Make sure the topic's `log.message.timestamp.type` is set to
+ the default `CreateTime` (Kafka topic configuration, not librdkafka topic).
+
+ Latencies are typically incurred by the producer, network and broker; the
+ consumer's effect on end-to-end latency is minimal.
+
+ To break down the end-to-end latency and find where it accumulates, a
+ number of metrics are available through librdkafka statistics
+ on the producer:
+
+ * `brokers[].int_latency` is the time, per message, between produce()
+ and the message being written to a MessageSet and ProduceRequest.
+ High `int_latency` indicates CPU core contention: check CPU load and
+ involuntary context switches (`/proc/<..>/status`).
+ Consider using a machine/instance with more CPU cores.
+ This metric is only relevant on the producer.
+
+ * `brokers[].outbuf_latency` is the time, per protocol request
+ (such as ProduceRequest), between the request being enqueued (which happens
+ right after `int_latency`) and the time the request is written to the
+ TCP socket connected to the broker.
+ High `outbuf_latency` indicates CPU core contention or network congestion:
+ check CPU load and socket SendQ (`netstat -anp | grep :9092`).
+
+ * `brokers[].rtt` is the time, per protocol request, between the request being
+ written to the TCP socket and the time the response is received from
+ the broker.
+ High `rtt` indicates broker load or network congestion:
+ check broker metrics, local socket SendQ, network performance, etc.
+
+ * `brokers[].throttle` is the time, per throttled protocol request, that the
+ broker throttled/delayed handling of the request due to usage quotas.
+ The throttle time will also be reflected in `rtt`.
+
+ * `topics[].batchsize` is the size of individual Producer MessageSet batches.
+ See below.
+
+ * `topics[].batchcnt` is the number of messages in individual Producer
+ MessageSet batches. Due to Kafka protocol overhead a batch with few messages
+ will have a higher relative processing and size overhead than a batch
+ with many messages.
+ Use the `linger.ms` client configuration property to set the maximum
+ amount of time allowed for accumulating a single batch; the larger the
+ value, the larger the batches will grow, thus increasing efficiency.
+ When producing messages at a high rate it is recommended to increase
+ `linger.ms`, which will improve throughput and in some cases also latency.
+
+
+ See [STATISTICS.md](STATISTICS.md) for the full definition of metrics.
+ A JSON schema for the statistics is available in
+ [statistics-schema.json](src/statistics-schema.json).
+
+
+ <a name="compression"></a>
+ ### Compression
+
+ Producer message compression is enabled through the `compression.codec`
+ configuration property.
+
+ Compression is performed on the batch of messages in the local queue; the
+ larger the batch, the higher the likelihood of a better compression ratio.
+ The local batch queue size is controlled through the `batch.num.messages`,
+ `batch.size`, and `linger.ms` configuration properties as described in the
+ **High throughput** chapter above.
+
+
+
+ <a name="message-reliability"></a>
+ ## Message reliability
+
+ Message reliability is an important feature of librdkafka - an application
+ can rely fully on librdkafka to deliver a message according to the specified
+ configuration (`request.required.acks`, `message.send.max.retries`, etc.).
+
+ If the topic configuration property `request.required.acks` is set to wait
+ for message commit acknowledgements from brokers (any value but 0, see
+ [`CONFIGURATION.md`](CONFIGURATION.md)
+ for specifics) then librdkafka will hold on to the message until
+ all expected acks have been received, gracefully handling the following events:
+
+ * Broker connection failure
+ * Topic leader change
+ * Produce errors signaled by the broker
+ * Network problems
+
+ We recommend `request.required.acks` to be set to `all` to make sure
+ produced messages are acknowledged by all in-sync replica brokers.
+
+ This is handled automatically by librdkafka and the application does not need
+ to take any action at any of the above events.
+ The message will be resent up to `message.send.max.retries` times before
+ reporting a failure back to the application.
+
+ The delivery report callback is used by librdkafka to signal the status of
+ a message back to the application; it will be called once for each message
+ to report the status of message delivery:
+
+ * If `error_code` is non-zero the message delivery failed and the `error_code`
+ indicates the nature of the failure (`rd_kafka_resp_err_t` enum).
+ * If `error_code` is zero the message has been successfully delivered.
+
+ See the Producer API chapter for more details on delivery report callback usage.
+
+ The delivery report callback is optional but highly recommended.
+
+
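A sketch of such a callback (a fragment, not a complete program; it assumes a `conf` object created with `rd_kafka_conf_new()` before the producer is instantiated):

```c
static void dr_msg_cb(rd_kafka_t *rk,
                      const rd_kafka_message_t *rkmessage, void *opaque) {
    if (rkmessage->err)
        fprintf(stderr, "delivery failed: %s\n",
                rd_kafka_err2str(rkmessage->err));
    /* rkmessage->err == 0: the message was successfully delivered */
}

/* Install on the conf before rd_kafka_new(), then serve the callback
 * with regular calls to rd_kafka_poll(rk, ...). */
rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);
```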
+ <a name="producer-message-delivery-success"></a>
+ ### Producer message delivery success
+
+ When a ProduceRequest is successfully handled by the broker and a
+ ProduceResponse is received (also called the ack) without an error code,
+ the messages from the ProduceRequest are enqueued on the delivery report
+ queue (if a delivery report callback has been set) and will be passed to
+ the application on the next invocation of `rd_kafka_poll()`.
+
+
+ <a name="producer-message-delivery-failure"></a>
+ ### Producer message delivery failure
+
+ The following sub-chapters explain how different produce errors
+ are handled.
+
+ If the error is retryable and there are remaining retry attempts for
+ the given message(s), an automatic retry will be scheduled by librdkafka;
+ these retries are not visible to the application.
+
+ Only permanent errors and temporary errors that have reached their maximum
+ retry count will generate a delivery report event to the application with an
+ error code set.
+
+ The application should typically not attempt to retry producing the message
+ on failure, but instead configure librdkafka to perform these retries
+ using the `retries`, `retry.backoff.ms` and `retry.backoff.max.ms`
+ configuration properties.
+
+
358
+ <a name="error-timed-out-in-transmission-queue"></a>
359
+ #### Error: Timed out in transmission queue
360
+
361
+ Internal error ERR__TIMED_OUT_QUEUE.
362
+
363
+ The connectivity to the broker may be stalled due to networking contention,
364
+ local or remote system issues, etc, and the request has not yet been sent.
365
+
366
+ The producer can be certain that the message has not been sent to the broker.
367
+
368
+ This is a retryable error, but is not counted as a retry attempt
369
+ since the message was never actually transmitted.
370
+
371
+ A retry by librdkafka at this point will not cause duplicate messages.


<a name="error-timed-out-in-flight-tofrom-broker"></a>
#### Error: Timed out in flight to/from broker

Internal errors ERR__TIMED_OUT, ERR__TRANSPORT.

Same reasons as for `Timed out in transmission queue` above, with the
difference that the message may have been sent to the broker and might
be stalling waiting for broker replicas to ack the message, or the response
could be stalled due to networking issues.
At this point the producer can't know if the message reached the broker,
nor if the broker wrote the message to disk and replicas.

This is a retryable error.

A retry by librdkafka at this point may cause duplicate messages.


<a name="error-temporary-broker-side-error"></a>
#### Error: Temporary broker-side error

Broker errors ERR_REQUEST_TIMED_OUT, ERR_NOT_ENOUGH_REPLICAS,
ERR_NOT_ENOUGH_REPLICAS_AFTER_APPEND.

These errors are considered temporary and librdkafka will retry them
if permitted by configuration.


<a name="error-temporary-errors-due-to-stale-metadata"></a>
#### Error: Temporary errors due to stale metadata

Broker errors ERR_LEADER_NOT_AVAILABLE, ERR_NOT_LEADER_FOR_PARTITION.

These errors are considered temporary and a retry is warranted; a metadata
request is automatically sent to find a new leader for the partition.

A retry by librdkafka at this point will not cause duplicate messages.


<a name="error-local-time-out"></a>
#### Error: Local time out

Internal error ERR__MSG_TIMED_OUT.

The message could not be successfully transmitted before `message.timeout.ms`
expired, typically due to no leader being available or no broker connection.
The message may have been retried due to other errors but
those error messages are abstracted by the ERR__MSG_TIMED_OUT error code.

Since `message.timeout.ms` has passed there will be no more retries
by librdkafka.


<a name="error-permanent-errors"></a>
#### Error: Permanent errors

Any other error is considered a permanent error and the message
will fail immediately, generating a delivery report event with the
distinctive error code.

The full list of permanent errors depends on the broker version and
will likely grow in the future.

Typical permanent broker errors are:
 * ERR_CORRUPT_MESSAGE
 * ERR_MSG_SIZE_TOO_LARGE - adjust client's or broker's `message.max.bytes`.
 * ERR_UNKNOWN_TOPIC_OR_PART - topic or partition does not exist,
                               automatic topic creation is disabled on the
                               broker, or the application is specifying a
                               partition that does not exist.
 * ERR_RECORD_LIST_TOO_LARGE
 * ERR_INVALID_REQUIRED_ACKS
 * ERR_TOPIC_AUTHORIZATION_FAILED
 * ERR_UNSUPPORTED_FOR_MESSAGE_FORMAT
 * ERR_CLUSTER_AUTHORIZATION_FAILED


<a name="producer-retries"></a>
### Producer retries

The ProduceRequest itself is not retried; instead the messages
are put back on the internal partition queue by an insert sort
that maintains their original position (the message order is defined
at the time a message is initially appended to a partition queue, i.e., after
partitioning).
A backoff time (`retry.backoff.ms`) is set on the retried messages which
effectively blocks retry attempts until the backoff time has expired.


<a name="reordering"></a>
### Reordering

As for all retries, if `max.in.flight` > 1 and `retries` > 0, retried messages
may be produced out of order, since a subsequent message in a subsequent
ProduceRequest may already be in-flight (and accepted by the broker)
by the time the retry for the failing message is sent.

Using the Idempotent Producer prevents reordering even with `max.in.flight` > 1,
see [Idempotent Producer](#idempotent-producer) below for more information.


<a name="idempotent-producer"></a>
### Idempotent Producer

librdkafka supports the idempotent producer which provides strict ordering
and exactly-once producer guarantees.
The idempotent producer is enabled by setting the `enable.idempotence`
configuration property to `true`; this will automatically adjust a number of
other configuration properties to adhere to the idempotency requirements,
see the documentation of `enable.idempotence` in [CONFIGURATION.md](CONFIGURATION.md) for
more information.
Producer instantiation will fail if the user supplied an incompatible value
for any of the automatically adjusted properties, e.g., it is an error to
explicitly set `acks=1` when `enable.idempotence=true` is set.


<a name="guarantees"></a>
#### Guarantees

There are three types of guarantees that the idempotent producer can satisfy:

 * Exactly-once - a message is only written to the log once.
   Does NOT cover the exactly-once consumer case.
 * Ordering - a series of messages are written to the log in the
   order they were produced.
 * Gap-less - **EXPERIMENTAL** a series of messages are written once and
   in order without risk of skipping messages. The sequence
   of messages may be cut short and fail before all
   messages are written, but may not fail individual
   messages in the series.
   This guarantee is disabled by default, but may be enabled
   by setting `enable.gapless.guarantee` if individual message
   failure is a concern.
   Messages that fail due to an exceeded timeout (`message.timeout.ms`)
   are permitted by the gap-less guarantee and may cause
   gaps in the message series without raising a fatal error.
   See **Message timeout considerations** below for more info.
   **WARNING**: This is an experimental property subject to
   change or removal.

All three guarantees are in effect when idempotence is enabled; only
gap-less may be disabled individually.


<a name="ordering-and-message-sequence-numbers"></a>
#### Ordering and message sequence numbers

librdkafka maintains the original produce() ordering per-partition for all
messages produced, using an internal per-partition 64-bit counter
called the msgid which starts at 1. This msgid allows messages to be
re-inserted in the partition message queue in the original order in the
case of retries.

The Idempotent Producer functionality in the Kafka protocol also has
a per-message sequence number, which is a signed 32-bit wrapping counter that is
reset each time the Producer's ID (PID) or Epoch changes.

The librdkafka msgid is used, along with a base msgid value stored
at the time the PID/Epoch was bumped, to calculate the Kafka protocol's
message sequence number.

With the Idempotent Producer enabled there is no risk of reordering despite
`max.in.flight` > 1 (capped at 5).

**Note**: "MsgId" in log messages refers to the librdkafka msgid, while "seq"
refers to the protocol message sequence, and "baseseq" is the seq of
the first message in a batch.
MsgId starts at 1, while message seqs start at 0.


The producer statistics also maintain two metrics for tracking the next
expected response sequence:

 * `next_ack_seq` - the next sequence to expect an acknowledgement for, which
   is the last successfully produced MessageSet's last
   sequence + 1.
 * `next_err_seq` - the next sequence to expect an error for, which is typically
   the same as `next_ack_seq` until an error occurs, in which
   case the `next_ack_seq` can't be incremented (since no
   messages were acked on error). `next_err_seq` is used to
   properly handle subsequent errors due to a failing
   first request.

**Note**: Both are exposed in partition statistics.


<a name="partitioner-considerations"></a>
#### Partitioner considerations

Strict ordering is guaranteed on a **per partition** basis.

An application utilizing the idempotent producer should not mix
producing to explicit partitions with partitioner-based partitions
since messages produced for the latter are queued separately until
a topic's partition count is known, which would insert these messages
after the partition-explicit messages regardless of produce order.


<a name="message-timeout-considerations"></a>
#### Message timeout considerations

If messages time out (due to `message.timeout.ms`) while in the producer queue
there will be gaps in the series of produced messages.

For example: messages 1,2,3,4,5 are produced by the application.
While messages 2,3,4 are transmitted to the broker the connection to
the broker goes down.
While the broker is down the message timeout expires for messages 2 and 3.
As the connection comes back up messages 4, 5 are transmitted to the
broker, resulting in a final written message sequence of 1, 4, 5.

The producer gracefully handles this case by draining the in-flight requests
for a given partition when one or more of its queued (not transmitted)
messages are timed out. When all requests are drained the Epoch is bumped and
the base sequence number is reset to the first message in the queue, effectively
skipping the timed out messages as if they had never existed from the
broker's point of view.
The message status for timed out queued messages will be
`RD_KAFKA_MSG_STATUS_NOT_PERSISTED`.

If messages time out while in-flight to the broker (also due to
`message.timeout.ms`), the protocol request will fail, the broker
connection will be closed by the client, and the timed out messages will be
removed from the producer queue. In this case the in-flight messages may be
written to the topic log by the broker, even though
a delivery report with error `ERR__MSG_TIMED_OUT` will be raised, since
the producer timed out the request before getting an acknowledgement back
from the broker.
The message status for timed out in-flight messages will be
`RD_KAFKA_MSG_STATUS_POSSIBLY_PERSISTED`, indicating that the producer
does not know if the messages were written and acked by the broker,
or dropped in-flight.

An application may inspect the message status by calling
`rd_kafka_message_status()` on the message in the delivery report callback,
to see if the message was (possibly) persisted (written to the topic log) by
the broker or not.

Despite the graceful handling of timeouts, we recommend using a
large `message.timeout.ms` to minimize the risk of timeouts.

**Warning**: `enable.gapless.guarantee` does not apply to timed-out messages.

**Note**: `delivery.timeout.ms` is an alias for `message.timeout.ms`.


<a name="leader-change"></a>
#### Leader change

There are corner cases where an Idempotent Producer has outstanding
ProduceRequests in-flight to the previous leader while a new leader is elected.

A leader change is typically triggered by the original leader
failing or terminating, which has the risk of also failing (some of) the
in-flight ProduceRequests to that broker. To recover the producer to a
consistent state it will not send any ProduceRequests for these partitions to
the new leader broker until all responses for any outstanding ProduceRequests
to the previous partition leader have been received, or these requests have
timed out.
This drain may take up to `min(socket.timeout.ms, message.timeout.ms)`.
If the connection to the previous broker goes down the outstanding requests
are failed immediately.


<a name="error-handling"></a>
#### Error handling

Background:
The error handling for the Idempotent Producer, as initially proposed
in the [EOS design document](https://docs.google.com/document/d/11Jqy_GjUGtdXJK94XGsEIK7CP1SnQGdp2eF0wSw9ra8),
missed some corner cases which are now being addressed in [KIP-360](https://cwiki.apache.org/confluence/display/KAFKA/KIP-360%3A+Improve+handling+of+unknown+producer).
There were some intermediate fixes and workarounds prior to KIP-360 that proved
to be incomplete and made the error handling in the client overly complex.
With the benefit of hindsight the librdkafka implementation will attempt
to provide correctness from the lessons learned in the Java client and
provide stricter and less complex error handling.

The following sections describe librdkafka's handling of the
Idempotent Producer specific errors that may be returned by the broker.

<a name="rd-kafka-resp-err-out-of-order-sequence-number"></a>
##### RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER

This error is returned by the broker when the sequence number in the
ProduceRequest is larger than the expected next sequence
for the given PID+Epoch+Partition (last BaseSeq + msgcount + 1).
Note: sequence 0 is always accepted.

If the failed request is the head-of-line (next expected sequence to be acked)
it indicates desynchronization between the client and broker:
the client thinks the sequence number is correct but the broker disagrees.
There is no way for the client to recover from this scenario without
risking message loss or duplication, and it is not safe for the
application to manually retry messages.
A fatal error (`RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER`) is raised.

When the request is not head-of-line the previous request failed
(for any reason), which means the messages in the current request
can be retried after waiting for all outstanding requests for this
partition to drain, then resetting the Producer ID and starting over.


**Java Producer behaviour**:
Fails the batch, resets the PID, and then continues producing
(and retrying subsequent) messages. This will lead to gaps
in the message series.


<a name="rd-kafka-resp-err-duplicate-sequence-number"></a>
##### RD_KAFKA_RESP_ERR_DUPLICATE_SEQUENCE_NUMBER

Returned by the broker when the request's base sequence number is
less than the expected sequence number (which is the last written
sequence + msgcount).
Note: sequence 0 is always accepted.

This error is typically benign and occurs upon retrying a previously successful
send that was not acknowledged.

The messages will be considered successfully produced but will have neither
timestamp nor offset set.


**Java Producer behaviour:**
Treats the message as successfully delivered.

<a name="rd-kafka-resp-err-unknown-producer-id"></a>
##### RD_KAFKA_RESP_ERR_UNKNOWN_PRODUCER_ID

Returned by the broker when the PID+Epoch is unknown, which may occur when
the PID's state has expired (due to topic retention, DeleteRecords,
or compaction).

The Java producer added quite a bit of error handling for this case,
extending the ProduceRequest protocol to return the logStartOffset
to give the producer a chance to differentiate between an actual
UNKNOWN_PRODUCER_ID or topic retention having deleted the last
message for this producer (effectively voiding the Producer ID cache).
This workaround proved to be error prone (see explanation in KIP-360)
when the partition leader changed.

KIP-360 suggests removing this error checking in favour of failing fast;
librdkafka follows suit.


If the response is for the first ProduceRequest in-flight
and there are no messages waiting to be retried nor any ProduceRequests
unaccounted for, then the error is ignored and the epoch is incremented;
this is likely to happen for an idle producer whose last written
message has been deleted from the log, and thus its PID state.
Otherwise the producer raises a fatal error
(RD_KAFKA_RESP_ERR_UNKNOWN_PRODUCER_ID) since the delivery guarantees can't
be satisfied.


**Java Producer behaviour:**
Retries the send in some cases (but KIP-360 will change this).
Not a fatal error in any case.


<a name="standard-errors"></a>
##### Standard errors

All the standard Produce errors are handled in the usual way:
permanent errors will fail the messages in the batch, while
temporary errors will be retried (if the retry count permits).

If a permanent error is returned for a batch in a series of in-flight batches,
the subsequent batches will fail with
RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER since the sequence number of the
failed batch was never written to the topic log and the next expected sequence
was thus not incremented on the broker.

A fatal error (RD_KAFKA_RESP_ERR__GAPLESS_GUARANTEE) is raised to satisfy
the gap-less guarantee (if `enable.gapless.guarantee` is set) by failing all
queued messages.


<a name="message-persistence-status"></a>
##### Message persistence status

To help the application decide what to do in these error cases, a new
per-message API is introduced, `rd_kafka_message_status()`,
which returns one of the following values:

 * `RD_KAFKA_MSG_STATUS_NOT_PERSISTED` - the message has never
   been transmitted to the broker, or failed with an error indicating
   it was not written to the log.
   Application retry will risk ordering, but not duplication.
 * `RD_KAFKA_MSG_STATUS_POSSIBLY_PERSISTED` - the message was transmitted
   to the broker, but no acknowledgement was received.
   Application retry will risk ordering and duplication.
 * `RD_KAFKA_MSG_STATUS_PERSISTED` - the message was written to the log by
   the broker and fully acknowledged.
   No reason for application to retry.

This method should be called by the application on delivery report error.


<a name="transactional-producer"></a>
### Transactional Producer


<a name="error-handling-1"></a>
#### Error handling

Using the transactional producer simplifies error handling compared to the
standard or idempotent producer; a transactional application will only need
to care about these types of errors:

 * Retriable errors - the operation failed due to temporary problems,
   such as network timeouts; the operation may be safely retried.
   Use `rd_kafka_error_is_retriable()` to distinguish this case.
 * Abortable errors - if any of the transactional APIs return a non-fatal
   error code the current transaction has failed and the application
   must call `rd_kafka_abort_transaction()`, rewind its input to the
   point before the current transaction started, and attempt a new transaction
   by calling `rd_kafka_begin_transaction()`, etc.
   Use `rd_kafka_error_txn_requires_abort()` to distinguish this case.
 * Fatal errors - the application must cease operations and destroy the
   producer instance.
   Use `rd_kafka_error_is_fatal()` to distinguish this case.
 * For all other errors returned from the transactional API: the current
   recommendation is to treat any error that has neither the retriable,
   abortable, nor fatal flag set as a fatal error.

While the application should log the actual fatal or abortable errors, there
is no need for the application to handle the underlying errors specifically.
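A transaction attempt following those rules might look like this sketch, assuming an already-initialized transactional producer `rk`; `produce_batch()` and `rewind_input()` are hypothetical application helpers, not librdkafka APIs:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Hypothetical application helpers (not librdkafka APIs). */
static void produce_batch(rd_kafka_t *rk);
static void rewind_input(void);

/* Sketch of the error-handling rules above for one transaction. */
static void run_one_transaction(rd_kafka_t *rk) {
        rd_kafka_error_t *error;

        error = rd_kafka_begin_transaction(rk);
        if (error)
                goto handle;

        produce_batch(rk); /* application-specific produce calls */

        error = rd_kafka_commit_transaction(rk, -1 /* infinite timeout */);
handle:
        if (!error)
                return;

        if (rd_kafka_error_is_retriable(error)) {
                /* Temporary problem: safe to retry the same operation. */
        } else if (rd_kafka_error_txn_requires_abort(error)) {
                rd_kafka_error_destroy(error);
                rd_kafka_abort_transaction(rk, -1);
                rewind_input(); /* rewind to before the transaction started */
                return;
        } else {
                /* Fatal, or none of the flags set: treat as fatal. */
                fprintf(stderr, "Fatal: %s\n", rd_kafka_error_string(error));
        }
        rd_kafka_error_destroy(error);
}
```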


<a name="old-producer-fencing"></a>
#### Old producer fencing

If a new transactional producer instance is started with the same
`transactional.id`, any previous still running producer
instance will be fenced off at the next produce, commit or abort attempt, by
raising a fatal error with the error code set to
`RD_KAFKA_RESP_ERR__FENCED`.


<a name="configuration-considerations"></a>
#### Configuration considerations

To make sure messages time out (in case of connectivity problems, etc.) within
the transaction, the `message.timeout.ms` configuration property must be
set lower than `transaction.timeout.ms`; this is enforced when
creating the producer instance.
If `message.timeout.ms` is not explicitly configured it will be adjusted
automatically.



<a name="exactly-once-semantics-eos-and-transactions"></a>
### Exactly Once Semantics (EOS) and transactions

librdkafka supports Exactly Once Semantics (EOS) as defined in [KIP-98](https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging).
For more on the use of transactions, see [Transactions in Apache Kafka](https://www.confluent.io/blog/transactions-apache-kafka/).

See [examples/transactions.c](examples/transactions.c) for an example
transactional EOS application.

**Warning**
If the broker version is older than Apache Kafka 2.5.0 then one transactional
producer instance per consumed input partition is required.
For 2.5.0 and later a single producer instance may be used regardless of
the number of input partitions.
See KIP-447 for more information.


<a name="usage"></a>
## Usage

<a name="documentation"></a>
### Documentation

The librdkafka API is documented in the [`rdkafka.h`](src/rdkafka.h)
header file; the configuration properties are documented in
[`CONFIGURATION.md`](CONFIGURATION.md).

<a name="initialization"></a>
### Initialization

The application needs to instantiate a top-level object `rd_kafka_t` which is
the base container, providing global configuration and shared state.
It is created by calling `rd_kafka_new()`.

It also needs to instantiate one or more topics (`rd_kafka_topic_t`) to be used
for producing to or consuming from. The topic object holds topic-specific
configuration and will be internally populated with a mapping of all available
partitions and their leader brokers.
It is created by calling `rd_kafka_topic_new()`.

Both `rd_kafka_t` and `rd_kafka_topic_t` come with a configuration API which
is optional.
Not using the API will cause librdkafka to use its default values which are
documented in [`CONFIGURATION.md`](CONFIGURATION.md).

**Note**: An application may create multiple `rd_kafka_t` objects and
they share no state.

**Note**: An `rd_kafka_topic_t` object may only be used with the `rd_kafka_t`
object it was created from.


<a name="configuration"></a>
### Configuration

To ease integration with the official Apache Kafka software and lower
the learning curve, librdkafka implements identical configuration
properties as found in the official clients of Apache Kafka.

Configuration is applied prior to object creation using the
`rd_kafka_conf_set()` and `rd_kafka_topic_conf_set()` APIs.

**Note**: The `rd_kafka.._conf_t` objects are not reusable after they have been
passed to `rd_kafka.._new()`.
The application does not need to free any config resources after a
`rd_kafka.._new()` call.

<a name="example"></a>
#### Example

```c
rd_kafka_conf_t *conf;
rd_kafka_conf_res_t res;
rd_kafka_t *rk;
char errstr[512];

conf = rd_kafka_conf_new();

res = rd_kafka_conf_set(conf, "compression.codec", "snappy",
                        errstr, sizeof(errstr));
if (res != RD_KAFKA_CONF_OK)
        fail("%s\n", errstr);

res = rd_kafka_conf_set(conf, "batch.num.messages", "100",
                        errstr, sizeof(errstr));
if (res != RD_KAFKA_CONF_OK)
        fail("%s\n", errstr);

rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
if (!rk) {
        rd_kafka_conf_destroy(conf); /* conf is only owned by rk on success */
        fail("Failed to create producer: %s\n", errstr);
}

/* Note: librdkafka takes ownership of the conf object on success */
```

Configuration properties may be set in any order (except for interceptors) and
may be overwritten before being passed to `rd_kafka_new()`.
`rd_kafka_new()` will verify that the passed configuration is consistent
and will fail and return an error if incompatible configuration properties
are detected. It will also emit log warnings for deprecated and problematic
configuration properties.


<a name="termination"></a>
### Termination

librdkafka is asynchronous in its nature and performs most operations in its
background threads.

Calling the librdkafka handle destructor tells the librdkafka background
threads to finalize their work, close network connections, clean up, etc., and
may thus take some time. The destructor (`rd_kafka_destroy()`) will block
until all background threads have terminated.

If the destructor blocks indefinitely it typically means there is an outstanding
object reference, such as a message or topic object, that was not destroyed
prior to destroying the client handle.

All objects except for the handle (C: `rd_kafka_t`,
C++: `Consumer,KafkaConsumer,Producer`), such as topic objects, messages,
`topic_partition_t`, `TopicPartition`, events, etc., **MUST** be
destroyed/deleted prior to destroying or closing the handle.

For C, make sure the following objects are destroyed prior to calling
`rd_kafka_consumer_close()` and `rd_kafka_destroy()`:
 * `rd_kafka_message_t`
 * `rd_kafka_topic_t`
 * `rd_kafka_topic_partition_t`
 * `rd_kafka_topic_partition_list_t`
 * `rd_kafka_event_t`
 * `rd_kafka_queue_t`

For C++ make sure the following objects are deleted prior to
calling `KafkaConsumer::close()` and delete on the Consumer, KafkaConsumer or
Producer handle:
 * `Message`
 * `Topic`
 * `TopicPartition`
 * `Event`
 * `Queue`


<a name="high-level-kafkaconsumer"></a>
#### High-level KafkaConsumer

The proper termination sequence for the high-level KafkaConsumer is:
```c
/* 1) Leave the consumer group, commit final offsets, etc. */
rd_kafka_consumer_close(rk);

/* 2) Destroy handle object */
rd_kafka_destroy(rk);
```

**NOTE**: There is no need to unsubscribe prior to calling `rd_kafka_consumer_close()`.

**NOTE**: Any topic objects created must be destroyed prior to `rd_kafka_destroy()`.

Effects of skipping each of the above steps:
1. Final offsets are not committed and the consumer will not actively leave
   the group; it will be kicked out of the group after `session.timeout.ms`
   expires. It is okay to omit the `rd_kafka_consumer_close()` call in case
   the application does not want to wait for the blocking close call.
2. librdkafka will continue to operate on the handle. Actual memory leaks.


<a name="producer"></a>
#### Producer

The proper termination sequence for Producers is:

```c
/* 1) Make sure all outstanding requests are transmitted and handled. */
rd_kafka_flush(rk, 60*1000); /* One minute timeout */

/* 2) Destroy the topic and handle objects */
rd_kafka_topic_destroy(rkt); /* Repeat for all topic objects held */
rd_kafka_destroy(rk);
```

Effects of skipping each of the above steps:
1. Messages in-queue or in-flight will be dropped.
2. librdkafka will continue to operate on the handle. Actual memory leaks.


<a name="admin-api-client"></a>
#### Admin API client

Unlike the Java Admin client, the Admin APIs in librdkafka are available
on any type of client instance and can be used in combination with the
client type's main functionality, e.g., it is perfectly fine to call
`CreateTopics()` in your running producer, or `DeleteRecords()` in your
consumer.

If you need a client instance to only perform Admin API operations the
recommendation is to create a producer instance since it requires less
configuration (no `group.id`) than the consumer and is generally more cost
efficient.
We do recommend that you set `allow.auto.create.topics=false` to avoid
topic metadata lookups unexpectedly causing the broker to create topics.


<a name="speeding-up-termination"></a>
#### Speeding up termination
To speed up the termination of librdkafka an application can set a
termination signal that will be used internally by librdkafka to quickly
cancel any outstanding I/O waits.
Make sure you block this signal in your application.

```c
char tmp[16];
snprintf(tmp, sizeof(tmp), "%i", SIGIO); /* Or whatever signal you decide */
rd_kafka_conf_set(rk_conf, "internal.termination.signal", tmp, errstr, sizeof(errstr));
```
1046
+
1047
+
<a name="threads-and-callbacks"></a>
### Threads and callbacks

librdkafka uses multiple threads internally to fully utilize modern hardware.
The API is completely thread-safe and the calling application may call any
of the API functions from any of its own threads at any time.

A poll-based API is used to provide signaling back to the application;
the application should call `rd_kafka_poll()` at regular intervals.
The poll API will call the following configured callbacks (optional):

* `dr_msg_cb` - Message delivery report callback - signals that a message has
  been delivered or failed delivery, allowing the application to take action
  and to release any application resources used in the message.
* `error_cb` - Error callback - signals an error. These errors are usually of
  an informational nature, e.g., failure to connect to a broker, and the
  application usually does not need to take any action.
  The type of error is passed as a `rd_kafka_resp_err_t` enum value,
  including both remote broker errors as well as local failures.
  An application typically does not have to perform any action when
  an error is raised through the error callback; the client will
  automatically try to recover from all errors, given that the
  client and cluster are correctly configured.
  In some specific cases a fatal error may occur which will render
  the client more or less inoperable for further use:
  if the error code in the error callback is set to
  `RD_KAFKA_RESP_ERR__FATAL` the application should retrieve the
  underlying fatal error and reason using the `rd_kafka_fatal_error()` call,
  and then begin terminating the instance.
  The Event API's EVENT_ERROR has a `rd_kafka_event_error_is_fatal()`
  function, and the C++ EventCb has a `fatal()` method, to help the
  application determine if an error is fatal or not.
* `stats_cb` - Statistics callback - triggered if `statistics.interval.ms`
  is configured to a non-zero value, emitting metrics and internal state
  in JSON format, see [STATISTICS.md].
* `throttle_cb` - Throttle callback - triggered whenever a broker has
  throttled (delayed) a request.

These callbacks will also be triggered by `rd_kafka_flush()`,
`rd_kafka_consumer_poll()`, and any other functions that serve queues.
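
The fatal-error flow described above can be sketched as an error callback. This is an illustrative fragment, not a complete program; `conf` is assumed to be an `rd_kafka_conf_t` being set up before `rd_kafka_new()`:

```c
/* Sketch: distinguish fatal from informational errors in error_cb. */
static void my_error_cb(rd_kafka_t *rk, int err,
                        const char *reason, void *opaque) {
        if (err == RD_KAFKA_RESP_ERR__FATAL) {
                char errstr[512];
                rd_kafka_resp_err_t orig_err =
                        rd_kafka_fatal_error(rk, errstr, sizeof(errstr));
                fprintf(stderr, "FATAL %s: %s\n",
                        rd_kafka_err2name(orig_err), errstr);
                /* ...begin terminating this client instance... */
        } else {
                /* Informational: the client will attempt recovery itself. */
                fprintf(stderr, "Error: %s: %s\n",
                        rd_kafka_err2str(err), reason);
        }
}

/* Registered on the configuration object before instance creation: */
rd_kafka_conf_set_error_cb(conf, my_error_cb);
```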


Optional callbacks not triggered by poll; these may be called spontaneously
from any thread at any time:

* `log_cb` - Logging callback - allows the application to output log messages
  generated by librdkafka.
* `partitioner_cb` - Partitioner callback - application provided message partitioner.
  The partitioner may be called in any thread at any time and may be
  called multiple times for the same key.
  Partitioner function constraints:
  - MUST NOT call any rd_kafka_*() functions
  - MUST NOT block or execute for prolonged periods of time.
  - MUST return a value between 0 and partition_cnt-1, or the
    special RD_KAFKA_PARTITION_UA value if partitioning
    could not be performed.



<a name="brokers"></a>
### Brokers

On initialization, librdkafka only needs a partial list of
brokers (at least one), called the bootstrap brokers.
The client will connect to the bootstrap brokers specified by the
`bootstrap.servers` configuration property and query cluster Metadata
information which contains the full list of brokers, topics, partitions
and their leaders in the Kafka cluster.

Broker names are specified as `host[:port]` where the port is optional
(default 9092) and the host is either a resolvable hostname or an IPv4 or IPv6
address.
If a host resolves to multiple addresses librdkafka will round-robin the
addresses for each connection attempt.
A DNS record containing all broker addresses can thus be used to provide a
reliable bootstrap broker.
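
For example, a bootstrap configuration listing two brokers (hostnames here are placeholders):

```
bootstrap.servers=broker1.example.com:9092,broker2.example.com:9092
```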


<a name="ssl"></a>
#### SSL

If the client is to connect to a broker's SSL endpoints/listeners the client
needs to be configured with `security.protocol=SSL` for just SSL transport or
`security.protocol=SASL_SSL` for SASL authentication and SSL transport.
The client will try to verify the broker's certificate by checking the
CA root certificates; if the broker's certificate can't be verified
the connection is closed (and retried). This is to protect the client
from connecting to rogue brokers.

The CA root certificate defaults are system specific:
* On Linux, Mac OSX, and other Unix-like systems the OpenSSL default
  CA path will be used, also called the OPENSSLDIR, which is typically
  `/etc/ssl/certs` (on Linux, typically in the `ca-certificates` package) and
  `/usr/local/etc/openssl` on Mac OSX (Homebrew).
* On Windows the Root certificate store is used, unless
  `ssl.ca.certificate.stores` is configured in which case certificates are
  read from the specified stores.
* If OpenSSL is linked statically, librdkafka will set the default CA
  location to the first of a series of probed paths (see below).

If the system-provided default CA root certificates are not sufficient to
verify the broker's certificate, such as when a self-signed certificate
or a local CA authority is used, the CA certificate must be specified
explicitly so that the client can find it.
This can be done either by providing a PEM file (e.g., `cacert.pem`)
as the `ssl.ca.location` configuration property, or by passing an in-memory
PEM, X.509/DER or PKCS#12 certificate to `rd_kafka_conf_set_ssl_cert()`.
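
A minimal configuration sketch for a broker whose certificate is signed by a private CA (the path is a placeholder):

```
security.protocol=SSL
ssl.ca.location=/path/to/cacert.pem
```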

It is also possible to disable broker certificate verification completely
by setting `enable.ssl.certificate.verification=false`, but this is not
recommended since it allows for rogue brokers and man-in-the-middle attacks,
and should only be used for testing and troubleshooting purposes.

CA location probe paths (see [rdkafka_ssl.c](src/rdkafka_ssl.c) for full list)
used when OpenSSL is statically linked:

    "/etc/pki/tls/certs/ca-bundle.crt",
    "/etc/ssl/certs/ca-bundle.crt",
    "/etc/pki/tls/certs/ca-bundle.trust.crt",
    "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",
    "/etc/ssl/ca-bundle.pem",
    "/etc/pki/tls/cacert.pem",
    "/etc/ssl/cert.pem",
    "/etc/ssl/cacert.pem",
    "/etc/certs/ca-certificates.crt",
    "/etc/ssl/certs/ca-certificates.crt",
    "/etc/ssl/certs",
    "/usr/local/etc/ssl/cert.pem",
    "/usr/local/etc/ssl/cacert.pem",
    "/usr/local/etc/ssl/certs/cert.pem",
    "/usr/local/etc/ssl/certs/cacert.pem",
    etc.


On **Windows** the Root certificate store is read by default, but any number
of certificate stores can be read by setting the `ssl.ca.certificate.stores`
configuration property to a comma-separated list of certificate store names.
The predefined system store names are:

* `MY` - User certificates
* `Root` - System CA certificates (default)
* `CA` - Intermediate CA certificates
* `Trust` - Trusted publishers

For example, to read both intermediate and root CAs, set
`ssl.ca.certificate.stores=CA,Root`.


<a name="oauthbearer-with-support-for-oidc"></a>
#### OAUTHBEARER with support for OIDC

OAUTHBEARER with OIDC provides a method for the client to authenticate to the
Kafka cluster by requesting an authentication token from an issuing server
and passing the retrieved token to brokers during connection setup.

To use this authentication method the client needs to be configured as follows:

* `security.protocol` - set to `SASL_SSL` or `SASL_PLAINTEXT`.
* `sasl.mechanism` - set to `OAUTHBEARER`.
* `sasl.oauthbearer.method` - set to `OIDC`.
* `sasl.oauthbearer.token.endpoint.url` - OAUTH issuer token
  endpoint HTTP(S) URI used to retrieve the token.
* `sasl.oauthbearer.client.id` - public identifier for the application.
  It must be unique across all clients that the authorization server handles.
* `sasl.oauthbearer.client.secret` - secret known only to the
  application and the authorization server.
  This should be a sufficiently random string that is not guessable.
* `sasl.oauthbearer.scope` - clients use this to specify the scope of the
  access request to the broker.
* `sasl.oauthbearer.extensions` - (optional) additional information to be
  provided to the broker. A comma-separated list of key=value pairs.
  For example:
  `supportFeatureX=true,organizationId=sales-emea`
* `https.ca.location` - (optional) to customize the CA certificates
  location.
* `https.ca.pem` - (optional) to provide the CA certificates as a PEM string.
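
Putting the properties above together, an OIDC client configuration might look like the following sketch (all values are placeholders):

```
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.oauthbearer.method=OIDC
sasl.oauthbearer.token.endpoint.url=https://idp.example.com/oauth2/token
sasl.oauthbearer.client.id=my-client-id
sasl.oauthbearer.client.secret=my-client-secret
sasl.oauthbearer.scope=kafka.cluster
```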

<a name="jwt-bearer-grant-type-kip-1139"></a>
##### JWT bearer grant type (KIP-1139)

This KIP adds support for the `urn:ietf:params:oauth:grant-type:jwt-bearer`
grant type, in addition to the default `client_credentials`, with a series of
properties used for creating a JWT assertion
sent to the token endpoint. The authenticated principal corresponds to the
`sub` claim returned by the token endpoint; `sasl.oauthbearer.client.id` and
`sasl.oauthbearer.client.secret` aren't used. Required JWT claims must be set
either through the template or with the `claim` properties.

* `sasl.oauthbearer.grant.type` - changes the default grant type, set it to
  `urn:ietf:params:oauth:grant-type:jwt-bearer`.
* `sasl.oauthbearer.assertion.algorithm` - JWT algorithm, defaults to `RS256`.
* `sasl.oauthbearer.assertion.private.key.file` - a private key file for signing
  the token.
* `sasl.oauthbearer.assertion.private.key.passphrase` - (optional) passphrase for the key if encrypted.
* `sasl.oauthbearer.assertion.private.key.pem` - alternatively to the key file
  it's possible to pass the private key as a string.
* `sasl.oauthbearer.assertion.file` - (optional) assertion file: with this property all other
  assertion related fields are ignored and the assertion is read from this file,
  which should be periodically updated.
* `sasl.oauthbearer.assertion.jwt.template.file` - (optional) template file: a template containing
  a default `header` and `payload` that can be overwritten by the `claim` properties.
* `sasl.oauthbearer.assertion.claim.aud`,
  `sasl.oauthbearer.assertion.claim.exp.seconds`,
  `sasl.oauthbearer.assertion.claim.iss`,
  `sasl.oauthbearer.assertion.claim.jti.include`,
  `sasl.oauthbearer.assertion.claim.sub` - (optional) the `claim` properties:
  it's possible to dynamically customize the JWT claims with these or to
  skip the template file and use only these properties.

<a name="metadata-based-authentication"></a>
##### Metadata based authentication

Some cloud providers added the ability to authenticate clients based on
OAUTHBEARER/OIDC tokens returned from endpoints that can only be called from
a given instance. Such endpoints are served on a specific IP address (169.254.169.254)
that is a link-local metadata endpoint.

While there is still no standard for this, librdkafka has support for
some metadata based OAUTHBEARER authentication types.

Currently these authentication types are supported:

<a name="azure-imds"></a>
###### Azure IMDS

To use this method you set:

* `sasl.oauthbearer.metadata.authentication.type=azure_imds` - this makes it so
  that `sasl.oauthbearer.client.id` and `sasl.oauthbearer.client.secret`
  aren't required.
* `sasl.oauthbearer.config` - a general purpose configuration property.
  In this case it accepts comma-separated `key=value` pairs.
  The `query` key is required in case `sasl.oauthbearer.token.endpoint.url` isn't
  specified and its value is the GET query string to append
  to the token endpoint URL. This query string contains parameters required by
  Azure IMDS such as `client_id` (the UAMI), `resource` for determining the
  target audience and `api-version` for the API version to be used by the endpoint.
* `sasl.oauthbearer.token.endpoint.url` - (optional) set automatically
  when choosing `sasl.oauthbearer.metadata.authentication.type=azure_imds`, but can
  be customized.


_Example:_ `sasl.oauthbearer.metadata.authentication.type=azure_imds` and
`sasl.oauthbearer.config=params=api-version=2025-04-07&resource=api://<App registration client id>&client_id=<UAMI client id>`


<a name="sparse-connections"></a>
#### Sparse connections

The client will only connect to brokers it needs to communicate with, and
only when necessary.

Examples of needed broker connections are:

* leaders for partitions being consumed from
* leaders for partitions being produced to
* consumer group coordinator broker
* cluster controller for Admin API operations


<a name="random-broker-selection"></a>
##### Random broker selection

When there is no broker connection and a connection to any broker
is needed, such as on startup to retrieve metadata, the client randomly selects
a broker from its list of brokers, which includes both the configured bootstrap
brokers (including brokers manually added with `rd_kafka_brokers_add()`), as
well as the brokers discovered from cluster metadata.
Brokers with no prior connection attempt are tried first.

If there is already an available broker connection to any broker it is used,
rather than connecting to a new one.

The random broker selection and connection scheduling are triggered when:
* bootstrap servers are configured (`rd_kafka_new()`).
* brokers are manually added (`rd_kafka_brokers_add()`).
* a consumer group coordinator needs to be found.
* a ProducerID is acquired for the Idempotent Producer.
* cluster or topic metadata is being refreshed.

A single connection attempt will be performed, and the broker will
return to an idle INIT state on failure to connect.

The random broker selection is rate-limited to:
10 < `reconnect.backoff.ms`/2 < 1000 milliseconds.

**Note**: The broker connection will be maintained until it is closed
by the broker (idle connection reaper).

<a name="persistent-broker-connections"></a>
##### Persistent broker connections

While the random broker selection is useful for one-off queries, the
client also needs to maintain persistent connections to certain brokers:
* Consumer: the group coordinator.
* Consumer: partition leaders for topics being fetched from.
* Producer: partition leaders for topics being produced to.

These dependencies are discovered and maintained automatically, marking
matching brokers as persistent, which will make the client maintain connections
to these brokers at all times, reconnecting as necessary.


<a name="connection-close"></a>
#### Connection close

A broker connection may be closed by the broker, by intermediary network gear,
or due to network errors, timeouts, etc.
When a broker connection is closed, librdkafka will back off the next reconnect
attempt (to the given broker) for `reconnect.backoff.ms` with -25% to +50%
jitter. This value is increased exponentially for each connect attempt until
`reconnect.backoff.max.ms` is reached, at which time the value is reset
to `reconnect.backoff.ms`.

The broker will disconnect clients that have not sent any protocol requests
within `connections.max.idle.ms` (a broker configuration property, defaults
to 10 minutes), but there is no foolproof way for the client to know that it
was a deliberate close by the broker and not an error. To avoid logging these
deliberate idle disconnects as errors the client employs some logic to try to
classify a disconnect as an idle disconnect if no requests have been sent in
the last `socket.timeout.ms`, or there are no outstanding or
queued requests waiting to be sent. In this case the standard "Disconnect"
error log is silenced (will only be seen with debug enabled).

Otherwise, if a connection is closed while there are requests in-flight
the logging level will be LOG_WARNING (4), else LOG_INFO (6).

`log.connection.close=false` may be used to silence all disconnect logs,
but it is recommended to instead rely on the above heuristics.


<a name="fetch-from-follower"></a>
#### Fetch From Follower

librdkafka supports consuming messages from follower replicas
([KIP-392](https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica)).
This is enabled by setting the `client.rack` configuration property which
corresponds to `broker.rack` on the broker. The actual assignment of
consumers to replicas is determined by the configured `replica.selector.class`
on the broker.


<a name="logging"></a>
### Logging

<a name="debug-contexts"></a>
#### Debug contexts

Extensive debugging of librdkafka can be enabled by setting the
`debug` configuration property to a CSV string of debug contexts:

| Debug context | Type     | Description                                                                                 |
| ------------- | -------- | ------------------------------------------------------------------------------------------- |
| generic       | *        | General client instance level debugging. Includes initialization and termination debugging. |
| broker        | *        | Broker and connection state debugging.                                                      |
| topic         | *        | Topic and partition state debugging. Includes leader changes.                               |
| metadata      | *        | Cluster and topic metadata retrieval debugging.                                             |
| feature       | *        | Kafka protocol feature support as negotiated with the broker.                               |
| queue         | producer | Message queue debugging.                                                                    |
| msg           | *        | Message debugging. Includes information about batching, compression, sizes, etc.            |
| protocol      | *        | Kafka protocol request/response debugging. Includes latency (rtt) printouts.                |
| cgrp          | consumer | Low-level consumer group state debugging.                                                   |
| security      | *        | Security and authentication debugging.                                                      |
| fetch         | consumer | Consumer message fetch debugging. Includes decision when and why messages are fetched.      |
| interceptor   | *        | Interceptor interface debugging.                                                            |
| plugin        | *        | Plugin loading debugging.                                                                   |
| consumer      | consumer | High-level consumer debugging.                                                              |
| admin         | admin    | Admin API debugging.                                                                        |
| eos           | producer | Idempotent Producer debugging.                                                              |
| mock          | *        | Mock cluster functionality debugging.                                                       |
| assignor      | consumer | Detailed consumer group partition assignor debugging.                                       |
| conf          | *        | Display set configuration properties on startup.                                            |
| all           | *        | All of the above.                                                                           |


Suggested debugging settings for troubleshooting:

| Problem space                                | Type     | Debug setting                                                        |
| -------------------------------------------- | -------- | -------------------------------------------------------------------- |
| Producer not delivering messages to broker   | producer | `broker,topic,msg`                                                   |
| Consumer not fetching messages               | consumer | Start with `consumer`, or use `cgrp,fetch` for detailed information. |
| Consumer starts reading at unexpected offset | consumer | `consumer` or `cgrp,fetch`                                           |
| Authentication or connectivity issues        | *        | `broker,auth`                                                        |
| Protocol handling or latency                 | *        | `broker,protocol`                                                    |
| Topic leader and state                       | *        | `topic,metadata`                                                     |




<a name="feature-discovery"></a>
### Feature discovery

Apache Kafka broker version 0.10.0 added support for the ApiVersionRequest API
which allows a client to query a broker for its range of supported API versions.

librdkafka supports this functionality and will query each broker on connect
for this information (if `api.version.request=true`) and use it to enable or disable
various protocol features, such as MessageVersion 1 (timestamps), KafkaConsumer, etc.

If the broker fails to respond to the ApiVersionRequest librdkafka will
assume the broker is too old to support the API and fall back to an older
broker version's API. These fallback versions are hardcoded in librdkafka
and controlled by the `broker.version.fallback` configuration property.



<a name="producer-api"></a>
### Producer API

After setting up the `rd_kafka_t` object with type `RD_KAFKA_PRODUCER` and one
or more `rd_kafka_topic_t` objects librdkafka is ready to accept messages
to be produced and sent to brokers.

The `rd_kafka_produce()` function takes the following arguments:

* `rkt` - the topic to produce to, previously created with
  `rd_kafka_topic_new()`
* `partition` - partition to produce to. If this is set to
  `RD_KAFKA_PARTITION_UA` (UnAssigned) then the configured partitioner
  function will be used to select a target partition.
* `msgflags` - 0, or one of:
  * `RD_KAFKA_MSG_F_COPY` - librdkafka will immediately make a copy of
    the payload. Use this when the payload is in non-persistent
    memory, such as the stack.
  * `RD_KAFKA_MSG_F_FREE` - let librdkafka free the payload using
    `free(3)` when it is done with it.

  These two flags are mutually exclusive and neither needs to be set,
  in which case the payload is neither copied nor freed by librdkafka.

  If the `RD_KAFKA_MSG_F_COPY` flag is not set no data copying will be
  performed and librdkafka will hold on to the payload pointer until
  the message has been delivered or fails.
  The delivery report callback will be called when librdkafka is done
  with the message to let the application regain ownership of the
  payload memory.
  The application must not free the payload in the delivery report
  callback if `RD_KAFKA_MSG_F_FREE` is set.
* `payload`,`len` - the message payload
* `key`,`keylen` - an optional message key which can be used for partitioning.
  It will be passed to the topic partitioner callback, if any, and
  will be attached to the message when sending to the broker.
* `msg_opaque` - an optional application-provided per-message opaque pointer
  that will be provided in the message delivery callback to let
  the application reference a specific message.


`rd_kafka_produce()` is a non-blocking API, it will enqueue the message
on an internal queue and return immediately.
If the new message would cause the internal queue to exceed
the `queue.buffering.max.messages` or `queue.buffering.max.kbytes`
configuration properties, `rd_kafka_produce()` returns -1 and sets errno
to `ENOBUFS` and last_error to `RD_KAFKA_RESP_ERR__QUEUE_FULL`, thus
providing a backpressure mechanism.


`rd_kafka_producev()` provides an alternative produce API that does not
require a topic `rkt` object and also provides support for extended
message fields, such as timestamp and headers.
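
For example, a `rd_kafka_producev()` call producing a message with a key and a header could look like the following fragment (topic and header names are placeholders; `rk` is the producer instance):

```c
rd_kafka_resp_err_t err = rd_kafka_producev(
        rk,
        RD_KAFKA_V_TOPIC("mytopic"),
        RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
        RD_KAFKA_V_KEY("key", 3),
        RD_KAFKA_V_VALUE("hello", 5),
        RD_KAFKA_V_HEADER("trace-id", "abc123", -1), /* -1: NUL-terminated */
        RD_KAFKA_V_OPAQUE(NULL),
        RD_KAFKA_V_END);
if (err)
        fprintf(stderr, "producev failed: %s\n", rd_kafka_err2str(err));
```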


**Note**: See `examples/rdkafka_performance.c` for a producer implementation.


<a name="simple-consumer-api-legacy"></a>
### Simple Consumer API (legacy)

NOTE: For the high-level KafkaConsumer interface see `rd_kafka_subscribe()` (rdkafka.h) or KafkaConsumer (rdkafkacpp.h)

The consumer API is a bit more stateful than the producer API.
After creating `rd_kafka_t` with type `RD_KAFKA_CONSUMER` and
`rd_kafka_topic_t` instances the application must also start the consumer
for a given partition by calling `rd_kafka_consume_start()`.

`rd_kafka_consume_start()` arguments:

* `rkt` - the topic to start consuming from, previously created with
  `rd_kafka_topic_new()`.
* `partition` - partition to consume from.
* `offset` - message offset to start consuming from. This may either be an
  absolute message offset or one of the three special offsets:
  `RD_KAFKA_OFFSET_BEGINNING` to start consuming from the beginning
  of the partition's queue (oldest message), or
  `RD_KAFKA_OFFSET_END` to start consuming at the next message to be
  produced to the partition, or
  `RD_KAFKA_OFFSET_STORED` to use the offset store.

After a topic+partition consumer has been started librdkafka will attempt
to keep `queued.min.messages` messages in the local queue by repeatedly
fetching batches of messages from the broker. librdkafka will fetch all
consumed partitions for which that broker is a leader, through a single
request.

This local message queue is then served to the application through three
different consume APIs:

* `rd_kafka_consume()` - consumes a single message
* `rd_kafka_consume_batch()` - consumes one or more messages
* `rd_kafka_consume_callback()` - consumes all messages in the local
  queue and calls a callback function for each one.
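
A minimal `rd_kafka_consume()` loop might look like the following sketch (`handle_message()` and `running` are hypothetical application-side names; `rkt` and `partition` were previously passed to `rd_kafka_consume_start()`):

```c
while (running) {
        rd_kafka_message_t *rkmessage =
                rd_kafka_consume(rkt, partition, 1000 /* timeout (ms) */);
        if (!rkmessage)
                continue; /* timed out: no message within 1000 ms */

        if (rkmessage->err)
                fprintf(stderr, "Consume error: %s\n",
                        rd_kafka_message_errstr(rkmessage));
        else
                handle_message(rkmessage); /* hypothetical app handler */

        rd_kafka_message_destroy(rkmessage); /* releases payload/key memory */
}
```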

These three APIs are listed above in ascending order of performance,
`rd_kafka_consume()` being the slowest and `rd_kafka_consume_callback()` being
the fastest. The different consume variants are provided to cater for different
application needs.

A consumed message, as provided or returned by each of the consume functions,
is represented by the `rd_kafka_message_t` type.

`rd_kafka_message_t` members:

* `err` - Error signaling back to the application. If this field is non-zero
  the `payload` field should be considered an error message and
  `err` is an error code (`rd_kafka_resp_err_t`).
  If `err` is zero then the message is a proper fetched message
  and `payload` et al. contain the message payload data.
* `rkt`,`partition` - Topic and partition for this message or error.
* `payload`,`len` - Message payload data or error message (err!=0).
* `key`,`key_len` - Optional message key as specified by the producer
* `offset` - Message offset

Both the `payload` and `key` memory, as well as the message as a whole, are
owned by librdkafka and must not be used after an `rd_kafka_message_destroy()`
call. librdkafka will share the same messageset receive buffer memory for all
message payloads of that messageset to avoid excessive copying, which means
that if the application decides to hang on to a single `rd_kafka_message_t`
it will prevent the backing memory from being released for all other messages
from the same messageset.

When the application is done consuming messages from a topic+partition it
should call `rd_kafka_consume_stop()` to stop the consumer. This will also
purge any messages currently in the local queue.


**Note**: See `examples/rdkafka_performance.c` for a consumer implementation.


<a name="offset-management"></a>
#### Offset management

Broker based offset management is available for broker version >= 0.9.0
in conjunction with using the high-level KafkaConsumer interface (see
rdkafka.h or rdkafkacpp.h).

Offset management is also available through a deprecated local offset file,
where the offset is periodically written to a local file for each
topic+partition according to the following topic configuration properties:

* `enable.auto.commit`
* `auto.commit.interval.ms`
* `offset.store.path`
* `offset.store.sync.interval.ms`

The legacy `auto.commit.enable` topic configuration property is only to be used
with the legacy low-level consumer.
Use `enable.auto.commit` with the modern KafkaConsumer.


<a name="auto-offset-commit"></a>
##### Auto offset commit

The consumer will automatically commit offsets every `auto.commit.interval.ms`
when `enable.auto.commit` is enabled (default).

Offsets to be committed are kept in a local in-memory offset store;
this offset store is updated by `consumer_poll()` (et al.) to
store the offset of the last message passed to the application
(per topic+partition).

<a name="at-least-once-processing"></a>
##### At-least-once processing
Since auto commits are performed in a background thread this may result in
the offset for the latest message being committed before the application has
finished processing the message. If the application were to crash or exit
prior to finishing processing, and the offset had been auto committed,
the next incarnation of the consumer application would start at the next
message, effectively missing the message that was processed when the
application crashed.
To avoid this scenario the application can disable the automatic
offset **store** by setting `enable.auto.offset.store` to false
and manually **storing** offsets after processing by calling
`rd_kafka_offsets_store()`.
This gives an application fine-grained control on when a message
is eligible for committing without having to perform the commit itself.
`enable.auto.commit` should be set to true when using manual offset storing.
The latest stored offset will be automatically committed every
`auto.commit.interval.ms`.
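
In configuration terms, the at-least-once pattern described above amounts to:

```
# Commit automatically, but only offsets the application has explicitly stored
enable.auto.commit=true
enable.auto.offset.store=false
```

After processing each message, the application calls `rd_kafka_offsets_store()` (or `rd_kafka_offset_store()` for a single topic+partition) to mark it as eligible for the next automatic commit.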

**Note**: Only greater offsets are committed, e.g., if the latest committed
offset was 10 and the application performs an offsets_store()
with offset 9, that offset will not be committed.


<a name="auto-offset-reset"></a>
##### Auto offset reset

The consumer will by default try to acquire the last committed offsets for
each topic+partition it is assigned, using its configured `group.id`.
If there is no committed offset available, or the consumer is unable to
fetch the committed offsets, the policy of `auto.offset.reset` will kick in.
This configuration property may be set to one of the following values:

* `earliest` - start consuming the earliest message of the partition.
* `latest` - start consuming the next message to be produced to the partition.
* `error` - don't start consuming but instead raise a consumer error
  with error-code `RD_KAFKA_RESP_ERR__AUTO_OFFSET_RESET` for
  the topic+partition. This allows the application to decide what
  to do in case there is no committed start offset.

<a name="consumer-groups"></a>
### Consumer groups

Broker based consumer groups (requires Apache Kafka broker >=0.9) are supported,
see KafkaConsumer in rdkafka.h or rdkafkacpp.h.

The following diagram visualizes the high-level balanced consumer group state
flow and synchronization between the application, librdkafka consumer,
group coordinator, and partition leader(s).

![Consumer group state diagram](src/librdkafka_cgrp_synch.png)


1673
+ <a name="static-consumer-groups"></a>
1674
+ #### Static consumer groups
1675
+
1676
+ By default Kafka consumers are rebalanced each time a new consumer joins
1677
+ the group or an existing member leaves. This is what is known as a dynamic
1678
+ membership. Apache Kafka >= 2.3.0 introduces static membership.
1679
+ Unlike dynamic membership, static members can leave and rejoin a group
1680
+ within the `session.timeout.ms` without triggering a rebalance, retaining
1681
+ their existing partitions assignment.
1682
+
1683
+ To enable static group membership configure each consumer instance
1684
+ in the group with a unique `group.instance.id`.
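A sketch of two instances of the same logical consumer (the ids are illustrative):

```properties
# Instance A
group.id=example-group
group.instance.id=consumer-a

# Instance B (same group, unique instance id)
group.id=example-group
group.instance.id=consumer-b
```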

Consumers with `group.instance.id` set will not send a leave group request on
close - session timeout, change of subscription, or a new group member joining
the group, are the only mechanisms that will trigger a group rebalance for
static consumer groups.

If a new consumer joins the group with the same `group.instance.id` as an
existing consumer, the existing consumer will be fenced and raise a fatal error.
The fatal error is propagated as a consumer error with error code
`RD_KAFKA_RESP_ERR__FATAL`; use `rd_kafka_fatal_error()` to retrieve
the original fatal error code and reason.

To read more about static group membership, see [KIP-345](https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances).

<a name="next-generation-consumer-group-protocol-kip-848"></a>
### Next Generation Consumer Group Protocol ([KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol))

Starting with **librdkafka v2.12.0** (GA release), the next generation consumer group rebalance protocol defined in **[KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol)** is **production-ready**.

**Note:** The new consumer group protocol defined in [KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol) is not enabled by default. There are a few contract changes associated with the new protocol that might cause breaking changes. The `group.protocol` configuration property dictates whether to use the new `consumer` protocol or the older `classic` protocol. It defaults to `classic` if not provided.

<a name="overview"></a>
#### Overview
- **What changed:**
  The **Group Leader role** (consumer member) is removed. Assignments are calculated by the **Group Coordinator (broker)** and distributed via **heartbeats**.

- **Requirements:**
  - Broker version: **v4.0.0+**
  - librdkafka version: **v2.12.0+**: GA (production-ready)

- **Enablement (client-side):**
  - `group.protocol=consumer`
  - `group.remote.assignor=<assignor>` (optional; broker-controlled if `NULL`; default broker assignor is **`uniform`**)

<a name="available-features"></a>
#### Available Features

All [KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol) features are supported including:

- Subscription to one or more topics, including **regular expression (regex) subscriptions**
- Rebalance callbacks (**incremental only**)
- Static group membership
- Configurable remote assignor
- Enforced max poll interval
- Upgrade from `classic` protocol or downgrade from `consumer` protocol
- AdminClient changes as per the KIP

<a name="contract-changes"></a>
#### Contract Changes

<a name="client-configuration-changes"></a>
##### Client Configuration changes

| Classic Protocol (Deprecated Configs in KIP-848) | KIP-848 / Next-Gen Replacement |
| ------------------------------------------------ | ----------------------------------------------------- |
| `partition.assignment.strategy` | `group.remote.assignor` |
| `session.timeout.ms` | Broker config: `group.consumer.session.timeout.ms` |
| `heartbeat.interval.ms` | Broker config: `group.consumer.heartbeat.interval.ms` |
| `group.protocol.type` | Not used in the new protocol |

**Note:** The properties listed under “Classic Protocol (Deprecated Configs in KIP-848)” are **no longer used** when using the KIP-848 consumer protocol.

<a name="rebalance-callback-changes"></a>
##### Rebalance Callback Changes

- Protocol is **fully incremental**.
- **Inside the rebalance callback**, you **must use**:
  - `rd_kafka_incremental_assign(rk, partitions)` to assign partitions
  - `rd_kafka_incremental_unassign(rk, partitions)` to revoke partitions
- **Do not** use `rd_kafka_assign()` or other assignment APIs in KIP-848.
- **Important:** The `partitions` parameter passed to `rd_kafka_incremental_assign` or `rd_kafka_incremental_unassign` contains only an **incremental list of partitions**, those being added or revoked, rather than the full partition list passed to `rd_kafka_assign(rk, partitions)` in the **range assignor of the classic protocol**, which was the default.
- All assignors are **sticky**, including `range` (which wasn't sticky before).

<a name="static-group-membership"></a>
##### Static Group Membership

- Duplicate `group.instance.id` handling:
  - The **newly joining member** is fenced with **UNRELEASED_INSTANCE_ID (fatal)**.
  - (The classic protocol fenced the **existing** member instead.)
- Implications:
  - Ensure only **one active instance per `group.instance.id`**.
  - Consumers must shut down cleanly to avoid blocking replacements until the session timeout expires.

<a name="session-timeout--fetching"></a>
##### Session Timeout & Fetching

- **Session timeout is broker-controlled**:
  - If the Coordinator is unreachable, a consumer **continues fetching messages** but cannot commit offsets.
  - The consumer is fenced once a heartbeat response is received from the Coordinator.
  - In the classic protocol, the client stopped fetching when the session timeout expired.

<a name="closing--auto-commit"></a>
##### Closing / Auto-Commit

- On `close()` or unsubscribe with auto-commit enabled:
  - The member retries committing offsets until a timeout expires.
  - Currently uses the **default remote session timeout**.
  - Future **KIP-1092** will allow custom commit timeouts.

<a name="error-handling-changes"></a>
##### Error Handling Changes

- `UNKNOWN_TOPIC_OR_PART` (**subscription case**):
  - No longer returned if a topic is missing in the **local cache** when subscribing; the subscription proceeds.
- `TOPIC_AUTHORIZATION_FAILED`:
  - Reported once per heartbeat or subscription change, even if only one topic is unauthorized.

<a name="summary-of-key-differences-classic-vs-next-gen"></a>
##### Summary of Key Differences (Classic vs Next-Gen)

- **Assignment:** Classic protocol calculated by **Group Leader (consumer)**; KIP-848 calculated by **Group Coordinator (broker)**
- **Assignors:** Classic range assignor was **not sticky**; KIP-848 assignors are **sticky**, including range
- **Deprecated configs:** Classic client configs are replaced by `group.remote.assignor` and broker-controlled session/heartbeat configs
- **Static membership fencing:** KIP-848 fences the **new member** on duplicate `group.instance.id`
- **Session timeout:** Classic enforced on client; KIP-848 enforced on broker
- **Auto-commit on close:** Classic stops at client session timeout; KIP-848 retries until remote timeout
- **Unknown topics:** KIP-848 does not return an error on subscription if the topic is missing
- **Upgrade/Downgrade:** KIP-848 supports upgrade/downgrade from/to `classic` and `consumer` protocols

<a name="minimal-example-config"></a>
#### Minimal Example Config

<a name="classic-protocol"></a>
##### Classic Protocol
```properties
# Optional; default is 'classic'
group.protocol=classic

partition.assignment.strategy=<range,roundrobin,sticky>
session.timeout.ms=45000
heartbeat.interval.ms=15000
```

<a name="next-gen-protocol--kip-848"></a>
##### Next-Gen Protocol / KIP-848
```properties
group.protocol=consumer

# Optional: select a remote assignor
# Valid options currently: 'uniform' or 'range'
# group.remote.assignor=<uniform,range>
# If unset (NULL), the broker chooses the assignor (default: 'uniform')

# Session & heartbeat are now controlled by the broker:
# group.consumer.session.timeout.ms
# group.consumer.heartbeat.interval.ms
```

<a name="rebalance-callback-migration"></a>
#### Rebalance Callback Migration

<a name="range-assignor-classic"></a>
##### Range Assignor (Classic)
```c
/* Rebalance callback for range assignor (classic) */
static void rebalance_cb (rd_kafka_t *rk,
                          rd_kafka_resp_err_t err,
                          rd_kafka_topic_partition_list_t *partitions,
                          void *opaque) {
        switch (err) {
        case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
                rd_kafka_assign(rk, partitions); /* full partition list */
                break;

        case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
                rd_kafka_assign(rk, NULL); /* revoke all partitions */
                break;

        default:
                fprintf(stderr, "Rebalance error: %s\n", rd_kafka_err2str(err));
                break;
        }
}
```

<a name="incremental-assignor-including-range-in-consumer--kip-848-any-protocol"></a>
##### Incremental Assignor (Including Range in Consumer / KIP-848, Any Protocol)

```c
/* Rebalance callback for incremental assignor */
static void rebalance_cb (rd_kafka_t *rk,
                          rd_kafka_resp_err_t err,
                          rd_kafka_topic_partition_list_t *partitions,
                          void *opaque) {
        switch (err) {
        case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
                rd_kafka_incremental_assign(rk, partitions); /* incremental partitions only */
                break;

        case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
                rd_kafka_incremental_unassign(rk, partitions);
                break;

        default:
                fprintf(stderr, "Rebalance error: %s\n", rd_kafka_err2str(err));
                break;
        }
}
```
**Note:**
- The `partitions` list contains **only partitions being added or revoked**, not the full partition list as in the classic `rd_kafka_assign()`.

<a name="upgrade-and-downgrade"></a>
#### Upgrade and Downgrade

- A group made up entirely of `classic` consumers runs under the classic protocol.
- The group is **upgraded to the consumer protocol** as soon as at least one `consumer` protocol member joins.
- The group is **downgraded back to the classic protocol** if the last `consumer` protocol member leaves while `classic` members remain.
- Both **rolling upgrade** (classic → consumer) and **rolling downgrade** (consumer → classic) are supported.


<a name="migration-checklist-next-gen-protocol--kip-848"></a>
#### Migration Checklist (Next-Gen Protocol / [KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol))

1. Upgrade to **librdkafka ≥ v2.12.0** (GA release)
2. Run against **Kafka brokers ≥ v4.0.0**
3. Set `group.protocol=consumer`
4. Optionally set `group.remote.assignor`; leave `NULL` for broker-controlled (default: `uniform`), valid options: `uniform` or `range`
5. Replace deprecated configs with new ones
6. Update rebalance callbacks to **incremental APIs only**
7. Review static membership handling (`group.instance.id`)
8. Ensure proper shutdown to avoid fencing issues
9. Adjust error handling for unknown topics and authorization failures


<a name="note-on-batch-consume-apis"></a>
### Note on Batch consume APIs

Using multiple instances of the `rd_kafka_consume_batch()` and/or `rd_kafka_consume_batch_queue()`
APIs concurrently is not thread safe and will result in undefined behaviour. We strongly recommend that
only a single instance of these APIs be used at a given time. This use case is not supported and will not
be supported in the future either. There are different ways to achieve a similar result:

* Create multiple consumers reading from different partitions. In this way, different partitions
  are read by different consumers and each consumer can run its own batch call.
* Create multiple consumers in the same consumer group. In this way, partitions are assigned to
  different consumers and each consumer can run its own batch call.
* Create a single consumer, read data from a single batch call, and process this data in parallel.

Even after this, if you feel the need to use multiple instances of these APIs for the same consumer
concurrently, then don't use any of the **seek**, **pause**, **resume** or **rebalancing** operations
in conjunction with these API calls. For the **rebalancing** operation to work in a sequential manner, please
set the `rebalance_cb` configuration property (refer to [examples/rdkafka_complex_consumer_example.c](examples/rdkafka_complex_consumer_example.c)
for help with the usage) for the consumer.


<a name="topics"></a>
### Topics

<a name="unknown-or-unauthorized-topics"></a>
#### Unknown or unauthorized topics

If a consumer application subscribes to non-existent or unauthorized topics
a consumer error will be propagated for each unavailable topic with the
error code set to either `RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART` or a
broker-specific error code, such as
`RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED`.

As the topic metadata is refreshed every `topic.metadata.refresh.interval.ms`
the unavailable topics are re-checked for availability, but the same error
will not be raised again for the same topic.

If a consumer has Describe (ACL) permissions for a topic but not Read it will
be able to join a consumer group and start consuming the topic, but the Fetch
requests to retrieve messages from the broker will fail with
`RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED`.
This error will be raised to the application once per partition and
assign()/seek(), and the fetcher will back off the next fetch 10 times longer than
`fetch.error.backoff.ms` (but at least 1 second).
It is recommended that the application takes appropriate action when this
occurs, for instance adjusting its subscription or assignment to exclude the
unauthorized topic.


<a name="topic-metadata-propagation-for-newly-created-topics"></a>
#### Topic metadata propagation for newly created topics

Due to the asynchronous nature of topic creation in Apache Kafka it may
take some time for a newly created topic to be known by all brokers in the
cluster.
If a client tries to use a topic after topic creation but before the topic
has been fully propagated in the cluster it will seem as if the topic does not
exist, which would raise `RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC` (et.al)
errors to the application.
To avoid these temporary errors being raised, the client will not flag
a topic as non-existent until a propagation time has elapsed; this propagation
time defaults to 30 seconds and can be configured with
`topic.metadata.propagation.max.ms`.
The per-topic max propagation time starts ticking as soon as the topic is
referenced (e.g., by produce()).

If messages are produced to unknown topics during the propagation time, the
messages will be queued for later delivery to the broker when the topic
metadata has propagated.
Should the topic propagation time expire without the topic being seen the
produced messages will fail with `RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC`.
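For clusters where topic creation is slow to converge, the window can be widened per client; a minimal sketch (the value is illustrative):

```properties
# Wait up to 60s for newly created topics to propagate before
# failing queued messages with RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC.
topic.metadata.propagation.max.ms=60000
```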

**Note**: The propagation time will not take effect if a topic is known to
the client and then deleted; in this case the topic will immediately
be marked as non-existent and remain non-existent until a topic
metadata refresh sees the topic again (after the topic has been
re-created).

**Note**: `RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC*` during a `subscribe()` call occurs **only with the classic protocol**. With the next-gen `consumer` protocol (KIP-848), subscription proceeds even if the topic is not yet in the local cache (e.g., it may be created later).


<a name="topic-auto-creation"></a>
#### Topic auto creation

Topic auto creation is supported by librdkafka: if a non-existent topic is
referenced by the client (by producing to, or consuming from, the topic, etc.)
the broker will automatically create the topic (with the default partition count
and replication factor) if the broker configuration property
`auto.create.topics.enable=true` is set.

*Note*: A topic that is undergoing automatic creation may be reported as
unavailable, with e.g., `RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART`, during the
time the topic is being created and partition leaders are elected.

While topic auto creation may be useful for producer applications, it is not
particularly valuable for consumer applications since even if the topic
to consume is auto created there is nothing writing messages to the topic.
To avoid consumers automatically creating topics, the
`allow.auto.create.topics` consumer configuration property is set to
`false` by default, preventing the consumer from triggering automatic topic
creation on the broker. This requires broker version v0.11.0.0 or later.
The `allow.auto.create.topics` property may be set to `true` to allow
auto topic creation, which also requires `auto.create.topics.enable=true` to
be configured on the broker.
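A consumer that should be allowed to trigger auto creation therefore needs both settings; the client-side half might look like this sketch:

```properties
# Consumer side: allow referencing a topic to trigger auto creation.
# The broker must also have auto.create.topics.enable=true.
allow.auto.create.topics=true
```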



<a name="metadata"></a>
### Metadata

<a name="lt093"></a>
#### < 0.9.3
Prior to the 0.9.3 release librdkafka's metadata handling
was chatty and excessive, which usually isn't a problem in small
to medium-sized clusters, but in large clusters with a large number
of librdkafka clients the metadata requests could hog broker CPU and bandwidth.

<a name="gt093-1"></a>
#### > 0.9.3

The remaining Metadata sections describe the current behaviour.

**Note:** "Known topics" in the following section means topics for
locally created `rd_kafka_topic_t` objects.


<a name="query-reasons"></a>
#### Query reasons

There are four reasons to query metadata:

 * brokers - update/populate cluster broker list, so the client can
   find and connect to any new brokers added.

 * specific topic - find leader or partition count for a specific topic

 * known topics - same, but for all locally known topics.

 * all topics - get topic names for consumer group wildcard subscription
   matching

The above list is sorted so that the subsequent entries contain the
information above, e.g., 'known topics' contains enough information to
also satisfy 'specific topic' and 'brokers'.


<a name="caching-strategy"></a>
#### Caching strategy

The prevalent cache timeout is `metadata.max.age.ms`; any cached entry
will remain authoritative for this long or until a relevant broker error
is returned.


 * brokers - eternally cached, the broker list is additive.

 * topics - cached for `metadata.max.age.ms`


<a name="fatal-errors"></a>
### Fatal errors

If an unrecoverable error occurs, a fatal error is triggered in one
or more of the following ways depending on what APIs the application is utilizing:

 * C: the `error_cb` is triggered with error code `RD_KAFKA_RESP_ERR__FATAL`,
   the application should call `rd_kafka_fatal_error()` to retrieve the
   underlying fatal error code and error string.
 * C: an `RD_KAFKA_EVENT_ERROR` event is triggered and
   `rd_kafka_event_error_is_fatal()` returns true: the fatal error code
   and string are available through `rd_kafka_event_error()` and `.._string()`.
 * C and C++: any API call may return `RD_KAFKA_RESP_ERR__FATAL`, use
   `rd_kafka_fatal_error()` to retrieve the underlying fatal error code
   and error string.
 * C++: an `EVENT_ERROR` event is triggered and `event.fatal()` returns true:
   the fatal error code and string are available through `event.err()` and
   `event.str()`.


An application may call `rd_kafka_fatal_error()` at any time to check if
a fatal error has been raised.


<a name="fatal-producer-errors"></a>
#### Fatal producer errors

The idempotent producer guarantees of ordering and no duplicates also
require a way for the client to fail gracefully when these guarantees
can't be satisfied.

If a fatal error has been raised, subsequent use of the following API calls
will fail:

 * `rd_kafka_produce()`
 * `rd_kafka_producev()`
 * `rd_kafka_produce_batch()`

The underlying fatal error code will be returned, depending on the error
reporting scheme for each of those APIs.


When a fatal error has occurred the application should call `rd_kafka_flush()`
to wait for all outstanding and queued messages to drain before terminating
the application.
`rd_kafka_purge(RD_KAFKA_PURGE_F_QUEUE)` is automatically called by the client
when a producer fatal error has occurred; messages in-flight are not purged
automatically, to allow waiting for the proper acknowledgement from the broker.
The purged messages in queue will fail with error code set to
`RD_KAFKA_RESP_ERR__PURGE_QUEUE`.


<a name="fatal-consumer-errors"></a>
#### Fatal consumer errors

A consumer configured for static group membership (`group.instance.id`) may
raise a fatal error if a new consumer instance is started with the same
instance id, causing the existing consumer to be fenced by the new consumer.

This fatal error is propagated on the fenced existing consumer in multiple ways:
 * `error_cb` (if configured) is triggered.
 * `rd_kafka_consumer_poll()` (et.al) will return a message object
   with the `err` field set to `RD_KAFKA_RESP_ERR__FATAL`.
 * any subsequent calls to state-changing consumer calls will
   return `RD_KAFKA_RESP_ERR__FATAL`.
   This includes `rd_kafka_subscribe()`, `rd_kafka_assign()`,
   `rd_kafka_consumer_close()`, `rd_kafka_commit*()`, etc.

The consumer will automatically stop consuming when a fatal error has occurred
and no further subscription, assignment, consumption or offset committing
will be possible. At this point the application should simply destroy the
consumer instance and terminate the application since it has been replaced
by a newer instance.


<a name="compatibility"></a>
## Compatibility

<a name="broker-version-compatibility"></a>
### Broker version compatibility

librdkafka supports all released Apache Kafka broker versions since 0.8.0.0,
but not all features may be available on all broker versions since some
features rely on newer broker functionality.

**Current defaults:**
 * `api.version.request=true`
 * `broker.version.fallback=0.10.0`
 * `api.version.fallback.ms=0` (never revert to `broker.version.fallback`)

Depending on what broker version you are using, please configure your
librdkafka based client as follows:

<a name="broker-version--01000-or-trunk"></a>
#### Broker version >= 0.10.0.0 (or trunk)

For librdkafka >= v1.0.0 there is no need to set any api.version-related
configuration parameters; the defaults are tailored for broker version 0.10.0.0
or later.

For librdkafka < v1.0.0, please specify:
```
api.version.request=true
api.version.fallback.ms=0
```


<a name="broker-versions-090x"></a>
#### Broker versions 0.9.0.x

```
api.version.request=false
broker.version.fallback=0.9.0.x  (the exact 0.9.0.. version you are using)
```

<a name="broker-versions-08xy"></a>
#### Broker versions 0.8.x.y

```
api.version.request=false
broker.version.fallback=0.8.x.y  (your exact 0.8... broker version)
```

<a name="detailed-description"></a>
#### Detailed description

Apache Kafka version 0.10.0.0 added support for
[KIP-35](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version) -
querying the broker for supported API request types and versions -
allowing the client to figure out what features it can use.
But for older broker versions there is no way for the client to reliably know
what protocol features the broker supports.

To alleviate this situation librdkafka has three configuration properties:
 * `api.version.request=true|false` - enables the API version request,
   this requires a >= 0.10.0.0 broker and will cause a disconnect on
   brokers 0.8.x - this disconnect is recognized by librdkafka and on the next
   connection attempt (which is immediate) it will disable the API version
   request and use `broker.version.fallback` as a basis of available features.
   **NOTE**: Due to a bug in broker versions 0.9.0.0 & 0.9.0.1 the broker will
   not close the connection when receiving the API version request, instead
   the request will time out in librdkafka after 10 seconds and it will fall
   back to `broker.version.fallback` on the next immediate connection attempt.
 * `broker.version.fallback=X.Y.Z.N` - if the API version request fails
   (if `api.version.request=true`) or API version requests are disabled
   (`api.version.request=false`) then this tells librdkafka what version the
   broker is running and adapts its feature set accordingly.
 * `api.version.fallback.ms=MS` - In the case where `api.version.request=true`
   and the API version request fails, this property dictates for how long
   librdkafka will use `broker.version.fallback` instead of
   `api.version.request=true`. After `MS` has passed the API version request
   will be sent on any new connections made for the broker in question.
   This allows upgrading the Kafka broker to a new version with an extended
   feature set without needing to restart or reconfigure the client
   (given that `api.version.request=true`).

*Note: These properties apply per broker.*

The API version query was disabled by default (`api.version.request=false`) in
librdkafka up to and including v0.9.5 due to the aforementioned bug in
broker versions 0.9.0.0 & 0.9.0.1, but was changed to `true` in
librdkafka v0.11.0.

2234
+ <a name="supported-kips"></a>
2235
+ ### Supported KIPs
2236
+
2237
+ The [Apache Kafka Implementation Proposals (KIPs)](https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals) supported by librdkafka.
2238
+
2239
+
2240
+ | KIP | Kafka release | Status |
2241
+ | ------------------------------------------------------------------------ | --------------------------- | --------------------------------------------------------------------------------------------- |
2242
+ | KIP-1 - Stop accepting request.required.acks > 1 | 0.9.0.0 | Not enforced on client (due to backwards compat with brokers <0.8.3) |
2243
+ | KIP-4 - Metadata protocol changes | 0.9.0.0, 0.10.0.0, 0.10.1.0 | Supported |
2244
+ | KIP-8 - Producer flush() | 0.9.0.0 | Supported |
2245
+ | KIP-12 - SASL Kerberos | 0.9.0.0 | Supported (uses SSPI/logged-on-user on Windows, full KRB5 keytabs on Unix) |
2246
+ | KIP-13 - Protocol request throttling (enforced on broker) | 0.9.0.0 | Supported |
2247
+ | KIP-15 - Producer close with timeout | 0.9.0.0 | Supported (through flush() + destroy()) |
2248
+ | KIP-19 - Request timeouts | 0.9.0.0 | Supported |
2249
+ | KIP-22 - Producer pluggable partitioner | 0.9.0.0 | Supported (not supported by Go, .NET and Python) |
2250
+ | KIP-31 - Relative offsets in messagesets | 0.10.0.0 | Supported |
2251
+ | KIP-35 - ApiVersionRequest | 0.10.0.0 | Supported |
2252
+ | KIP-40 - ListGroups and DescribeGroups | 0.9.0.0 | Supported |
2253
+ | KIP-41 - max.poll.records | 0.10.0.0 | Supported through batch consumption interface (not supported by .NET and Go) |
2254
+ | KIP-42 - Producer and Consumer interceptors | 0.10.0.0 | Supported (not supported by Go, .NET and Python) |
2255
+ | KIP-43 - SASL PLAIN and handshake | 0.10.0.0 | Supported |
2256
+ | KIP-48 - Delegation tokens | 1.1.0 | Not supported |
2257
+ | KIP-54 - Sticky partition assignment strategy | 0.11.0.0 | Supported but not available, use KIP-429 instead. |
2258
+ | KIP-57 - Interoperable LZ4 framing | 0.10.0.0 | Supported |
2259
+ | KIP-62 - max.poll.interval and background heartbeats | 0.10.1.0 | Supported |
2260
+ | KIP-70 - Proper client rebalance event on unsubscribe/subscribe | 0.10.1.0 | Supported |
2261
+ | KIP-74 - max.partition.fetch.bytes | 0.10.1.0 | Supported |
2262
+ | KIP-78 - Retrieve Cluster Id | 0.10.1.0 | Supported (not supported by .NET) |
2263
+ | KIP-79 - OffsetsForTimes | 0.10.1.0 | Supported |
2264
+ | KIP-81 - Consumer pre-fetch buffer size | 2.4.0 (WIP) | Supported |
2265
+ | KIP-82 - Record Headers | 0.11.0.0 | Supported |
2266
+ | KIP-84 - SASL SCRAM | 0.10.2.0 | Supported |
2267
+ | KIP-85 - SASL config properties | 0.10.2.0 | Supported |
2268
+ | KIP-86 - Configurable SASL callbacks | 2.0.0 | Not supported |
2269
+ | KIP-88 - AdminAPI: ListGroupOffsets | 0.10.2.0 | Supported |
2270
+ | KIP-91 - Intuitive timeouts in Producer | 2.1.0 | Supported |
2271
+ | KIP-92 - Per-partition lag metrics in Consumer | 0.10.2.0 | Supported |
2272
+ | KIP-97 - Backwards compatibility with older brokers | 0.10.2.0 | Supported |
2273
+ | KIP-98 - EOS | 0.11.0.0 | Supported |
2274
+ | KIP-102 - Close with timeout in consumer | 0.10.2.0 | Not supported |
2275
+ | KIP-107 - AdminAPI: DeleteRecordsBefore | 0.11.0.0 | Supported |
2276
+ | KIP-110 - ZStd compression | 2.1.0 | Supported |
2277
+ | KIP-117 - AdminClient | 0.11.0.0 | Supported |
2278
+ | KIP-124 - Request rate quotas | 0.11.0.0 | Partially supported (depending on protocol request) |
2279
+ | KIP-126 - Producer ensure proper batch size after compression | 0.11.0.0 | Supported |
2280
+ | KIP-133 - AdminAPI: DescribeConfigs and AlterConfigs | 0.11.0.0 | Supported |
2281
+ | KIP-140 - AdminAPI: ACLs | 0.11.0.0 | Supported |
2282
+ | KIP-144 - Broker reconnect backoff | 0.11.0.0 | Supported |
2283
+ | KIP-152 - Improved SASL auth error messages | 1.0.0 | Supported |
2284
+ | KIP-192 - Cleaner idempotence semantics | 1.0.0 | Not supported (superceeded by KIP-360) |
2285
| KIP-195 - AdminAPI: CreatePartitions | 1.0.0 | Supported |
| KIP-204 - AdminAPI: DeleteRecords | 1.1.0 | Supported |
| KIP-219 - Client-side throttling | 2.0.0 | Not supported |
| KIP-222 - AdminAPI: Consumer group operations | 2.0.0 | Supported |
| KIP-223 - Consumer partition lead metric | 2.0.0 | Not supported |
| KIP-226 - AdminAPI: Dynamic broker config | 1.1.0 | Supported |
| KIP-227 - Consumer Incremental Fetch | 1.1.0 | Not supported |
| KIP-229 - AdminAPI: DeleteGroups | 1.1.0 | Supported |
| KIP-235 - DNS alias for secure connections | 2.1.0 | Supported |
| KIP-249 - AdminAPI: Delegation Tokens | 2.0.0 | Not supported |
| KIP-255 - SASL OAUTHBEARER | 2.0.0 | Supported |
| KIP-266 - Fix indefinite consumer timeouts | 2.0.0 | Supported (bound by session.timeout.ms and max.poll.interval.ms) |
| KIP-289 - Consumer group.id default to NULL | 2.2.0 | Supported |
| KIP-294 - SSL endpoint verification | 2.0.0 | Supported |
| KIP-302 - Use all addresses for resolved broker hostname | 2.1.0 | Supported |
| KIP-320 - Consumer: handle log truncation | 2.1.0, 2.2.0 | Supported |
| KIP-322 - DeleteTopics disabled error code | 2.1.0 | Supported |
| KIP-339 - AdminAPI: incrementalAlterConfigs | 2.3.0 | Supported |
| KIP-341 - Update Sticky partition assignment data | 2.3.0 | Not supported (superseded by KIP-429) |
| KIP-342 - Custom SASL OAUTHBEARER extensions | 2.1.0 | Supported |
| KIP-345 - Consumer: Static membership | 2.4.0 | Supported |
| KIP-357 - AdminAPI: list ACLs per principal | 2.1.0 | Not supported |
| KIP-359 - Producer: use EpochLeaderId | 2.4.0 | Not supported |
| KIP-360 - Improve handling of unknown Idempotent Producer | 2.5.0 | Supported |
| KIP-361 - Consumer: add config to disable auto topic creation | 2.3.0 | Supported |
| KIP-368 - SASL periodic reauth | 2.2.0 | Supported |
| KIP-369 - Always roundRobin partitioner | 2.4.0 | Not supported |
| KIP-389 - Consumer group max size | 2.2.0 | Supported (error is propagated to application, but the consumer does not raise a fatal error) |
| KIP-392 - Allow consumers to fetch from closest replica | 2.4.0 | Supported |
| KIP-394 - Consumer: require member.id in JoinGroupRequest | 2.2.0 | Supported |
| KIP-396 - AdminAPI: commit/list offsets | 2.4.0 | Supported |
| KIP-412 - AdminAPI: adjust log levels | 2.4.0 | Not supported |
| KIP-421 - Variables in client config files | 2.3.0 | Not applicable (librdkafka, et al., does not provide a config file interface, and shouldn't) |
| KIP-429 - Consumer: incremental rebalance protocol | 2.4.0 | Supported |
| KIP-430 - AdminAPI: return authorized operations in Describe.. responses | 2.3.0 | Supported |
| KIP-436 - Start time in stats | 2.3.0 | Supported |
| KIP-447 - Producer scalability for EOS | 2.5.0 | Supported |
| KIP-455 - AdminAPI: Replica assignment | 2.4.0 (WIP) | Not supported |
| KIP-460 - AdminAPI: electLeaders | 2.6.0 | Supported |
| KIP-464 - AdminAPI: defaults for createTopics | 2.4.0 | Supported |
| KIP-467 - Per-message (sort of) error codes in ProduceResponse | 2.4.0 | Supported |
| KIP-480 - Sticky partitioner | 2.4.0 | Supported |
| KIP-482 - Optional fields in Kafka protocol | 2.4.0 | Partially supported (ApiVersionRequest) |
| KIP-496 - AdminAPI: delete offsets | 2.4.0 | Supported |
| KIP-511 - Collect Client's Name and Version | 2.4.0 | Supported |
| KIP-514 - Bounded flush() | 2.4.0 | Supported |
| KIP-516 - Topic Identifiers | 2.8.0 (WIP) | Partially Supported |
| KIP-517 - Consumer poll() metrics | 2.4.0 | Not supported |
| KIP-518 - Allow listing consumer groups per state | 2.6.0 | Supported |
| KIP-519 - Make SSL engine configurable | 2.6.0 | Supported |
| KIP-525 - Return topic metadata and configs in CreateTopics response | 2.4.0 | Not supported |
| KIP-526 - Reduce Producer Metadata Lookups for Large Number of Topics | 2.5.0 | Not supported |
| KIP-533 - Add default API timeout to AdminClient | 2.5.0 | Not supported |
| KIP-546 - Add Client Quota APIs to AdminClient | 2.6.0 | Not supported |
| KIP-554 - Add Broker-side SCRAM Config API | 2.7.0 | Supported |
| KIP-559 - Make the Kafka Protocol Friendlier with L7 Proxies | 2.5.0 | Not supported |
| KIP-568 - Explicit rebalance triggering on the Consumer | 2.6.0 | Not supported |
| KIP-659 - Add metadata to DescribeConfigsResponse | 2.6.0 | Not supported |
| KIP-580 - Exponential backoff for Kafka clients | 3.7.0 | Supported |
| KIP-584 - Versioning scheme for features | WIP | Not supported |
| KIP-588 - Allow producers to recover gracefully from txn timeouts | 2.8.0 (WIP) | Not supported |
| KIP-601 - Configurable socket connection timeout | 2.7.0 | Supported |
| KIP-602 - Use all resolved addresses by default | 2.6.0 | Supported |
| KIP-651 - Support PEM format for SSL certs and keys | 2.7.0 | Supported |
| KIP-654 - Aborted txns with non-flushed msgs should not be fatal | 2.7.0 | Supported |
| KIP-714 - Client metrics and observability | 3.7.0 | Supported |
| KIP-735 - Increase default consumer session timeout | 3.0.0 | Supported |
| KIP-768 - SASL/OAUTHBEARER OIDC support | 3.0 | Supported |
| KIP-881 - Rack-aware Partition Assignment for Kafka Consumers | 3.5.0 | Supported |
| KIP-848 - The Next Generation of the Consumer Rebalance Protocol | 4.0.0 | Supported |
| KIP-899 - Allow producer and consumer clients to rebootstrap | 3.8.0 | Supported |
| KIP-951 - Leader discovery optimisations for the client | 3.7.0 | Supported |
| KIP-1082 - Require Client-Generated IDs over the ConsumerGroupHeartbeat | 4.0.0 | Supported |
| KIP-1102 - Enable clients to rebootstrap based on timeout or error code | 4.0.0 | Supported |
| KIP-1139 - Add support for OAuth jwt-bearer grant type | 4.1.0 (WIP) | Supported |


<a name="supported-protocol-versions"></a>
### Supported protocol versions

"Kafka max" is the maximum ApiVersion supported in Apache Kafka 4.0.0, while
"librdkafka max" is the maximum ApiVersion supported in the latest
release of librdkafka.

| ApiKey | Request name                 | Kafka max | librdkafka max |
| ------ | ---------------------------- | --------- | -------------- |
| 0      | Produce                      | 12        | 10             |
| 1      | Fetch                        | 17        | 16             |
| 2      | ListOffsets                  | 10        | 7              |
| 3      | Metadata                     | 13        | 13             |
| 8      | OffsetCommit                 | 9         | 9              |
| 9      | OffsetFetch                  | 9         | 9              |
| 10     | FindCoordinator              | 6         | 2              |
| 11     | JoinGroup                    | 9         | 5              |
| 12     | Heartbeat                    | 4         | 3              |
| 13     | LeaveGroup                   | 5         | 1              |
| 14     | SyncGroup                    | 5         | 3              |
| 15     | DescribeGroups               | 6         | 4              |
| 16     | ListGroups                   | 5         | 4              |
| 17     | SaslHandshake                | 1         | 1              |
| 18     | ApiVersions                  | 4         | 3              |
| 19     | CreateTopics                 | 7         | 4              |
| 20     | DeleteTopics                 | 6         | 1              |
| 21     | DeleteRecords                | 2         | 1              |
| 22     | InitProducerId               | 5         | 4              |
| 23     | OffsetForLeaderEpoch         | 4         | 2              |
| 24     | AddPartitionsToTxn           | 5         | 0              |
| 25     | AddOffsetsToTxn              | 4         | 0              |
| 26     | EndTxn                       | 5         | 1              |
| 28     | TxnOffsetCommit              | 5         | 3              |
| 29     | DescribeAcls                 | 3         | 1              |
| 30     | CreateAcls                   | 3         | 1              |
| 31     | DeleteAcls                   | 3         | 1              |
| 32     | DescribeConfigs              | 4         | 1              |
| 33     | AlterConfigs                 | 2         | 2              |
| 36     | SaslAuthenticate             | 2         | 1              |
| 37     | CreatePartitions             | 3         | 0              |
| 42     | DeleteGroups                 | 2         | 1              |
| 43     | ElectLeaders                 | 2         | 2              |
| 44     | IncrementalAlterConfigs      | 1         | 1              |
| 47     | OffsetDelete                 | 0         | 0              |
| 50     | DescribeUserScramCredentials | 0         | 0              |
| 51     | AlterUserScramCredentials    | 0         | 0              |
| 68     | ConsumerGroupHeartbeat       | 1         | 1              |
| 69     | ConsumerGroupDescribe        | 1         | 0              |
| 71     | GetTelemetrySubscriptions    | 0         | 0              |
| 72     | PushTelemetry                | 0         | 0              |

<a name="recommendations-for-language-binding-developers"></a>
# Recommendations for language binding developers

These recommendations are targeted at developers who wrap librdkafka
in their high-level languages, such as confluent-kafka-go or node-rdkafka.

<a name="expose-the-configuration-interface-pass-thru"></a>
## Expose the configuration interface pass-thru

librdkafka's string-based key=value configuration property interface controls
most runtime behaviour and evolves over time.
Many features are also configuration-only, meaning they do not require a
new API (SSL and SASL are two good examples, both enabled purely through
configuration properties) and thus require no changes to the binding or
application code.
If your language binding or application allows configuration properties to be
set in a pass-through fashion, without any pre-checking by your binding code,
then a simple upgrade of the underlying librdkafka library (but not of your
bindings) will provide new features to the user.
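As an illustration, here is a minimal sketch of such a pass-through layer (hypothetical binding code, not the actual node-rdkafka or confluent-kafka-go implementation); `createNativeConf` merely simulates the native `rd_kafka_conf_set()` call a real binding would make:

```javascript
// Hypothetical pass-through configuration layer. The binding does not
// validate property names; every key=value pair is forwarded verbatim,
// so properties added in newer librdkafka releases work without any
// binding changes.

// Stand-in for the native conf object; a real binding would call
// rd_kafka_conf_set() here instead of storing into a Map.
function createNativeConf() {
  const props = new Map();
  return {
    set(name, value) {
      props.set(String(name), String(value));
    },
    get(name) {
      return props.get(name);
    },
  };
}

// Forward all user-supplied properties without pre-checking them.
function applyUserConfig(nativeConf, userConfig) {
  for (const [name, value] of Object.entries(userConfig)) {
    nativeConf.set(name, value); // no allow-list: pure pass-through
  }
  return nativeConf;
}

const conf = applyUserConfig(createNativeConf(), {
  'bootstrap.servers': 'localhost:9092',
  'security.protocol': 'sasl_ssl',     // SASL/SSL enabled purely via config
  'sasl.mechanism': 'SCRAM-SHA-256',
});
```

Because no allow-list exists in the binding, a property introduced by a future librdkafka release flows through unchanged.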
+
2437
+ <a name="error-constants"></a>
2438
+ ## Error constants
2439
+
2440
+ The error constants, both the official (value >= 0) errors as well as the
2441
+ internal (value < 0) errors, evolve constantly.
2442
+ To avoid hard-coding them to expose to your users, librdkafka provides an API
2443
+ to extract the full list programmatically during runtime or for
2444
+ code generation, see `rd_kafka_get_err_descs()`.
2445
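As a sketch (hypothetical binding code, assuming the descriptor list has already been extracted from the native layer via `rd_kafka_get_err_descs()`), a binding might build its error constants at startup rather than hard-coding each one:

```javascript
// Hypothetical sketch: turn the error descriptor list into a frozen
// constants object at runtime. The array below simulates a few entries
// of what rd_kafka_get_err_descs() returns; a real binding would obtain
// the full list from the native layer. Negative codes are
// librdkafka-internal errors, codes >= 0 are official Kafka errors.
const errDescs = [
  { code: -185, name: 'TIMED_OUT', desc: 'Operation timed out' },
  { code: 0, name: 'NO_ERROR', desc: 'Success' },
  { code: 3, name: 'UNKNOWN_TOPIC_OR_PART', desc: 'Unknown topic or partition' },
];

// Build an immutable ERR_* constants map from the descriptors, so the
// binding automatically picks up errors added in newer librdkafka.
function buildErrorConstants(descs) {
  const constants = {};
  for (const { code, name } of descs) {
    constants['ERR_' + name] = code;
  }
  return Object.freeze(constants);
}

const ErrorCode = buildErrorConstants(errDescs);
```

The same loop can equally drive one-time code generation instead of a runtime lookup, whichever fits the binding's build process.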
+
2446
+ <a name="reporting-client-software-name-and-version-to-broker"></a>
2447
+ ## Reporting client software name and version to broker
2448
+
2449
+ [KIP-511](https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers) introduces a means for a
2450
+ Kafka client to report its implementation name and version to the broker, the
2451
+ broker then exposes this as metrics (e.g., through JMX) to help Kafka operators
2452
+ troubleshoot problematic clients, understand the impact of broker and client
2453
+ upgrades, etc.
2454
+ This requires broker version 2.4.0 or later (metrics added in 2.5.0).
2455
+
2456
+ librdkafka will send its name (`librdkafka`) and version (e.g., `v1.3.0`)
2457
+ upon connect to a supporting broker.
2458
+ To help distinguish high-level client bindings on top of librdkafka, a client
2459
+ binding should configure the following two properties:
2460
+ * `client.software.name` - set to the binding name, e.g,
2461
+ `confluent-kafka-go` or `node-rdkafka`.
2462
+ * `client.software.version` - the version of the binding and the version
2463
+ of librdkafka, e.g., `v1.3.0-librdkafka-v1.3.0` or
2464
+ `1.2.0-librdkafka-v1.3.0`.
2465
+ It is **highly recommended** to include the librdkafka version in this
2466
+ version string.
2467
+
2468
+ These configuration properties are hidden (from CONFIGURATION.md et.al.) as
2469
+ they should typically not be modified by the user.
2470
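The two properties above can be composed in one place in the binding; a minimal sketch (the function name and parameters are hypothetical, only the `<binding-version>-librdkafka-<librdkafka-version>` composition follows the recommendation above):

```javascript
// Hypothetical helper that composes the two client-identification
// properties a binding should set, embedding the librdkafka version in
// client.software.version as recommended.
function clientSoftwareProps(bindingName, bindingVersion, librdkafkaVersion) {
  return {
    'client.software.name': bindingName,
    'client.software.version': `${bindingVersion}-librdkafka-${librdkafkaVersion}`,
  };
}

const props = clientSoftwareProps('node-rdkafka', 'v1.2.0', 'v1.3.0');
// props['client.software.version'] === 'v1.2.0-librdkafka-v1.3.0'
```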
+
2471
+ <a name="documentation-reuse"></a>
2472
+ ## Documentation reuse
2473
+
2474
+ You are free to reuse the librdkafka API and CONFIGURATION documentation in
2475
+ your project, but please do return any documentation improvements back to
2476
+ librdkafka (file a github pull request).
2477
+
2478
+ <a name="community-support"></a>
2479
+ ## Community support
2480
+
2481
+ Community support is offered through GitHub Issues and Discussions.