karafka 0.5.0.3 → 0.6.0.rc1

Files changed (76)
  1. checksums.yaml +4 -4
  2. data/.console_irbrc +13 -0
  3. data/.github/ISSUE_TEMPLATE.md +2 -0
  4. data/.gitignore +1 -0
  5. data/CHANGELOG.md +59 -1
  6. data/CODE_OF_CONDUCT.md +46 -0
  7. data/CONTRIBUTING.md +67 -0
  8. data/Gemfile +2 -1
  9. data/Gemfile.lock +46 -147
  10. data/README.md +51 -952
  11. data/Rakefile +5 -14
  12. data/karafka.gemspec +19 -13
  13. data/lib/karafka.rb +7 -4
  14. data/lib/karafka/app.rb +10 -6
  15. data/lib/karafka/attributes_map.rb +67 -0
  16. data/lib/karafka/base_controller.rb +42 -52
  17. data/lib/karafka/base_responder.rb +30 -14
  18. data/lib/karafka/base_worker.rb +11 -26
  19. data/lib/karafka/cli.rb +2 -0
  20. data/lib/karafka/cli/base.rb +2 -0
  21. data/lib/karafka/cli/console.rb +7 -1
  22. data/lib/karafka/cli/flow.rb +13 -13
  23. data/lib/karafka/cli/info.rb +7 -4
  24. data/lib/karafka/cli/install.rb +4 -3
  25. data/lib/karafka/cli/server.rb +3 -1
  26. data/lib/karafka/cli/worker.rb +2 -0
  27. data/lib/karafka/connection/config_adapter.rb +103 -0
  28. data/lib/karafka/connection/listener.rb +16 -12
  29. data/lib/karafka/connection/messages_consumer.rb +86 -0
  30. data/lib/karafka/connection/messages_processor.rb +74 -0
  31. data/lib/karafka/errors.rb +15 -29
  32. data/lib/karafka/fetcher.rb +10 -8
  33. data/lib/karafka/helpers/class_matcher.rb +2 -0
  34. data/lib/karafka/helpers/config_retriever.rb +46 -0
  35. data/lib/karafka/helpers/multi_delegator.rb +2 -0
  36. data/lib/karafka/loader.rb +4 -2
  37. data/lib/karafka/logger.rb +37 -36
  38. data/lib/karafka/monitor.rb +3 -1
  39. data/lib/karafka/params/interchanger.rb +2 -0
  40. data/lib/karafka/params/params.rb +34 -41
  41. data/lib/karafka/params/params_batch.rb +46 -0
  42. data/lib/karafka/parsers/json.rb +4 -2
  43. data/lib/karafka/patches/dry_configurable.rb +2 -0
  44. data/lib/karafka/process.rb +4 -2
  45. data/lib/karafka/responders/builder.rb +2 -0
  46. data/lib/karafka/responders/topic.rb +14 -6
  47. data/lib/karafka/routing/builder.rb +22 -59
  48. data/lib/karafka/routing/consumer_group.rb +54 -0
  49. data/lib/karafka/routing/mapper.rb +2 -0
  50. data/lib/karafka/routing/proxy.rb +37 -0
  51. data/lib/karafka/routing/router.rb +18 -16
  52. data/lib/karafka/routing/topic.rb +78 -0
  53. data/lib/karafka/schemas/config.rb +36 -0
  54. data/lib/karafka/schemas/consumer_group.rb +56 -0
  55. data/lib/karafka/schemas/responder_usage.rb +38 -0
  56. data/lib/karafka/server.rb +5 -3
  57. data/lib/karafka/setup/config.rb +79 -32
  58. data/lib/karafka/setup/configurators/base.rb +2 -0
  59. data/lib/karafka/setup/configurators/celluloid.rb +2 -0
  60. data/lib/karafka/setup/configurators/sidekiq.rb +2 -0
  61. data/lib/karafka/setup/configurators/water_drop.rb +15 -3
  62. data/lib/karafka/status.rb +2 -0
  63. data/lib/karafka/templates/app.rb.example +15 -5
  64. data/lib/karafka/templates/application_worker.rb.example +0 -6
  65. data/lib/karafka/version.rb +2 -1
  66. data/lib/karafka/workers/builder.rb +2 -0
  67. metadata +109 -60
  68. data/lib/karafka/cli/routes.rb +0 -36
  69. data/lib/karafka/connection/consumer.rb +0 -33
  70. data/lib/karafka/connection/message.rb +0 -17
  71. data/lib/karafka/connection/topic_consumer.rb +0 -94
  72. data/lib/karafka/responders/usage_validator.rb +0 -60
  73. data/lib/karafka/routing/route.rb +0 -113
  74. data/lib/karafka/setup/config_schema.rb +0 -44
  75. data/lib/karafka/setup/configurators/worker_glass.rb +0 -13
  76. data/lib/karafka/templates/config.ru.example +0 -13
data/README.md CHANGED
@@ -1,997 +1,96 @@
1
- # Karafka
1
+ ![karafka logo](http://mensfeld.github.io/karafka-framework-introduction/img/karafka-04.png)
2
2
 
3
3
  [![Build Status](https://travis-ci.org/karafka/karafka.png)](https://travis-ci.org/karafka/karafka)
4
- [![Code Climate](https://codeclimate.com/github/karafka/karafka/badges/gpa.svg)](https://codeclimate.com/github/karafka/karafka)
5
- [![Join the chat at https://gitter.im/karafka/karafka](https://badges.gitter.im/karafka/karafka.svg)](https://gitter.im/karafka/karafka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
4
+ [![Backers on Open Collective](https://opencollective.com/karafka/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/karafka/sponsors/badge.svg)](#sponsors) [![Join the chat at https://gitter.im/karafka/karafka](https://badges.gitter.im/karafka/karafka.svg)](https://gitter.im/karafka/karafka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
6
5
 
7
6
  Framework used to simplify Apache Kafka based Ruby applications development.
8
7
 
9
- It allows programmers to use an approach similar to "the Rails way" when working with asynchronous Kafka messages.
8
+ It allows programmers to use an approach similar to standard HTTP conventions (```params``` and ```params_batch```) when working with asynchronous Kafka messages.
10
9
 
11
10
  Karafka not only handles incoming messages but also provides tools for building complex data-flow applications that receive and send messages.
12
11
 
13
- ## Table of Contents
14
-
15
- - [Table of Contents](#table-of-contents)
16
- - [Support](#support)
17
- - [Requirements](#requirements)
18
- - [How does it work](#how-does-it-work)
19
- - [Installation](#installation)
20
- - [Setup](#setup)
21
- - [Application](#application)
22
- - [Configurators](#configurators)
23
- - [Environment variables settings](#environment-variables-settings)
24
- - [Kafka brokers auto-discovery](#kafka-brokers-auto-discovery)
25
- - [Topic mappers](#topic-mappers)
26
- - [Usage](#usage)
27
- - [Karafka CLI](#karafka-cli)
28
- - [Routing](#routing)
29
- - [Topic](#topic)
30
- - [Group](#group)
31
- - [Worker](#worker)
32
- - [Parser](#parser)
33
- - [Interchanger](#interchanger)
34
- - [Responder](#responder)
35
- - [Inline mode flag](#inline-mode-flag)
36
- - [Batch mode flag](#batch-mode-flag)
37
- - [Receiving messages](#receiving-messages)
38
- - [Processing messages directly (without Sidekiq)](#processing-messages-directly-without-sidekiq)
39
- - [Sending messages from Karafka](#sending-messages-from-karafka)
40
- - [Using responders (recommended)](#using-responders-recommended)
41
- - [Using WaterDrop directly](#using-waterdrop-directly)
42
- - [Important components](#important-components)
43
- - [Controllers](#controllers)
44
- - [Controllers callbacks](#controllers-callbacks)
45
- - [Dynamic worker selection](#dynamic-worker-selection)
46
- - [Responders](#responders)
47
- - [Registering topics](#registering-topics)
48
- - [Responding on topics](#responding-on-topics)
49
- - [Response validation](#response-validation)
50
- - [Response partitioning](#response-partitioning)
51
- - [Monitoring and logging](#monitoring-and-logging)
52
- - [Example monitor with Errbit/Airbrake support](#example-monitor-with-errbitairbrake-support)
53
- - [Example monitor with NewRelic support](#example-monitor-with-newrelic-support)
54
- - [Deployment](#deployment)
55
- - [Capistrano](#capistrano)
56
- - [Docker](#docker)
57
- - [Heroku](#heroku)
58
- - [Sidekiq Web UI](#sidekiq-web-ui)
59
- - [Concurrency](#concurrency)
60
- - [Integrating with other frameworks](#integrating-with-other-frameworks)
61
- - [Integrating with Ruby on Rails](#integrating-with-ruby-on-rails)
62
- - [Integrating with Sinatra](#integrating-with-sinatra)
63
- - [Articles and other references](#articles-and-other-references)
64
- - [Libraries and components](#libraries-and-components)
65
- - [Articles and references](#articles-and-references)
66
- - [Note on Patches/Pull Requests](#note-on-patchespull-requests)
67
-
68
12
  ## How does it work
69
13
 
70
- Karafka provides a higher-level abstraction than raw Kafka Ruby drivers, such as Kafka-Ruby and Poseidon. Instead of focusing on single topic consumption, it provides developers with a set of tools that are dedicated to building multi-topic applications, similar to how Rails applications are built.
71
-
72
- ## Support
73
-
74
- If you have any questions about using Karafka, feel free to join our [Gitter](https://gitter.im/karafka/karafka) chat channel.
75
-
76
- ## Requirements
77
-
78
- In order to use the Karafka framework, you need to have:
79
-
80
- - Zookeeper (required by Kafka)
81
- - Kafka (at least 0.9.0)
82
- - Ruby (at least 2.3.0)
83
-
84
- ## Installation
85
-
86
- Karafka does not have a full installation shell command. In order to install it, please follow the steps below:
87
-
88
- Create a directory for your project:
89
-
90
- ```bash
91
- mkdir app_dir
92
- cd app_dir
93
- ```
94
-
95
- Create a **Gemfile** with Karafka:
96
-
97
- ```ruby
98
- source 'https://rubygems.org'
99
-
100
- gem 'karafka'
101
- ```
102
-
103
- and run Karafka install CLI task:
104
-
105
- ```
106
- bundle exec karafka install
107
- ```
108
-
109
- ## Setup
110
-
111
- ### Application
112
- Karafka has the following configuration options:
113
-
114
- | Option | Required | Value type | Description |
115
- |-------------------------------|----------|-------------------|------------------------------------------------------------------------------------------------------------|
116
- | name | true | String | Application name |
117
- | topic_mapper | false | Class/Module | Mapper for hiding Kafka provider specific topic prefixes/postfixes, so internally we use "pure" topics |
118
- | redis | false | Hash | Hash with Redis configuration options. It is required if inline_mode is off. |
119
- | inline_mode | false | Boolean | Do we want to perform logic without enqueuing it with Sidekiq (directly and asap) |
120
- | batch_mode | false | Boolean | Should the incoming messages be consumed in batches, or one at a time |
121
- | start_from_beginning | false | Boolean | Consume messages starting at the beginning or consume new messages that are produced at first run |
122
- | monitor | false | Object | Monitor instance (defaults to Karafka::Monitor) |
123
- | logger | false | Object | Logger instance (defaults to Karafka::Logger) |
124
- | kafka.hosts | true | Array<String> | Kafka server hosts. If only one is provided, Karafka will discover the cluster structure automatically |
125
- | kafka.session_timeout | false | Integer | The number of seconds after which, if a consumer hasn't contacted the Kafka cluster, it will be kicked out |
126
- | kafka.offset_commit_interval | false | Integer | The interval between offset commits in seconds |
127
- | kafka.offset_commit_threshold | false | Integer | The number of messages that can be processed before their offsets are committed |
128
- | kafka.heartbeat_interval | false | Integer | The interval between heartbeats |
129
- | kafka.ssl.ca_cert | false | String | SSL CA certificate |
130
- | kafka.ssl.client_cert | false | String | SSL client certificate |
131
- | kafka.ssl.client_cert_key | false | String | SSL client certificate key |
132
- | connection_pool.size | false | Integer | Connection pool size for message producers connection pool |
133
- | connection_pool.timeout | false | Integer | Connection pool timeout for message producers connection pool |
134
-
135
- To apply this configuration, you need to use a *setup* method from the Karafka::App class (app.rb):
136
-
137
- ```ruby
138
- class App < Karafka::App
139
- setup do |config|
140
- config.kafka.hosts = %w( 127.0.0.1:9092 )
141
- config.inline_mode = false
142
- config.batch_mode = false
143
- config.redis = {
144
- url: 'redis://redis.example.com:7372/1'
145
- }
146
- config.name = 'my_application'
147
- config.logger = MyCustomLogger.new # not required
148
- end
149
- end
150
- ```
151
-
152
- Note: You can use any library like [Settingslogic](https://github.com/binarylogic/settingslogic) to handle your application configuration.
153
-
154
- ### Configurators
155
-
156
- For additional setup and/or configuration tasks you can create custom configurators. Similar to Rails, these are added to a `config/initializers` directory and run after app initialization.
157
-
158
- Your new configurator class must inherit from `Karafka::Setup::Configurators::Base` and implement a `setup` method.
159
-
160
- Example configuration class:
161
-
162
- ```ruby
163
- class ExampleConfigurator < Karafka::Setup::Configurators::Base
164
- def setup
165
- ExampleClass.logger = Karafka.logger
166
- ExampleClass.redis = config.redis
167
- end
168
- end
169
- ```
170
-
171
- ### Environment variables settings
172
-
173
- There are several env settings you can use:
174
-
175
- | ENV name | Default | Description |
176
- |-------------------|-----------------|-------------------------------------------------------------------------------|
177
- | KARAFKA_ENV | development | In what mode this application should boot (production/development/test/etc) |
178
- | KARAFKA_BOOT_FILE | app_root/app.rb | Path to a file that contains Karafka app configuration and booting procedures |
179
- | KARAFKA_ROOT_DIR | Gemfile location| Path to Karafka's root directory |
180
-
181
- ### Kafka brokers auto-discovery
182
-
183
- Karafka supports Kafka brokers auto-discovery during startup and on failures. You need to provide at least one Kafka broker, from which the entire Kafka cluster will be discovered. Karafka will refresh the list of available brokers if something goes wrong. This allows it to be aware of changes that happen in the infrastructure (adding and removing nodes).
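For illustration, pointing **kafka.hosts** at a single seed broker is enough for the whole cluster to be discovered (the broker address below is only an example):

```ruby
class App < Karafka::App
  setup do |config|
    # A single seed broker is enough - the rest of the cluster
    # is discovered from it automatically
    config.kafka.hosts = %w( kafka1.example.com:9092 )
  end
end
```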
184
-
185
- ### Topic mappers
186
-
187
- Some Kafka cloud providers require topics to be namespaced with a user name. This approach is understandable, but at the same time, it makes your applications less provider agnostic. To address that issue, you can create your own topic mapper that will sanitize incoming/outgoing topic names, so your logic won't be bound to those specific versions of topic names.
188
-
189
- A mapper needs to implement the following two methods:
190
-
191
- - ```#incoming``` - accepts an incoming "namespace dirty" version of the topic. Needs to return a sanitized topic.
192
- - ```#outgoing``` - accepts an outgoing sanitized topic version. Needs to return a namespaced one.
193
-
194
- Given each of the topics needs to have a "karafka." prefix, your mapper could look like this:
195
-
196
- ```ruby
197
- class KarafkaTopicMapper
198
- def initialize(prefix)
199
- @prefix = prefix
200
- end
201
-
202
- def incoming(topic)
203
- topic.to_s.gsub("#{@prefix}.", '')
204
- end
205
-
206
- def outgoing(topic)
207
- "#{@prefix}.#{topic}"
208
- end
209
- end
210
-
211
- mapper = KarafkaTopicMapper.new('karafka')
212
- mapper.incoming('karafka.my_super_topic') #=> 'my_super_topic'
213
- mapper.outgoing('my_other_topic') #=> 'karafka.my_other_topic'
214
- ```
215
-
216
- To use a custom mapper, just assign it during application configuration:
217
-
218
- ```ruby
219
- class App < Karafka::App
220
- setup do |config|
221
- # Other settings
222
- config.topic_mapper = MyCustomMapper.new('username')
223
- end
224
- end
225
- ```
226
-
227
- The topic mapper automatically integrates with both the messages consumer and responders.
228
-
229
- ## Usage
230
-
231
- ### Karafka CLI
232
-
233
- Karafka has a simple CLI built in. It provides the following commands:
234
-
235
- | Command | Description |
236
- |----------------|---------------------------------------------------------------------------|
237
- | help [COMMAND] | Describe available commands or one specific command |
238
- | console | Start the Karafka console (short-cut alias: "c") |
239
- | flow | Print application data flow (incoming => outgoing) |
240
- | info | Print configuration details and other options of your application |
241
- | install | Installs all required files for a Karafka application in the current directory |
242
- | routes | Print out all defined routes in alphabetical order |
243
- | server | Start the Karafka server (short-cut alias: "s") |
244
- | worker | Start the Karafka Sidekiq worker (short-cut alias: "w") |
245
-
246
- All the commands are executed the same way:
247
-
248
- ```
249
- bundle exec karafka [COMMAND]
250
- ```
251
-
252
- If you need more details about each of the CLI commands, you can execute the following command:
253
-
254
- ```
255
- bundle exec karafka help [COMMAND]
256
- ```
257
-
258
- ### Routing
259
-
260
- The routing engine provides an interface to describe how messages from all the topics should be handled. To start using it, just use the *draw* method on routes:
261
-
262
- ```ruby
263
- App.routes.draw do
264
- topic :example do
265
- controller ExampleController
266
- end
267
- end
268
- ```
269
-
270
- The basic route description requires providing a *topic* and a *controller* that should handle it (Karafka will create a separate controller instance for each request).
271
-
272
- There are also several other methods available (optional):
273
-
274
- - *group* - symbol/string with a group name. Groups are used to cluster applications
275
- - *worker* - Class name - name of a worker class that we want to use to schedule perform code
276
- - *parser* - Class name - name of a parser class that we want to use to parse incoming data
277
- - *interchanger* - Class name - name of an interchanger class that we want to use to format data that we put/fetch into/from *#perform_async*
278
- - *responder* - Class name - name of a responder that we want to use to generate responses to other Kafka topics based on our processed data
279
- - *inline_mode* - Boolean - Do we want to perform logic without enqueuing it with Sidekiq (directly and asap) - overwrites global app setting
280
- - *batch_mode* - Boolean - Handle the incoming messages in batch, or one at a time - overwrites global app setting
281
-
282
- ```ruby
283
- App.routes.draw do
284
- topic :binary_video_details do
285
- group :composed_application
286
- controller Videos::DetailsController
287
- worker Workers::DetailsWorker
288
- parser Parsers::BinaryToJson
289
- interchanger Interchangers::Binary
290
- responder BinaryVideoProcessingResponder
291
- inline_mode true
292
- batch_mode true
293
- end
294
-
295
- topic :new_videos do
296
- controller Videos::NewVideosController
297
- end
298
- end
299
- ```
300
-
301
- See description below for more details on each of them.
302
-
303
- ##### Topic
304
-
305
- - *topic* - symbol/string with a topic that we want to route
306
-
307
- ```ruby
308
- topic :incoming_messages do
309
- # Details about how to handle this topic should go here
310
- end
311
- ```
312
-
313
- Topic is the root point of each route. Keep in mind that:
314
-
315
- - All topic names must be unique in a single Karafka application
316
- - Topic names are validated because Kafka does not accept some characters
317
- - If you don't specify a group, it will be built based on the topic and application name
318
-
319
- ##### Group
320
-
321
- - *group* - symbol/string with a group name. Groups are used to cluster applications
322
-
323
- Optionally, you can use the **group** method to define a group for this topic. Use it if you want to build many applications that will share the same Kafka group. Otherwise Karafka will just build it based on the **topic** and application name. If you're not planning to build applications that will load-balance messages between many different applications (but rather between many processes of one application), you may prefer not to define it and allow the framework to define it for you.
324
-
325
- ```ruby
326
- topic :incoming_messages do
327
- group :load_balanced_group
328
- controller MessagesController
329
- end
330
- ```
331
-
332
- Note that a single group can be used only in a single topic.
333
-
334
- ##### Worker
335
-
336
- - *worker* - Class name - name of a worker class that we want to use to schedule perform code
337
-
338
- By default, Karafka will build a worker corresponding to each of your controllers (so you will have a controller-worker pair). All of them will inherit from **ApplicationWorker** and will share all its settings.
339
-
340
- To run Sidekiq you should have a sidekiq.yml file in the *config* folder. An example sidekiq.yml file will be generated at config/sidekiq.yml.example once you run **bundle exec karafka install**.
341
-
342
- However, if you want to use a raw Sidekiq worker (without any Karafka additional magic), or you want to use SidekiqPro (or any other queuing engine that has the same API as Sidekiq), you can assign your own custom worker:
343
-
344
- ```ruby
345
- topic :incoming_messages do
346
- controller MessagesController
347
- worker MyCustomWorker
348
- end
349
- ```
350
-
351
- Note that even then, you need to specify a controller that will schedule a background task.
352
-
353
- Custom workers need to provide a **#perform_async** method. It needs to accept two arguments:
354
-
355
- - *topic* - first argument is a current topic from which a given message comes
356
- - *params* - all the params that came from Kafka + additional metadata. This data format might be changed if you use custom interchangers. Otherwise it will be an instance of Karafka::Params::Params.
357
-
358
- Keep in mind that params might be in two states when passed to #perform_async: parsed or unparsed. This means that if you use custom interchangers and/or custom workers, you might want to look into Karafka's sources to see exactly how it works.
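For illustration, a plain Sidekiq worker already satisfies this contract, since including Sidekiq::Worker provides a **.perform_async** class method (the class name and body below are only a sketch):

```ruby
class MyCustomWorker
  include Sidekiq::Worker

  # topic  - name of the topic the message came from
  # params - params that came from Kafka (possibly still unparsed - see the note above)
  def perform(topic, params)
    # Your background processing logic goes here
  end
end
```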
359
-
360
- ##### Parser
361
-
362
- - *parser* - Class name - name of a parser class that we want to use to serialize and deserialize incoming and outgoing data.
363
-
364
- By default, Karafka will parse messages with a JSON parser. If you want to change this behaviour, you need to set a custom parser for each route. A parser needs to have the following class methods:
365
-
366
- - *parse* - method used to parse incoming string into an object/hash
367
- - *generate* - method used in responders in order to convert objects into strings that have desired format
368
-
369
- and raise an error that is a ::Karafka::Errors::ParserError descendant when a problem appears during the parsing process.
370
-
371
- ```ruby
372
- class XmlParser
373
- class ParserError < ::Karafka::Errors::ParserError; end
374
-
375
- def self.parse(message)
376
- Hash.from_xml(message)
377
- rescue REXML::ParseException
378
- raise ParserError
379
- end
380
-
381
- def self.generate(object)
382
- object.to_xml
383
- end
384
- end
385
-
386
- App.routes.draw do
387
- topic :binary_video_details do
388
- controller Videos::DetailsController
389
- parser XmlParser
390
- end
391
- end
392
- ```
393
-
394
- Note that a parsing failure won't stop the application flow. Instead, Karafka will assign the raw message inside the :message key of params. That way you can handle the raw message inside the Sidekiq worker (you can implement error detection, etc. - any "heavy" parsing logic can and should be implemented there).
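For illustration, a **#perform** method could guard against unparsed payloads itself; the controller name and helper methods below are hypothetical:

```ruby
class VideosController < ApplicationController
  def perform
    if params[:message]
      # Parsing failed upstream - params[:message] holds the raw payload
      handle_raw_payload(params[:message])
    else
      process_parsed_data(params)
    end
  end
end
```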
395
-
396
- ##### Interchanger
397
-
398
- - *interchanger* - Class name - name of an interchanger class that we want to use to format data that we put/fetch into/from #perform_async.
399
-
400
- Custom interchangers target issues with non-standard (binary, etc.) data that we want to store when we do #perform_async. This data might be corrupted when fetched in a worker (see [this](https://github.com/karafka/karafka/issues/30) issue). With custom interchangers, you can encode/compress data before it is being passed to scheduling and decode/decompress it when it gets into the worker.
401
-
402
- **Warning**: if you decide to use slow interchangers, they might significantly slow down Karafka.
403
-
404
- ```ruby
405
- class Base64Interchanger
406
- class << self
407
- def load(params)
408
- Base64.encode64(Marshal.dump(params))
409
- end
410
-
411
- def parse(params)
412
- Marshal.load(Base64.decode64(params))
413
- end
414
- end
415
- end
416
-
417
- topic :binary_video_details do
418
- controller Videos::DetailsController
419
- interchanger Base64Interchanger
420
- end
421
- ```
422
-
423
- ##### Responder
424
-
425
- - *responder* - Class name - name of a responder that we want to use to generate responses to other Kafka topics based on our processed data.
426
-
427
- Responders are used to design the response that should be generated and sent to the proper Kafka topics once processing is done. They allow programmers to build not only data-consuming apps, but also apps that consume data and then, based on the business logic output, send the processed data onwards (similar to how Bash pipelines work).
428
-
429
- ```ruby
430
- class Responder < ApplicationResponder
431
- topic :users_created
432
- topic :profiles_created
433
-
434
- def respond(user, profile)
435
- respond_to :users_created, user
436
- respond_to :profiles_created, profile
437
- end
438
- end
439
- ```
440
-
441
- For more details about responders, please go to the [using responders](#using-responders-recommended) section.
442
-
443
- ##### Inline mode flag
444
-
445
- The inline mode flag allows you to disable Sidekiq usage by performing your #perform method business logic in the main Karafka server process.
446
-
447
- This flag can be useful when you want to:
448
-
449
- - process messages one by one in a single flow
450
- - process messages as soon as possible (without Sidekiq delay)
451
-
452
- Note: Keep in mind that by using this, you can significantly slow down Karafka. You also lose all the advantages of Sidekiq processing (reentrancy, retries, etc.).
453
-
454
- ##### Batch mode flag
455
-
456
- Batch mode allows you to increase the overall throughput of your Kafka consumer by handling incoming messages in batches, instead of one at a time.
457
-
458
- Note: The downside of increasing throughput is a slight increase in latency. Also keep in mind that the client commits the offset of the batch's messages only **after** the entire batch has been scheduled into Sidekiq (or processed, in the case of inline mode).
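For illustration, batch mode can be switched on globally during setup (or per route, as shown in the routing example above); a minimal sketch:

```ruby
class App < Karafka::App
  setup do |config|
    # Handle incoming messages in batches instead of one at a time
    config.batch_mode = true
    # Rest of the config
  end
end
```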
459
-
460
- ### Receiving messages
461
-
462
- The Karafka framework has a long-running server process that is responsible for receiving messages.
463
-
464
- To start Karafka server process, use the following CLI command:
465
-
466
- ```bash
467
- bundle exec karafka server
468
- ```
469
-
470
- Karafka server can be daemonized with the **--daemon** flag:
471
-
472
- ```
473
- bundle exec karafka server --daemon
474
- ```
475
-
476
- #### Processing messages directly (without Sidekiq)
477
-
478
- If you don't want to use Sidekiq for processing and you would rather process messages directly in the main Karafka server process, you can do that by setting the *inline_mode* flag either at the app level:
479
-
480
- ```ruby
481
- class App < Karafka::App
482
- setup do |config|
483
- config.inline_mode = true
484
- # Rest of the config
485
- end
486
- end
487
- ```
488
-
489
- or per route (when you want to treat some routes in a different way):
490
-
491
- ```ruby
492
- App.routes.draw do
493
- topic :binary_video_details do
494
- controller Videos::DetailsController
495
- inline_mode true
496
- end
497
- end
498
- ```
499
-
500
- Note: it can slow Karafka down significantly if you perform heavy processing that way.
501
-
502
- ### Sending messages from Karafka
14
+ Karafka provides a higher-level abstraction that allows you to focus on your business logic development, instead of focusing on implementing lower-level abstraction layers. It provides developers with a set of tools that are dedicated to building multi-topic applications, similar to how Rails applications are built.
503
15
 
504
- It's quite common when using Kafka to treat applications as parts of a bigger pipeline (similarly to a Bash pipeline) and forward processing results to other applications. Karafka provides two ways of dealing with that:
16
+ Karafka based applications can be easily deployed to any type of infrastructure, including those based on:
505
17
 
506
- - Using responders
507
- - Using Waterdrop directly
18
+ * Heroku
19
+ * Capistrano
20
+ * Docker
508
21
 
509
- Each of them has its own advantages and disadvantages, and which one is better strongly depends on your application's business logic. The recommended (and far more elegant) way is to use responders.
510
-
511
- #### Using responders (recommended)
512
-
513
- One of the main differences when you respond to a Kafka message instead of sending an HTTP response is that the response can be sent to many topics (instead of one HTTP response per request) and that the data being sent can be different for different topics. That's why a simple **respond_to** would not be enough.
514
-
515
- In order to go beyond this limitation, Karafka uses responder objects that are responsible for sending data to other Kafka topics.
516
-
517
- By default, if you name a responder with the same name as a controller, it will be detected automatically:
518
-
519
- ```ruby
520
- module Users
521
- class CreateController < ApplicationController
522
- def perform
523
- # You can provide as many objects as you want to respond_with as long as a responders
524
- # #respond method accepts the same amount
525
- respond_with User.create(params[:user])
526
- end
527
- end
528
-
529
- class CreateResponder < ApplicationResponder
530
- topic :user_created
531
-
532
- def respond(user)
533
- respond_to :user_created, user
534
- end
535
- end
536
- end
537
- ```
538
-
539
- The appropriate responder will be used automatically when you invoke the **respond_with** controller method.
540
-
541
- Why did we separate the response layer from the controller layer? Because sometimes when you respond to multiple topics conditionally, that logic can be really complex and it is way better to manage and test it in isolation.
542
-
543
- For more details about responders DSL, please visit the [responders](#responders) section.
544
-
545
- #### Using WaterDrop directly
546
-
547
- It is not recommended (as it bypasses responder validations and makes it harder to track data flow), but if you want to send messages outside of Karafka responders, you can use the **waterdrop** gem directly.
548
-
549
- Example usage:
550
-
551
- ```ruby
552
- message = WaterDrop::Message.new('topic', 'message')
553
- message.send!
554
-
555
- message = WaterDrop::Message.new('topic', { user_id: 1 }.to_json)
556
- message.send!
557
- ```
558
-
559
- Please follow [WaterDrop README](https://github.com/karafka/waterdrop/blob/master/README.md) for more details on how to use it.
560
-
561
-
562
- ## Important components
563
-
564
- Apart from the internal implementation, Karafka is composed of the following components that programmers will mostly work with:
565
-
566
- - Controllers - objects that are responsible for processing incoming messages (similar to Rails controllers)
567
- - Responders - objects that are responsible for sending responses based on the processed data
568
- - Workers - objects that execute data processing using Sidekiq backend
569
-
570
- ### Controllers
571
-
572
- Controllers should inherit from **ApplicationController** (or any other controller that inherits from **Karafka::BaseController**). If you don't want to use custom workers (and except for some particular cases, you don't need to), you need to define a **#perform** method that will execute your business logic code in the background.
573
-
574
- ```ruby
575
- class UsersController < ApplicationController
576
- # Method execution will be enqueued in Sidekiq
577
- # Karafka will schedule automatically a proper job and execute this logic in the background
578
- def perform
579
- User.create(params[:user])
580
- end
581
- end
582
- ```
583
-
584
- #### Controllers callbacks
585
-
586
- You can add any number of *before_enqueue* callbacks. It can be a method or a block.
587
- before_enqueue acts in a similar way to Rails before_action so it should perform "lightweight" operations. You have access to params inside. Based on them you can define which data you want to receive and which you do not.
588
-
589
- **Warning**: keep in mind, that all *before_enqueue* blocks/methods are executed after messages are received. This is not executed in Sidekiq, but right after receiving the incoming message. This means, that if you perform "heavy duty" operations there, Karafka might slow down significantly.
590
-
591
- If any of the callbacks throws :abort, the *perform* method will not be enqueued to the worker (the execution chain will stop).
592
-
593
- Once you run a consumer, messages from the Kafka server will be sent to a proper controller (based on the topic name).
594
-
595
- The example controller presented below will accept incoming messages from a Kafka topic named :karafka_topic:
596
-
597
- ```ruby
598
- class TestController < ApplicationController
599
- # before_enqueue has access to received params.
600
- # You can modify them before enqueuing it to sidekiq.
601
- before_enqueue {
602
- params.merge!(received_time: Time.now.to_s)
603
- }
604
-
605
- before_enqueue :validate_params
606
-
607
- # Method execution will be enqueued in Sidekiq.
608
- def perform
609
- Service.new.add_to_queue(params[:message])
610
- end
611
-
612
- # Define this method if you want to use Sidekiq reentrancy.
613
- # Logic to do if Sidekiq worker fails (because of exception, timeout, etc)
614
- def after_failure
615
- Service.new.remove_from_queue(params[:message])
616
- end
617
-
618
- private
619
-
620
- # We will not enqueue to sidekiq those messages, which were sent
621
- # from sum method and return too high message for our purpose.
622
- def validate_params
623
- throw(:abort) unless params['message'].to_i > 50 && params['method'] != 'sum'
624
- end
625
- end
626
- ```
627
-
628
- #### Dynamic worker selection
629
-
630
- When you work with Karafka, you may want to schedule part of the jobs to a different worker based on the incoming params. This can be achieved by reassigning the worker in the *#before_enqueue* block:
631
-
632
- ```ruby
633
- before_enqueue do
634
- self.worker = (params[:important] ? FastWorker : SlowWorker)
635
- end
636
- ```
637
-
638
-
639
- ### Responders
640
-
641
- Responders are used to design and control the response flow that comes from a single controller action. You might be familiar with the #respond_with Rails controller method. In Karafka it is an entry point to a responder's *#respond*.
642
-
643
- Having a responders layer helps you prevent bugs when you design receive-respond applications that handle multiple incoming and outgoing topics. Responders also provide a safety layer that allows you to verify that the flow is as you intended. It will raise an exception if you didn't respond to all the topics that you wanted to respond to.
644
-
645
- Here's a simple responder example:
646
-
647
- ```ruby
648
- class ExampleResponder < ApplicationResponder
649
- topic :users_notified
650
-
651
- def respond(user)
652
- respond_to :users_notified, user
653
- end
654
- end
655
- ```
656
-
657
- When passing data back to Kafka, the responder uses the parser's **#generate** method to convert the message object to a string. It will use the parser of the route to which the current message was directed. By default it uses the Karafka::Parsers::Json parser.
658
-
659
- Note: You can use responders outside of the controllers' scope; however, it is not recommended because then they won't be listed when executing the **karafka flow** CLI command.
660
-
661
- #### Registering topics
662
-
663
- In order to keep your topics organized, before you can send data to a given topic, you need to register it. To do that, just execute the *#topic* method with a topic name and optional settings during responder initialization:
664
-
665
- ```ruby
666
- class ExampleResponder < ApplicationResponder
667
- topic :regular_topic
668
- topic :optional_topic, required: false
669
- topic :multiple_use_topic, multiple_usage: true
670
- end
671
- ```
672
-
673
- The *#topic* method accepts the following settings:
674
-
675
- | Option | Type | Default | Description |
676
- |----------------|---------|---------|------------------------------------------------------------------------------------------------------------|
677
- | required | Boolean | true | Should we raise an error when a topic was not used (if required) |
678
- | multiple_usage | Boolean | false | Should we raise an error when during a single response flow we sent more than one message to a given topic |
679
-
680
- #### Responding on topics
681
-
682
- When you receive a single HTTP request, you generate a single HTTP response. This logic does not apply to Karafka. You can respond to as many topics as you want (or to none).
683
-
684
- To handle responding, you need to define a *#respond* instance method. This method should accept the same number of arguments as passed into the *#respond_with* method.
685
-
686
- In order to send a message to a given topic, you have to use the **#respond_to** method, which accepts two arguments:
687
-
688
- - topic name (Symbol)
689
- - data you want to send (if the data is not a string, the responder will try to run the #to_json method on the incoming data)
690
-
691
- ```ruby
692
- # respond_with user, profile
693
-
694
- class ExampleResponder < ApplicationResponder
695
- topic :regular_topic
696
- topic :optional_topic, required: false
697
-
698
- def respond(user, profile)
699
- respond_to :regular_topic, user
700
-
701
- if user.registered?
702
- respond_to :optional_topic, profile
703
- end
704
- end
705
- end
706
- ```
707
-
708
- #### Response validation
709
-
710
- In order to ensure the dataflow is as intended, the responder will validate what was sent and where, making sure that:
711
-
712
- - Only topics that were registered were used (no typos, etc.)
713
- - Only a single message was sent to a topic that was registered without a **multiple_usage** flag
714
- - Any topic that was registered with **required** flag (default behavior) has been used
715
-
716
- This is an automatic process and does not require any triggers.
717
-
718
- #### Response partitioning
719
-
720
- Kafka topics are partitioned, which means that you can assign messages to partitions based on your business logic. To do so from responders, you can pass one of the following keyword arguments as the last option of the **#respond_to** method:
721
-
722
- * partition - use it when you want to send a given message to a certain partition
723
- * partition_key - use it when you want to ensure that a certain group of messages is delivered to the same partition, but you don't care which partition it will be.
724
-
725
- ```ruby
726
- class ExampleResponder < ApplicationResponder
727
- topic :regular_topic
728
- topic :different_topic
729
-
730
- def respond(user, profile)
731
- respond_to :regular_topic, user, partition: 12
732
- # This will send user details to a partition based on the first letter
733
- # of login which means that for example all users with login starting
734
- # with "a" will go to the same partition on the different_topic
735
- respond_to :different_topic, user, partition_key: user.login[0].downcase
736
- end
737
- end
738
- ```
739
-
740
- If no keys are passed, the producer will randomly assign a partition.
741
-
742
- ## Monitoring and logging
743
-
744
- Karafka provides a simple monitor (Karafka::Monitor) with a really small API. You can use it to develop your own monitoring system (using for example NewRelic). By default, the only thing that is hooked up to this monitoring is a Karafka logger (Karafka::Logger). It is based on a standard [Ruby logger](http://ruby-doc.org/stdlib-2.2.3/libdoc/logger/rdoc/Logger.html).
745
-
746
- To change the monitor or the logger, assign a new logger/monitor during setup:
747
-
748
- ```ruby
749
- class App < Karafka::App
750
- setup do |config|
751
- # Other setup stuff...
752
- config.logger = MyCustomLogger.new
753
- config.monitor = CustomMonitor.instance
754
- end
755
- end
756
- ```
757
-
758
- Keep in mind that if you replace the monitor with a custom one, you will have to implement logging as well, since the default monitor handles both monitoring and logging.
22
+ ## Support
759
23
 
760
- ### Example monitor with Errbit/Airbrake support
24
+ **Warning**: We're currently in the middle of upgrading our [Wiki pages](https://github.com/karafka/karafka/wiki) to match our newest 0.6 release and its API. If you use the 0.5 version, you might encounter some incompatibilities. We're really sorry for the inconvenience.
761
25
 
762
- Here's a simple example of a monitor that is used to handle error logging into Airbrake/Errbit.
26
+ Karafka has [Wiki pages](https://github.com/karafka/karafka/wiki) for almost everything. They cover the whole installation, setup and deployment, along with other useful details on how to run Karafka.
763
27
 
764
- ```ruby
765
- class AppMonitor < Karafka::Monitor
766
- def notice_error(caller_class, e)
767
- super
768
- Airbrake.notify(e)
769
- end
770
- end
771
- ```
772
-
773
- ### Example monitor with NewRelic support
774
-
775
- Here's a simple example of a monitor that is used to handle events and error logging into NewRelic. It will send metrics with information about the number of processed messages per topic and how many of them were scheduled to be performed asynchronously.
776
-
777
- ```ruby
778
- # NewRelic example monitor for Karafka
779
- class AppMonitor < Karafka::Monitor
780
- # @param [Class] caller class for this notice
781
- # @param [Hash] hash with options for this notice
782
- def notice(caller_class, options = {})
783
- # Use default Karafka monitor logging
784
- super
785
- # Handle differently those actions that we want to monitor with NewRelic
786
- return unless respond_to?(caller_label, true)
787
- send(caller_label, options[:topic])
788
- end
789
-
790
- # @param [Class] caller class for this notice error
791
- # @param e [Exception] error that happened
792
- def notice_error(caller_class, e)
793
- super
794
- NewRelic::Agent.notice_error(e)
795
- end
796
-
797
- private
798
-
799
- # Log that message for a given topic was consumed
800
- # @param topic [String] topic name
801
- def consume(topic)
802
- record_count metric_key(topic, __method__)
803
- end
804
-
805
- # Log that message for topic was scheduled to be performed async
806
- # @param topic [String] topic name
807
- def perform_async(topic)
808
- record_count metric_key(topic, __method__)
809
- end
810
-
811
- # Log that message for topic was performed async
812
- # @param topic [String] topic name
813
- def perform(topic)
814
- record_count metric_key(topic, __method__)
815
- end
816
-
817
- # @param topic [String] topic name
818
- # @param action [String] action that we want to log (consume/perform_async/perform)
819
- # @return [String] a proper metric key for NewRelic
820
- # @example
821
- # metric_key('videos', 'perform_async') #=> 'Custom/videos/perform_async'
822
- def metric_key(topic, action)
823
- "Custom/#{topic}/#{action}"
824
- end
825
-
826
- # Records the occurrence of a given event
827
- # @param [String] key under which we want to log
828
- def record_count(key)
829
- NewRelic::Agent.record_metric(key, count: 1)
830
- end
831
- end
832
- ```
28
+ If you have any questions about using Karafka, feel free to join our [Gitter](https://gitter.im/karafka/karafka) chat channel.
833
29
 
834
- ## Deployment
30
+ The Karafka dev team also provides commercial support in the following matters:
835
31
 
836
- Karafka is currently being used in production with the following deployment methods:
32
+ - Additional programming services for integrating existing Ruby apps with Kafka and Karafka
33
+ - Expertise and guidance on using Karafka within new and existing projects
34
+ - Training on how to design and develop systems based on Apache Kafka and the Karafka framework
837
35
 
838
- - Capistrano
839
- - Docker
36
+ If you are interested in our commercial services, please contact [Maciej Mensfeld (maciej@coditsu.io)](mailto:maciej@coditsu.io) directly.
840
37
 
841
- Since the only thing that is long-running is the Karafka server, it shouldn't be hard to make it work with other deployment and CD tools.
38
+ ## Notice
842
39
 
843
- ### Capistrano
40
+ The Karafka framework and the Karafka team are __not__ related to the Kafka streaming service called CloudKarafka in any way. We neither recommend nor discourage the usage of their platform.
844
41
 
845
- For details about integration with Capistrano, please go to [capistrano-karafka](https://github.com/karafka/capistrano-karafka) gem page.
42
+ ## Requirements
846
43
 
847
- ### Docker
44
+ In order to use the Karafka framework, you need to have:
848
45
 
849
- Karafka can be dockerized like any other Ruby/Rails app. To execute the **karafka server** command in your Docker container, just put this into your Dockerfile:
46
+ - Zookeeper (required by Kafka)
47
+ - Kafka (at least 0.9.0)
48
+ - Ruby (at least 2.3.0)
850
49
 
851
- ```bash
852
- ENV KARAFKA_ENV production
853
- CMD bundle exec karafka server
854
- ```
50
+ ## Note on Patches/Pull Requests
855
51
 
856
- ### Heroku
52
+ Fork the project.
53
+ Make your feature addition or bug fix.
54
+ Add tests for it. This is important so I don't break it in a future version unintentionally.
55
+ Commit, do not mess with the Rakefile, version, or history. (If you want to have your own version, that is fine, but bump the version in a commit by itself that I can ignore when I pull.) Send me a pull request. Bonus points for topic branches.
857
56
 
858
- Karafka may be deployed on [Heroku](https://www.heroku.com/), and works with
859
- [Heroku Kafka](https://www.heroku.com/kafka) and [Heroku Redis](https://www.heroku.com/redis).
57
+ Each pull request must pass our quality requirements. To check if everything is as it should be, we use [PolishGeeks Dev Tools](https://github.com/polishgeeks/polishgeeks-dev-tools) that combine multiple linters and code analyzers. Please run:
860
58
 
861
- Set `KARAFKA_ENV`:
862
59
  ```bash
863
- heroku config:set KARAFKA_ENV=production
864
- ```
865
-
866
- Configure Karafka to use the Kafka and Redis configuration provided by Heroku:
867
- ```ruby
868
- # app_root/app.rb
869
- class App < Karafka::App
870
- setup do |config|
871
- config.kafka.hosts = ENV['KAFKA_URL'].split(',') # Convert CSV list of broker urls to an array
872
- config.kafka.ssl.ca_cert = ENV['KAFKA_TRUSTED_CERT'] if ENV['KAFKA_TRUSTED_CERT']
873
- config.kafka.ssl.client_cert = ENV['KAFKA_CLIENT_CERT'] if ENV['KAFKA_CLIENT_CERT']
874
- config.kafka.ssl.client_cert_key = ENV['KAFKA_CLIENT_CERT_KEY'] if ENV['KAFKA_CLIENT_CERT_KEY']
875
- config.redis = { url: ENV['REDIS_URL'] }
876
- # ...other configuration options...
877
- end
878
- end
879
- ```
880
-
881
- Create your Procfile:
882
- ```text
883
- karafka_server: bundle exec karafka server
884
- karafka_worker: bundle exec karafka worker
885
- ```
886
-
887
- ## Sidekiq Web UI
888
-
889
- Karafka comes with a Sidekiq Web UI application that can display the current state of a Sidekiq installation. If you installed Karafka based on the install instructions, you will have a **config.ru** file that allows you to run a standalone Puma instance with a Sidekiq Web UI.
890
-
891
- To be able to use it (since Karafka does not depend on Puma and Sinatra) add both of them into your Gemfile:
892
-
893
- ```ruby
894
- gem 'puma'
895
- gem 'sinatra'
896
- ```
897
-
898
- bundle and run:
899
-
900
- ```
901
- bundle exec rackup
902
- # Puma starting...
903
- # * Min threads: 0, max threads: 16
904
- # * Environment: development
905
- # * Listening on tcp://localhost:9292
906
- ```
907
-
908
- You can then navigate to the displayed URL to check your Sidekiq status. The Sidekiq Web UI is password protected by default. To check (or change) your login and password, please review the **config.ru** file in your application.
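For reference, a minimal **config.ru** along these lines usually boils down to mounting Sidekiq::Web behind basic auth; the credentials below are placeholders and the generated file may differ:

```ruby
require 'sidekiq/web'

# Protect the dashboard with HTTP basic auth
Sidekiq::Web.use Rack::Auth::Basic do |username, password|
  username == 'username' && password == 'password'
end

run Sidekiq::Web
```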
909
-
910
- ## Concurrency
911
-
912
- Karafka uses [Celluloid](https://celluloid.io/) actors to handle listening to incoming connections. Since each topic and group requires a separate connection (which means that we have a connection per controller), we do this concurrently. It means that for each route you will have one additional thread running.
913
-
914
- ## Integrating with other frameworks
915
-
916
- Want to use Karafka with Ruby on Rails or Sinatra? It can be done!
917
-
918
- ### Integrating with Ruby on Rails
919
-
920
- Add Karafka to your Ruby on Rails application Gemfile:
921
-
922
- ```ruby
923
- gem 'karafka'
924
- ```
925
-
926
- Copy the **app.rb** file from your Karafka application into your Rails app (if you don't have this file, just create an empty Karafka app and copy it). This file is responsible for booting up the Karafka framework. To make it work with Ruby on Rails, you need to load the whole Rails application in this file. To do so, replace:
927
-
928
- ```ruby
929
- ENV['RACK_ENV'] ||= 'development'
930
- ENV['KARAFKA_ENV'] = ENV['RACK_ENV']
931
-
932
- Bundler.require(:default, ENV['KARAFKA_ENV'])
933
-
934
- Karafka::Loader.new.load(Karafka::App.root)
935
- ```
936
-
937
- with
938
-
939
- ```ruby
940
- ENV['RAILS_ENV'] ||= 'development'
941
- ENV['KARAFKA_ENV'] = ENV['RAILS_ENV']
942
-
943
- require ::File.expand_path('../config/environment', __FILE__)
944
- Rails.application.eager_load!
60
+ bundle exec rake
945
61
  ```
946
62
 
947
- and you are ready to go!
948
-
949
- ### Integrating with Sinatra
63
+ to check if everything is in order. After that you can submit a pull request.
950
64
 
951
- Sinatra applications differ from one another. There are single-file applications and apps with a Rails-like structure. That's why we cannot provide a single, simple tutorial. Here are some guidelines that you should follow in order to integrate it with a Sinatra based application:
65
+ ## Contributors
952
66
 
953
- Add Karafka to your Sinatra application Gemfile:
67
+ This project exists thanks to all the people who contribute. [[Contribute]](CONTRIBUTING.md).
68
+ <a href="https://github.com/karafka/karafka/graphs/contributors"><img src="https://opencollective.com/karafka/contributors.svg?width=890" /></a>
954
69
 
955
- ```ruby
956
- gem 'karafka'
957
- ```
958
70
 
959
- After that, make sure that your whole application is loaded before setting up and booting Karafka (see the Ruby on Rails integration section for more details).
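For a classic single-file Sinatra app, the relevant part of **app.rb** could look roughly like this (the require path is illustrative and depends on your project layout):

```ruby
ENV['RACK_ENV'] ||= 'development'
ENV['KARAFKA_ENV'] = ENV['RACK_ENV']

Bundler.require(:default, ENV['KARAFKA_ENV'])

# Load your Sinatra application (and everything it depends on)
# before Karafka boots, so controllers can use its code
require ::File.expand_path('../app', __FILE__)

Karafka::Loader.new.load(Karafka::App.root)
```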
71
+ ## Backers
960
72
 
961
- ## Articles and other references
73
+ Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/karafka#backer)]
962
74
 
963
- ### Libraries and components
75
+ <a href="https://opencollective.com/karafka#backers" target="_blank"><img src="https://opencollective.com/karafka/backers.svg?width=890"></a>
964
76
 
965
- * [Karafka framework](https://github.com/karafka/karafka)
966
- * [Capistrano Karafka](https://github.com/karafka/capistrano-karafka)
967
- * [Waterdrop](https://github.com/karafka/waterdrop)
968
- * [Worker Glass](https://github.com/karafka/worker-glass)
969
- * [Envlogic](https://github.com/karafka/envlogic)
970
- * [Apache Kafka](http://kafka.apache.org/)
971
- * [Apache ZooKeeper](https://zookeeper.apache.org/)
972
- * [Ruby-Kafka](https://github.com/zendesk/ruby-kafka)
973
77
 
974
- ### Articles and references
78
+ ## Sponsors
975
79
 
976
- * [Karafka (Ruby + Kafka framework) 0.5.0 release details](http://dev.mensfeld.pl/2016/09/karafka-ruby-kafka-framework-0-5-0-release-details/)
977
- * [Karafka – Ruby micro-framework for building Apache Kafka message-based applications](http://dev.mensfeld.pl/2015/08/karafka-ruby-micro-framework-for-building-apache-kafka-message-based-applications/)
978
- * [Benchmarking Karafka – how does it handle multiple TCP connections](http://dev.mensfeld.pl/2015/11/benchmarking-karafka-how-does-it-handle-multiple-tcp-connections/)
979
- * [Karafka – Ruby framework for building Kafka message based applications (presentation)](http://mensfeld.github.io/karafka-framework-introduction/)
980
- * [Karafka example application](https://github.com/karafka/karafka-example-app)
981
- * [Karafka Travis CI](https://travis-ci.org/karafka/karafka)
982
- * [Karafka Code Climate](https://codeclimate.com/github/karafka/karafka)
80
+ We are looking for sustainable sponsorship. If your company relies on the Karafka framework or simply wants to see Karafka evolve faster to meet your requirements, please consider backing the project. [[Become a sponsor](https://opencollective.com/karafka#sponsor)]
983
81
 
984
- ## Note on Patches/Pull Requests
82
+ Please contact [Maciej Mensfeld (maciej@coditsu.io)](mailto:maciej@coditsu.io) directly for more details.
985
83
 
986
- Fork the project.
987
- Make your feature addition or bug fix.
988
- Add tests for it. This is important so I don't break it in a future version unintentionally.
989
- Commit, do not mess with the Rakefile, version, or history. (If you want to have your own version, that is fine, but bump the version in a commit by itself that I can ignore when I pull.) Send me a pull request. Bonus points for topic branches.
990
84
 
991
- Each pull request must pass our quality requirements. To check if everything is as it should be, we use [PolishGeeks Dev Tools](https://github.com/polishgeeks/polishgeeks-dev-tools) that combine multiple linters and code analyzers. Please run:
85
+ <a href="https://opencollective.com/karafka/sponsor/0/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/0/avatar.svg"></a>
86
+ <a href="https://opencollective.com/karafka/sponsor/1/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/1/avatar.svg"></a>
87
+ <a href="https://opencollective.com/karafka/sponsor/2/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/2/avatar.svg"></a>
88
+ <a href="https://opencollective.com/karafka/sponsor/3/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/3/avatar.svg"></a>
89
+ <a href="https://opencollective.com/karafka/sponsor/4/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/4/avatar.svg"></a>
90
+ <a href="https://opencollective.com/karafka/sponsor/5/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/5/avatar.svg"></a>
91
+ <a href="https://opencollective.com/karafka/sponsor/6/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/6/avatar.svg"></a>
92
+ <a href="https://opencollective.com/karafka/sponsor/7/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/7/avatar.svg"></a>
93
+ <a href="https://opencollective.com/karafka/sponsor/8/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/8/avatar.svg"></a>
94
+ <a href="https://opencollective.com/karafka/sponsor/9/website" target="_blank"><img src="https://opencollective.com/karafka/sponsor/9/avatar.svg"></a>
992
95
 
993
- ```bash
994
- bundle exec rake
995
- ```
996
96
 
997
- to check if everything is in order. After that you can submit a pull request.