karafka 0.5.0 → 0.5.0.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 9c40b95f3636ecc4ded9cbbfbb575129e5593560
- data.tar.gz: 2ee0650b0244b772141af9566cfd7c41c0a8632d
+ metadata.gz: d7e097bb23271902edf811b27a20afb3da70aaf4
+ data.tar.gz: d104e55ea9c218e6d931d4922252dd5a2dc38c23
  SHA512:
- metadata.gz: c850800bd7b372abe43df6d3179be95d69296b0e6369dc14bea3431f8fe76e3b33fbb6db360e0dd08218bacea183dfd31671c64ac6f3a122f897f2013653a765
- data.tar.gz: 08f57ac78f4c00d0426d17a6a24044cf393f173a356b87865d605b7699b3333424159ae58f9f5ff8cf0bf0d540559d24d19abd69e76d3f5243253c66a25f56b1
+ metadata.gz: 8f010bd5993055cf89622d343aa09307539539b503ee42e722112ce7498b7cbc4c750db475518eec07bfe33d028f9befb1b2240b09ccfb44b90c786dae6f878e
+ data.tar.gz: 8feb1a74b19313ffb12c74152925e09d4307723bab316b46c5238ae7b8b9020cb3ab71f10886f78335cffcd6735e8a7aa4a882f17f583782aa2d055e49c4edef
@@ -1,6 +1,18 @@
  # Karafka framework changelog

- ## 0.5.0-beta
+ ## 0.5.0.1
+ - Fixed inconsistency in responders non-required topic definition. Now only required: false is available
+ - #101
+ - Fixed error on startup from waterdrop #102
+ - Waterdrop 0.3.2.1 with kafka.hosts instead of kafka_hosts
+ - #105 - Karafka::Monitor#caller_label not working with inherited monitors
+ - #99 - Standalone mode (without Sidekiq)
+ - #97 - Buffer responders single topics before send (prevalidation)
+ - Better control over the consumer thanks to additional config options
+ - #111 - Dynamic worker assignment based on the incoming params
+ - Long shutdown time fix
+
+ ## 0.5.0
  - Removed Zookeeper totally as dependency
  - Better group and partition rebalancing
  - Automatic thread management (no need for tuning) - each topic is a separate actor/thread
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- karafka (0.5.0)
+ karafka (0.5.0.1)
  activesupport (~> 5.0)
  celluloid (~> 0.17)
  dry-configurable (~> 0.1.7)
@@ -166,10 +166,10 @@ GEM
  shoulda-context (1.2.1)
  shoulda-matchers (2.8.0)
  activesupport (>= 3.0.0)
- sidekiq (4.2.2)
+ sidekiq (4.2.3)
  concurrent-ruby (~> 1.0)
  connection_pool (~> 2.2, >= 2.2.0)
- rack-protection (~> 1.5)
+ rack-protection (>= 1.5.0)
  redis (~> 3.2, >= 3.2.1)
  simplecov (0.12.0)
  docile (~> 1.1.0)
@@ -192,9 +192,10 @@ GEM
  coercible (~> 1.0)
  descendants_tracker (~> 0.0, >= 0.0.3)
  equalizer (~> 0.0, >= 0.0.9)
- waterdrop (0.3.2)
+ waterdrop (0.3.2.1)
  bundler
  connection_pool
+ dry-configurable (~> 0.1.7)
  null-logger
  rake
  ruby-kafka
data/README.md CHANGED
@@ -31,6 +31,7 @@ Karafka not only handles incoming messages but also provides tools for building
  - [Parser](#parser)
  - [Interchanger](#interchanger)
  - [Responder](#responder)
+ - [Inline flag](#inline-flag)
  - [Receiving messages](#receiving-messages)
  - [Processing messages directly (without Sidekiq)](#processing-messages-directly-without-sidekiq)
  - [Sending messages from Karafka](#sending-messages-from-karafka)
@@ -39,6 +40,7 @@ Karafka not only handles incoming messages but also provides tools for building
  - [Important components](#important-components)
  - [Controllers](#controllers)
  - [Controllers callbacks](#controllers-callbacks)
+ - [Dynamic worker selection](#dynamic-worker-selection)
  - [Responders](#responders)
  - [Registering topics](#registering-topics)
  - [Responding on topics](#responding-on-topics)
@@ -77,7 +79,7 @@ In order to use Karafka framework, you need to have:

  ## Installation

- Karafka does not have a full installation shell command. In order to install it, please follow given steps:
+ Karafka does not have a full installation shell command. In order to install it, please follow the steps below:

  Create a directory for your project:

@@ -91,7 +93,7 @@ Create a **Gemfile** with Karafka:
  ```ruby
  source 'https://rubygems.org'

- gem 'karafka', github: 'karafka/karafka'
+ gem 'karafka'
  ```

  and run the Karafka install CLI task:
@@ -105,13 +107,18 @@ bundle exec karafka install
  ### Application
  Karafka has the following configuration options:

- | Option | Required | Value type | Description |
- |--------|----------|------------|-------------|
- | name | true | String | Application name |
- | redis | true | Hash | Hash with Redis configuration options |
- | monitor | false | Object | Monitor instance (defaults to Karafka::Monitor) |
- | logger | false | Object | Logger instance (defaults to Karafka::Logger) |
- | kafka.hosts | false | Array<String> | Kafka server hosts. If 1 provided, Karafka will discover the cluster structure automatically |
+ | Option | Required | Value type | Description |
+ |--------|----------|------------|-------------|
+ | name | true | String | Application name |
+ | inline | false | Boolean | Do we want to perform logic without enqueuing it with Sidekiq (directly and asap) |
+ | redis | true | Hash | Hash with Redis configuration options |
+ | monitor | false | Object | Monitor instance (defaults to Karafka::Monitor) |
+ | logger | false | Object | Logger instance (defaults to Karafka::Logger) |
+ | kafka.hosts | false | Array<String> | Kafka server hosts. If 1 provided, Karafka will discover the cluster structure automatically |
+ | kafka.session_timeout | false | Integer | The number of seconds after which, if a consumer hasn't contacted the Kafka cluster, it will be kicked out |
+ | kafka.offset_commit_interval | false | Integer | The interval between offset commits in seconds |
+ | kafka.offset_commit_threshold | false | Integer | The number of messages that can be processed before their offsets are committed |
+ | kafka.heartbeat_interval | false | Integer | The interval between heartbeats |

  To apply this configuration, you need to use a *setup* method from the Karafka::App class (app.rb):

@@ -119,6 +126,7 @@ To apply this configuration, you need to use a *setup* method from the Karafka::
  class App < Karafka::App
  setup do |config|
  config.kafka.hosts = %w( 127.0.0.1:9092 )
+ config.inline = false
  config.redis = {
  url: 'redis://redis.example.com:7372/1'
  }
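
Taken together, the new `kafka.*` options from the table above land in the same `setup` block. A minimal sketch combining them (the values shown are the defaults this release introduces, used purely for illustration):

```ruby
class App < Karafka::App
  setup do |config|
    config.name = 'example_app'
    config.inline = false
    config.redis = { url: 'redis://localhost:6379' }
    config.kafka.hosts = %w( 127.0.0.1:9092 )
    # Consumer tuning options added in 0.5.0.1
    config.kafka.session_timeout = 30        # seconds before an idle consumer is kicked out
    config.kafka.offset_commit_interval = 10 # seconds between offset commits
    config.kafka.offset_commit_threshold = 0 # messages before a commit; 0 disables count-based commits
    config.kafka.heartbeat_interval = 10     # seconds between heartbeats
  end
end
```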
@@ -208,6 +216,7 @@ There are also several other methods available (optional):
  - *parser* - Class name - name of a parser class that we want to use to parse incoming data
  - *interchanger* - Class name - name of an interchanger class that we want to use to format data that we put/fetch into/from *#perform_async*
  - *responder* - Class name - name of a responder that we want to use to generate responses to other Kafka topics based on our processed data
+ - *inline* - Boolean - Do we want to perform logic without enqueuing it with Sidekiq (directly and asap) - overwrites the global app setting

  ```ruby
  App.routes.draw do
@@ -218,6 +227,7 @@ App.routes.draw do
  parser Parsers::BinaryToJson
  interchanger Interchangers::Binary
  responder BinaryVideoProcessingResponder
+ inline true
  end

  topic :new_videos do
@@ -272,7 +282,7 @@ However, if you want to use a raw Sidekiq worker (without any Karafka additional
  ```ruby
  topic :incoming_messages do
  controller MessagesController
- worker MyCustomController
+ worker MyCustomWorker
  end
  ```

@@ -289,7 +299,7 @@ Keep in mind that params might be in two states: parsed or unparsed when passed

  - *parser* - Class name - name of a parser class that we want to use to parse incoming data

- Karafka by default will parse messages with a JSON parser. If you want to change this behaviour you need to set custom parser for each route. Parser needs to have a #parse method and raise error that is a ::Karafka::Errors::ParserError descendant when problem appears during parsing process.
+ Karafka by default will parse messages with a JSON parser. If you want to change this behaviour you need to set a custom parser for each route. The parser needs to have a #parse method and raise an error that is a ::Karafka::Errors::ParserError descendant when a problem appears during the parsing process.

  ```ruby
  class XmlParser
@@ -314,7 +324,7 @@ Note that parsing failure won't stop the application flow. Instead, Karafka will

  ##### Interchanger

- - *interchanger* - Class name - name of a interchanger class that we want to use to format data that we put/fetch into/from #perform_async.
+ - *interchanger* - Class name - name of an interchanger class that we want to use to format data that we put/fetch into/from #perform_async.

  Custom interchangers target issues with non-standard (binary, etc) data that we want to store when we do #perform_async. This data might be corrupted when fetched in a worker (see [this](https://github.com/karafka/karafka/issues/30) issue). With custom interchangers, you can encode/compress data before it is passed to scheduling and decode/decompress it when it gets into the worker.

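To make the encode/decode flow concrete, here is a minimal sketch of such an interchanger. It assumes the two class-method contract (load before enqueuing, parse inside the worker) of the default Karafka::Params::Interchanger; Base64 plus Marshal is just an illustrative encoding:

```ruby
require 'base64'

# Hypothetical interchanger: encodes params before they are stored for
# Sidekiq and decodes them when the worker fetches them back
class Base64Interchanger
  class << self
    # Invoked before #perform_async - returns what will be stored in Redis
    def load(params)
      Base64.encode64(Marshal.dump(params))
    end

    # Invoked in the worker - rebuilds the original params structure
    def parse(params)
      Marshal.load(Base64.decode64(params))
    end
  end
end
```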
@@ -359,6 +369,17 @@ end

  For more details about responders, please go to the [using responders](#using-responders) section.

+ ##### Inline flag
+
+ The inline flag allows you to disable Sidekiq usage by performing your #perform method business logic in the main Karafka server process.
+
+ This flag can be useful when you want to:
+
+ - process messages one by one in a single flow
+ - process messages as soon as possible (without the Sidekiq delay)
+
+ Note: keep in mind that by using this you can significantly slow down Karafka. You also lose all the advantages of Sidekiq processing (reentrancy, retries, etc).
+
  ### Receiving messages

  Karafka framework has a long running server process that is responsible for receiving messages.
@@ -377,16 +398,24 @@ bundle exec karafka server --daemon

  #### Processing messages directly (without Sidekiq)

- If you don't want to use Sidekiq for processing and you would rather process messages directly in the main Karafka server process, you can do that using the *before_enqueue* callback inside of a controller:
+ If you don't want to use Sidekiq for processing and you would rather process messages directly in the main Karafka server process, you can do that by setting the *inline* flag either on the app level:

  ```ruby
- class UsersController < ApplicationController
- before_enqueue :perform_directly
+ class App < Karafka::App
+ setup do |config|
+ config.inline = true
+ # Rest of the config
+ end
+ end
+ ```

- # By throwing abort signal, Karafka will not schedule a background #perform task.
- def perform_directly
- User.create(params[:user])
- throw(:abort)
+ or per route (when you want to treat some routes in a different way):
+
+ ```ruby
+ App.routes.draw do
+ topic :binary_video_details do
+ controller Videos::DetailsController
+ inline true
  end
  end
  ```
@@ -430,15 +459,15 @@ module Users
  end
  ```

- Appropriate responder will be used automatically when you invoke the **respond_with** controller method.
+ The appropriate responder will be used automatically when you invoke the **respond_with** controller method.

- Why did we separate response layer from the controller layer? Because sometimes when you respond to multiple topics conditionally, that logic can be really complex and it is way better to manage and test it in isolation.
+ Why did we separate the response layer from the controller layer? Because sometimes when you respond to multiple topics conditionally, that logic can be really complex and it is way better to manage and test it in isolation.

  For more details about responders DSL, please visit the [responders](#responders) section.

  #### Using WaterDrop directly

- It is not recommended (as it breaks responders validations and makes it harder to track data flow), but if you want to send messages outside of Karafka responders, you can to use **waterdrop** gem directly.
+ It is not recommended (as it breaks responders validations and makes it harder to track data flow), but if you want to send messages outside of Karafka responders, you can use the **waterdrop** gem directly.

  Example usage:

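The usage example itself falls outside the changed hunks. Judging from the WaterDrop::Message API visible in the responder changes further down, direct usage looks roughly like this (the topic name is illustrative):

```ruby
# Build and deliver a single message straight through WaterDrop,
# bypassing responder validation
message = WaterDrop::Message.new('user_events', { user_id: 1 }.to_json)
message.send!
```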
@@ -477,7 +506,7 @@ end

  #### Controllers callbacks

- You can add any number of *before_enqueue* callbacks. It can be method or block.
+ You can add any number of *before_enqueue* callbacks. Each can be a method or a block.
  before_enqueue acts in a similar way to Rails before_action, so it should perform "lightweight" operations. You have access to params inside; based on them you can decide which data you want to receive and which you don't.

  **Warning**: keep in mind that all *before_enqueue* blocks/methods are executed after messages are received. They are not executed in Sidekiq, but right after receiving the incoming message. This means that if you perform "heavy duty" operations there, Karafka might significantly slow down.
@@ -519,6 +548,17 @@ Presented example controller will accept incoming messages from a Kafka topic na
  end
  ```

+ #### Dynamic worker selection
+
+ When you work with Karafka, you may want to schedule part of the jobs to a different worker based on the incoming params. This can be achieved by reassigning the worker in a *#before_enqueue* block:
+
+ ```ruby
+ before_enqueue do
+ self.worker = (params[:important] ? FastWorker : SlowWorker)
+ end
+ ```
+
  ### Responders

  Responders are used to design and control the response flow that comes from a single controller action. You might be familiar with the #respond_with Rails controller method. In Karafka it is an entrypoint to a responder's *#respond*.
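
FastWorker and SlowWorker in the snippet above are hypothetical. A sketch of what they could look like as raw Sidekiq workers on separate queues, following the worker contract described earlier (topic as the first argument, params as the second):

```ruby
require 'sidekiq'

# Hypothetical workers for the dynamic selection example: important
# messages go to a faster queue, the rest to the default one
class FastWorker
  include Sidekiq::Worker
  sidekiq_options queue: :critical

  def perform(topic, params)
    # business logic for important messages
  end
end

class SlowWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default

  def perform(topic, params)
    # same logic, lower priority
  end
end
```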
@@ -758,7 +798,7 @@ Want to use Karafka with Ruby on Rails or Sinatra? It can be done!
  Add Karafka to your Ruby on Rails application Gemfile:

  ```ruby
- gem 'karafka', github: 'karafka/karafka'
+ gem 'karafka'
  ```

  Copy the **app.rb** file from your Karafka application into your Rails app (if you don't have this file, just create an empty Karafka app and copy it). This file is responsible for booting up the Karafka framework. To make it work with Ruby on Rails, you need to load the whole Rails application in this file. To do so, replace:
@@ -789,7 +829,7 @@ Sinatra applications differ from one another. There are single file applications
  Add Karafka to your Sinatra application Gemfile:

  ```ruby
- gem 'karafka', github: 'karafka/karafka'
+ gem 'karafka'
  ```

  After that, make sure that your whole application is loaded before setting up and booting Karafka (see the Ruby on Rails integration for more details about that).
@@ -808,6 +848,7 @@ After that make sure that whole your application is loaded before setting up and

  ### Articles and references

+ * [Karafka (Ruby + Kafka framework) 0.5.0 release details](http://dev.mensfeld.pl/2016/09/karafka-ruby-kafka-framework-0-5-0-release-details/)
  * [Karafka – Ruby micro-framework for building Apache Kafka message-based applications](http://dev.mensfeld.pl/2015/08/karafka-ruby-micro-framework-for-building-apache-kafka-message-based-applications/)
  * [Benchmarking Karafka – how does it handle multiple TCP connections](http://dev.mensfeld.pl/2015/11/benchmarking-karafka-how-does-it-handle-multiple-tcp-connections/)
  * [Karafka – Ruby framework for building Kafka message based applications (presentation)](http://mensfeld.github.io/karafka-framework-introduction/)
@@ -7,11 +7,11 @@ Gem::Specification.new do |spec|
  spec.name = 'karafka'
  spec.version = ::Karafka::VERSION
  spec.platform = Gem::Platform::RUBY
- spec.authors = ['Maciej Mensfeld', 'Pavlo Vavruk']
- spec.email = %w( maciej@mensfeld.pl pavlo.vavruk@gmail.com )
+ spec.authors = ['Maciej Mensfeld', 'Pavlo Vavruk', 'Adam Gwozdowski']
+ spec.email = %w( maciej@mensfeld.pl pavlo.vavruk@gmail.com adam99g@gmail.com )
  spec.homepage = 'https://github.com/karafka/karafka'
- spec.summary = %q{ Ruby based Microframework for handling Apache Kafka incoming messages }
- spec.description = %q{ Microframework used to simplify Kafka based Ruby applications }
+ spec.summary = %q{ Ruby based framework for working with Apache Kafka }
+ spec.description = %q{ Framework used to simplify Apache Kafka based Ruby applications development }
  spec.license = 'MIT'

  spec.add_development_dependency 'bundler', '~> 1.2'
@@ -23,6 +23,7 @@
  active_support/inflector
  karafka/loader
  karafka/status
+ karafka/routing/route
  ).each { |lib| require lib }

  # Karafka library
@@ -64,7 +64,15 @@ module Karafka

  # This will be set based on routing settings
  # From 0.4 a single controller can handle multiple topics jobs
- attr_accessor :group, :topic, :worker, :parser, :interchanger, :responder
+ # All the attributes are taken from the route
+ Karafka::Routing::Route::ATTRIBUTES.each do |attr|
+ attr_reader attr
+
+ define_method(:"#{attr}=") do |new_attr_value|
+ instance_variable_set(:"@#{attr}", new_attr_value)
+ @params[attr] = new_attr_value if @params
+ end
+ end

  class << self
  # Creates a callback that will be executed before scheduling to Sidekiq
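
For readers skimming the metaprogramming above: for each route attribute the loop generates a reader plus a writer that also syncs already-built params. Hand-expanded for a single attribute, it is roughly equivalent to:

```ruby
# Hand-expanded illustration (not part of the actual source), shown for
# the :worker attribute
attr_reader :worker

def worker=(new_attr_value)
  @worker = new_attr_value
  # Keep params in sync so that a worker reassigned in #before_enqueue
  # (dynamic worker selection) is reflected in the enqueued payload
  @params[:worker] = new_attr_value if @params
end
```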
@@ -95,7 +103,7 @@ module Karafka
  # will schedule a perform task in sidekiq
  def schedule
  run_callbacks :schedule do
- perform_async
+ inline ? perform_inline : perform_async
  end
  end

@@ -144,13 +152,20 @@ module Karafka
  responder.new.call(*data)
  end

+ # Executes perform code immediately (without enqueuing)
+ # @note Despite the fact that workers won't be used, we still initialize all the
+ # classes and other framework elements
+ def perform_inline
+ Karafka.monitor.notice(self.class, to_h)
+ perform
+ end
+
  # Enqueues the execution of perform method into a worker.
  # @note Each worker needs to have a class #perform_async method that will allow us to pass
  # parameters into it. We always pass topic as a first argument and this request params
  # as a second one (we pass topic to be able to build back the controller in the worker)
  def perform_async
  Karafka.monitor.notice(self.class, to_h)
-
  # We use @params directly (instead of #params) because of lazy loading logic that is behind
  # it. See Karafka::Params::Params class for more details about that
  worker.perform_async(
@@ -16,10 +16,10 @@ module Karafka
  # end
  # end
  #
- # @example Marking topic as optional (we won't have to use it)
+ # @example Marking topic as not required (we won't have to use it)
  # class Responder < BaseResponder
  # topic :required_topic
- # topic :new_action, optional: true
+ # topic :new_action, required: false
  #
  # def respond(data)
  # respond_to :required_topic, data
@@ -51,6 +51,8 @@ module Karafka
  # Definitions of all topics that we want to be able to use in this responder should go here
  class_attribute :topics

+ attr_reader :messages_buffer
+
  class << self
  # Registers a topic as one to which we will be able to respond
  # @param topic_name [Symbol, String] name of topic to which we want to respond
@@ -65,7 +67,7 @@ module Karafka
  # Creates a responder object
  # @return [Karafka::BaseResponder] base responder descendant responder
  def initialize
- @used_topics = []
+ @messages_buffer = {}
  end

  # Performs respond and validates that all the response requirements were met
@@ -76,6 +78,7 @@
  def call(*data)
  respond(*data)
  validate!
+ deliver!
  end

  private
@@ -97,22 +100,30 @@ module Karafka
  def respond_to(topic, data)
  Karafka.monitor.notice(self.class, topic: topic, data: data)

- topic = topic.to_s
- @used_topics << topic
-
- ::WaterDrop::Message.new(
- topic,
- data.is_a?(String) ? data : data.to_json
- ).send!
+ messages_buffer[topic.to_s] ||= []
+ messages_buffer[topic.to_s] << (data.is_a?(String) ? data : data.to_json)
  end

  # Checks if we met all the topics requirements. It will fail if we didn't send a message to
  # a registered required topic, etc.
  def validate!
+ used_topics = messages_buffer.map do |key, data_elements|
+ Array.new(data_elements.count) { key }
+ end
+
  Responders::UsageValidator.new(
  self.class.topics || {},
- @used_topics
+ used_topics.flatten
  ).validate!
  end
+
+ # Takes all the messages from the buffer and delivers them one by one
+ # @note This method is executed after the validation, so we're sure that
+ # what we send is legit and will go to the proper topics
+ def deliver!
+ messages_buffer.each do |topic, data_elements|
+ data_elements.each { |data| ::WaterDrop::Message.new(topic, data).send! }
+ end
+ end
  end
  end
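
The user-visible effect of this buffering (#97): nothing reaches Kafka until the whole respond flow has validated. A sketch with hypothetical topic names:

```ruby
class UserResponder < Karafka::BaseResponder
  topic :users_created
  topic :users_indexed, required: false

  def respond(user)
    # Both calls only append to messages_buffer - no delivery yet
    respond_to :users_created, user
    respond_to :users_indexed, id: user[:id]
  end
end

# #call runs respond -> validate! -> deliver!, so a validation error
# (unregistered or missing required topic) means nothing was sent at all
UserResponder.new.call(id: 1, name: 'Maciej')
```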
@@ -12,6 +12,7 @@ module Karafka
  info = [
  "Karafka framework version: #{Karafka::VERSION}",
  "Application name: #{config.name}",
+ "Inline mode: #{config.inline}",
  "Number of threads: #{config.concurrency}",
  "Boot file: #{Karafka.boot_file}",
  "Environment: #{Karafka.env}",
@@ -10,12 +10,9 @@ module Karafka
  def call
  routes.each do |route|
  puts "#{route.topic}:"
- print('Group', route.group)
- print('Controller', route.controller)
- print('Worker', route.worker)
- print('Parser', route.parser)
- print('Interchanger', route.interchanger)
- print('Responder', route.responder)
+ Karafka::Routing::Route::ATTRIBUTES.each do |attr|
+ print(attr.to_s.capitalize, route.public_send(attr))
+ end
  end
  end

@@ -22,7 +22,7 @@ module Karafka

  # Gracefully stops topic consumption
  def stop
- kafka_consumer.stop
+ @kafka_consumer&.stop
  @kafka_consumer = nil
  end

@@ -39,7 +39,14 @@ module Karafka
  client_id: ::Karafka::App.config.name
  )

- @kafka_consumer = kafka.consumer(group_id: @route.group)
+ @kafka_consumer = kafka.consumer(
+ group_id: @route.group,
+ session_timeout: ::Karafka::App.config.kafka.session_timeout,
+ offset_commit_interval: ::Karafka::App.config.kafka.offset_commit_interval,
+ offset_commit_threshold: ::Karafka::App.config.kafka.offset_commit_threshold,
+ heartbeat_interval: ::Karafka::App.config.kafka.heartbeat_interval
+ )
+
  @kafka_consumer.subscribe(@route.topic)
  @kafka_consumer
  end
@@ -71,7 +71,10 @@ module Karafka
  # @example Check label of method that invoked #notice_error
  # caller_label #=> 'rescue in target'
  def caller_label
- caller_locations(1, 2)[1].label
+ # We need to look at the ancestors because if someone inherits
+ # from this class, the caller chain is longer
+ index = self.class.ancestors.index(Karafka::Monitor) + 1
+ caller_locations(index, 2)[1].label
  end

  # @return [Logger] logger instance
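
A sketch of the #105 scenario this fixes, with a hypothetical monitor subclass; the comments trace why the hard-coded offset broke:

```ruby
# Hypothetical subclass from the #105 report
class AppMonitor < Karafka::Monitor
  # Inherits #notice / #notice_error, which call #caller_label internally
end

# For Karafka::Monitor itself: ancestors.index(Karafka::Monitor) #=> 0,
# so caller_locations(1, 2) is used - same as the old fixed offset.
# For AppMonitor:              ancestors.index(Karafka::Monitor) #=> 1,
# so caller_locations(2, 2) skips the extra inheritance frame that made
# the old code report the wrong method label.
```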
@@ -50,7 +50,8 @@ module Karafka
  controller: controller.class,
  worker: controller.worker,
  parser: controller.parser,
- topic: controller.topic
+ topic: controller.topic,
+ responder: controller.responder
  )
  end
  end
@@ -4,7 +4,7 @@ module Karafka
  # @example Define topic (required by default)
  # Karafka::Responders::Topic.new(:topic_name, {}) #=> #<Karafka::Responders::Topic...
  # @example Define optional topic
- # Karafka::Responders::Topic.new(:topic_name, optional: true)
+ # Karafka::Responders::Topic.new(:topic_name, required: false)
  # @example Define topic on which we want to respond multiple times
  # Karafka::Responders::Topic.new(:topic_name, multiple_usage: true)
  class Topic
@@ -22,8 +22,7 @@ module Karafka

  # @return [Boolean] is this a required topic (if not, it is optional)
  def required?
- return false if @options[:optional]
- @options[:required] || true
+ @options.key?(:required) ? @options[:required] : true
  end

  # @return [Boolean] do we expect to use it multiple times in a single respond flow
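
The behavioural change in one view: topics now default to required, and only an explicit `required: false` opts out. The old `@options[:required] || true` could never return false, and `optional: true` is no longer honoured. Illustratively:

```ruby
Karafka::Responders::Topic.new(:a, {}).required?              #=> true
Karafka::Responders::Topic.new(:b, required: false).required? #=> false
Karafka::Responders::Topic.new(:c, required: true).required?  #=> true
Karafka::Responders::Topic.new(:d, optional: true).required?  #=> true (old flag is now ignored)
```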
@@ -40,6 +40,7 @@ module Karafka
  def validate_usage_of!(used_topic)
  raise(Errors::UnregisteredTopic, used_topic) unless @registered_topics[used_topic]
  return if @registered_topics[used_topic].multiple_usage?
+ return unless @registered_topics[used_topic].required?
  return if @used_topics.count(used_topic) < 2
  raise(Errors::TopicMultipleUsage, used_topic)
  end
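
Reading the guards top to bottom, the added line exempts non-required topics from the single-use check, so only a required topic without `multiple_usage: true` raises on a second use. A sketch, assuming the rest of the validator behaves as in 0.5.0:

```ruby
class AuditResponder < Karafka::BaseResponder
  topic :audit_log, required: false

  def respond(data)
    # With the added guard this no longer raises TopicMultipleUsage,
    # even though :audit_log is not marked multiple_usage: true
    respond_to :audit_log, data
    respond_to :audit_log, data
  end
end
```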
@@ -18,6 +18,7 @@ module Karafka
  parser
  interchanger
  responder
+ inline
  ).freeze

  # All those options should be set on the route level
@@ -13,7 +13,17 @@ module Karafka
  NAME_FORMAT = /\A(\w|\-)+\z/

  # Options that we can set per each route
- attr_writer :group, :topic, :worker, :parser, :interchanger, :responder
+ ATTRIBUTES = %i(
+ group
+ topic
+ worker
+ parser
+ interchanger
+ responder
+ inline
+ ).freeze
+
+ ATTRIBUTES.each { |attr| attr_writer(attr) }

  # This we can get "directly" because it does not have any details, etc
  attr_accessor :controller
@@ -23,11 +33,7 @@ module Karafka
  # everywhere except Karafka server command, those would not be initialized on time - for
  # example for Sidekiq
  def build
- group
- worker
- parser
- interchanger
- responder
+ ATTRIBUTES.each { |attr| send(attr) }
  self
  end

@@ -68,6 +74,14 @@ module Karafka
  @interchanger ||= Karafka::Params::Interchanger
  end

+ # @return [Boolean] Should we perform execution in the background (default) or
+ # inline. This can be set globally and overwritten by a per route setting
+ # @note This method can be set to false, so direct assignment ||= would not work
+ def inline
+ return @inline unless @inline.nil?
+ @inline = Karafka::App.config.inline
+ end
+
  # Checks if topic and group have proper format (acceptable by Kafka)
  # @raise [Karafka::Errors::InvalidTopicName] raised when topic name is invalid
  # @raise [Karafka::Errors::InvalidGroupName] raised when group name is invalid
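
Why the nil check in #inline instead of memoizing with `||=`? Because `false` is a legitimate cached value: `||=` treats false like nil, so an explicitly configured `inline = false` would be clobbered. A sketch of the pitfall:

```ruby
# ||= cannot cache a false value:
@inline = false
@inline ||= true  # false is treated as "unset" and overwritten
@inline           #=> true (wrong)

# the nil-check pattern preserves an explicit false:
@inline = false
@inline = Karafka::App.config.inline if @inline.nil?
@inline           #=> false (correct)
```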
@@ -14,14 +14,11 @@ module Karafka
  # Builds a controller instance that should handle messages from a given topic
  # @return [Karafka::BaseController] base controller descendant instance object
  def build
- controller = route.controller.new
- controller.topic = route.topic
- controller.parser = route.parser
- controller.worker = route.worker
- controller.interchanger = route.interchanger
- controller.responder = route.responder
-
- controller
+ route.controller.new.tap do |ctrl|
+ Karafka::Routing::Route::ATTRIBUTES.each do |attr|
+ ctrl.public_send(:"#{attr}=", route.public_send(attr))
+ end
+ end
  end

  private
@@ -25,7 +25,7 @@ module Karafka
  def bind_on_sigint
  process.on_sigint do
  Karafka::App.stop!
- consumers.map(&:stop) if Karafka::App.running?
+ consumers.map(&:stop)
  exit
  end
  end
@@ -34,7 +34,7 @@ module Karafka
  def bind_on_sigquit
  process.on_sigquit do
  Karafka::App.stop!
- consumers.map(&:stop) if Karafka::App.running?
+ consumers.map(&:stop)
  exit
  end
  end
@@ -15,6 +15,8 @@ module Karafka
  # Available settings
  # option name [String] current app name - used to provide default Kafka groups namespaces
  setting :name
+ # If inline is set to true, we won't enqueue jobs; instead we will run them immediately
+ setting :inline, false
  # option logger [Instance] logger that we want to use
  setting :logger, ::Karafka::Logger.instance
  # option monitor [Instance] monitor that we will use (defaults to Karafka::Monitor)
@@ -25,7 +27,21 @@ module Karafka
  setting :redis
  # option kafka [Hash] - optional - kafka configuration options (hosts)
  setting :kafka do
+ # Array with at least one host
  setting :hosts
+ # option session_timeout [Integer] the number of seconds after which, if a client
+ # hasn't contacted the Kafka cluster, it will be kicked out of the group.
+ setting :session_timeout, 30
+ # option offset_commit_interval [Integer] the interval between offset commits,
+ # in seconds.
+ setting :offset_commit_interval, 10
+ # option offset_commit_threshold [Integer] the number of messages that can be
+ # processed before their offsets are committed. If zero, offset commits are
+ # not triggered by message processing.
+ setting :offset_commit_threshold, 0
+ # option heartbeat_interval [Integer] the interval between heartbeats; must be less
+ # than the session window.
+ setting :heartbeat_interval, 10
  end

  # This is configured automatically, don't overwrite it!
@@ -9,7 +9,7 @@ module Karafka
  water_config.send_messages = true
  water_config.connection_pool_size = config.concurrency
  water_config.connection_pool_timeout = 1
- water_config.kafka_hosts = config.kafka.hosts
+ water_config.kafka.hosts = config.kafka.hosts
  water_config.raise_on_failure = true
  end
  end
@@ -8,11 +8,12 @@ Karafka::Loader.new.load(Karafka::App.root)
  # App class
  class App < Karafka::App
  setup do |config|
- config.kafka = { hosts: %w( 127.0.0.1:9092 ) }
+ config.kafka.hosts = %w( 127.0.0.1:9092 )
  config.name = 'example_app'
  config.redis = {
  url: 'redis://localhost:6379'
  }
+ config.inline = false
  end

  routes.draw do
@@ -2,5 +2,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '0.5.0'
+ VERSION = '0.5.0.1'
  end
metadata CHANGED
@@ -1,15 +1,16 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 0.5.0
+ version: 0.5.0.1
  platform: ruby
  authors:
  - Maciej Mensfeld
  - Pavlo Vavruk
+ - Adam Gwozdowski
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-09-30 00:00:00.000000000 Z
+ date: 2016-10-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -165,10 +166,11 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: 0.1.7
- description: " Microframework used to simplify Kafka based Ruby applications "
+ description: " Framework used to simplify Apache Kafka based Ruby applications development "
  email:
  - maciej@mensfeld.pl
  - pavlo.vavruk@gmail.com
+ - adam99g@gmail.com
  executables:
  - karafka
  extensions: []
@@ -263,5 +265,5 @@ rubyforge_project:
  rubygems_version: 2.5.1
  signing_key:
  specification_version: 4
- summary: Ruby based Microframework for handling Apache Kafka incoming messages
+ summary: Ruby based framework for working with Apache Kafka
  test_files: []