phobos 1.8.0 → 1.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 0b37b13898c547e212449d8b598984b80ceb9796d377991b7dd73f6e6acfdeb8
- data.tar.gz: 66d12f8461a5e3e796b47364644a948c2f1c030d227f61a6dd675baf7e368f62
+ metadata.gz: 2521d4d12e282bf2f2fc1ae720c52cfdab4637e2464840e196c7fd275f316c9d
+ data.tar.gz: 067d38a797a93bd7300e31c34b16e2bdea1dfd43ff6eddaaeac678b33ce9944c
  SHA512:
- metadata.gz: 75e2ddf1d53ea28086f0442779c2b19f7b8e9d78d62c7b7f4257fdf59d37f70d145fc014da37678d3023a4395c7b84b1c6d81145187482344af493114673931d
- data.tar.gz: da67af5cee56a7867f99f0353b847fd8d1faf23dd7f24e539fc6a90514b9e5c3c38bef04f29d03cbd4344242818127d1f69aafb4bd36d1ac11321179b75d7cb4
+ metadata.gz: a2808ac665d54a407da693e3ff0c8635ce0ef0daaf9815acf22c47628f268866cb350925a77477631f338cff360f79cc4e55b113a83c66e582333698bc19c800
+ data.tar.gz: 896136ecf3e4e41eb255c043b3982c55eb008a057ea97b47082b44826f2c59c8c525255f2fa3b9f2271df3e494dcc826d17c1136d7308c10b43bc63291fa4121
@@ -0,0 +1,26 @@
+ ###########################
+ # Configuration for rubocop
+ # in .rubocop.yml
+
+ ##############
+ # Global rules
+ # see .rubocop_common.yml
+
+ ##############
+ # Inherit default rules first, and then override those rules with
+ # our violation whitelist.
+ inherit_from:
+   - .rubocop_common.yml
+   - .rubocop_todo.yml
+
+ ##############
+ # Project specific overrides here, example:
+ # Metrics/BlockLength:
+ #   Exclude:
+ #     - 'tasks/the_huge_task.rake'
+
+ AllCops:
+   Exclude:
+     - spec/**/*
+     - examples/**/*
+   TargetRubyVersion: 2.3
@@ -0,0 +1,29 @@
+ ##############
+ # Global rules
+ AllCops:
+   Exclude:
+     - db/**/*
+   TargetRubyVersion: 2.3
+
+ Rails:
+   Enabled: false
+
+ Style/SymbolArray:
+   EnforcedStyle: brackets
+
+ Metrics/LineLength:
+   Max: 100
+
+ Metrics/BlockLength:
+   Exclude:
+     - '*.gemspec'
+     - 'spec/**/*.rb'
+
+ Documentation:
+   Enabled: false
+
+ Metrics/MethodLength:
+   Max: 15
+
+ Metrics/AbcSize:
+   Max: 17
@@ -0,0 +1,7 @@
+ # This configuration was generated by
+ # `rubocop --auto-gen-config`
+ # on 2018-10-22 14:47:09 +0200 using RuboCop version 0.59.2.
+ # The point is for the user to remove these configuration records
+ # one by one as the offenses are removed from the code base.
+ # Note that changes in the inspected code, or installation of new
+ # versions of RuboCop, may require this file to be generated again.
@@ -0,0 +1,2 @@
+ git:
+   repo: git@github.com:phobos/shared.git
@@ -6,6 +6,10 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

  ## UNRELEASED

+ ## [1.8.1] - 2018-11-23
+ ### Added
+ - Added ability to send partition keys separate from message keys.
+
  ## [1.8.0] - 2018-07-22
  ### Added
  - Possibility to configure a custom logger #81
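A minimal sketch of the 1.8.1 feature noted in the changelog entry above: routing a message by a partition key that differs from the message key. Treating `partition_key` as an extra argument to `publish`/`async_publish` is an assumption here; confirm against the `Phobos::Producer` API shipped in 1.8.1 before relying on it.

```ruby
# Assumes Phobos.configure has already run with a kafka/producer section.
class MyProducer
  include Phobos::Producer
end

# 'order-123' is stored as the message key; 'customer-42' only influences
# which partition the message lands on (hypothetical topic and keys).
MyProducer.producer.publish('orders', '{"id":123}', 'order-123', 'customer-42')
```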
data/Gemfile CHANGED
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  source 'https://rubygems.org'

  # Specify your gem's dependencies in phobos.gemspec
data/README.md CHANGED
@@ -338,21 +338,21 @@ The configuration file is organized in 6 sections. Take a look at the example fi

  The file will be parsed through ERB so ERB syntax/file extension is supported beside the YML format.

- __logger__ configures the logger for all Phobos components. It automatically
+ __logger__ configures the logger for all Phobos components. It automatically
  outputs to `STDOUT` and it saves the log in the configured file.

- __kafka__ provides configurations for every `Kafka::Client` created over the application.
+ __kafka__ provides configurations for every `Kafka::Client` created over the application.
  All [options supported by `ruby-kafka`][ruby-kafka-client] can be provided.

- __producer__ provides configurations for all producers created over the application,
- the options are the same for regular and async producers.
+ __producer__ provides configurations for all producers created over the application,
+ the options are the same for regular and async producers.
  All [options supported by `ruby-kafka`][ruby-kafka-producer] can be provided.

- __consumer__ provides configurations for all consumer groups created over the application.
+ __consumer__ provides configurations for all consumer groups created over the application.
  All [options supported by `ruby-kafka`][ruby-kafka-consumer] can be provided.

- __backoff__ Phobos provides automatic retries for your handlers. If an exception
- is raised, the listener will retry following the back off configured here.
+ __backoff__ Phobos provides automatic retries for your handlers. If an exception
+ is raised, the listener will retry following the back off configured here.
  Backoff can also be configured per listener.

  __listeners__ is the list of listeners configured. Each listener represents a consumer group.
@@ -537,6 +537,10 @@ end

  Bug reports and pull requests are welcome on GitHub at https://github.com/klarna/phobos.

+ ## Linting
+
+ Phobos projects use Rubocop to lint the code, and in addition all projects use [Rubocop Rules](https://github.com/klippx/rubocop_rules) to maintain a shared rubocop configuration. Updates to the shared configurations are done in the [phobos/shared](https://github.com/phobos/shared) repo, where you can also find instructions on how to apply the new settings to the Phobos projects.
+
  ## Acknowledgements

  Thanks to Sebastian Norde for the awesome logo!
data/Rakefile CHANGED
@@ -1,6 +1,8 @@
- require "bundler/gem_tasks"
- require "rspec/core/rake_task"
+ # frozen_string_literal: true
+
+ require 'bundler/gem_tasks'
+ require 'rspec/core/rake_task'

  RSpec::Core::RakeTask.new(:spec)

- task :default => :spec
+ task default: :spec
@@ -1,4 +1,5 @@
  #!/usr/bin/env ruby
+ # frozen_string_literal: true

  require 'irb'
  require 'bundler/setup'
@@ -11,7 +12,8 @@ require 'phobos'
  # require 'pry'
  # Pry.start

- config_path = ENV['CONFIG_PATH'] || (File.exist?('config/phobos.yml') ? 'config/phobos.yml' : 'config/phobos.yml.example')
+ config_path = ENV['CONFIG_PATH'] ||
+ (File.exist?('config/phobos.yml') ? 'config/phobos.yml' : 'config/phobos.yml.example')
  Phobos.configure(config_path)

  IRB.start
data/bin/phobos CHANGED
@@ -1,4 +1,5 @@
  #!/usr/bin/env ruby
+ # frozen_string_literal: true

  $LOAD_PATH.unshift File.dirname(__FILE__) + '/../lib'

@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  #
  # This example assumes that you want to save all events in your database for
  # recovery purposes. The consumer will process the message and perform other
@@ -10,7 +12,7 @@ class HandlerSavingEventsDatabase
  include Phobos::Handler
  include Phobos::Producer

- def self.around_consume(payload, metadata)
+ def self.around_consume(payload, _metadata)
  #
  # Let's assume `::from_message` will initialize our object with `payload`
  #
@@ -39,7 +41,7 @@ class HandlerSavingEventsDatabase
  end
  end

- def consume(payload, metadata)
+ def consume(payload, _metadata)
  #
  # Process the event, it might index it to elasticsearch or notify other
  # system, you should process your message inside this method.
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  #
  # This example assumes you want to process the event and publish another
  # one to kafka. A new event is always published thus we want to use the async producer
@@ -9,7 +11,7 @@ class HandlerUsingAsyncProducer

  PUBLISH_TO 'another-topic'

- def consume(payload, metadata)
+ def consume(payload, _metadata)
  producer.async_publish(PUBLISH_TO, "#{payload}-#{rand}")
  end
  end
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  #
  # This example assumes you want to create a threaded kafka generator which
  # publish a stream of kafka messages without consuming them. It also shows
@@ -19,9 +21,9 @@ end
  # Trapping signals to properly stop this generator
  #
  @stop = false
- %i( INT TERM QUIT ).each do |signal|
+ [:INT, :TERM, :QUIT].each do |signal|
  Signal.trap(signal) do
- puts "Stopping"
+ puts 'Stopping'
  @stop = true
  end
  end
@@ -32,6 +34,7 @@ Thread.new do

  loop do
  break if @stop
+
  key = SecureRandom.uuid
  payload = Time.now.utc.to_json

@@ -50,7 +53,7 @@ Thread.new do
  # the producer can write to Kafka. Eventually we'll get some buffer overflows
  #
  rescue Kafka::BufferOverflow => e
- puts "| waiting"
+ puts '| waiting'
  sleep(1)
  retry
  end
@@ -67,7 +70,8 @@ Thread.new do
  .async_producer_shutdown

  #
- # Since no client was configured (we can do this with `MyProducer.producer.configure_kafka_client`)
+ # Since no client was configured (we can do this with
+ # `MyProducer.producer.configure_kafka_client`)
  # we must get the auto generated one and close it properly
  #
  MyProducer
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  require 'ostruct'
  require 'securerandom'
  require 'yaml'
@@ -13,6 +15,8 @@ require 'erb'

  require 'phobos/deep_struct'
  require 'phobos/version'
+ require 'phobos/constants'
+ require 'phobos/log'
  require 'phobos/instrumentation'
  require 'phobos/errors'
  require 'phobos/listener'
@@ -31,16 +35,15 @@ module Phobos
  attr_accessor :silence_log

  def configure(configuration)
- @config = DeepStruct.new(fetch_settings(configuration))
+ @config = fetch_configuration(configuration)
  @config.class.send(:define_method, :producer_hash) { Phobos.config.producer&.to_hash }
  @config.class.send(:define_method, :consumer_hash) { Phobos.config.consumer&.to_hash }
  @config.listeners ||= []
  configure_logger
- logger.info { Hash(message: 'Phobos configured', env: ENV['RAILS_ENV'] || ENV['RACK_ENV'] || 'N/A') }
  end

- def add_listeners(listeners_configuration)
- listeners_config = DeepStruct.new(fetch_settings(listeners_configuration))
+ def add_listeners(configuration)
+ listeners_config = fetch_configuration(configuration)
  @config.listeners += listeners_config.listeners
  end

@@ -55,61 +58,88 @@ module Phobos
  ExponentialBackoff.new(min, max).tap { |backoff| backoff.randomize_factor = rand }
  end

+ def deprecate(message)
+ warn "DEPRECATION WARNING: #{message} #{Kernel.caller.first}"
+ end
+
  # :nodoc:
  def configure_logger
- ruby_kafka = config.logger.ruby_kafka
-
  Logging.backtrace(true)
  Logging.logger.root.level = silence_log ? :fatal : config.logger.level
- appenders = logger_appenders

- @ruby_kafka_logger = nil
+ configure_ruby_kafka_logger
+ configure_phobos_logger

- if config.custom_kafka_logger
- @ruby_kafka_logger = config.custom_kafka_logger
- elsif ruby_kafka
- @ruby_kafka_logger = Logging.logger['RubyKafka']
- @ruby_kafka_logger.appenders = appenders
- @ruby_kafka_logger.level = silence_log ? :fatal : ruby_kafka.level
+ logger.info do
+ Hash(message: 'Phobos configured', env: ENV['RAILS_ENV'] || ENV['RACK_ENV'] || 'N/A')
  end
+ end
+
+ private
+
+ def fetch_configuration(configuration)
+ DeepStruct.new(read_configuration(configuration))
+ end
+
+ def read_configuration(configuration)
+ return configuration.to_h if configuration.respond_to?(:to_h)
+
+ YAML.safe_load(
+ ERB.new(
+ File.read(File.expand_path(configuration))
+ ).result,
+ [Symbol],
+ [],
+ true
+ )
+ end

+ def configure_phobos_logger
  if config.custom_logger
  @logger = config.custom_logger
  else
  @logger = Logging.logger[self]
- @logger.appenders = appenders
+ @logger.appenders = logger_appenders
  end
  end

- def logger_appenders
- date_pattern = '%Y-%m-%dT%H:%M:%S:%L%zZ'
- json_layout = Logging.layouts.json(date_pattern: date_pattern)
- log_file = config.logger.file
- stdout_layout = if config.logger.stdout_json == true
- json_layout
- else
- Logging.layouts.pattern(date_pattern: date_pattern)
- end
+ def configure_ruby_kafka_logger
+ if config.custom_kafka_logger
+ @ruby_kafka_logger = config.custom_kafka_logger
+ elsif config.logger.ruby_kafka
+ @ruby_kafka_logger = Logging.logger['RubyKafka']
+ @ruby_kafka_logger.appenders = logger_appenders
+ @ruby_kafka_logger.level = silence_log ? :fatal : config.logger.ruby_kafka.level
+ else
+ @ruby_kafka_logger = nil
+ end
+ end

+ def logger_appenders
  appenders = [Logging.appenders.stdout(layout: stdout_layout)]

  if log_file
  FileUtils.mkdir_p(File.dirname(log_file))
  appenders << Logging.appenders.file(log_file, layout: json_layout)
  end
+
  appenders
  end

- def deprecate(message)
- warn "DEPRECATION WARNING: #{message} #{Kernel.caller.first}"
+ def log_file
+ config.logger.file
  end

- private
-
- def fetch_settings(configuration)
- return configuration.to_h if configuration.respond_to?(:to_h)
+ def json_layout
+ Logging.layouts.json(date_pattern: Constants::LOG_DATE_PATTERN)
+ end

- YAML.load(ERB.new(File.read(File.expand_path(configuration))).result)
+ def stdout_layout
+ if config.logger.stdout_json == true
+ json_layout
+ else
+ Logging.layouts.pattern(date_pattern: Constants::LOG_DATE_PATTERN)
+ end
  end
  end
  end
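The `fetch_configuration`/`read_configuration` refactor above keeps the two documented ways of configuring Phobos, now going through `YAML.safe_load` with symbols and aliases permitted. A short sketch under the usual `config/phobos.yml.example` layout (paths and settings below are illustrative, not prescribed by the diff):

```ruby
require 'phobos'

# 1. A file path: read from disk, run through ERB, then parsed with YAML.safe_load.
Phobos.configure('config/phobos.yml')

# 2. Anything responding to #to_h (e.g. a plain hash), used as-is -- handy in tests.
Phobos.configure(
  logger:    { level: :info },
  kafka:     { client_id: 'my-app', seed_brokers: ['localhost:9092'] },
  producer:  {},
  consumer:  {},
  listeners: []
)
```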
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  module Phobos
  module Actions
  class ProcessBatch
@@ -17,7 +19,7 @@ module Phobos
  end

  def execute
- instrument('listener.process_batch', @metadata) do |metadata|
+ instrument('listener.process_batch', @metadata) do |_metadata|
  @batch.messages.each do |message|
  Phobos::Actions::ProcessMessage.new(
  listener: @listener,
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  module Phobos
  module Actions
  class ProcessMessage
@@ -19,31 +21,12 @@ module Phobos
  end

  def execute
- backoff = @listener.create_exponential_backoff
  payload = force_encoding(@message.value)

  begin
  process_message(payload)
- rescue => e
- retry_count = @metadata[:retry_count]
- interval = backoff.interval_at(retry_count).round(2)
-
- error = {
- waiting_time: interval,
- exception_class: e.class.name,
- exception_message: e.message,
- backtrace: e.backtrace
- }
-
- instrument('listener.retry_handler_error', error.merge(@metadata)) do
- Phobos.logger.error do
- { message: "error processing message, waiting #{interval}s" }.merge(error).merge(@metadata)
- end
-
- snooze(interval)
- end
-
- @metadata.merge!(retry_count: retry_count + 1)
+ rescue StandardError => e
+ handle_error(e)
  retry
  end
  end
@@ -69,19 +52,14 @@ module Phobos
  def process_message(payload)
  instrument('listener.process_message', @metadata) do
  handler = @listener.handler_class.new
- preprocessed_payload = begin
- handler.before_consume(payload, @metadata)
- rescue ArgumentError => e
- Phobos.deprecate("before_consume now expects metadata as second argument, please update your consumer."\
- " This will not be backwards compatible in the future.")
- handler.before_consume(payload)
- end
- consume_block = Proc.new { handler.consume(preprocessed_payload, @metadata) }
+
+ preprocessed_payload = before_consume(handler, payload)
+ consume_block = proc { handler.consume(preprocessed_payload, @metadata) }

  if @listener.handler_class.respond_to?(:around_consume)
  # around_consume class method implementation
- Phobos.deprecate("around_consume has been moved to instance method, please update your consumer."\
- " This will not be backwards compatible in the future.")
+ Phobos.deprecate('around_consume has been moved to instance method, '\
+ 'please update your consumer. This will not be backwards compatible in the future.')
  @listener.handler_class.around_consume(preprocessed_payload, @metadata, &consume_block)
  else
  # around_consume instance method implementation
@@ -89,6 +67,51 @@ module Phobos
  end
  end
  end
+
+ def before_consume(handler, payload)
+ handler.before_consume(payload, @metadata)
+ rescue ArgumentError
+ Phobos.deprecate('before_consume now expects metadata as second argument, '\
+ 'please update your consumer. This will not be backwards compatible in the future.')
+ handler.before_consume(payload)
+ end
+
+ def handle_error(error)
+ error_hash = {
+ waiting_time: backoff_interval,
+ exception_class: error.class.name,
+ exception_message: error.message,
+ backtrace: error.backtrace
+ }
+
+ instrument('listener.retry_handler_error', error_hash.merge(@metadata)) do
+ Phobos.logger.error do
+ { message: "error processing message, waiting #{backoff_interval}s" }
+ .merge(error_hash)
+ .merge(@metadata)
+ end
+
+ snooze(backoff_interval)
+ end
+
+ increment_retry_count
+ end
+
+ def retry_count
+ @metadata[:retry_count]
+ end
+
+ def increment_retry_count
+ @metadata[:retry_count] = retry_count + 1
+ end
+
+ def backoff
+ @backoff ||= @listener.create_exponential_backoff
+ end
+
+ def backoff_interval
+ backoff.interval_at(retry_count).round(2)
+ end
  end
  end
  end
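The extracted `before_consume`/`handle_error` helpers above call handlers through the non-deprecated hooks: `before_consume` taking `(payload, metadata)` and `around_consume` as an instance method. A hedged sketch of a handler written against them; the handler name, JSON payload format, and metadata keys are illustrative assumptions, not taken from the gem:

```ruby
require 'json'
require 'phobos'

class MyHandler
  include Phobos::Handler

  # Receives metadata as the second argument, avoiding the deprecation warning.
  def before_consume(payload, _metadata)
    JSON.parse(payload)
  end

  # Instance-level around_consume; yielding runs the consume block built above.
  def around_consume(_payload, metadata)
    Phobos.logger.info { Hash(message: 'consuming', topic: metadata[:topic]) }
    yield
  end

  def consume(payload, _metadata)
    # If this raises, handle_error logs the error, snoozes for the current
    # exponential backoff interval and bumps metadata[:retry_count] before retrying.
  end
end
```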