phobos 1.4.2 → 1.5.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 1a07ed8172ccb8566701408cdcd049d3bc0802b4
-   data.tar.gz: a469b26a35c71c25b1317392365b87e938944087
+   metadata.gz: 576b1c85a5d3f389caac3e09a2c1738bb2743eda
+   data.tar.gz: 5fc77dbf88b6bfc824245cb58c9a34558562d379
  SHA512:
-   metadata.gz: 13915bc70544e1d24d2eb210e9615226272051a32b4c542ed93864b38a4315fbaaa4f8f6cd00bf88e70ce94feaed4db3ec4d4d8d16b4846b8973a4c6ce457dcc
-   data.tar.gz: 5d032d5fa7e24a03611212c08201042f9570d631551a580922ada7d7c91a120e7125e616b98ad429aa04f3cf32a76d336c286ac7265187b9bbb681666f2dadab
+   metadata.gz: 74ff72c5bd479ee5e495d8da7ccc97c2bbe998c5ef7d601ce245a1b5e0a0054c5327c1cb100798dfb0832956fe91c8611f2aa66735d3af90e06b8dc8a915d713
+   data.tar.gz: 9dbf97945d5bb802fec64af350af7749273e30b9fd000e1ce435b1bfb2780b5f2dfc99d03c3a4cb3eaa1fa8e8a53b3e16ea75431c5057381f7d0993b22cb04b6
data/.ruby-version CHANGED
@@ -1 +1 @@
- 2.3.0
+ 2.4.1
data/CHANGELOG.md CHANGED
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](http://keepachangelog.com/)
  and this project adheres to [Semantic Versioning](http://semver.org/).

+ ## 1.5.0 (2017-10-25)
+
+ - [enhancement] Add `before_consume` callback to support a single point of decoding a message.
+ - [enhancement] Add module `Phobos::Test::Helper` for testing consumers with minimal setup.
+
  ## 1.4.2 (2017-09-29)

  - [bugfix] Async publishing always delivers messages #33
data/README.md CHANGED
@@ -29,6 +29,7 @@ With Phobos by your side, all this becomes smooth sailing.
  1. [Instrumentation](#usage-instrumentation)
  1. [Plugins](#plugins)
  1. [Development](#development)
+ 1. [Test](#test)

  ## <a name="installation"></a> Installation

@@ -120,7 +121,7 @@ $ phobos start -c /var/configs/my.yml -b /opt/apps/boot.rb

  Messages from Kafka are consumed using __handlers__. You can use Phobos __executors__ or include it in your own project [as a library](#usage-as-library), but __handlers__ will always be used. To create a handler class, simply include the module `Phobos::Handler`. This module allows Phobos to manage the life cycle of your handler.

- A handler must implement the method `#consume(payload, metadata)`.
+ A handler is required to implement the method `#consume(payload, metadata)`.

  Instances of your handler will be created for every message, so keep a constructor without arguments. If `consume` raises an exception, Phobos will retry the message indefinitely, applying the back off configuration presented in the configuration file. The `metadata` hash will contain a key called `retry_count` with the current number of retries for this message. To skip a message, simply return from `#consume`.

@@ -162,6 +163,19 @@ class MyHandler
  end
  ```

+ Finally, it is also possible to preprocess the message payload before consuming it, using the `before_consume` hook, which is invoked before `.around_consume` and `#consume`. The result of this operation is assigned to the payload, so it is important to return the modified payload. This is very useful when, for example, you want a single point of decoding Avro messages and want the payload delivered as a hash instead of a binary string.
+
+ ```ruby
+ class MyHandler
+   include Phobos::Handler
+
+   def before_consume(payload)
+     # optionally preprocess the payload here; whatever is returned
+     # becomes the payload passed to around_consume and consume
+     payload
+   end
+ end
+ ```
+
  Take a look at the examples folder for some ideas.

  The handler life cycle can be illustrated as:
@@ -170,7 +184,7 @@ The handler life cycle can be illustrated as:

  or optionally,

- `.start` -> `.around_consume` [ `#consume` ] -> `.stop`
+ `.start` -> `#before_consume` -> `.around_consume` [ `#consume` ] -> `.stop`

  ### <a name="usage-producing-messages-to-kafka"></a> Producing messages to Kafka

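To make the hook concrete, here is a sketch of single-point Avro decoding (illustrative only; the `avro_turf` gem and the registry URL are assumptions, not part of this release):

```ruby
# Hypothetical handler that decodes binary Avro once, so #consume receives a Hash.
require 'avro_turf/messaging'

class EventsHandler
  include Phobos::Handler

  # Assumed schema-registry location; adjust for your environment
  AVRO = AvroTurf::Messaging.new(registry_url: 'http://localhost:8081')

  def before_consume(payload)
    AVRO.decode(payload) # binary Avro in, Hash out
  end

  def consume(payload, metadata)
    Phobos.logger.info { { message: 'event consumed', event: payload }.merge(metadata) }
  end
end
```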
@@ -303,7 +317,7 @@ __producer__ provides configurations for all producers created over the applicat

  __consumer__ provides configurations for all consumer groups created over the application. All [options supported by `ruby-kafka`][ruby-kafka-consumer] can be provided.

- __backoff__ Phobos provides automatic retries for your handlers, if an exception is raised the listener will retry following the back off configured here
+ __backoff__ Phobos provides automatic retries for your handlers; if an exception is raised, the listener will retry following the back off configured here. Backoff can also be configured per listener.

  __listeners__ is the list of listeners configured; each listener represents a consumer group.

@@ -403,20 +417,37 @@ List of gems that enhance Phobos:
  ## <a name="development"></a> Development

  After checking out the repo:
- * make sure docker is installed and running
+ * make sure `docker` is installed and running (for Windows and Mac this also includes `docker-compose`)
+ * Linux: make sure `docker-compose` is installed and running
  * run `bin/setup` to install dependencies
- * run `sh utils/start-all.sh` to start the required kafka containers in the background
- * run `rspec` to run the tests
+ * run `docker-compose up` to start the required Kafka containers in one terminal window
+ * run `rspec` to run the tests in another window

  You can also run `bin/console` for an interactive prompt that will allow you to experiment.

  To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).

- The `utils` folder contain some shell scripts to help with the local Kafka cluster. It uses docker to start Kafka and zookeeper.
+ ## <a name="test"></a> Test

- ```sh
- sh utils/start-all.sh
- sh utils/stop-all.sh
+ Phobos exports a spec helper that can help you test your consumer. The Phobos lifecycle is conveniently activated for you, with minimal setup required.
+
+ * `process_message(handler:, payload:, metadata:, force_encoding: nil)` - invokes your handler with the given payload and metadata, using a dummy listener (`force_encoding` is optional)
+
+ ```ruby
+ require 'spec_helper'
+
+ describe MyConsumer do
+   let(:payload) { 'foo' }
+   let(:metadata) { 'foo' }
+
+   it 'consumes my message' do
+     expect(described_class).to receive(:around_consume).with(payload, metadata).once.and_call_original
+     expect_any_instance_of(described_class).to receive(:before_consume).with(payload).once.and_call_original
+     expect_any_instance_of(described_class).to receive(:consume).with(payload, metadata).once.and_call_original
+
+     process_message(handler: described_class, payload: payload, metadata: metadata)
+   end
+ end
  ```

  ## Contributing
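A minimal wiring of the helper into a host application's spec suite might look like this (a sketch assuming RSpec and an existing `config/phobos.yml`; adjust paths to your project):

```ruby
# spec/spec_helper.rb -- illustrative sketch, assuming RSpec
require 'phobos'
require 'phobos/test'

RSpec.configure do |config|
  # Makes process_message(...) available inside examples
  config.include Phobos::Test::Helper
  # The dummy listener reads Phobos configuration, so load it first
  config.before(:each) { Phobos.configure('config/phobos.yml') }
end
```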
data/bin/console CHANGED
@@ -1,17 +1,17 @@
  #!/usr/bin/env ruby

- require "bundler/setup"
- require "phobos"
+ require 'irb'
+ require 'bundler/setup'
+ require 'phobos'

  # You can add fixtures and/or initialization code here to make experimenting
  # with your gem easier. You can also use a different console, if you like.

  # (If you use this, don't forget to add pry to your Gemfile!)
- # require "pry"
+ # require 'pry'
  # Pry.start

  config_path = ENV['CONFIG_PATH'] || (File.exist?('config/phobos.yml') ? 'config/phobos.yml' : 'config/phobos.yml.example')
  Phobos.configure(config_path)

- require "irb"
  IRB.start
data/circle.yml CHANGED
@@ -8,20 +8,20 @@ machine:
      CI: true
      DEFAULT_TIMEOUT: 20
    ruby:
-     version: 2.3.1
+     version: 2.4.1

  dependencies:
    pre:
      - docker -v
-     - docker pull ches/kafka:0.9.0.1
-     - docker pull jplock/zookeeper:3.4.6
-     - gem install bundler -v 1.9.5
+     - docker pull ches/kafka:0.10.2.1
+     - docker pull jplock/zookeeper:3.4.10
+     - gem install bundler -v 1.13.2
      - bundle install

  test:
    override:
-     - docker run -d -p 2003:2181 --name zookeeper jplock/zookeeper:3.4.6; sleep 5
-     - docker run -d -p 9092:9092 --name kafka -e KAFKA_BROKER_ID=0 -e KAFKA_ADVERTISED_HOST_NAME=localhost -e KAFKA_ADVERTISED_PORT=9092 -e ZOOKEEPER_CONNECTION_STRING=zookeeper:2181 --link zookeeper:zookeeper ches/kafka:0.9.0.1; sleep 5
+     - docker run -d -p 2003:2181 --name zookeeper jplock/zookeeper:3.4.10; sleep 5
+     - docker run -d -p 9092:9092 --name kafka -e KAFKA_BROKER_ID=0 -e KAFKA_ADVERTISED_HOST_NAME=localhost -e KAFKA_ADVERTISED_PORT=9092 -e ZOOKEEPER_CONNECTION_STRING=zookeeper:2181 --link zookeeper:zookeeper ches/kafka:0.10.2.1; sleep 5
      - bundle exec rspec -r rspec_junit_formatter --format RspecJunitFormatter -o $CIRCLE_TEST_REPORTS/rspec/unit.xml
    post:
      - cp log/*.log $CIRCLE_ARTIFACTS/ || true
data/config/phobos.yml.example CHANGED
@@ -95,3 +95,7 @@ listeners:
      # Apply this encoding to the message payload, if blank it uses the original encoding. This property accepts values
      # defined by the ruby Encoding class (https://ruby-doc.org/core-2.3.0/Encoding.html). Ex: UTF_8, ASCII_8BIT, etc
      force_encoding:
+     # Use this if a custom backoff is required for this listener
+     backoff:
+       min_ms: 500
+       max_ms: 10000
data/docker-compose.yml ADDED
@@ -0,0 +1,18 @@
+ version: '2'
+ services:
+   zookeeper:
+     image: jplock/zookeeper:3.4.10
+     ports:
+       - 2181:2181
+
+   kafka:
+     depends_on:
+       - zookeeper
+     image: ches/kafka:0.10.2.1
+     ports:
+       - 9092:9092
+     environment:
+       - KAFKA_BROKER_ID=0
+       - KAFKA_ADVERTISED_HOST_NAME=localhost
+       - KAFKA_ADVERTISED_PORT=9092
+       - ZOOKEEPER_CONNECTION_STRING=zookeeper:2181
data/lib/phobos.rb CHANGED
@@ -9,12 +9,15 @@ require 'concurrent'
  require 'exponential_backoff'
  require 'kafka'
  require 'logging'
+ require 'erb'

  require 'phobos/deep_struct'
  require 'phobos/version'
  require 'phobos/instrumentation'
  require 'phobos/errors'
  require 'phobos/listener'
+ require 'phobos/actions/process_batch'
+ require 'phobos/actions/process_message'
  require 'phobos/producer'
  require 'phobos/handler'
  require 'phobos/echo_handler'
@@ -40,9 +43,10 @@ module Phobos
        Kafka.new(config.kafka.to_hash.merge(logger: @ruby_kafka_logger))
      end

-     def create_exponential_backoff
-       min = Phobos.config.backoff.min_ms / 1000.0
-       max = Phobos.config.backoff.max_ms / 1000.0
+     def create_exponential_backoff(backoff_config = nil)
+       backoff_config ||= Phobos.config.backoff.to_hash
+       min = backoff_config[:min_ms] / 1000.0
+       max = backoff_config[:max_ms] / 1000.0
        ExponentialBackoff.new(min, max).tap { |backoff| backoff.randomize_factor = rand }
      end

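For reference, a sketch of how the new optional argument behaves (the hash keys mirror the per-listener YAML shown earlier in this diff):

```ruby
# With no argument, the global config is used, exactly as before:
Phobos.create_exponential_backoff

# With a per-listener hash, min/max come from that hash instead:
backoff = Phobos.create_exponential_backoff(min_ms: 500, max_ms: 10_000)
backoff.interval_at(0) # => first retry interval in seconds, randomized per instance
```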
data/lib/phobos/actions/process_batch.rb ADDED
@@ -0,0 +1,61 @@
+ module Phobos
+   module Actions
+     class ProcessBatch
+       include Phobos::Instrumentation
+
+       def initialize(listener:, batch:, listener_metadata:)
+         @listener = listener
+         @batch = batch
+         @listener_metadata = listener_metadata
+       end
+
+       def execute
+         @batch.messages.each do |message|
+           backoff = @listener.create_exponential_backoff
+           metadata = @listener_metadata.merge(
+             key: message.key,
+             partition: message.partition,
+             offset: message.offset,
+             retry_count: 0
+           )
+
+           begin
+             instrument('listener.process_message', metadata) do |metadata|
+               time_elapsed = measure do
+                 Phobos::Actions::ProcessMessage.new(
+                   listener: @listener,
+                   message: message,
+                   metadata: metadata,
+                   encoding: @listener.encoding
+                 ).execute
+               end
+               metadata.merge!(time_elapsed: time_elapsed)
+             end
+           rescue => e
+             retry_count = metadata[:retry_count]
+             interval = backoff.interval_at(retry_count).round(2)
+
+             error = {
+               waiting_time: interval,
+               exception_class: e.class.name,
+               exception_message: e.message,
+               backtrace: e.backtrace
+             }
+
+             instrument('listener.retry_handler_error', error.merge(metadata)) do
+               Phobos.logger.error do
+                 { message: "error processing message, waiting #{interval}s" }.merge(error).merge(metadata)
+               end
+
+               sleep interval
+               metadata.merge!(retry_count: retry_count + 1)
+             end
+
+             raise Phobos::AbortError if @listener.should_stop?
+             retry
+           end
+         end
+       end
+     end
+   end
+ end
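Because the retry loop now lives in `ProcessBatch`, its retries can be observed through the existing subscribe API (a sketch using the event name and payload keys from the code above):

```ruby
# Log every handler retry together with its wait time; :retry_count and
# :waiting_time are merged into the event payload by ProcessBatch#execute.
Phobos::Instrumentation.subscribe('listener.retry_handler_error') do |event|
  puts "retry ##{event.payload[:retry_count]} in #{event.payload[:waiting_time]}s"
end
```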
data/lib/phobos/actions/process_message.rb ADDED
@@ -0,0 +1,26 @@
+ module Phobos
+   module Actions
+     class ProcessMessage
+       def initialize(listener:, message:, metadata:, encoding:)
+         @listener = listener
+         @message = message
+         @metadata = metadata
+         @encoding = encoding
+       end
+
+       def execute
+         payload = force_encoding(@message.value)
+         decoded_payload = @listener.handler_class.new.before_consume(payload)
+         @listener.handler_class.around_consume(decoded_payload, @metadata) do
+           @listener.handler_class.new.consume(decoded_payload, @metadata)
+         end
+       end
+
+       private
+
+       def force_encoding(value)
+         @encoding ? value.force_encoding(@encoding) : value
+       end
+     end
+   end
+ end
data/lib/phobos/executor.rb CHANGED
@@ -1,7 +1,17 @@
  module Phobos
    class Executor
      include Phobos::Instrumentation
-     LISTENER_OPTS = %i(handler group_id topic min_bytes max_wait_time force_encoding start_from_beginning max_bytes_per_partition).freeze
+     LISTENER_OPTS = %i(
+       handler
+       group_id
+       topic
+       min_bytes
+       max_wait_time
+       force_encoding
+       start_from_beginning
+       max_bytes_per_partition
+       backoff
+     ).freeze

      def initialize
        @threads = Concurrent::Array.new
@@ -49,7 +59,7 @@ module Phobos

    def run_listener(listener)
      retry_count = 0
-     backoff = Phobos.create_exponential_backoff
+     backoff = listener.create_exponential_backoff

      begin
        listener.start
data/lib/phobos/handler.rb CHANGED
@@ -4,6 +4,10 @@ module Phobos
        base.extend(ClassMethods)
      end

+     def before_consume(payload)
+       payload
+     end
+
      def consume(payload, metadata)
        raise NotImplementedError
      end
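Because this default simply returns the payload, handlers written against 1.4.x keep working unchanged; a sketch:

```ruby
# A pre-1.5.0 handler still works: the inherited before_consume
# hands the payload through untouched.
class LegacyHandler
  include Phobos::Handler

  def consume(payload, metadata)
    # payload here is exactly the (optionally re-encoded) message value
  end
end
```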
data/lib/phobos/instrumentation.rb CHANGED
@@ -2,12 +2,6 @@ module Phobos
    module Instrumentation
      NAMESPACE = 'phobos'

-     def instrument(event, extra = {})
-       ActiveSupport::Notifications.instrument("#{NAMESPACE}.#{event}", extra) do |extra|
-         yield(extra) if block_given?
-       end
-     end
-
      def self.subscribe(event)
        ActiveSupport::Notifications.subscribe("#{NAMESPACE}.#{event}") do |*args|
          yield ActiveSupport::Notifications::Event.new(*args) if block_given?
@@ -17,5 +11,17 @@ module Phobos
      def self.unsubscribe(subscriber)
        ActiveSupport::Notifications.unsubscribe(subscriber)
      end
+
+     def instrument(event, extra = {})
+       ActiveSupport::Notifications.instrument("#{NAMESPACE}.#{event}", extra) do |extra|
+         yield(extra) if block_given?
+       end
+     end
+
+     def measure
+       start = Time.now.utc
+       yield if block_given?
+       (Time.now.utc - start).round(3)
+     end
    end
  end
data/lib/phobos/listener.rb CHANGED
@@ -6,12 +6,17 @@ module Phobos
      DEFAULT_MAX_BYTES_PER_PARTITION = 524288 # 512 KB

      attr_reader :group_id, :topic, :id
+     attr_reader :handler_class, :encoding

-     def initialize(handler:, group_id:, topic:, min_bytes: nil, max_wait_time: nil, force_encoding: nil, start_from_beginning: true, max_bytes_per_partition: DEFAULT_MAX_BYTES_PER_PARTITION)
+     def initialize(handler:, group_id:, topic:, min_bytes: nil,
+                    max_wait_time: nil, force_encoding: nil,
+                    start_from_beginning: true, backoff: nil,
+                    max_bytes_per_partition: DEFAULT_MAX_BYTES_PER_PARTITION)
        @id = SecureRandom.hex[0...6]
        @handler_class = handler
        @group_id = group_id
        @topic = topic
+       @backoff = backoff
        @subscribe_opts = {
          start_from_beginning: start_from_beginning,
          max_bytes_per_partition: max_bytes_per_partition
@@ -47,12 +52,18 @@ module Phobos
        }.merge(listener_metadata)

        instrument('listener.process_batch', batch_metadata) do |batch_metadata|
-         time_elapsed = measure { process_batch(batch) }
+         time_elapsed = measure do
+           Phobos::Actions::ProcessBatch.new(
+             listener: self,
+             batch: batch,
+             listener_metadata: listener_metadata
+           ).execute
+         end
          batch_metadata.merge!(time_elapsed: time_elapsed)
          Phobos.logger.info { Hash(message: 'Committed offset').merge(batch_metadata) }
        end

-       return if @signal_to_stop
+       return if should_stop?
      end

      # Abort is an exception to prevent the consumer from committing the offset.
@@ -62,9 +73,7 @@ module Phobos
      #
      rescue Kafka::ProcessingError, Phobos::AbortError
        instrument('listener.retry_aborted', listener_metadata) do
-         Phobos.logger.info do
-           {message: 'Retry loop aborted, listener is shutting down'}.merge(listener_metadata)
-         end
+         Phobos.logger.info({ message: 'Retry loop aborted, listener is shutting down' }.merge(listener_metadata))
        end
      end

@@ -80,14 +89,14 @@ module Phobos
        end

        @kafka_client.close
-       if @signal_to_stop
+       if should_stop?
          Phobos.logger.info { Hash(message: 'Listener stopped').merge(listener_metadata) }
        end
      end
    end

    def stop
-     return if @signal_to_stop
+     return if should_stop?
      instrument('listener.stopping', listener_metadata) do
        Phobos.logger.info { Hash(message: 'Listener stopping').merge(listener_metadata) }
        @consumer&.stop
@@ -95,73 +104,23 @@ module Phobos
      end
    end

-   private
-
-   def listener_metadata
-     { listener_id: id, group_id: group_id, topic: topic }
+   def create_exponential_backoff
+     Phobos.create_exponential_backoff(@backoff)
    end

-   def process_batch(batch)
-     batch.messages.each do |message|
-       backoff = Phobos.create_exponential_backoff
-       metadata = listener_metadata.merge(
-         key: message.key,
-         partition: message.partition,
-         offset: message.offset,
-         retry_count: 0
-       )
-
-       begin
-         instrument('listener.process_message', metadata) do |metadata|
-           time_elapsed = measure { process_message(message, metadata) }
-           metadata.merge!(time_elapsed: time_elapsed)
-         end
-       rescue => e
-         retry_count = metadata[:retry_count]
-         interval = backoff.interval_at(retry_count).round(2)
-
-         error = {
-           waiting_time: interval,
-           exception_class: e.class.name,
-           exception_message: e.message,
-           backtrace: e.backtrace
-         }
-
-         instrument('listener.retry_handler_error', error.merge(metadata)) do
-           Phobos.logger.error do
-             {message: "error processing message, waiting #{interval}s"}.merge(error).merge(metadata)
-           end
-
-           sleep interval
-           metadata.merge!(retry_count: retry_count + 1)
-         end
-
-         raise Phobos::AbortError if @signal_to_stop
-         retry
-       end
-     end
+   def should_stop?
+     @signal_to_stop == true
    end

-   def process_message(message, metadata)
-     payload = force_encoding(message.value)
-     @handler_class.around_consume(payload, metadata) do
-       @handler_class.new.consume(payload, metadata)
-     end
+   private
+
+   def listener_metadata
+     { listener_id: id, group_id: group_id, topic: topic }
    end

    def create_kafka_consumer
      configs = Phobos.config.consumer_hash.select { |k| KAFKA_CONSUMER_OPTS.include?(k) }
-     @kafka_client.consumer({group_id: group_id}.merge(configs))
-   end
-
-   def force_encoding(value)
-     @encoding ? value.force_encoding(@encoding) : value
-   end
-
-   def measure
-     start = Time.now.utc
-     yield if block_given?
-     (Time.now.utc - start).round(3)
+     @kafka_client.consumer({ group_id: group_id }.merge(configs))
    end

    def compact(hash)
data/lib/phobos/test.rb ADDED
@@ -0,0 +1 @@
+ require 'phobos/test/helper'
data/lib/phobos/test/helper.rb ADDED
@@ -0,0 +1,23 @@
+ module Phobos
+   module Test
+     module Helper
+       KafkaMessage = Struct.new(:value)
+
+       def process_message(handler:, payload:, metadata:, force_encoding: nil)
+         listener = Phobos::Listener.new(
+           handler: handler,
+           group_id: 'test-group',
+           topic: 'test-topic',
+           force_encoding: force_encoding
+         )
+
+         Phobos::Actions::ProcessMessage.new(
+           listener: listener,
+           message: KafkaMessage.new(payload),
+           metadata: metadata,
+           encoding: listener.encoding
+         ).execute
+       end
+     end
+   end
+ end
data/lib/phobos/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Phobos
-   VERSION = '1.4.2'
+   VERSION = '1.5.0'
  end
data/phobos.gemspec CHANGED
@@ -12,7 +12,8 @@ Gem::Specification.new do |spec|
    'Sergey Evstifeev',
    'Thiago R. Colucci',
    'Martin Svalin',
-   'Francisco Juan'
+   'Francisco Juan',
+   'Tommy Gustafsson'
  ]
  spec.email = [
    'ornelas.tulio@gmail.com',
@@ -20,7 +21,8 @@ Gem::Specification.new do |spec|
    'sergey.evstifeev@gmail.com',
    'ticolucci@gmail.com',
    'martin@lite.nu',
-   'francisco.juan@gmail.com'
+   'francisco.juan@gmail.com',
+   'tommydgustafsson@gmail.com'
  ]

  spec.summary = %q{Simplifying Kafka for ruby apps}
data/utils/create-topic.sh CHANGED
@@ -1,17 +1,13 @@
  #!/bin/bash
  set -eu

- UTILS_DIR=$(dirname $0)
- source ${UTILS_DIR}/env.sh
-
- ZK_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' zookeeper)
  TOPIC=${TOPIC:='test'}
  PARTITIONS=${PARTITIONS:=2}

  echo "creating topic ${TOPIC}, partitions ${PARTITIONS}"
- docker run --rm $KAFKA_IMAGE:$KAFKA_IMAGE_VERSION kafka-topics.sh \
-   --create \
+ docker-compose run --rm -e PARTITIONS=$PARTITIONS -e TOPIC=$TOPIC kafka kafka-topics.sh --create \
    --topic $TOPIC \
    --replication-factor 1 \
    --partitions $PARTITIONS \
-   --zookeeper $ZK_IP:2181
+   --zookeeper zookeeper:2181 \
+   2>/dev/null
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: phobos
  version: !ruby/object:Gem::Version
-   version: 1.4.2
+   version: 1.5.0
  platform: ruby
  authors:
  - Túlio Ornelas
@@ -10,10 +10,11 @@ authors:
  - Thiago R. Colucci
  - Martin Svalin
  - Francisco Juan
+ - Tommy Gustafsson
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2017-09-29 00:00:00.000000000 Z
+ date: 2017-10-26 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -235,6 +236,7 @@ email:
  - ticolucci@gmail.com
  - martin@lite.nu
  - francisco.juan@gmail.com
+ - tommydgustafsson@gmail.com
  executables:
  - phobos
  extensions: []
@@ -254,10 +256,13 @@ files:
  - bin/setup
  - circle.yml
  - config/phobos.yml.example
+ - docker-compose.yml
  - examples/handler_saving_events_database.rb
  - examples/handler_using_async_producer.rb
  - examples/publishing_messages_without_consumer.rb
  - lib/phobos.rb
+ - lib/phobos/actions/process_batch.rb
+ - lib/phobos/actions/process_message.rb
  - lib/phobos/cli.rb
  - lib/phobos/cli/runner.rb
  - lib/phobos/cli/start.rb
@@ -269,15 +274,12 @@ files:
  - lib/phobos/instrumentation.rb
  - lib/phobos/listener.rb
  - lib/phobos/producer.rb
+ - lib/phobos/test.rb
+ - lib/phobos/test/helper.rb
  - lib/phobos/version.rb
  - logo.png
  - phobos.gemspec
  - utils/create-topic.sh
- - utils/env.sh
- - utils/kafka.sh
- - utils/start-all.sh
- - utils/stop-all.sh
- - utils/zk.sh
  homepage: https://github.com/klarna/phobos
  licenses:
  - Apache License Version 2.0
data/utils/env.sh DELETED
@@ -1,11 +0,0 @@
- #!/bin/bash
- set -eu
-
- DOCKER_HOSTNAME='localhost'
- FORCE_PULL=${FORCE_PULL:='false'}
- APPS=(zk kafka)
-
- ZK_IMAGE=jplock/zookeeper
- ZK_IMAGE_VERSION=3.4.6
- KAFKA_IMAGE=ches/kafka
- KAFKA_IMAGE_VERSION=0.9.0.1
data/utils/kafka.sh DELETED
@@ -1,43 +0,0 @@
- #!/bin/bash
- set -eu
-
- source $(dirname $0)/env.sh
-
- start() {
-   [ $FORCE_PULL = 'true' ] && docker pull $KAFKA_IMAGE:$KAFKA_IMAGE_VERSION
-   ZK_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' zookeeper)
-
-   docker run \
-     -d \
-     -p 9092:9092 \
-     --name kafka \
-     -e KAFKA_BROKER_ID=0 \
-     -e KAFKA_ADVERTISED_HOST_NAME=localhost \
-     -e KAFKA_ADVERTISED_PORT=9092 \
-     -e ZOOKEEPER_CONNECTION_STRING=zookeeper:2181 \
-     --link zookeeper:zookeeper \
-     $KAFKA_IMAGE:$KAFKA_IMAGE_VERSION
-
-   # The following statement waits until kafka is up and running
-   docker exec kafka bash -c "JMX_PORT=9998 ./bin/kafka-topics.sh --zookeeper $ZK_IP:2181 --list 2> /dev/null"
-   [ $? != '0' ] && echo "[kafka] failed to start"
- }
-
- stop() {
-   docker stop kafka > /dev/null 2>&1 || true
-   docker rm kafka > /dev/null 2>&1 || true
- }
-
- case "$1" in
-   start)
-     echo "[kafka] starting $KAFKA_IMAGE:$KAFKA_IMAGE_VERSION"
-     stop
-     start
-     echo "[kafka] started"
-     ;;
-   stop)
-     printf "[kafka] stopping... "
-     stop
-     echo "Done"
-     ;;
- esac
data/utils/start-all.sh DELETED
@@ -1,9 +0,0 @@
- #!/bin/bash
- set -eu
-
- UTILS_DIR=$(dirname $0)
- source ${UTILS_DIR}/env.sh
-
- for (( i=0 ; i<${#APPS[@]} ; i++ )) ; do
-   ${UTILS_DIR}/${APPS[i]}.sh start
- done
data/utils/stop-all.sh DELETED
@@ -1,9 +0,0 @@
- #!/bin/bash
- set -eu
-
- UTILS_DIR=$(dirname $0)
- source ${UTILS_DIR}/env.sh
-
- for (( i=${#APPS[@]}-1 ; i>=0 ; i-- )) ; do
-   ${UTILS_DIR}/${APPS[i]}.sh stop
- done
data/utils/zk.sh DELETED
@@ -1,36 +0,0 @@
- #!/bin/bash
- set -eu
-
- UTILS_DIR=$(dirname $0)
- source ${UTILS_DIR}/env.sh
-
- start() {
-   [ $FORCE_PULL = 'true' ] && docker pull $ZK_IMAGE:$ZK_IMAGE_VERSION
-
-   docker run \
-     -d \
-     -p 2181:2181 \
-     --name zookeeper \
-     $ZK_IMAGE:$ZK_IMAGE_VERSION
-
-   sleep 3
- }
-
- stop() {
-   docker stop zookeeper > /dev/null 2>&1 || true
-   docker rm zookeeper > /dev/null 2>&1 || true
- }
-
- case "$1" in
-   start)
-     echo "[zookeeper] starting $ZK_IMAGE:$ZK_IMAGE_VERSION"
-     stop
-     start
-     echo "[zookeeper] started"
-     ;;
-   stop)
-     printf "[zookeeper] stopping... "
-     stop
-     echo "Done"
-     ;;
- esac