glass_octopus 1.1.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 4fbc955a7cddf1c08e93db127bd2ccf5a3fb025a2a03e95bfc6ab7de54e50914
4
- data.tar.gz: 85085902b9c8c4c38e2c8813f68934f9a54ccab436c21f7519de633eff4eeb83
3
+ metadata.gz: 4050c7f59a837ce7113f5e4d89488f0f5db6a3ed409ec77b766a5944646ea1bc
4
+ data.tar.gz: 8b3c82d22898907786d0febe16fc275fd839214f5656168a1917e854029fef5a
5
5
  SHA512:
6
- metadata.gz: fe14d5ee99f0fb900b4d4b7e9454409d462c399086e30a61dfab61b7ee4a0d69ec4ef06a2d8c88d3c1d13f72e1ff65f17d1334bbe4cfb2e92448af0135b5dfc7
7
- data.tar.gz: 1e7e850864f9455a64524c09d48fa9d552e9ab6a4ae91f6244ef343a6dda1d76d5a414b7daee68a0e127c0191391253b4000004f11235a653188a618a8cb032e
6
+ metadata.gz: ced4aa1361e029523d6570d982e507b7a1a674946195f975083e85962226c3fe8c0331ecef76796267db8e761bc9b439c0c3a27b004a78550373c8dd2ab9cc3d
7
+ data.tar.gz: 6f15379c6c7570d2ffe79ea0d3a11347c4d57113d96c897fb3f69df23c57a7dd3b5866887f10989cc356a40f12924023e39f3b3b7d28c3abc2c60445747ebd6b
data/Gemfile CHANGED
@@ -1,7 +1,3 @@
1
1
  source "https://rubygems.org"
2
2
 
3
3
  gemspec
4
-
5
- gem "poseidon", github: "bpot/poseidon"
6
- gem "poseidon_cluster", github: "bsm/poseidon_cluster"
7
- gem "ruby-kafka"
data/README.md CHANGED
@@ -24,19 +24,13 @@ This gem requires Ruby 2.1 or higher.
24
24
 
25
25
  Pick your adapter:
26
26
 
27
- * For Kafka 0.8.x use poseidon and poseidon-cluster
28
-
29
- # in your Gemfile
30
- gem "glass_octopus"
31
- gem "poseidon", github: "bpot/poseidon"
32
- gem "poseidon_cluster", github: "bsm/poseidon_cluster"
33
-
34
- * For Kafka 0.9+ use ruby-kafka
27
+ * ruby-kafka
35
28
 
36
29
  # in your Gemfile
37
30
  gem "glass_octopus"
38
31
  gem "ruby-kafka"
39
32
 
33
+ Currently only `ruby-kafka` is supported out of the box. If you need to use another adapter, you can pass a class to `config.adapter`. See the documentation for `GlassOctopus::Configuration#adapter`.
40
34
 
41
35
  ```ruby
42
36
  # in app.rb
@@ -55,20 +49,24 @@ GlassOctopus.run(app) do |config|
55
49
  config.adapter :ruby_kafka do |kafka|
56
50
  kafka.broker_list = %w[localhost:9092]
57
51
  kafka.topic = "mytopic"
58
- kafka.group = "mygroup"
59
- kafka.client = { logger: config.logger }
52
+ kafka.group_id = "mygroup"
53
+ kafka.client_id = "myapp"
60
54
  end
61
55
  end
62
56
  ```
63
57
 
64
58
  Run it with `bundle exec ruby app.rb`
65
59
 
60
+ For more examples, look in the [example](example) directory.
61
+
62
+ For the API documentation, please see the [documentation site][rubydoc].
63
+
66
64
  ### Handling Avro messages with Schema Registry
67
65
 
68
- Glass Octopus can be used with Avro messages validated against a schema. For this, you need a running [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html) service.
66
+ Glass Octopus can be used with Avro messages validated against a schema. For this, you need a running [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html) service.
69
67
  You also need to have the `avro_turf` gem installed.
70
68
 
71
- ```
69
+ ```ruby
72
70
  # in your Gemfile
73
71
  gem "avro_turf"
74
72
  ```
@@ -79,21 +77,97 @@ Add the `AvroParser` middleware with the Schema Registry URL to your app.
79
77
  # in app.rb
80
78
  app = GlassOctopus.build do
81
79
  use GlassOctopus::Middleware::AvroParser, "http://schema_registry_url:8081"
82
- ...
80
+ # ...
83
81
  end
84
82
  ```
85
83
 
86
- For more examples look into the [example](example) directory.
84
+ ### Supported middleware
87
85
 
88
- For the API documentation please see the [documentation site][rubydoc]
86
+ * ActiveRecord
87
+
88
+ Return any active connection to the pool after the message has been processed.
89
+
90
+ ```ruby
91
+ app = GlassOctopus.build do
92
+ use GlassOctopus::Middleware::ActiveRecord
93
+ # ...
94
+ end
95
+ ```
96
+
97
+ * New Relic
98
+
99
+ Record message processing as background transactions. Uncaught exceptions are also captured.
100
+
101
+ ```ruby
102
+ app = GlassOctopus.build do
103
+ use GlassOctopus::Middleware::NewRelic, MyConsumer
104
+ # ...
105
+ end
106
+ ```
107
+
108
+ * Sentry
109
+
110
+ Report uncaught exceptions to Sentry.
111
+
112
+ ```ruby
113
+ app = GlassOctopus.build do
114
+ use GlassOctopus::Middleware::Sentry
115
+ # ...
116
+ end
117
+ ```
118
+
119
+ * Common logger
120
+
121
+ Log each processed message along with its processing time.
122
+
123
+ ```ruby
124
+ app = GlassOctopus.build do
125
+ use GlassOctopus::Middleware::CommonLogger
126
+ # ...
127
+ end
128
+ ```
129
+
130
+ * Parse messages as JSON
131
+
132
+ Parse the message value as JSON. The resulting hash is placed in `context.params`.
133
+
134
+ ```ruby
135
+ app = GlassOctopus.build do
136
+ use GlassOctopus::Middleware::JsonParser
137
+ # ...
138
+ run MyConsumer
139
+ end
140
+
141
+ class MyConsumer
142
+ def call(ctx)
143
+ puts ctx.params # message value parsed as JSON
144
+ puts ctx.message # Raw unaltered message
145
+ end
146
+ end
147
+ ```
148
+
149
+ Optionally you can specify a class to be instantiated with the message hash.
150
+
151
+ ```ruby
152
+ app = GlassOctopus.build do
153
+ use GlassOctopus::Middleware::JsonParser, class: MyMessage
154
+ run MyConsumer
155
+ end
156
+
157
+ class MyMessage
158
+ def initialize(attributes)
159
+ attributes.each { |k,v| public_send("#{k}=", v) }
160
+ end
161
+ end
162
+ ```
89
163
 
90
164
  ## Development
91
165
 
92
- Install docker and docker-compose to run Kafka and zookeeper for tests.
166
+ Install docker and docker-compose to run Kafka and Zookeeper for tests.
167
+
168
+ Start Kafka and Zookeeper:
93
169
 
94
- 1. Set the `ADVERTISED_HOST` environment variable
95
- 2. Run `rake docker:up`
96
- 3. Now you can run the tests.
170
+ $ docker-compose up
97
171
 
98
172
  Run all tests including integration tests:
99
173
 
@@ -103,7 +177,7 @@ Running tests without integration tests:
103
177
 
104
178
  $ rake # or rake test
105
179
 
106
- When you are done run `rake docker:down` to clean up docker containers.
180
+ When you are done, run `docker-compose down` to clean up the Docker containers.
107
181
 
108
182
  ## License
109
183
 
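Taken together, the README changes above amount to a configuration migration from 1.x to 2.0.0: the poseidon adapter, `config.executor`, and `config.shutdown_timeout` are gone, `group` becomes `group_id`, and the per-adapter `client` options hash gives way to `client_id` plus the shared `config.logger`. A sketch of a 1.x setup rewritten for 2.0.0, using placeholder broker, topic, and group names:

```ruby
require "logger"
require "glass_octopus"

app = GlassOctopus.build do
  use GlassOctopus::Middleware::CommonLogger
  run Proc.new { |ctx| puts "Got message: #{ctx.message.key} => #{ctx.message.value}" }
end

GlassOctopus.run(app) do |config|
  config.logger = Logger.new(STDOUT)      # the ruby_kafka adapter reuses this logger

  # 1.x: config.adapter(:poseidon) { |c| c.group = "mygroup"; c.zookeeper_list = ... }
  config.adapter :ruby_kafka do |kafka|
    kafka.broker_list = %w[localhost:9092]
    kafka.topic       = "mytopic"
    kafka.group_id    = "mygroup"         # renamed from `group`
    kafka.client_id   = "myapp"           # replaces the `client` options hash
  end

  # config.executor and config.shutdown_timeout no longer exist in 2.0.0
end
```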
data/Rakefile CHANGED
@@ -17,61 +17,3 @@ namespace :test do
17
17
  Rake::Task[:test].invoke
18
18
  end
19
19
  end
20
-
21
- namespace :docker do
22
- require "socket"
23
-
24
- desc "Start docker containers"
25
- task :up do
26
- start
27
- wait(9093)
28
- docker_compose("run --rm kafka_0_10 kafka-topics.sh --zookeeper zookeeper --create --topic test_topic --replication-factor 1 --partitions 1")
29
- wait(9092)
30
- docker_compose("run --rm kafka_0_8 bash -c '$KAFKA_HOME/bin/kafka-topics.sh --zookeeper kafka_0_8 --create --topic test_topic --replication-factor 1 --partitions 1'")
31
- end
32
-
33
- desc "Stop and remove docker containers"
34
- task :down do
35
- docker_compose("down")
36
- end
37
-
38
- desc "Reset docker containers"
39
- task :reset => [:down, :up]
40
-
41
- def start
42
- docker_compose("up -d")
43
- end
44
-
45
- def stop
46
- docker_compose("down")
47
- end
48
-
49
- def docker_compose(args)
50
- env = {
51
- "ADVERTISED_HOST" => docker_machine_ip,
52
- "KAFKA_0_8_EXTERNAL_PORT" => "9092",
53
- "KAFKA_0_10_EXTERNAL_PORT" => "9093",
54
- "ZOOKEEPER_EXTERNAL_PORT" => "2181",
55
- }
56
- system(env, "docker-compose #{args}")
57
- end
58
-
59
- def docker_machine_ip
60
- return @docker_ip if defined? @docker_ip
61
-
62
- if ENV.key?("ADVERTISED_HOST")
63
- @docker_ip = ENV["ADVERTISED_HOST"]
64
- else
65
- active = %x{docker-machine active}.chomp
66
- @docker_ip = %x{docker-machine ip #{active}}.chomp
67
- end
68
- end
69
-
70
- def wait(port)
71
- Socket.tcp(docker_machine_ip, port, connect_timeout: 5).close
72
- rescue Errno::ECONNREFUSED
73
- puts "waiting for #{docker_machine_ip}:#{port}"
74
- sleep 1
75
- retry
76
- end
77
- end
data/docker-compose.yml CHANGED
@@ -1,25 +1,22 @@
1
1
  version: '3'
2
2
  services:
3
- kafka_0_8:
4
- image: sspinc/kafka
3
+ kafka:
4
+ image: confluentinc/cp-kafka:5.0.1
5
5
  environment:
6
- - ADVERTISED_HOST=${ADVERTISED_HOST}
7
- - ADVERTISED_PORT=${KAFKA_0_8_PORT}
6
+ - KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181
7
+ - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
8
+ - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
8
9
  ports:
9
- - ${KAFKA_0_8_PORT}:${KAFKA_0_8_PORT}
10
- - ${ZOOKEEPER_EXTERNAL_PORT}:2181
11
- kafka_0_10:
12
- image: ches/kafka:0.10.2.1
13
- environment:
14
- - KAFKA_ADVERTISED_HOST_NAME=${ADVERTISED_HOST}
15
- - KAFKA_ADVERTISED_PORT=${KAFKA_0_10_PORT}
16
- - KAFKA_PORT=${KAFKA_0_10_PORT}
17
- - ZOOKEEPER_IP=zookeeper
18
- ports:
19
- - ${KAFKA_0_10_PORT}:${KAFKA_0_10_PORT}
20
- links:
10
+ - 29092:29092
11
+ depends_on:
21
12
  - zookeeper
22
13
  zookeeper:
23
- image: zookeeper:3.4
14
+ image: confluentinc/cp-zookeeper:5.0.1
15
+ environment:
16
+ - ZOOKEEPER_CLIENT_PORT=32181
17
+ - ZOOKEEPER_TICK_TIME=2000
18
+ - ZOOKEEPER_SYNC_LIMIT=2
24
19
  expose:
25
- - "2181"
20
+ - 32181/tcp
21
+ ports:
22
+ - 32181:32181
data/example/avro.rb CHANGED
@@ -17,10 +17,12 @@ end
17
17
 
18
18
 
19
19
  GlassOctopus.run(app) do |config|
20
+ config.logger = Logger.new(STDOUT)
21
+
20
22
  config.adapter :ruby_kafka do |kafka|
21
23
  kafka.broker_list = array_from_env("KAFKA_BROKER_LIST", default: %w[localhost:9092])
22
24
  kafka.topic = ENV.fetch("KAFKA_TOPIC", "mytopic")
23
- kafka.group = ENV.fetch("KAFKA_GROUP", "mygroup")
24
- kafka.client = { logger: config.logger }
25
+ kafka.group_id = ENV.fetch("KAFKA_GROUP", "mygroup")
26
+ kafka.client_id = "myapp"
25
27
  end
26
28
  end
data/example/basic.rb CHANGED
@@ -15,10 +15,12 @@ def array_from_env(key, default:)
15
15
  end
16
16
 
17
17
  GlassOctopus.run(app) do |config|
18
- config.adapter :poseidon do |kafka_config|
19
- kafka_config.broker_list = array_from_env("KAFKA_BROKER_LIST", default: %w[localhost:9092])
20
- kafka_config.zookeeper_list = array_from_env("ZOOKEEPER_LIST", default: %w[localhost:2181])
21
- kafka_config.topic = ENV.fetch("KAFKA_TOPIC", "mytopic")
22
- kafka_config.group = ENV.fetch("KAFKA_GROUP", "mygroup")
18
+ config.logger = Logger.new(STDOUT)
19
+
20
+ config.adapter :ruby_kafka do |kafka|
21
+ kafka.broker_list = array_from_env("KAFKA_BROKER_LIST", default: %w[localhost:9092])
22
+ kafka.topic = ENV.fetch("KAFKA_TOPIC", "mytopic")
23
+ kafka.group_id = ENV.fetch("KAFKA_GROUP", "mygroup")
24
+ kafka.client_id = "myapp"
23
25
  end
24
26
  end
data/glass_octopus.gemspec CHANGED
@@ -23,15 +23,13 @@ EOF
23
23
  spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
24
24
  spec.require_paths = ["lib"]
25
25
 
26
- spec.add_runtime_dependency "concurrent-ruby", "~> 1.0", ">= 1.0.1"
27
-
28
26
  spec.add_development_dependency "rake", "~> 12.0"
29
27
  spec.add_development_dependency "minitest", "~> 5.0"
30
28
  spec.add_development_dependency "minitest-color", "~> 0"
31
29
  spec.add_development_dependency "guard", "~> 2.14"
32
30
  spec.add_development_dependency "guard-minitest", "~> 2.4"
33
31
  spec.add_development_dependency "terminal-notifier-guard", "~> 1.7"
34
- spec.add_development_dependency "ruby-kafka", "~> 0.5.2"
32
+ spec.add_development_dependency "ruby-kafka", "~> 0.7.0"
35
33
  spec.add_development_dependency "avro_turf", "~> 0.8.0"
36
34
  spec.add_development_dependency "sinatra", "~> 2.0.0"
37
35
  spec.add_development_dependency "webmock", "~> 3.3.0"
data/lib/glass_octopus/application.rb CHANGED
@@ -15,14 +15,12 @@ module GlassOctopus
15
15
  end
16
16
 
17
17
  def run
18
- @consumer = Consumer.new(connection, processor, config.executor, logger)
18
+ @consumer = Consumer.new(connection, processor, config.logger)
19
19
  @consumer.run
20
20
  end
21
21
 
22
- def shutdown(timeout=nil)
23
- timeout ||= config.shutdown_timeout
24
- @consumer.shutdown(timeout) if @consumer
25
-
22
+ def shutdown
23
+ @consumer.shutdown if @consumer
26
24
  nil
27
25
  end
28
26
 
data/lib/glass_octopus/configuration.rb CHANGED
@@ -1,59 +1,81 @@
1
1
  require "logger"
2
- require "concurrent"
3
-
4
- require "glass_octopus/bounded_executor"
5
2
 
6
3
  module GlassOctopus
7
4
  # Configuration for the application.
8
5
  #
9
- # @!attribute [rw] connection_adapter
10
- # Connection adapter that connects to the Kafka.
11
- # @!attribute [rw] executor
12
- # A thread pool executor to process messages concurrently. Defaults to
13
- # a {BoundedExecutor} with 25 threads.
6
+ # @!attribute [r] connection_adapter
7
+ # The configured connection adapter.
8
+ # @see #adapter
14
9
  # @!attribute [rw] logger
15
10
  # A standard library compatible logger for the application. By default it
16
11
  # logs to the STDOUT.
17
- # @!attribute [rw] shutdown_timeout
18
- # Number of seconds to wait for the processing to finish before shutting down.
19
12
  class Configuration
20
- attr_accessor :connection_adapter,
21
- :executor,
22
- :logger,
23
- :shutdown_timeout
13
+ attr_accessor :logger
14
+ attr_reader :connection_adapter
24
15
 
25
16
  def initialize
26
17
  self.logger = Logger.new(STDOUT).tap { |l| l.level = Logger::INFO }
27
- self.executor = default_executor
28
- self.shutdown_timeout = 10
29
18
  end
30
19
 
31
- # Creates a new adapter
20
+ # Configures a new adapter.
21
+ #
22
+ # When a class is passed as +type+ the class will be instantiated.
23
+ #
24
+ # @example Using a custom adapter class
25
+ # config.adapter(MyAdapter) do |c|
26
+ # c.bootstrap_servers = %w[localhost:9092]
27
+ # c.group_id = "mygroup"
28
+ # c.topic = "mytopic"
29
+ # end
30
+ #
31
+ # class MyAdapter
32
+ # def initialize
33
+ # @options = OpenStruct.new
34
+ # yield @options
35
+ # end
36
+ #
37
+ # def fetch_message
38
+ # @consumer.each do |fetched_message|
39
+ # message = Message.new(
40
+ # fetched_message.topic,
41
+ # fetched_message.partition,
42
+ # fetched_message.offset,
43
+ # fetched_message.key,
44
+ # fetched_message.value
45
+ # )
46
+ #
47
+ # yield message
48
+ # end
49
+ # end
32
50
  #
33
- # @param type [:poseidon, :ruby_kafka] type of the adapter to use
51
+ # def connect
52
+ # # Connect to Kafka...
53
+ # @consumer = ...
54
+ # self
55
+ # end
56
+ #
57
+ # def close
58
+ # @consumer.close
59
+ # end
60
+ # end
61
+ #
62
+ # @param type [:ruby_kafka, Class] type of the adapter to use
34
63
  # @yield a block to configure the adapter
35
64
  # @yieldparam config configuration object
36
65
  #
37
- # @see PoseidonAdapter
38
66
  # @see RubyKafkaAdapter
39
67
  def adapter(type, &block)
40
- self.connection_adapter = build_adapter(type, &block)
41
- end
42
-
43
- # @api private
44
- def default_executor
45
- BoundedExecutor.new(Concurrent::FixedThreadPool.new(25), limit: 25)
68
+ @connection_adapter = build_adapter(type, &block)
46
69
  end
47
70
 
48
71
  # @api private
49
72
  def build_adapter(type, &block)
50
73
  case type
51
- when :poseidon
52
- require "glass_octopus/connection/poseidon_adapter"
53
- PoseidonAdapter.new(&block)
54
74
  when :ruby_kafka
55
75
  require "glass_octopus/connection/ruby_kafka_adapter"
56
- RubyKafkaAdapter.new(&block)
76
+ RubyKafkaAdapter.new(logger, &block)
77
+ when Class
78
+ type.new(&block)
57
79
  else
58
80
  raise ArgumentError, "Unknown adapter: #{type}"
59
81
  end
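One consequence of the `build_adapter` change above is that the `:ruby_kafka` adapter is now constructed with the configuration's logger and only uses it as a fallback (see `@options[:logger] ||= logger` in the adapter below). A small sketch of the resulting precedence, assuming the option names shown in this diff:

```ruby
# `app` built as in the README example above
GlassOctopus.run(app) do |config|
  config.logger = Logger.new("glass_octopus.log") # picked up by the adapter by default

  config.adapter :ruby_kafka do |kafka|
    kafka.broker_list = %w[localhost:9092]
    kafka.topic       = "mytopic"
    kafka.group_id    = "mygroup"
    # kafka.logger = Logger.new(STDOUT)           # would take precedence over config.logger
  end
end
```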
data/lib/glass_octopus/connection/ruby_kafka_adapter.rb CHANGED
@@ -11,8 +11,7 @@ module GlassOctopus
11
11
  # adapter = GlassOctopus::RubyKafkaAdapter.new do |kafka_config|
12
12
  # kafka_config.broker_list = %w[localhost:9092]
13
13
  # kafka_config.topic = "mytopic"
14
- # kafka_config.group = "mygroup"
15
- # kafka_config.kafka = { logger: Logger.new(STDOUT) }
14
+ # kafka_config.group_id = "mygroup"
16
15
  # end
17
16
  #
18
17
  # adapter.connect.fetch_message do |message|
@@ -30,7 +29,8 @@ module GlassOctopus
30
29
  #
31
30
  # * +broker_list+: list of Kafka broker addresses
32
31
  # * +topic+: name of the topic to subscribe to
33
- # * +group+: name of the consumer group
32
+ # * +group_id+: name of the consumer group
33
+ # * +client_id+: the identifier for this application
34
34
  #
35
35
  # Optional configuration:
36
36
  #
@@ -41,10 +41,13 @@ module GlassOctopus
41
41
  # Check the ruby-kafka documentation for driver specific configurations.
42
42
  #
43
43
  # @raise [OptionsInvalid]
44
- def initialize
44
+ def initialize(logger=nil)
45
45
  config = OpenStruct.new
46
46
  yield config
47
+
47
48
  @options = config.to_h
49
+ @options[:group_id] ||= @options[:group]
50
+ @options[:logger] ||= logger
48
51
  validate_options
49
52
 
50
53
  @kafka = nil
@@ -89,16 +92,16 @@ module GlassOctopus
89
92
 
90
93
  # @api private
91
94
  def connect_to_kafka
92
- Kafka.new(
93
- seed_brokers: options.fetch(:broker_list),
94
- **options.fetch(:client, {})
95
- )
95
+ client_options = options.fetch(:client, {}).merge(logger: @options[:logger])
96
+ client_options.merge!(client_id: @options[:client_id]) if @options.key?(:client_id)
97
+
98
+ Kafka.new(seed_brokers: options.fetch(:broker_list), **client_options)
96
99
  end
97
100
 
98
101
  # @api private
99
102
  def create_consumer(kafka)
100
103
  kafka.consumer(
101
- group_id: options.fetch(:group),
104
+ group_id: options.fetch(:group_id),
102
105
  **options.fetch(:consumer, {})
103
106
  )
104
107
  end
@@ -106,8 +109,8 @@ module GlassOctopus
106
109
  # @api private
107
110
  def validate_options
108
111
  errors = []
109
- [:broker_list, :group, :topic].each do |key|
110
- errors << "Missing key: #{key}" unless options.key?(key)
112
+ [:broker_list, :group_id, :topic].each do |key|
113
+ errors << "Missing key: #{key}" unless options[key]
111
114
  end
112
115
 
113
116
  raise OptionsInvalid.new(errors) if errors.any?
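The constructor above also adds a backward-compatibility shim: `@options[:group_id] ||= @options[:group]`, so a 1.x-style block that still sets `group` keeps working even though `group_id` is the documented name. An illustrative sketch:

```ruby
require "logger"
require "glass_octopus/connection/ruby_kafka_adapter"

# `group` is mapped to `group_id` by the adapter's constructor shown above
adapter = GlassOctopus::RubyKafkaAdapter.new(Logger.new(STDOUT)) do |kafka_config|
  kafka_config.broker_list = %w[localhost:9092]
  kafka_config.topic       = "mytopic"
  kafka_config.group       = "mygroup" # still accepted; becomes group_id internally
end
```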
data/lib/glass_octopus/consumer.rb CHANGED
@@ -1,39 +1,33 @@
1
- require "glass_octopus/unit_of_work"
1
+ require "glass_octopus/context"
2
2
 
3
3
  module GlassOctopus
4
4
  # @api private
5
5
  class Consumer
6
- attr_reader :connection, :processor, :executor, :logger
6
+ attr_reader :connection, :processor, :logger
7
7
 
8
- def initialize(connection, processor, executor, logger)
8
+ def initialize(connection, processor, logger)
9
9
  @connection = connection
10
10
  @processor = processor
11
- @executor = executor
12
11
  @logger = logger
13
12
  end
14
13
 
15
14
  def run
16
15
  connection.fetch_message do |message|
17
- work = UnitOfWork.new(message, processor, logger)
18
- submit(work)
16
+ process_message(message)
19
17
  end
20
18
  end
21
19
 
22
- def shutdown(timeout=10)
20
+ def shutdown
23
21
  connection.close
24
- executor.shutdown
25
- logger.info("Waiting for workers to terminate...")
26
- executor.wait_for_termination(timeout)
27
22
  end
28
23
 
29
- def submit(work)
30
- if executor.post(work) { |work| work.perform }
31
- logger.debug { "Accepted message: #{work.message.to_h}" }
32
- else
33
- logger.warn { "Rejected message: #{work.message.to_h}" }
34
- end
35
- rescue Concurrent::RejectedExecutionError
36
- logger.warn { "Rejected message: #{work.message.to_h}" }
24
+ # Unit of work. Builds a context for a message and runs it through the
25
+ # middleware stack. It catches and logs all application level exceptions.
26
+ def process_message(message)
27
+ processor.call(Context.new(message, logger))
28
+ rescue => ex
29
+ logger.error("#{ex.class} - #{ex.message}:")
30
+ logger.error(ex.backtrace.join("\n")) if ex.backtrace
37
31
  end
38
32
  end
39
33
  end
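With the thread pool and `UnitOfWork` removed, the consumer above processes each message inline: the context is built, run through the middleware stack, and any exception raised by the application is logged before the fetch loop continues. A hypothetical failing consumer illustrates the behaviour described by the rescue clause above:

```ruby
require "glass_octopus"

app = GlassOctopus.build do
  run Proc.new { |ctx| raise "boom" } # logged as "RuntimeError - boom:" followed by the backtrace
end
# the fetch loop simply moves on to the next message after logging the error
```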
data/lib/glass_octopus/middleware/common_logger.rb CHANGED
@@ -23,8 +23,12 @@ module GlassOctopus
23
23
  runtime = Benchmark.realtime { yield }
24
24
  runtime *= 1000 # Convert to milliseconds
25
25
 
26
- logger.send(@log_level, format(FORMAT,
27
- ctx.message.topic, ctx.message.partition, ctx.message.key, runtime))
26
+ logger.send(@log_level) { format_message(ctx, runtime) }
27
+ end
28
+
29
+ def format_message(ctx, runtime)
30
+ format(FORMAT,
31
+ ctx.message.topic, ctx.message.partition, ctx.message.key, runtime)
28
32
  end
29
33
  end
30
34
  end
data/lib/glass_octopus/version.rb CHANGED
@@ -1,3 +1,3 @@
1
1
  module GlassOctopus
2
- VERSION = "1.1.0"
2
+ VERSION = "2.0.0"
3
3
  end
metadata CHANGED
@@ -1,35 +1,15 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: glass_octopus
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.1.0
4
+ version: 2.0.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Tamás Michelberger
8
8
  autorequire:
9
9
  bindir: exe
10
10
  cert_chain: []
11
- date: 2018-02-12 00:00:00.000000000 Z
11
+ date: 2018-12-17 00:00:00.000000000 Z
12
12
  dependencies:
13
- - !ruby/object:Gem::Dependency
14
- name: concurrent-ruby
15
- requirement: !ruby/object:Gem::Requirement
16
- requirements:
17
- - - "~>"
18
- - !ruby/object:Gem::Version
19
- version: '1.0'
20
- - - ">="
21
- - !ruby/object:Gem::Version
22
- version: 1.0.1
23
- type: :runtime
24
- prerelease: false
25
- version_requirements: !ruby/object:Gem::Requirement
26
- requirements:
27
- - - "~>"
28
- - !ruby/object:Gem::Version
29
- version: '1.0'
30
- - - ">="
31
- - !ruby/object:Gem::Version
32
- version: 1.0.1
33
13
  - !ruby/object:Gem::Dependency
34
14
  name: rake
35
15
  requirement: !ruby/object:Gem::Requirement
@@ -120,14 +100,14 @@ dependencies:
120
100
  requirements:
121
101
  - - "~>"
122
102
  - !ruby/object:Gem::Version
123
- version: 0.5.2
103
+ version: 0.7.0
124
104
  type: :development
125
105
  prerelease: false
126
106
  version_requirements: !ruby/object:Gem::Requirement
127
107
  requirements:
128
108
  - - "~>"
129
109
  - !ruby/object:Gem::Version
130
- version: 0.5.2
110
+ version: 0.7.0
131
111
  - !ruby/object:Gem::Dependency
132
112
  name: avro_turf
133
113
  requirement: !ruby/object:Gem::Requirement
@@ -179,7 +159,6 @@ executables: []
179
159
  extensions: []
180
160
  extra_rdoc_files: []
181
161
  files:
182
- - ".env"
183
162
  - ".gitignore"
184
163
  - ".yardopts"
185
164
  - Gemfile
@@ -190,19 +169,15 @@ files:
190
169
  - bin/guard
191
170
  - bin/rake
192
171
  - docker-compose.yml
193
- - example/advanced.rb
194
172
  - example/avro.rb
195
173
  - example/basic.rb
196
- - example/ruby_kafka.rb
197
174
  - glass_octopus.gemspec
198
175
  - lib/glass-octopus.rb
199
176
  - lib/glass_octopus.rb
200
177
  - lib/glass_octopus/application.rb
201
- - lib/glass_octopus/bounded_executor.rb
202
178
  - lib/glass_octopus/builder.rb
203
179
  - lib/glass_octopus/configuration.rb
204
180
  - lib/glass_octopus/connection/options_invalid.rb
205
- - lib/glass_octopus/connection/poseidon_adapter.rb
206
181
  - lib/glass_octopus/connection/ruby_kafka_adapter.rb
207
182
  - lib/glass_octopus/consumer.rb
208
183
  - lib/glass_octopus/context.rb
@@ -216,7 +191,6 @@ files:
216
191
  - lib/glass_octopus/middleware/new_relic.rb
217
192
  - lib/glass_octopus/middleware/sentry.rb
218
193
  - lib/glass_octopus/runner.rb
219
- - lib/glass_octopus/unit_of_work.rb
220
194
  - lib/glass_octopus/version.rb
221
195
  homepage: https://github.com/sspinc/glass-octopus
222
196
  licenses:
@@ -238,7 +212,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
238
212
  version: '0'
239
213
  requirements: []
240
214
  rubyforge_project:
241
- rubygems_version: 2.7.4
215
+ rubygems_version: 2.7.6
242
216
  signing_key:
243
217
  specification_version: 4
244
218
  summary: A Kafka consumer framework. Like Rack but for Kafka.
data/.env DELETED
@@ -1,3 +0,0 @@
1
- KAFKA_0_8_PORT=9092
2
- KAFKA_0_10_PORT=9093
3
- ZOOKEEPER_EXTERNAL_PORT=2181
data/example/advanced.rb DELETED
@@ -1,35 +0,0 @@
1
- require "bundler/setup"
2
- require "glass_octopus"
3
-
4
- app = GlassOctopus.build do
5
- use GlassOctopus::Middleware::CommonLogger
6
-
7
- run Proc.new { |ctx|
8
- puts "Got message: #{ctx.message.key} => #{ctx.message.value}"
9
- }
10
- end
11
-
12
- def array_from_env(key, default:)
13
- return default unless ENV.key?(key)
14
- ENV.fetch(key).split(",").map(&:strip)
15
- end
16
-
17
- GlassOctopus.run(app) do |config|
18
- config.logger = Logger.new("glass_octopus.log")
19
-
20
- config.adapter :poseidon do |kafka_config|
21
- kafka_config.broker_list = array_from_env("KAFKA_BROKER_LIST", default: %w[localhost:9092])
22
- kafka_config.zookeeper_list = array_from_env("ZOOKEEPER_LIST", default: %w[localhost:2181])
23
- kafka_config.topic = ENV.fetch("KAFKA_TOPIC", "mytopic")
24
- kafka_config.group = ENV.fetch("KAFKA_GROUP", "mygroup")
25
- kafka_config.logger = config.logger
26
- end
27
-
28
- config.executor = Concurrent::ThreadPoolExecutor.new(
29
- max_threads: 25,
30
- min_threads: 7
31
- )
32
-
33
- config.shutdown_timeout = 30
34
- end
35
-
data/example/ruby_kafka.rb DELETED
@@ -1,24 +0,0 @@
1
- require "bundler/setup"
2
- require "glass_octopus"
3
-
4
- app = GlassOctopus.build do
5
- use GlassOctopus::Middleware::CommonLogger
6
-
7
- run Proc.new { |ctx|
8
- puts "Got message: #{ctx.message.key} => #{ctx.message.value}"
9
- }
10
- end
11
-
12
- def array_from_env(key, default:)
13
- return default unless ENV.key?(key)
14
- ENV.fetch(key).split(",").map(&:strip)
15
- end
16
-
17
- GlassOctopus.run(app) do |config|
18
- config.adapter :ruby_kafka do |kafka|
19
- kafka.broker_list = array_from_env("KAFKA_BROKER_LIST", default: %w[localhost:9092])
20
- kafka.topic = ENV.fetch("KAFKA_TOPIC", "mytopic")
21
- kafka.group = ENV.fetch("KAFKA_GROUP", "mygroup")
22
- kafka.client = { logger: config.logger }
23
- end
24
- end
data/lib/glass_octopus/bounded_executor.rb DELETED
@@ -1,53 +0,0 @@
1
- require "delegate"
2
- require "concurrent"
3
-
4
- module GlassOctopus
5
- # BoundedExecutor wraps an existing executor implementation and provides
6
- # throttling for job submission. It delegates every method to the wrapped
7
- # executor.
8
- #
9
- # Implementation is based on the Java Concurrency In Practice book. See:
10
- # http://jcip.net/listings/BoundedExecutor.java
11
- #
12
- # @example
13
- # pool = BoundedExecutor.new(Concurrent::FixedThreadPool.new(2), 2)
14
- #
15
- # pool.post { puts "something time consuming" }
16
- # pool.post { puts "something time consuming" }
17
- #
18
- # # This will block until the other submitted jobs are done.
19
- # pool.post { puts "something time consuming" }
20
- #
21
- class BoundedExecutor < SimpleDelegator
22
- # @param executor the executor implementation to wrap
23
- # @param limit [Integer] maximum number of jobs that can be submitted
24
- def initialize(executor, limit:)
25
- super(executor)
26
- @semaphore = Concurrent::Semaphore.new(limit)
27
- end
28
-
29
- # Submit a task to the executor for asynchronous processing. If the
30
- # submission limit is reached {#post} will block until there is a free
31
- # worker to accept the new task.
32
- #
33
- # @param args [Array] arguments to pass to the task
34
- # @return [Boolean] +true+ if the task was accepted, false otherwise
35
- def post(*args, &block)
36
- return false unless running?
37
-
38
- @semaphore.acquire
39
- begin
40
- __getobj__.post(args, block) do |args, block|
41
- begin
42
- block.call(*args)
43
- ensure
44
- @semaphore.release
45
- end
46
- end
47
- rescue Concurrent::RejectedExecutionError
48
- @semaphore.release
49
- raise
50
- end
51
- end
52
- end
53
- end
data/lib/glass_octopus/connection/poseidon_adapter.rb DELETED
@@ -1,113 +0,0 @@
1
- require "ostruct"
2
- require "poseidon_cluster"
3
- require "glass_octopus/message"
4
- require "glass_octopus/connection/options_invalid"
5
-
6
- module GlassOctopus
7
- # Connection adapter that uses the {https://github.com/bpot/poseidon poseidon
8
- # gem} to talk to Kafka 0.8.x. Tested with Kafka 0.8.2.
9
- #
10
- # @example
11
- # adapter = GlassOctopus::PoseidonAdapter.new do |config|
12
- # config.broker_list = %w[localhost:9092]
13
- # config.zookeeper_list = %w[localhost:2181]
14
- # config.topic = "mytopic"
15
- # config.group = "mygroup"
16
- #
17
- # require "logger"
18
- # config.logger = Logger.new(STDOUT)
19
- # end
20
- #
21
- # adapter.connect.fetch_message do |message|
22
- # p message
23
- # end
24
- class PoseidonAdapter
25
- # @yield configure poseidon in the yielded block
26
- # The following configuration values are required:
27
- #
28
- # * +broker_list+: list of Kafka broker addresses
29
- # * +zookeeper_list+: list of Zookeeper addresses
30
- # * +topic+: name of the topic to subscribe to
31
- # * +group+: name of the consumer group
32
- #
33
- # Any other configuration value is passed to
34
- # {http://www.rubydoc.info/github/bsm/poseidon_cluster/Poseidon/ConsumerGroup Poseidon::ConsumerGroup}.
35
- #
36
- # @raise [OptionsInvalid]
37
- def initialize
38
- @poseidon_consumer = nil
39
- @closed = false
40
-
41
- config = OpenStruct.new
42
- yield config
43
-
44
- @options = config.to_h
45
- validate_options
46
- end
47
-
48
- # Connect to Kafka and Zookeeper, register the consumer group.
49
- # This also initiates a rebalance in the consumer group.
50
- def connect
51
- @closed = false
52
- @poseidon_consumer = create_consumer_group
53
- self
54
- end
55
-
56
- # Fetch messages from kafka in a loop.
57
- #
58
- # @yield messages read from Kafka
59
- # @yieldparam message [Message] a Kafka message
60
- def fetch_message
61
- @poseidon_consumer.fetch_loop do |partition, messages|
62
- break if closed?
63
-
64
- messages.each do |message|
65
- yield build_message(partition, message)
66
- end
67
-
68
- # Return true to auto-commit offset to Zookeeper
69
- true
70
- end
71
- end
72
-
73
- # Close the connection and stop the {#fetch_message} loop.
74
- def close
75
- @closed = true
76
- @poseidon_consumer.close if @poseidon_consumer
77
- @poseidon_cluster = nil
78
- end
79
-
80
- # @api private
81
- def closed?
82
- @closed
83
- end
84
-
85
- # @api private
86
- def create_consumer_group
87
- options = @options.dup
88
-
89
- Poseidon::ConsumerGroup.new(
90
- options.delete(:group),
91
- options.delete(:broker_list),
92
- options.delete(:zookeeper_list),
93
- options.delete(:topic),
94
- { :max_wait_ms => 1000 }.merge(options)
95
- )
96
- end
97
-
98
- # @api private
99
- def build_message(partition, message)
100
- GlassOctopus::Message.new(message.topic, partition, message.offset, message.key, message.value)
101
- end
102
-
103
- # @api private
104
- def validate_options
105
- errors = []
106
- [:group, :broker_list, :zookeeper_list, :topic].each do |key|
107
- errors << "Missing key: #{key}" unless @options.key?(key)
108
- end
109
-
110
- raise OptionsInvalid.new(errors) if errors.any?
111
- end
112
- end
113
- end
data/lib/glass_octopus/unit_of_work.rb DELETED
@@ -1,24 +0,0 @@
1
- require "glass_octopus/context"
2
-
3
- module GlassOctopus
4
- # Unit of work. Builds a context for a message and runs it through the
5
- # middleware stack. It catches and logs all application level exceptions.
6
- #
7
- # @api private
8
- class UnitOfWork
9
- attr_reader :message, :processor, :logger
10
-
11
- def initialize(message, processor, logger)
12
- @message = message
13
- @processor = processor
14
- @logger = logger
15
- end
16
-
17
- def perform
18
- processor.call(Context.new(message, logger))
19
- rescue => ex
20
- logger.logger.error("#{ex.class} - #{ex.message}:")
21
- logger.logger.error(ex.backtrace.join("\n")) if ex.backtrace
22
- end
23
- end
24
- end