ruby-kafka 0.3.6 → 0.3.7

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 1724a8fa55a7d963627724e1bd039fa27ebe4ee5
-  data.tar.gz: fe4d190b76588cbd7aaedc806e72e75a23a477d5
+  metadata.gz: f399830029bb1bf2e51d01d01c1ade3015b216c2
+  data.tar.gz: c527a2ddd56a1e4d9e7399727b1dcc4d9a56efa4
 SHA512:
-  metadata.gz: b767bfdada39400fc1bd47987bed986dca5acd7e0ff4bc4456a307b185479bb60205d104829b90cccd5ac1c8ee96209da36490ca9ee5379c2456c3e0ba8b34f8
-  data.tar.gz: 8b120817c769e5dd8b05673695be42581a12f9a5a18c0d5a0040916bd53d2df8c72544b35e84f1fecd262d347b6ddf5f7fbe084ca2e855272e86c775a8d5b7c4
+  metadata.gz: 7f8f5ce7b095ba1dee302e8ffc0a7a23208e66ef6cdcd9f8c39cba753808c7ec741014c593deb41b936219d39c2841998f34c3c3e05bf498bd052721f2eaff78
+  data.tar.gz: 46987298c959729ea40693c07cf3a1e71c2c43dda70141d64870393ef67fc95ec4d8c8fd1443ddf581edb2b022faf6398deaaeb91440ed3cc4871581cf71b391
data/.gitignore CHANGED
@@ -8,3 +8,4 @@
 /spec/reports/
 /tmp/
 .env
+*.log
@@ -4,6 +4,10 @@ Changes and additions to the library will be listed here.
 
 ## Unreleased
 
+## v0.3.7
+
+- Default to port 9092 if no port is provided for a seed broker.
+
 ## v0.3.6
 
 - Fix bug that caused partition information to not be reliably updated.
data/README.md CHANGED
@@ -378,6 +378,22 @@ Each consumer process will be assigned one or more partitions from each topic th
 
 In order to be able to resume processing after a consumer crashes, each consumer will periodically _checkpoint_ its position within each partition it reads from. Since each partition has a monotonically increasing sequence of message offsets, this works by _committing_ the offset of the last message that was processed in a given partition. Kafka handles these commits and allows another consumer in a group to resume from the last commit when a member crashes or becomes unresponsive.
 
+By default, offsets are committed every 10 seconds. You can increase the frequency, known as the _offset commit interval_, to limit the duration of double-processing scenarios, at the cost of a lower throughput due to the added coordination. If you want to improve throughput, and double-processing is of less concern to you, then you can decrease the frequency.
+
+In addition to the time based trigger it's possible to trigger checkpointing in response to _n_ messages having been processed, known as the _offset commit threshold_. This puts a bound on the number of messages that can be double-processed before the problem is detected. Setting this to 1 will cause an offset commit to take place every time a message has been processed. By default this trigger is disabled.
+
+```ruby
+consumer = kafka.consumer(
+  group_id: "some-group",
+
+  # Increase offset commit frequency to once every 5 seconds.
+  offset_commit_interval: 5,
+
+  # Commit offsets when 100 messages have been processed.
+  offset_commit_threshold: 100,
+)
+```
+
 
 #### Consuming Messages in Batches
 
data/circle.yml CHANGED
@@ -1,3 +1,19 @@
+machine:
+  services:
+    - docker
+  environment:
+    LOG_TO_STDERR: true
+    LOG_LEVEL: DEBUG
+
 dependencies:
   pre:
+    - docker -v
+    - docker pull ches/kafka:0.9.0.1
+    - docker pull jplock/zookeeper:3.4.6
     - gem install bundler -v 1.9.5
+
+test:
+  override:
+    - bundle exec rspec
+  post:
+    - cp *.log $CIRCLE_ARTIFACTS/ || true
@@ -137,6 +137,7 @@ module Kafka
 
       begin
         host, port = node.split(":", 2)
+        port ||= 9092 # Default Kafka port.
 
         broker = @broker_pool.connect(host, port.to_i)
         cluster_info = broker.fetch_metadata(topics: @target_topics)
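The port-defaulting change above can be sketched in isolation. A minimal sketch, assuming a free-standing helper (`parse_seed_broker` is an illustrative name, not part of the library): `String#split` with a limit of 2 yields a one-element array when no `:` is present, so `port` comes back `nil` and falls through to 9092.

```ruby
# Illustrative sketch (hypothetical helper name) of the seed-broker
# parsing above: split a "host:port" string in two and fall back to
# Kafka's default port 9092 when no port is given.
def parse_seed_broker(node)
  host, port = node.split(":", 2)
  port ||= 9092 # Default Kafka port.
  [host, port.to_i]
end

parse_seed_broker("kafka1.example.com")      # => ["kafka1.example.com", 9092]
parse_seed_broker("kafka2.example.com:9093") # => ["kafka2.example.com", 9093]
```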
@@ -77,6 +77,7 @@ module Kafka
     def send_request(request)
       # Default notification payload.
       notification = {
+        broker_host: @host,
         api: Protocol.api_name(request.api_key),
         request_size: 0,
         response_size: 0,
@@ -115,10 +115,10 @@ module Kafka
           yield message
         end
 
+        mark_message_as_processed(message)
         @offset_manager.commit_offsets_if_necessary
 
         @heartbeat.send_if_necessary
-        mark_message_as_processed(message)
 
         return if !@running
       end
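The reordering above moves `mark_message_as_processed` ahead of `commit_offsets_if_necessary`, so a commit triggered right after a message is handled includes that message's offset. A toy model (class and method names are hypothetical, not the library's) of why the order matters:

```ruby
# Hypothetical toy model: a commit only covers offsets that have been
# marked as processed at the time the commit runs, so marking must
# happen before any commit that should include the current message.
class ToyOffsetManager
  attr_reader :committed

  def initialize
    @processed = nil
    @committed = nil
  end

  def mark_processed(offset)
    @processed = offset
  end

  def commit
    @committed = @processed
  end
end

manager = ToyOffsetManager.new
manager.mark_processed(42) # mark first, as in the fixed ordering...
manager.commit             # ...so the commit covers offset 42
manager.committed # => 42
```

With the old ordering (commit first, mark after), a commit firing at that point would omit the message just handled, and it would be reprocessed after a restart.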
@@ -42,6 +42,7 @@ module Kafka
       @last_commit = Time.now
 
       @uncommitted_offsets = 0
+      @committed_offsets = nil
     end
   end
 
@@ -1,3 +1,3 @@
 module Kafka
-  VERSION = "0.3.6"
+  VERSION = "0.3.7"
 end
@@ -36,4 +36,5 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "rspec-benchmark"
   spec.add_development_dependency "activesupport", ">= 4.2.0", "< 5.1"
   spec.add_development_dependency "snappy"
+  spec.add_development_dependency "colored"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-kafka
 version: !ruby/object:Gem::Version
-  version: 0.3.6
+  version: 0.3.7
 platform: ruby
 authors:
 - Daniel Schierbeck
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-05-02 00:00:00.000000000 Z
+date: 2016-05-10 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -142,6 +142,20 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
+  name: colored
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 description: |-
   A client library for the Kafka distributed commit log.