ruby-kafka 0.3.6 → 0.3.7
- checksums.yaml +4 -4
- data/.gitignore +1 -0
- data/CHANGELOG.md +4 -0
- data/README.md +16 -0
- data/circle.yml +16 -0
- data/lib/kafka/cluster.rb +1 -0
- data/lib/kafka/connection.rb +1 -0
- data/lib/kafka/consumer.rb +1 -1
- data/lib/kafka/offset_manager.rb +1 -0
- data/lib/kafka/version.rb +1 -1
- data/ruby-kafka.gemspec +1 -0
- metadata +16 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f399830029bb1bf2e51d01d01c1ade3015b216c2
+  data.tar.gz: c527a2ddd56a1e4d9e7399727b1dcc4d9a56efa4
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7f8f5ce7b095ba1dee302e8ffc0a7a23208e66ef6cdcd9f8c39cba753808c7ec741014c593deb41b936219d39c2841998f34c3c3e05bf498bd052721f2eaff78
+  data.tar.gz: 46987298c959729ea40693c07cf3a1e71c2c43dda70141d64870393ef67fc95ec4d8c8fd1443ddf581edb2b022faf6398deaaeb91440ed3cc4871581cf71b391
data/.gitignore
CHANGED
data/CHANGELOG.md
CHANGED
data/README.md
CHANGED
@@ -378,6 +378,22 @@ Each consumer process will be assigned one or more partitions from each topic th
 
 In order to be able to resume processing after a consumer crashes, each consumer will periodically _checkpoint_ its position within each partition it reads from. Since each partition has a monotonically increasing sequence of message offsets, this works by _committing_ the offset of the last message that was processed in a given partition. Kafka handles these commits and allows another consumer in a group to resume from the last commit when a member crashes or becomes unresponsive.
 
+By default, offsets are committed every 10 seconds. You can increase the frequency, known as the _offset commit interval_, to limit the duration of double-processing scenarios, at the cost of a lower throughput due to the added coordination. If you want to improve throughput, and double-processing is of less concern to you, then you can decrease the frequency.
+
+In addition to the time based trigger it's possible to trigger checkpointing in response to _n_ messages having been processed, known as the _offset commit threshold_. This puts a bound on the number of messages that can be double-processed before the problem is detected. Setting this to 1 will cause an offset commit to take place every time a message has been processed. By default this trigger is disabled.
+
+```ruby
+consumer = kafka.consumer(
+  group_id: "some-group",
+
+  # Increase offset commit frequency to once every 5 seconds.
+  offset_commit_interval: 5,
+
+  # Commit offsets when 100 messages have been processed.
+  offset_commit_threshold: 100,
+)
+```
+
 
 #### Consuming Messages in Batches
 
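The two commit triggers described in the README addition can be sketched as a small decision helper. This is an illustration only, with hypothetical names; it models the interval/threshold logic, not ruby-kafka's actual `OffsetManager`:

```ruby
# Toy model of the two offset commit triggers: a time-based interval and a
# message-count threshold. A threshold of 0 means the count trigger is
# disabled, matching the documented default.
class CommitPolicy
  def initialize(commit_interval:, commit_threshold: 0)
    @commit_interval = commit_interval    # seconds between time-based commits
    @commit_threshold = commit_threshold  # messages between count-based commits
    @uncommitted = 0
    @last_commit = Time.now
  end

  # Called after each processed message; returns true when a commit is due
  # and resets the counters as if the commit happened.
  def register_processed(now: Time.now)
    @uncommitted += 1
    return false unless commit_due?(now)

    @uncommitted = 0
    @last_commit = now
    true
  end

  private

  def commit_due?(now)
    (@commit_threshold > 0 && @uncommitted >= @commit_threshold) ||
      (now - @last_commit >= @commit_interval)
  end
end
```

With `commit_interval: 10, commit_threshold: 3`, a commit fires on the third message or after 10 seconds of processing, whichever comes first.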
data/circle.yml
CHANGED
@@ -1,3 +1,19 @@
+machine:
+  services:
+    - docker
+  environment:
+    LOG_TO_STDERR: true
+    LOG_LEVEL: DEBUG
+
 dependencies:
   pre:
+    - docker -v
+    - docker pull ches/kafka:0.9.0.1
+    - docker pull jplock/zookeeper:3.4.6
     - gem install bundler -v 1.9.5
+
+test:
+  override:
+    - bundle exec rspec
+  post:
+    - cp *.log $CIRCLE_ARTIFACTS/ || true
data/lib/kafka/cluster.rb
CHANGED
data/lib/kafka/connection.rb
CHANGED
data/lib/kafka/consumer.rb
CHANGED
@@ -115,10 +115,10 @@ module Kafka
           yield message
         end
 
+        mark_message_as_processed(message)
         @offset_manager.commit_offsets_if_necessary
 
         @heartbeat.send_if_necessary
-        mark_message_as_processed(message)
 
         return if !@running
       end
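The consumer.rb hunk moves `mark_message_as_processed` ahead of the commit check. A toy offset manager (hypothetical names, not the gem's internals) shows why the order matters: if the offset is marked only after the commit check runs, a commit triggered in that iteration misses the message that was just processed:

```ruby
# Illustration only: commits whatever offset has been marked so far, so a
# message must be marked *before* the commit check for its offset to be
# included in that commit.
class ToyOffsetManager
  attr_reader :committed_offset

  def initialize
    @marked_offset = nil
    @committed_offset = nil
  end

  def mark(offset)
    @marked_offset = offset
  end

  # Always "necessary" in this toy; the real gem applies interval/threshold
  # triggers here.
  def commit_if_necessary
    @committed_offset = @marked_offset
  end
end

# Old order: commit first, then mark -- the just-processed offset is missed,
# so a crash now would replay that message.
before_fix = ToyOffsetManager.new
before_fix.commit_if_necessary
before_fix.mark(42)

# New order: mark first, then commit -- the offset is included.
after_fix = ToyOffsetManager.new
after_fix.mark(42)
after_fix.commit_if_necessary
```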
data/lib/kafka/offset_manager.rb
CHANGED
data/lib/kafka/version.rb
CHANGED
data/ruby-kafka.gemspec
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-kafka
 version: !ruby/object:Gem::Version
-  version: 0.3.6
+  version: 0.3.7
 platform: ruby
 authors:
 - Daniel Schierbeck
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-05-
+date: 2016-05-10 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -142,6 +142,20 @@ dependencies:
   - - ">="
     - !ruby/object:Gem::Version
       version: '0'
+- !ruby/object:Gem::Dependency
+  name: colored
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 description: |-
   A client library for the Kafka distributed commit log.
 