ruby-kafka 0.3.15.beta3 → 0.3.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 99f22b235423d8050945416fa7b423e568ece95e
-  data.tar.gz: c9706320fa69bdcb0bfe20378dd606dbd9d9213c
+  metadata.gz: 3e781eb534c5c2ee94905fe5665270df1d688689
+  data.tar.gz: a95297b844741019599e14bae2ca7afa34304538
 SHA512:
-  metadata.gz: 1095866be7a6181392ba636f22a171b867ce7c04e340172f18b0f72e6aa3692a47f5dfd98f7360794e75e3cd334260e53c87c7b4bab29b31a5178eb80bfbcf89
-  data.tar.gz: 0f5ed394bb0a57e4b854d254d503a8af10e2189702dfd7f5af7ed208b5232232738ba409044edfad4934805a854a7d08b822c350f1e3f52a18d2b3e9048b04f7
+  metadata.gz: f3d32a672b934ccaa48e49b2af92220d2bc7d435a642e43b1161f0a57e419f495c4a569c738cc85a24c0c719da608284164f9f83e96ba2a46c06e13e436d2700
+  data.tar.gz: 5283ffa3ba2f5968acf24ae73eb5b300ad38dd78907a8524e22d5d45e4ba20ac9caaa9efd6458cf7fddca3b897c1f6ff7344efd7bf55536b178c9c427e669854
CHANGELOG.md CHANGED
@@ -4,12 +4,9 @@ Changes and additions to the library will be listed here.
 
 ## Unreleased
 
-## v0.3.15.beta3
+## v0.3.15
 
 - Allow setting a timeout on a partition pause (#272).
-
-## v0.3.15.beta1
-
 - Allow pausing consumption of a partition (#268).
 
 ## v0.3.14
data/README.md CHANGED
@@ -32,11 +32,14 @@ Although parts of this library work with Kafka 0.8 – specifically, the Produce
    4. [Thread Safety](#thread-safety)
    5. [Logging](#logging)
    6. [Instrumentation](#instrumentation)
-   7. [Understanding Timeouts](#understanding-timeouts)
-   8. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
+   7. [Monitoring](#monitoring)
+      1. [Reporting Metrics to Datadog](#reporting-metrics-to-datadog)
+   8. [Understanding Timeouts](#understanding-timeouts)
+   9. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
 4. [Design](#design)
    1. [Producer Design](#producer-design)
    2. [Asynchronous Producer Design](#asynchronous-producer-design)
+   3. [Consumer Design](#consumer-design)
 5. [Development](#development)
 6. [Roadmap](#roadmap)
 
@@ -140,9 +143,9 @@ kafka.deliver_message("Hello, World!", key: "hello", topic: "greetings")
 
 #### Efficiently Producing Messages
 
-While `#deliver_message` works fine for infrequent writes, there are a number of downside:
+While `#deliver_message` works fine for infrequent writes, there are a number of downsides:
 
-* Kafka is optimized for transmitting _batches_ of messages rather than individual messages, so there's a significant overhead and performance penalty in using the single-message API.
+* Kafka is optimized for transmitting messages in _batches_ rather than individually, so there's a significant overhead and performance penalty in using the single-message API.
 * The message delivery can fail in a number of different ways, but this simplistic API does not provide automatic retries.
 * The message is not buffered, so if there is an error, it is lost.
 
@@ -662,6 +665,37 @@ end
 * `request_size` is the number of bytes in the request.
 * `response_size` is the number of bytes in the response.
 
+
+### Monitoring
+
+It is highly recommended that you monitor your Kafka client applications in production. Typical problems you'll see are:
+
+* high network error rates, which may impact performance and time-to-delivery;
+* producer buffer growth, which may indicate that producers are unable to deliver messages at the rate they're being produced;
+* consumer processing errors, indicating exceptions are being raised in the processing code;
+* frequent consumer rebalances, which may indicate unstable network conditions or consumer configurations.
+
+You can quite easily build monitoring on top of the provided [instrumentation hooks](#instrumentation). In order to further help with monitoring, a prebuilt [Datadog](https://www.datadoghq.com/) reporter is included with ruby-kafka.
+
+
+#### Reporting Metrics to Datadog
+
+The Datadog reporter is automatically enabled when the `kafka/datadog` library is required. You can optionally change the configuration.
+
+```ruby
+# This enables the reporter:
+require "kafka/datadog"
+
+# Default is "ruby_kafka".
+Kafka::Datadog.namespace = "custom-namespace"
+
+# Default is "127.0.0.1".
+Kafka::Datadog.host = "statsd.something.com"
+
+# Default is 8125.
+Kafka::Datadog.port = 1234
+```
+
 ### Understanding Timeouts
 
 It's important to understand how timeouts work if you have a latency sensitive application. This library allows configuring timeouts on different levels:
@@ -744,6 +778,10 @@ Instead of writing directly into the pending message list, [`Kafka::AsyncProduce
 
 Rather than triggering message deliveries directly, users of the async producer will typically set up _automatic triggers_, such as a timer.
 
+### Consumer Design
+
+The Consumer API is designed for flexibility and stability. The first is accomplished by not dictating any high-level object model, instead opting for a simple loop-based approach. The second is accomplished by handling group membership, heartbeats, and checkpointing automatically. Messages are marked as processed as soon as they've been successfully yielded to the user-supplied processing block, minimizing the cost of processing errors.
+
 ## Development
 
 After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
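The checkpoint-after-yield behavior described in the Consumer Design section above can be illustrated with a minimal, self-contained sketch. All names here (`MiniConsumer`, `processed`) are hypothetical illustrations, not the library's internals:

```ruby
# Minimal sketch of a loop-based consumer: each message is yielded to a
# user-supplied block and only marked as processed once the block returns
# without raising, so a failing block does not advance past the failed message.
class MiniConsumer
  attr_reader :processed

  def initialize(messages)
    @messages = messages
    @processed = []
  end

  def each_message
    @messages.each do |message|
      yield message
      # Checkpoint only after successful processing.
      @processed << message
    end
  end
end

consumer = MiniConsumer.new(%w[a b c])
consumer.each_message { |m| m.upcase }
```

If the block raises on the second message, only the first is checkpointed, which is what keeps processing errors cheap to recover from.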
@@ -764,7 +802,7 @@ Beta release of the Consumer API, allowing balanced Consumer Groups coordinating
 
 API freeze. All new changes will be backwards compatible.
 
-## Why a new library?
+## Why Create A New Library?
 
 There are a few existing Kafka clients in Ruby:
 
@@ -143,6 +143,7 @@ module Kafka
     elsif Time.now < timeout
       true
     else
+      @logger.info "Automatically resuming partition #{topic}/#{partition}, pause timeout expired"
       resume(topic, partition)
       false
     end
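The branch above implements the pause timeout from #272: a pause recorded with an expiry resumes itself once the expiry passes. A self-contained sketch of that bookkeeping, using hypothetical names (`PauseList`, etc.) rather than the library's internals:

```ruby
# Sketch of pause-with-timeout bookkeeping: a pause is stored either as `true`
# (indefinite) or as an expiry Time; the paused? check auto-resumes expired
# pauses, mirroring the resume-on-expiry branch added in the diff.
class PauseList
  def initialize
    @paused = {}  # { [topic, partition] => expiry Time, or true }
  end

  def pause(topic, partition, timeout: nil)
    @paused[[topic, partition]] = timeout ? Time.now + timeout : true
  end

  def resume(topic, partition)
    @paused.delete([topic, partition])
  end

  def paused?(topic, partition)
    expiry = @paused[[topic, partition]]

    if expiry.nil?
      false
    elsif expiry == true || Time.now < expiry
      true
    else
      # The pause timeout has expired; resume the partition automatically.
      resume(topic, partition)
      false
    end
  end
end
```

A pause with no timeout stays in effect until an explicit `resume`; a pause with a timeout silently disappears on the first `paused?` check after expiry.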
@@ -280,7 +281,7 @@ module Kafka
       yield
     rescue HeartbeatError, OffsetCommitError
       join_group
-    rescue FetchError
+    rescue FetchError, NotLeaderForPartition, UnknownTopicOrPartition
       @cluster.mark_as_stale!
     rescue LeaderNotAvailable => e
       @logger.error "Leader not available; waiting 1s before retrying"
@@ -89,6 +89,11 @@ module Kafka
   rescue ConnectionError, UnknownMemberId, RebalanceInProgress, IllegalGeneration => e
     @logger.error "Error sending heartbeat: #{e}"
     raise HeartbeatError, e
+  rescue NotCoordinatorForGroup
+    @logger.error "Failed to find coordinator for group `#{@group_id}`; retrying..."
+    sleep 1
+    @coordinator = nil
+    retry
   end
 
   private
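The new rescue clause above uses Ruby's `retry` keyword: on a wrong-coordinator error it forgets the cached coordinator and re-runs the whole method body, letting the next attempt re-discover the coordinator. A self-contained sketch of the same pattern (hypothetical names; the 1-second sleep from the real change is omitted here):

```ruby
# Sketch of the drop-cache-and-retry pattern: the first attempt hits a stale
# coordinator and fails; the rescue clears the cache and `retry` re-runs the
# method, which then re-discovers a fresh coordinator and succeeds.
class WrongCoordinator < StandardError; end

class HeartbeatSender
  attr_reader :attempts

  def initialize(coordinator)
    @coordinator = coordinator
    @attempts = 0
  end

  def send_heartbeat
    @attempts += 1
    # Re-discover the coordinator if the cached one was cleared.
    @coordinator ||= :fresh_coordinator
    raise WrongCoordinator if @coordinator == :stale_coordinator
    :ok
  rescue WrongCoordinator
    # Forget the stale coordinator and try again from the top of the method.
    @coordinator = nil
    retry
  end
end
```

Because `retry` restarts the guarded body rather than looping in place, the re-discovery step runs again on every attempt, which is exactly what makes clearing the cache effective.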
@@ -22,7 +22,10 @@ module Kafka
   def mark_as_processed(topic, partition, offset)
     @uncommitted_offsets += 1
     @processed_offsets[topic] ||= {}
-    @processed_offsets[topic][partition] = offset
+
+    # The committed offset should always be the offset of the next message that the
+    # application will read, thus adding one to the last message processed
+    @processed_offsets[topic][partition] = offset + 1
     @logger.debug "Marking #{topic}/#{partition}:#{offset} as committed"
   end
 
@@ -41,8 +44,8 @@ module Kafka
     if offset < 0
       resolve_offset(topic, partition)
     else
-      # The next offset is the last offset plus one.
-      offset + 1
+      # The next offset is the last offset.
+      offset
     end
   end
 
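The two hunks above are two halves of one fix: the stored (committed) offset now already points at the next message to read, so committing adds one to the last processed offset, and the next fetch uses the committed offset as-is instead of adding one again. A self-contained sketch of that convention, with hypothetical names (`OffsetBook`, etc.):

```ruby
# Sketch of the committed-offset convention: Kafka expects the committed offset
# to be the offset of the *next* message the application will read, so we store
# last-processed + 1 and hand the stored value straight back as the next fetch
# offset. -1 stands in for "no committed offset; resolve from the broker".
class OffsetBook
  def initialize
    @committed = {}
  end

  def mark_as_processed(topic, partition, offset)
    # Store the offset of the next message the application will read.
    @committed[[topic, partition]] = offset + 1
  end

  def next_offset_for(topic, partition)
    # The committed offset already points at the next message, so no +1 here.
    @committed.fetch([topic, partition], -1)
  end
end

book = OffsetBook.new
book.mark_as_processed("greetings", 0, 41)
book.next_offset_for("greetings", 0) # => 42
```

Keeping the +1 in exactly one place avoids the double-increment that the second hunk removes.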
@@ -1,3 +1,3 @@
 module Kafka
-  VERSION = "0.3.15.beta3"
+  VERSION = "0.3.15"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-kafka
 version: !ruby/object:Gem::Version
-  version: 0.3.15.beta3
+  version: 0.3.15
 platform: ruby
 authors:
 - Daniel Schierbeck
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-08-29 00:00:00.000000000 Z
+date: 2016-09-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -317,9 +317,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: 2.1.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - ">"
+  - - ">="
   - !ruby/object:Gem::Version
-    version: 1.3.1
+    version: '0'
 requirements: []
 rubyforge_project:
 rubygems_version: 2.4.5.1