ruby-kafka 0.1.2 → 0.1.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: e80cbf966470ab5038d6aab4f88fde89bd5e7f66
- data.tar.gz: fcb92660786331e9b0f70d9a3e33aa8919590eff
+ metadata.gz: a2754b8ff5d028b2883c5657b91d1ab13881fa27
+ data.tar.gz: 71ddafeb0cabe9f03ec326037b6391070372befa
  SHA512:
- metadata.gz: a62034fba5bf5f64f27d19676ac3429bfc929984f744db234d2fe25db10ae32ae93da6ff1d2635a0285e1d72010ec565951b2fcf00c7e5bc0aba56ba158b03d5
- data.tar.gz: 236d24d5b6a8fac6fbad9cedc01b0639085ca746bc8bcf12f110aacd2f120fcf0c16f81a58e01c10d92fcc8b2c1c235ce228bc1b731412d4e94ff2fc1dbf43e8
+ metadata.gz: 09421c5b62a1c90e6be2ca25f2b1823d92384e7176d48252c787b8c4de73feac02f7362ac36f98e9da8b1a44968591302e4ee9020f4af5ef02baa8f852f89b7d
+ data.tar.gz: 361c720fd7bc8af0e162c463c0f24a31a5d92b942e23e6b1eeb17bb1843516ffb0b7a3778b43ee411ff492ea69274ba2e7624ab5d99631089f3da333a85fa5ed
data/README.md CHANGED
@@ -64,6 +64,26 @@ producer.send_messages
 
  Read the docs for [Kafka::Producer](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Producer) for more details.
 
+ ### Understanding Timeouts
+
+ It's important to understand how timeouts work if you have a latency-sensitive application. This library allows configuring timeouts at different levels:
+
+ **Network timeouts** apply to network connections to individual Kafka brokers. There are two config keys here, each passed to `Kafka.new`:
+
+ * `connect_timeout` sets the number of seconds to wait while connecting to a broker for the first time. When ruby-kafka initializes, it needs to connect to at least one host in `seed_brokers` in order to discover the Kafka cluster. Each host is tried until one works. Usually that means the first one, but if your entire cluster is down, or there's a network partition, you could wait up to `n * connect_timeout` seconds, where `n` is the number of seed brokers.
+ * `socket_timeout` sets the number of seconds to wait when reading from or writing to a socket connection to a broker. After this timeout expires, the connection is killed. Note that some Kafka operations are by definition long-running, such as waiting for new messages to arrive in a partition, so don't set this value too low. When configuring timeouts for specific Kafka operations, make sure they are shorter than this one.
+
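The two network settings above could be wired up like this. This is only a sketch: the broker addresses and timeout values are illustrative, and the exact set of options accepted by `Kafka.new` may vary between versions of the gem.

```ruby
require "kafka"

# Illustrative values only; tune them to your own latency budget.
kafka = Kafka.new(
  seed_brokers: ["kafka1:9092", "kafka2:9092"],
  connect_timeout: 10, # seconds per broker connection attempt
  socket_timeout: 30,  # seconds per socket read/write
)
```

With two seed brokers and a 10-second `connect_timeout`, initial cluster discovery can block for up to 20 seconds if both hosts are unreachable.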
+ **Producer timeouts** can be configured when calling `#get_producer` on a client instance:
+
+ * `ack_timeout` is a timeout executed by a broker when the client is sending messages to it. It defines the number of seconds the broker should wait for replicas to acknowledge the write before responding to the client with an error. As such, it relates to the `required_acks` setting. It should be set lower than `socket_timeout`.
+ * `retry_backoff` configures the number of seconds to wait after a failed attempt to send messages to a Kafka broker before retrying. The `max_retries` setting defines the maximum number of retries to attempt, so the total duration could be up to `max_retries * retry_backoff` seconds. The backoff can be arbitrarily long, but shouldn't be too short: if a broker goes down, its partitions will be handed off to another broker, and that can take tens of seconds.
+
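As a sketch, assuming a `kafka` client built with `Kafka.new`, the producer settings described above might look like this (all values are illustrative, not recommendations):

```ruby
# Illustrative settings; note that ack_timeout stays below socket_timeout.
producer = kafka.get_producer(
  required_acks: -1, # wait for all in-sync replicas to acknowledge the write
  ack_timeout: 5,    # broker-side wait for those acks, in seconds
  retry_backoff: 1,  # seconds to sleep between failed attempts
  max_retries: 2,
)
```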
+ When sending many messages, it's likely that the client needs to send some messages to each broker in the cluster. Given `n` brokers in the cluster, the total wait time when calling `Kafka::Producer#send_messages` can be up to
+
+     n * (connect_timeout + socket_timeout + retry_backoff) * max_retries
+
+ Make sure your application can survive being blocked for so long.
+
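Plugging some hypothetical numbers into the formula above shows how quickly the worst case grows:

```ruby
# All values hypothetical: a 3-broker cluster with generous timeouts.
n               = 3   # brokers
connect_timeout = 10  # seconds
socket_timeout  = 30
retry_backoff   = 1
max_retries     = 2

worst_case = n * (connect_timeout + socket_timeout + retry_backoff) * max_retries
puts "worst case: #{worst_case} seconds"  # => worst case: 246 seconds
```

Over four minutes of potential blocking from seemingly modest settings, which is why the README warns about latency-sensitive applications.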
  ## Development
 
  After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
@@ -27,22 +27,7 @@ module Kafka
  request = Protocol::TopicMetadataRequest.new(**options)
  response_class = Protocol::MetadataResponse
 
- response = @connection.send_request(request, response_class)
-
- response.topics.each do |topic|
-   Protocol.handle_error(topic.topic_error_code)
-
-   topic.partitions.each do |partition|
-     begin
-       Protocol.handle_error(partition.partition_error_code)
-     rescue ReplicaNotAvailable
-       # This error can be safely ignored per the protocol specification.
-       @logger.warn "Replica not available for #{topic.topic_name}/#{partition.partition_id}"
-     end
-   end
- end
-
- response
+ @connection.send_request(request, response_class)
  end
 
  def produce(**options)
@@ -50,7 +50,7 @@ module Kafka
 
  attr_reader :partition_error_code
 
- def initialize(partition_error_code:, partition_id:, leader:, replicas:, isr:)
+ def initialize(partition_error_code:, partition_id:, leader:, replicas: [], isr: [])
  @partition_error_code = partition_error_code
  @partition_id = partition_id
  @leader = leader
@@ -68,7 +68,7 @@ module Kafka
 
  attr_reader :topic_error_code
 
- def initialize(topic_error_code:, topic_name:, partitions:)
+ def initialize(topic_error_code: 0, topic_name:, partitions:)
  @topic_error_code = topic_error_code
  @topic_name = topic_name
  @partitions = partitions
@@ -96,13 +96,21 @@ module Kafka
  topic_info = @topics.find {|t| t.topic_name == topic }
 
  if topic_info.nil?
-   raise "no topic #{topic}"
+   raise UnknownTopicOrPartition, "no topic #{topic}"
  end
 
+ Protocol.handle_error(topic_info.topic_error_code)
+
  partition_info = topic_info.partitions.find {|p| p.partition_id == partition }
 
  if partition_info.nil?
-   raise "no partition #{partition} in topic #{topic}"
+   raise UnknownTopicOrPartition, "no partition #{partition} in topic #{topic}"
+ end
+
+ begin
+   Protocol.handle_error(partition_info.partition_error_code)
+ rescue ReplicaNotAvailable
+   # This error can be safely ignored per the protocol specification.
  end
 
  partition_info.leader
@@ -1,3 +1,3 @@
  module Kafka
-   VERSION = "0.1.2"
+   VERSION = "0.1.3"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
- version: 0.1.2
+ version: 0.1.3
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2016-02-05 00:00:00.000000000 Z
+ date: 2016-02-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler