fluent-plugin-kafka 0.5.0 → 0.5.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: a09a4933e7d0f7a30094cd98900a03a80dac3c9a
-   data.tar.gz: 9421658f52091e37e39e32ae10e7e93132d9394f
+   metadata.gz: 3b6b3346c69250a4e8500843ed0dc689a8b63058
+   data.tar.gz: 3da518e3b7f82cdac6cc951d2b2ad33aa1242ef6
  SHA512:
-   metadata.gz: e3a72bb6fecbe2dd0204e8bc84234d3d11c44017699eb7ce88ee4b154d04966f939487d85ef90b4811510c490fb733a922ba1f70012dc6fbf9490afebf843ed7
-   data.tar.gz: 5b633c21eadd8797a5a6672191a391468a5b8c8e2b4c7968f74b67d7e3edc2978f3db0df2c157b835a7dcd75e899f4b26baf1d62c02ac7cf47f55943dd72dba8
+   metadata.gz: 188796327c6d23264052082b6c0838acf3515fd045d087017d6384c00463be18f130ce44439992069b2cb5efcdbafa83d7740f02a47ab72e5e9c4d2e67f0b1aa
+   data.tar.gz: 3cea790e8c4532d4887d5c97ce4806705a59596f493d9af340286c6b1db2d061d48eb448180f4dee51edb44c68a5776cba5aa100e9c47c8d4d0856f752805fd4
data/ChangeLog CHANGED
@@ -1,3 +1,7 @@
+ Release 0.5.1 - 2017/02/06
+ 
+ * in_kafka_group: Fix uninitialized constant error
+ 
  Release 0.5.0 - 2017/01/17
  
  * output: Add out_kafka2 plugin with v0.14 API
data/README.md CHANGED
@@ -30,6 +30,14 @@ If you want to use zookeeper related parameters, you also need to install zookee
  
  ## Usage
  
+ ### Common parameters
+ 
+ - ssl_ca_cert
+ - ssl_client_cert
+ - ssl_client_cert_key
+ 
+ Set the path to the SSL related files. See [Encryption and Authentication using SSL](https://github.com/zendesk/ruby-kafka#encryption-and-authentication-using-ssl) for more detail.
+ 
  ### Input plugin (@type 'kafka')
  
  Consume events by a single consumer.
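
The common SSL parameters above apply to the input and output plugins alike. As a rough illustration, assuming the consumer-group input's standard `consumer_group` and `topics` parameters, a TLS-enabled source might look like this (broker address, group, topic, and certificate paths are placeholders):

    <source>
      @type kafka_group
      brokers broker1:9092                     # Kafka broker(s) to bootstrap from
      consumer_group fluentd                   # consumer group id
      topics app-events                        # topics to consume
      ssl_ca_cert /path/to/ca.crt              # CA certificate used to verify the brokers
      ssl_client_cert /path/to/client.crt      # client certificate for mutual TLS
      ssl_client_cert_key /path/to/client.key  # private key for the client certificate
    </source>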
@@ -101,17 +109,17 @@ Consume events by kafka consumer group features..
  
  See also [ruby-kafka README](https://github.com/zendesk/ruby-kafka#consuming-messages-from-kafka) for more detailed documentation about ruby-kafka options.
  
- ### Output plugin (non-buffered)
+ ### Buffered output plugin
  
- This plugin uses ruby-kafka producer for writing data. For performance and reliability concerns, use `kafka_bufferd` output instead.
+ This plugin uses the ruby-kafka producer for writing data and works with recent Kafka versions.
  
      <match *.**>
-       @type kafka
+       @type kafka_buffered
  
-       # Brokers: you can choose either brokers or zookeeper.
+       # Brokers: you can choose either brokers or zookeeper. If you are not familiar with zookeeper, use the brokers parameter.
        brokers <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,.. # Set brokers directly
        zookeeper <zookeeper_host>:<zookeeper_port> # Set brokers via Zookeeper
        zookeeper_path <broker path in zookeeper> :default => /brokers/ids # Set path in zookeeper for kafka
  
        default_topic (string) :default => nil
        default_partition_key (string) :default => nil
@@ -121,18 +129,27 @@ This plugin uses ruby-kafka producer for writing data. For performance and relia
        output_include_time (bool) :default => false
        exclude_topic_key (bool) :default => false
        exclude_partition_key (bool) :default => false
+       get_kafka_client_log (bool) :default => false
+ 
+       # See the fluentd document for buffer related parameters: http://docs.fluentd.org/articles/buffer-plugin-overview
  
        # ruby-kafka producer options
        max_send_retries (integer) :default => 1
        required_acks (integer) :default => -1
        ack_timeout (integer) :default => nil (Use default of ruby-kafka)
-       compression_codec (gzip|snappy) :default => nil
+       compression_codec (gzip|snappy) :default => nil (No compression)
      </match>
  
- Supports following ruby-kafka::Producer options.
+ `<formatter name>` of `output_data_type` uses fluentd's formatter plugins. See the [formatter article](http://docs.fluentd.org/articles/formatter-plugin-overview).
+ 
+ ruby-kafka sometimes returns a `Kafka::DeliveryFailed` error without good information.
+ In this case, `get_kafka_client_log` is useful for identifying the error cause;
+ ruby-kafka's log is routed to the fluentd log, so you can see it in the fluentd logs.
+ 
+ Supports the following ruby-kafka producer options.
  
  - max_send_retries - default: 1 - Number of times to retry sending of messages to a leader.
- - required_acks - default: -1 - The number of acks required per request. Default is waiting all acks for safety. If you need flush performance, set lower value, e.g. 1, 2.
+ - required_acks - default: -1 - The number of acks required per request. If you need flush performance, set a lower value, e.g. 1 or 2.
  - ack_timeout - default: nil - How long the producer waits for acks. The unit is seconds.
  - compression_codec - default: nil - The codec the producer uses to compress messages.
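
Putting the buffered output options together, a minimal sketch of a configuration that emits records as JSON and surfaces ruby-kafka's log (broker address and topic are placeholders):

    <match app.**>
      @type kafka_buffered
      brokers broker1:9092        # Set brokers directly
      default_topic app-events    # used when a record carries no topic key
      output_data_type json       # any fluentd formatter name works here
      get_kafka_client_log true   # route ruby-kafka's log into the fluentd log
      required_acks 1             # wait for the leader's ack only, for flush performance
      compression_codec gzip      # compress message sets before sending
    </match>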
 
@@ -160,21 +177,19 @@ If key name `partition_key` exists in a message, this plugin set its value of pa
  |Not set| Exists | Messages which have a partition_key record are assigned to the specific partition, others are assigned a partition at random |
  |Set| Exists | Messages which have a partition_key record are assigned to the specific partition with partition_key, others are assigned to the specific partition with default_partition_key |
  
- 
  If key name `message_key` exists in a message, this plugin publishes the value of message_key to kafka so that it can be read by consumers. The same message key will be assigned to all messages by setting `default_message_key` in the config file. If message_key exists and partition_key is not set explicitly, message_key will be used for partitioning. A sketch of this precedence follows.
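
In rough Ruby terms, assuming a plain ruby-kafka producer and using `default_*` locals to stand in for the config parameters:

    require "kafka"
    require "json"

    kafka    = Kafka.new(seed_brokers: ["broker1:9092"])
    producer = kafka.producer

    default_partition_key = nil   # config: default_partition_key
    default_message_key   = nil   # config: default_message_key

    record = { "message" => "hello", "partition_key" => "user-42", "message_key" => "k1" }

    # An explicit partition_key in the record wins over default_partition_key;
    # the same holds for message_key. When partition_key ends up nil, ruby-kafka
    # falls back to partitioning by the message key.
    partition_key = record["partition_key"] || default_partition_key
    message_key   = record["message_key"]   || default_message_key

    producer.produce(record.to_json, topic: "app-events",
                     key: message_key, partition_key: partition_key)
    producer.deliver_messages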
  
+ ### Non-buffered output plugin
  
- ### Buffered output plugin
- 
- This plugin uses ruby-kafka producer for writing data. This plugin works with recent kafka versions.
+ This plugin uses the ruby-kafka producer for writing data. For performance and reliability concerns, use the `kafka_buffered` output instead. This is mainly for testing.
  
      <match *.**>
-       @type kafka_buffered
+       @type kafka
  
        # Brokers: you can choose either brokers or zookeeper.
        brokers <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,.. # Set brokers directly
        zookeeper <zookeeper_host>:<zookeeper_port> # Set brokers via Zookeeper
        zookeeper_path <broker path in zookeeper> :default => /brokers/ids # Set path in zookeeper for kafka
  
        default_topic (string) :default => nil
        default_partition_key (string) :default => nil
@@ -184,37 +199,15 @@ This plugin uses ruby-kafka producer for writing data. This plugin works with re
        output_include_time (bool) :default => false
        exclude_topic_key (bool) :default => false
        exclude_partition_key (bool) :default => false
-       get_kafka_client_log (bool) :default => false
- 
-       # See fluentd document for buffer related parameters: http://docs.fluentd.org/articles/buffer-plugin-overview
  
        # ruby-kafka producer options
        max_send_retries (integer) :default => 1
        required_acks (integer) :default => -1
        ack_timeout (integer) :default => nil (Use default of ruby-kafka)
-       compression_codec (gzip|snappy) :default => nil (No compression)
+       compression_codec (gzip|snappy) :default => nil
      </match>
  
- `<formatter name>` of `output_data_type` uses fluentd's formatter plugins. See [formatter article](http://docs.fluentd.org/articles/formatter-plugin-overview).
- 
- ruby-kafka sometimes returns `Kafka::DeliveryFailed` error without good information.
- In this case, `get_kafka_client_log` is useful for identifying the error cause.
- ruby-kafka's log is routed to fluentd log so you can see ruby-kafka's log in fluentd logs.
- 
- Supports following ruby-kafka's producer options.
- 
- - max_send_retries - default: 1 - Number of times to retry sending of messages to a leader.
- - required_acks - default: -1 - The number of acks required per request.
- - ack_timeout - default: nil - How long the producer waits for acks. The unit is seconds.
- - compression_codec - default: nil - The codec the producer uses to compress messages.
- 
- See also [Kafka::Client](http://www.rubydoc.info/gems/ruby-kafka/Kafka/Client) for more detailed documentation about ruby-kafka.
- 
- 
- This plugin supports compression codec "snappy" also.
- Install snappy module before you use snappy compression.
- 
-     $ gem install snappy
+ This plugin also supports ruby-kafka related parameters. See the Buffered output plugin section.
  
  ## Contributing
  
data/fluent-plugin-kafka.gemspec CHANGED
@@ -12,7 +12,7 @@ Gem::Specification.new do |gem|
    gem.test_files    = gem.files.grep(%r{^(test|spec|features)/})
    gem.name          = "fluent-plugin-kafka"
    gem.require_paths = ["lib"]
-   gem.version       = '0.5.0'
+   gem.version       = '0.5.1'
    gem.required_ruby_version = ">= 2.1.0"
  
    gem.add_dependency "fluentd", [">= 0.10.58", "< 2"]
data/lib/fluent/plugin/in_kafka_group.rb CHANGED
@@ -45,6 +45,12 @@ class Fluent::KafkaGroupInput < Fluent::Input
    class ForShutdown < StandardError
    end
  
+   BufferError = if defined?(Fluent::Plugin::Buffer::BufferOverflowError)
+                   Fluent::Plugin::Buffer::BufferOverflowError
+                 else
+                   Fluent::BufferQueueLimitError
+                 end
+ 
    unless method_defined?(:router)
      define_method("router") { Fluent::Engine }
    end
@@ -190,7 +196,7 @@ class Fluent::KafkaGroupInput < Fluent::Input
      retries = 0
      begin
        router.emit_stream(tag, es)
-     rescue Fluent::BufferQueueLimitError
+     rescue BufferError
        raise ForShutdown if @consumer.nil?
  
        if @retry_emit_limit.nil?
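
The new `BufferError` constant resolves to whichever buffer-overflow error class the running fluentd defines, so the same rescue works on both the v0.12 and v0.14 APIs. Simplified, the emit retry loop around it behaves roughly like this (`@retry_emit_limit` is the plugin's existing parameter):

    retries = 0
    begin
      router.emit_stream(tag, es)
    rescue BufferError
      # Once shutdown has started there is no point in retrying the emit.
      raise ForShutdown if @consumer.nil?

      retries += 1
      # A nil retry_emit_limit means retry forever; otherwise give up
      # once the limit is exceeded.
      raise if @retry_emit_limit && retries > @retry_emit_limit

      sleep 1   # back off before emitting into the buffer again
      retry
    end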
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-kafka
  version: !ruby/object:Gem::Version
-   version: 0.5.0
+   version: 0.5.1
  platform: ruby
  authors:
  - Hidemasa Togashi
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2017-01-17 00:00:00.000000000 Z
+ date: 2017-02-06 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: fluentd
@@ -131,7 +131,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
        version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.8
+ rubygems_version: 2.5.2
  signing_key:
  specification_version: 4
  summary: Fluentd plugin for Apache Kafka > 0.8