logstash-input-datahub 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 2e8c3f6b66cc1841ca687384bf582539f0e1006d
+   data.tar.gz: ce8b7a82469a07a16e11fb323c801e57c8716be3
+ SHA512:
+   metadata.gz: 8ec7e67535eb392a3193c46ed74ba8cc2893b70ab81512dc9d1bda71e74b41ed9abb4b62df75b8fa905c8faec7e0d529db37cf56fcd5a4255c78dfb968985d0a
+   data.tar.gz: e81b90beb17a3f3587755bf86581c4f7561b45756d33de62fd3099bbd7ba07bc9d4b17ffc0e69cc9199085ce3cc0f1c89d027e75fa1b1b2582f02f658aa5caf2
data/CHANGELOG.md ADDED
@@ -0,0 +1,2 @@
+ 0.0.1
+ First version.
data/CONTRIBUTORS ADDED
File without changes
data/Gemfile ADDED
@@ -0,0 +1,2 @@
+ source 'https://rubygems.org'
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright (c) 2012–2015 Elasticsearch <http://www.elastic.co>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/README.md ADDED
@@ -0,0 +1,86 @@
+ # Aliyun DataHub Plugin for LogStash
+
+ ## Getting Started
+ ---
+
+ ### Introduction
+
+ - This plugin is a Logstash input plugin that consumes data from the DataHub service (an Alibaba Cloud product).
+
+ ### Installation
+ + Requirements: Linux, JDK 1.7+, Logstash (optional; it is fine if it is not installed yet)
+ + Download the tar package from the StreamCompute (Alibaba Cloud) official site and install it with the commands below
+
+ If Logstash is not installed yet, use the following steps:
+
+ ```
+ $ tar -xzvf logstash-with-datahub-2.3.0.tar.gz
+ $ cd logstash-with-datahub-2.3.0
+ ```
+
+ If Logstash is already installed, obtain logstash-input-datahub-1.0.0.gem and install it with:
+
+ ```
+ $ ${LOGSTASH_HOME}/bin/logstash-plugin install --local logstash-input-datahub-1.0.0.gem
+ ```
+
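+ As an optional check afterwards, `logstash-plugin list` prints the names of installed plugins (this assumes the same ${LOGSTASH_HOME} as above):
+
+ ```
+ $ ${LOGSTASH_HOME}/bin/logstash-plugin list | grep datahub
+ logstash-input-datahub
+ ```
+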
+ ### Sample
+ The Logstash configuration is as follows:
+
+ ```
+ input {
+     datahub {
+         access_id => "Your accessId"
+         access_key => "Your accessKey"
+         endpoint => "http://dh-cn-hangzhou.aliyuncs.com"
+         project_name => "test_project"
+         topic_name => "test_logstash"
+         interval => 5
+         #cursor => {
+         #    "0"=>"20000000000000000000000003110091"
+         #    "2"=>"20000000000000000000000003110091"
+         #    "1"=>"20000000000000000000000003110091"
+         #    "4"=>"20000000000000000000000003110091"
+         #    "3"=>"2000000000000000000000000311000"
+         #}
+         shard_ids => []
+         pos_file => "/home/admin/logstash/logstash-2.4.0/pos_file"
+     }
+ }
+
+ output {
+     file {
+         path => "/home/admin/logstash/logstash-2.4.0/output"
+     }
+ }
+ ```
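+
+ A minimal sketch of launching Logstash with this configuration (assuming it is saved as datahub.conf):
+
+ ```
+ $ ${LOGSTASH_HOME}/bin/logstash -f datahub.conf
+ ```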
+
+ ### Parameters
+ ```
+ access_id(Required): Alibaba Cloud access id
+ access_key(Required): Alibaba Cloud access key
+ endpoint(Required): Alibaba Cloud DataHub service endpoint
+ project_name(Required): DataHub project name
+ topic_name(Required): DataHub topic name
+ retry_times(Optional): number of retries; -1 retries forever, 0 never retries, >0 retries at most that many times
+ retry_interval(Optional): interval between retries, in seconds
+ shard_ids(Optional): array of shard ids to consume; an empty list (the default) consumes all shards
+ cursor(Optional): start cursor per shard; empty by default, meaning consumption starts from the beginning
+ pos_file(Required): checkpoint file; must be configured, and the recorded checkpoint takes precedence when restoring the consumption offset
+ ```
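+
+ For reference, the pos_file checkpoint is maintained by the plugin itself as plain text, one shard_id:cursor pair per line. A sketch of its contents (the cursor values here are illustrative):
+
+ ```
+ 0:20000000000000000000000003110091
+ 1:20000000000000000000000003110091
+ ```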
+
+ ## References
+ ---
+
+ - [Logstash homepage](https://www.elastic.co/products/logstash)
+ - [Logstash plugin development](https://www.elastic.co/guide/en/logstash/current/_how_to_write_a_logstash_input_plugin.html#_coding_input_plugins)
+
+ ## Authors && Contributors
+ ---
+
+ - [Dong Xiao](https://github.com/dongxiao1198)
+
+ ## License
+ ---
+
+ Licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
data/lib/logstash/inputs/datahub-test.rb ADDED
@@ -0,0 +1,10 @@
+ require "test/unit"
+ require "datahub"
+
+ class DatahubTest < Test::Unit::TestCase
+
+   def test_register()
+
+   end
+
+ end
data/lib/logstash/inputs/datahub.rb ADDED
@@ -0,0 +1,348 @@
+ #
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements. See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership. The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License. You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing,
+ # software distributed under the License is distributed on an
+ # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ # KIND, either express or implied. See the License for the
+ # specific language governing permissions and limitations
+ # under the License.
+ #
+ require "logstash/inputs/base"
+ require "logstash/namespace"
+ require "logstash/environment"
+ require "thread"
+ require "stud/interval"
+
+ jar_path = File.expand_path(File.join(File.dirname(__FILE__), "../../.."))
+ LogStash::Environment.load_runtime_jars! File.join(jar_path, "vendor")
+
+ # Datahub input plugin
+ class LogStash::Inputs::Datahub < LogStash::Inputs::Base
+   config_name "datahub"
+   default :codec, "plain"
+
+   # datahub access id
+   config :access_id, :validate => :string, :required => true
+
+   # datahub access key
+   config :access_key, :validate => :string, :required => true
+
+   # datahub service endpoint
+   config :endpoint, :validate => :string, :required => true
+
+   # datahub project name
+   config :project_name, :validate => :string, :required => true
+
+   # datahub topic name
+   config :topic_name, :validate => :string, :required => true
+
+   # Number of retries: -1 retries forever, 0 never retries, >0 retries at most that many times
+   config :retry_times, :validate => :number, :required => false, :default => -1
+
+   # Retry interval: seconds to wait before the next retry
+   config :retry_interval, :validate => :number, :required => false, :default => 3
+
+   # Shards to consume
+   config :shard_ids, :validate => :array, :required => false, :default => []
+
+   # Start cursor per shard
+   config :cursor, :validate => :hash, :required => false, :default => {}
+
+   # seek from cursor time
+   config :cursor_time, :validate => :hash, :required => false, :default => {}
+
+   # Batch size per getRecords call
+   config :batch_limit, :validate => :number, :default => 100
+
+   # Poll interval in seconds when a shard has no new data
+   config :interval, :validate => :number, :default => 2
+
+   # Checkpoint file that persists consumption offsets
+   config :pos_file, :validate => :string, :required => true
+
+   # Use one consumer thread per shard
+   config :multi_thread, :validate => :boolean, :required => false, :default => true
+
+   # Lock guarding the checkpoint file and shared cursor state
+   @@mutex = Mutex.new
+
+   @@consumer_threads = []
+
+   DatahubPackage = com.aliyun.datahub
+
+   public
+   def register
+     begin
+       @stop = false
+       @account = DatahubPackage.auth.AliyunAccount::new(@access_id, @access_key)
+       @conf = DatahubPackage.DatahubConfiguration::new(@account, @endpoint)
+
+       @client = DatahubPackage.DatahubClient::new(@conf)
+       @project = DatahubPackage.wrapper.Project::Builder.build(@project_name, @client)
+       @topic = @project.getTopic(@topic_name)
+
+       @shards = get_active_shards(@topic.listShard())
+       @shard_count = @shards.size()
+
+       result = @client.getTopic(@project_name, @topic_name)
+       @schema = result.getRecordSchema()
+       @fields = @schema.getFields()
+
+       # Validate parameters up front
+       check_params()
+       # Restore consumption offsets from the checkpoint file
+       read_checkpoint()
+
+       if @shard_count == 0
+         @logger.error "No active shard available, please check"
+         raise "No active shard available, please check"
+       end
+
+       @logger.info "Init datahub success!"
+     rescue => e
+       @logger.error "Init failed!" + e.message + " " + e.backtrace.inspect.to_s
+       raise e
+     end
+   end # def register
+
+   def check_params()
+     @logger.info "Config shardIds:" + @shard_ids.to_s
+     if @shard_ids.size != 0
+       valid = true
+       for shard in 0...@shard_ids.size
+         @logger.info "Checking shard:" + @shard_ids[shard]
+         shard_exist_active = false
+         for i in 0...@shards.size
+           shard_entry = @shards[i]
+           if shard_entry.getShardId() == @shard_ids[shard] && shard_entry.getState() == DatahubPackage.model.ShardState::ACTIVE
+             shard_exist_active = true
+             break
+           end
+         end
+         if !shard_exist_active
+           valid = false
+         end
+       end
+       if (!valid)
+         @logger.error "Config shard_id does not exist or is not active, check your config"
+         raise "Config shard_id does not exist or is not active, check your config"
+       end
+     else
+       valid = false
+       for i in 0...@shards.size
+         shard_entry = @shards[i]
+         @logger.info "Checking shard:" + shard_entry.getShardId()
+         if shard_entry.getState() == DatahubPackage.model.ShardState::ACTIVE
+           @shard_ids.push(shard_entry.getShardId())
+           valid = true
+         end
+       end
+       if (!valid)
+         @logger.error "There is no active shard."
+         raise "There is no active shard."
+       end
+     end
+     @logger.info "Reading from shards:" + @shard_ids.to_s
+     begin
+       checkpoint_file = File.open(@pos_file, "a+")
+       checkpoint_file.close
+     rescue
+       @logger.error "Config pos_file is invalid, pos_file must point to a writable file."
+       raise "Config pos_file is invalid, pos_file must point to a writable file."
+     end
+   end
+
+   def update_checkpoint()
+     @logger.info "flush checkpoint:" + @cursor.to_s
+     @@mutex.synchronize {
+       checkpoint_file = File.open(@pos_file, "w")
+       @cursor.each { |key, value|
+         checkpoint_file.write(key + ":" + value + "\n")
+       }
+       checkpoint_file.close
+     }
+   end
+
+   def read_checkpoint()
+     begin
+       @logger.info "read checkpoint:" + @pos_file
+       @@mutex.synchronize {
+         File.foreach(@pos_file) do |line|
+           checkpoint = line.chomp
+           cursor_param = checkpoint.split(":")
+           if cursor_param.size != 2
+             raise "Invalid checkpoint:" + checkpoint
+           end
+           @cursor[cursor_param[0]] = cursor_param[1]
+         end
+         @logger.info "recover checkpoint:" + @cursor.to_s
+       }
+     rescue => e
+       @logger.error e.backtrace.inspect.to_s
+       @logger.error "Failed to read checkpoint, pos_file is missing or invalid."
+       raise e
+     end
+   end
+
+   def update_cursor(shard_id, cursor)
+     @@mutex.synchronize {
+       @cursor[shard_id] = cursor
+     }
+   end
+
+   def get_cursor(shard_id, force = false)
+     @@mutex.synchronize {
+       if (force || !@cursor.has_key?(shard_id) || @cursor[shard_id] == nil || @cursor[shard_id] == "")
+         @logger.info "Shard:" + shard_id + " has no checkpoint, will seek to begin."
+         if (@cursor_time.has_key?(shard_id))
+           if (@cursor_time[shard_id] == '-1')
+             cursorRs = @client.getCursor(@project_name, @topic_name, shard_id, DatahubPackage.model.GetCursorRequest::CursorType::LATEST)
+           else
+             cursorRs = @client.getCursor(@project_name, @topic_name, shard_id, @cursor_time[shard_id].to_i * 1000)
+           end
+         else
+           cursorRs = @client.getCursor(@project_name, @topic_name, shard_id, DatahubPackage.model.GetCursorRequest::CursorType::OLDEST)
+         end
+         @cursor[shard_id] = cursorRs.getCursor()
+         @logger.info "Start reading shard:" + shard_id + " with cursor:" + @cursor[shard_id]
+       end
+       return @cursor[shard_id]
+     }
+   end
+
+   def read_record(shard_id, queue)
+     cursor = get_cursor(shard_id)
+     begin
+       @logger.info "Get record at shard: " + shard_id + " cursor:" + cursor
+       recordRs = @client.getRecords(@project_name, @topic_name, shard_id, cursor, @batch_limit, @schema)
+       recordEntries = recordRs.getRecords()
+       recordEntries.each do |record|
+         data = Hash.new
+         @fields.each do |field|
+           case field.getType()
+           when DatahubPackage.common.data.FieldType::BIGINT
+             data[field.getName()] = record.getBigint(field.getName())
+           when DatahubPackage.common.data.FieldType::DOUBLE
+             data[field.getName()] = record.getDouble(field.getName())
+           when DatahubPackage.common.data.FieldType::BOOLEAN
+             data[field.getName()] = record.getBoolean(field.getName())
+           when DatahubPackage.common.data.FieldType::TIMESTAMP
+             data[field.getName()] = record.getTimeStamp(field.getName())
+           when DatahubPackage.common.data.FieldType::STRING
+             data[field.getName()] = record.getString(field.getName())
+           else
+             @logger.error "Unknown type " + field.getType().toString()
+             raise "Unknown type " + field.getType().toString()
+           end
+         end
+         data["timestamp"] = record.getSystemTime()
+         event = LogStash::Event.new("message" => data)
+         decorate(event)
+         queue << event
+       end
+       cursor = recordRs.getNextCursor()
+       update_cursor(shard_id, cursor)
+       @logger.info "Shard:" + shard_id + " Next cursor:" + cursor + " GetRecords:" + (recordRs.getRecordCount()).to_s
+       if (recordRs.getRecordCount() == 0)
+         @logger.info "Read to end, waiting for data:" + cursor
+         Stud.stoppable_sleep(@interval) { stop? }
+       else
+         update_checkpoint
+       end
+     rescue DatahubPackage.exception.InvalidCursorException => e
+       @logger.error "Shard:" + shard_id + " Invalid cursor:" + cursor + ", seek to begin."
+       get_cursor(shard_id, true)
+     rescue DatahubPackage.exception.InvalidOperationException => e
+       @logger.error "Shard:" + shard_id + " Invalid operation, cursor:" + cursor + ", shard is sealed, consumer will exit."
+       raise e
+     rescue DatahubPackage.exception.DatahubClientException => e
+       @logger.error "Read failed:" + e.getMessage() + ", will retry later."
+       Stud.stoppable_sleep(@retry_interval) { stop? }
+     end
+   end
+
+   def shard_consumer(shard_id, queue)
+     @logger.info "Consumer thread started:" + shard_id
+     while !check_stop do
+       begin
+         read_record(shard_id, queue)
+       rescue DatahubPackage.exception.InvalidOperationException => e
+         @logger.error "Shard:" + shard_id + " Invalid operation caused by sealed shard, consumer will exit."
+         break
+       rescue => e
+         @logger.error "Read records failed:" + e.backtrace.inspect.to_s
+         @logger.error "Will retry after " + @retry_interval.to_s + " seconds."
+         Stud.stoppable_sleep(@retry_interval) { stop? }
+       end
+     end # loop
+     @logger.info "Consumer thread exit, shard:" + shard_id
+   end
+
+   def check_stop()
+     @@mutex.synchronize {
+       return @stop
+     }
+   end
+
+   def run(queue)
+     if @multi_thread
+       for i in 0...@shard_ids.size
+         @@consumer_threads << Thread.new(@shard_ids[i], queue) { |shard_id, queue|
+           shard_consumer(shard_id, queue)
+         }
+       end
+       while !check_stop do
+         Stud.stoppable_sleep(15) { stop? }
+         @logger.debug "Main thread heartbeat."
+       end # loop
+     else
+       while !check_stop do
+         for i in 0...@shard_ids.size
+           begin
+             read_record(@shard_ids[i], queue)
+           rescue DatahubPackage.exception.InvalidOperationException => e
+             @logger.error "Shard:" + @shard_ids[i] + " Invalid operation caused by sealed shard, remove this shard."
+             @shard_ids.delete_at(i)
+             break
+           rescue => e
+             @logger.error "Read records failed:" + e.backtrace.inspect.to_s
+             @logger.error "Will retry after " + @retry_interval.to_s + " seconds."
+             Stud.stoppable_sleep(@retry_interval) { stop? }
+           end
+         end
+       end # loop
+     end
+     @logger.info "Main thread exit."
+   end # def run
+
+   def get_active_shards(shards)
+     active_shards = []
+     for i in 0...shards.size
+       entry = shards.get(i)
+       if entry.getState() == DatahubPackage.model.ShardState::ACTIVE
+         active_shards.push(entry)
+       end
+     end
+     return active_shards
+   end
+
+   def stop
+     @@mutex.synchronize {
+       @stop = true
+     }
+     @logger.info "Plugin stopping, waiting for consumer thread stop."
+     @@consumer_threads.each { |thread|
+       thread.join
+     }
+     @logger.info "Plugin stopped."
+   end
+
+ end # class LogStash::Inputs::Datahub
data/logstash-input-datahub.gemspec ADDED
@@ -0,0 +1,24 @@
+ Gem::Specification.new do |s|
+   s.name = 'logstash-input-datahub'
+   s.version = "1.0.0"
+   s.licenses = ["Apache License (2.0)"]
+   s.summary = "Logstash input plugin for Aliyun DataHub."
+   s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program"
+   s.authors = ["Aliyun"]
+   s.email = "stream@service.aliyun.com"
+   s.homepage = "https://datahub.console.aliyun.com/datahub"
+   s.require_paths = ["lib"]
+   #s.platform = 'java'
+   # Files
+   s.files = Dir['lib/**/*','lib/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE']
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
+
+   # Gem dependencies
+   s.add_runtime_dependency 'stud'
+   s.add_runtime_dependency "logstash-core", ">= 2.0.0", "< 3.0.0"
+   s.add_runtime_dependency "logstash-codec-plain"
+ end
data/spec/input/datahub.rb ADDED
@@ -0,0 +1,11 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/datahub"
+ require "logstash/codecs/plain"
+ require "logstash/event"
+
+ describe LogStash::Inputs::Datahub do
+   it_behaves_like "an interruptible input plugin" do
+     let(:config) { { "interval" => 100 } }
+   end
+ end
metadata ADDED
@@ -0,0 +1,120 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-input-datahub
+ version: !ruby/object:Gem::Version
+   version: 1.0.0
+ platform: ruby
+ authors:
+ - Aliyun
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2017-02-23 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: stud
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-core
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 3.0.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 3.0.0
+ - !ruby/object:Gem::Dependency
+   name: logstash-codec-plain
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: This gem is a logstash plugin required to be installed on top of the
+   Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not
+   a stand-alone program
+ email: stream@service.aliyun.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - CONTRIBUTORS
+ - Gemfile
+ - LICENSE
+ - README.md
+ - lib/logstash/inputs/datahub-test.rb
+ - lib/logstash/inputs/datahub.rb
+ - logstash-input-datahub.gemspec
+ - spec/input/datahub.rb
+ - vendor/jar-dependencies/runtime-jars/aliyun-sdk-datahub-2.2.1-SNAPSHOT.jar
+ - vendor/jar-dependencies/runtime-jars/bouncycastle.provider-1.38-jdk15.jar
+ - vendor/jar-dependencies/runtime-jars/commons-codec-1.9.jar
+ - vendor/jar-dependencies/runtime-jars/commons-io-2.4.jar
+ - vendor/jar-dependencies/runtime-jars/commons-lang3-3.3.2.jar
+ - vendor/jar-dependencies/runtime-jars/gson-2.6.2.jar
+ - vendor/jar-dependencies/runtime-jars/jackson-annotations-2.4.0.jar
+ - vendor/jar-dependencies/runtime-jars/jackson-core-2.4.4.jar
+ - vendor/jar-dependencies/runtime-jars/jackson-core-asl-1.9.13.jar
+ - vendor/jar-dependencies/runtime-jars/jackson-databind-2.4.4.jar
+ - vendor/jar-dependencies/runtime-jars/jackson-mapper-asl-1.9.13.jar
+ - vendor/jar-dependencies/runtime-jars/log4j-1.2.17.jar
+ - vendor/jar-dependencies/runtime-jars/lz4-1.3.0.jar
+ - vendor/jar-dependencies/runtime-jars/slf4j-api-1.7.12.jar
+ - vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.12.jar
+ homepage: https://datahub.console.aliyun.com/datahub
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: input
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.4.5.1
+ signing_key:
+ specification_version: 4
+ summary: Logstash input plugin for Aliyun DataHub.
+ test_files:
+ - spec/input/datahub.rb