logstash-output-hdfs 0.2.0

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: d90d1ec8104476f6daf5848f37a79db8bfd2c6e0
+   data.tar.gz: 14642e70c7219a7852433e65c51af480164a52c6
+ SHA512:
+   metadata.gz: 3d65c7f8919758f88022b25956f71f79c6f51cd0bfc539f7f00cf54a98c1625fd5150af567facd910b9043a0297a88acdde0cb9c350ab4327b4d31debe15e0fc
+   data.tar.gz: 2197df4e76cd7ced8f5d57f41989292c85465baedf8fd6dc46da6082a4c0a3877725f78f86e6a2e56c85576ca1c75060250be336438888b3dcb0cbfb1d7e57d7
.gitignore ADDED
@@ -0,0 +1,18 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ coverage
+ InstalledFiles
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+ 
+ # YARD artifacts
+ .yardoc
+ _yardoc
+ doc/
LICENSE.txt ADDED
@@ -0,0 +1,165 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+ 
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+ 
+ 
+ This version of the GNU Lesser General Public License incorporates
+ the terms and conditions of version 3 of the GNU General Public
+ License, supplemented by the additional permissions listed below.
+ 
+ 0. Additional Definitions.
+ 
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+ General Public License, and the "GNU GPL" refers to version 3 of the GNU
+ General Public License.
+ 
+ "The Library" refers to a covered work governed by this License,
+ other than an Application or a Combined Work as defined below.
+ 
+ An "Application" is any work that makes use of an interface provided
+ by the Library, but which is not otherwise based on the Library.
+ Defining a subclass of a class defined by the Library is deemed a mode
+ of using an interface provided by the Library.
+ 
+ A "Combined Work" is a work produced by combining or linking an
+ Application with the Library. The particular version of the Library
+ with which the Combined Work was made is also called the "Linked
+ Version".
+ 
+ The "Minimal Corresponding Source" for a Combined Work means the
+ Corresponding Source for the Combined Work, excluding any source code
+ for portions of the Combined Work that, considered in isolation, are
+ based on the Application, and not on the Linked Version.
+ 
+ The "Corresponding Application Code" for a Combined Work means the
+ object code and/or source code for the Application, including any data
+ and utility programs needed for reproducing the Combined Work from the
+ Application, but excluding the System Libraries of the Combined Work.
+ 
+ 1. Exception to Section 3 of the GNU GPL.
+ 
+ You may convey a covered work under sections 3 and 4 of this License
+ without being bound by section 3 of the GNU GPL.
+ 
+ 2. Conveying Modified Versions.
+ 
+ If you modify a copy of the Library, and, in your modifications, a
+ facility refers to a function or data to be supplied by an Application
+ that uses the facility (other than as an argument passed when the
+ facility is invoked), then you may convey a copy of the modified
+ version:
+ 
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+ 
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+ 
+ 3. Object Code Incorporating Material from Library Header Files.
+ 
+ The object code form of an Application may incorporate material from
+ a header file that is part of the Library. You may convey such object
+ code under terms of your choice, provided that, if the incorporated
+ material is not limited to numerical parameters, data structure
+ layouts and accessors, or small macros, inline functions and templates
+ (ten or fewer lines in length), you do both of the following:
+ 
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+ 
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+ 
+ 4. Combined Works.
+ 
+ You may convey a Combined Work under terms of your choice that,
+ taken together, effectively do not restrict modification of the
+ portions of the Library contained in the Combined Work and reverse
+ engineering for debugging such modifications, if you also do each of
+ the following:
+ 
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+ 
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+ 
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+ 
+ d) Do one of the following:
+ 
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+ 
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+ 
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+ 
+ 5. Combined Libraries.
+ 
+ You may place library facilities that are a work based on the
+ Library side by side in a single library together with other library
+ facilities that are not Applications and are not covered by this
+ License, and convey such a combined library under terms of your
+ choice, if you do both of the following:
+ 
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+ 
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+ 
+ 6. Revised Versions of the GNU Lesser General Public License.
+ 
+ The Free Software Foundation may publish revised and/or new versions
+ of the GNU Lesser General Public License from time to time. Such new
+ versions will be similar in spirit to the present version, but may
+ differ in detail to address new problems or concerns.
+ 
+ Each version is given a distinguishing version number. If the
+ Library as you received it specifies that a certain numbered version
+ of the GNU Lesser General Public License "or any later version"
+ applies to it, you have the option of following the terms and
+ conditions either of that published version or of any later version
+ published by the Free Software Foundation. If the Library as you
+ received it does not specify a version number of the GNU Lesser
+ General Public License, you may choose any version of the GNU Lesser
+ General Public License ever published by the Free Software Foundation.
+ 
+ If the Library as you received it specifies that a proxy can decide
+ whether future versions of the GNU Lesser General Public License shall
+ apply, that proxy's public statement of acceptance of any version is
+ permanent authorization for you to choose that version for the
+ Library.
README.md ADDED
@@ -0,0 +1,75 @@
+ # Logstash HDFS plugin
+ 
+ An HDFS plugin for [Logstash](http://logstash.net). This plugin is provided as an external plugin (see Usage below) and is not part of the Logstash project.
+ 
+ # Usage
+ 
+ ## Logstash 1.4.x
+ 
+ Run logstash with the `--pluginpath` (`-p`) command line argument to let logstash know where the plugin is. You also need to let Java know where your Hadoop JARs are, so set the `CLASSPATH` variable accordingly.
+ On Logstash 1.4.x, use the following command (adjusting paths as necessary, of course):
+ 
+     LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native" GEM_HOME=./logstash-1.4.2/vendor/bundle/jruby/1.9 CLASSPATH=$(find ./logstash-1.4.2/vendor/jar -type f -name '*.jar'|tr '\n' ':'):$(find /usr/lib/hadoop-hdfs -type f -name '*.jar' | tr '\n' ':'):$(find /usr/lib/hadoop -type f -name '*.jar' | tr '\n' ':'):/etc/hadoop/conf java org.jruby.Main -I./logstash-1.4.2/lib ./logstash-1.4.2/lib/logstash/runner.rb agent -f logstash.conf -p ./logstash-hdfs/lib
+ 
+ Note that logstash is not executed with `java -jar` because executable jars ignore the external classpath. Instead, we put the logstash jar on the classpath and invoke the runner class directly.
+ Important: the Hadoop configuration dir containing `hdfs-site.xml` must be on the classpath.
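The `$(find … | tr '\n' ':')` fragments in the command above simply join every jar under a directory into a `:`-separated classpath. A quick sanity check of the idiom on a throwaway directory (the paths below are stand-ins, not a real Hadoop layout):

```shell
# Demonstrate the find|tr classpath idiom on a temporary directory.
dir=$(mktemp -d)
touch "$dir/a.jar" "$dir/b.jar" "$dir/readme.txt"
# Only *.jar files are picked up; each path is suffixed with ':'.
CP=$(find "$dir" -type f -name '*.jar' | sort | tr '\n' ':')
echo "$CP"
```

The same pattern works for any of the Hadoop directories above; the trailing `:` is harmless to the JVM.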
16
+ 
+ ## Logstash 1.5.x
+ 
+ Logstash 1.5.x supports distribution of plugins as rubygems, which makes life a lot easier. To install the plugin from the version published on rubygems:
+ 
+     $LOGSTASH_DIR/bin/plugin install logstash-output-hdfs
+ 
+ Or from source (after checking out the source, run in the checkout directory):
+ 
+     $LOGSTASH_DIR/bin/plugin build logstash-output-hdfs.gemspec
+ 
+ Then run logstash with the following command:
+ 
+     LD_LIBRARY_PATH="$HADOOP_DIR/lib/native" CLASSPATH=$(find $HADOOP_DIR/share/hadoop/common/lib/ -name '*.jar' | tr '\n' ':'):$HADOOP_DIR/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:$HADOOP_DIR/share/hadoop/common/hadoop-common-2.4.0.jar:$HADOOP_DIR/conf $LOGSTASH_DIR/bin/logstash agent -f logstash.conf
+ 
+ Hadoop paths may need adjustments depending on the distribution and version you are using. The important thing is to have `hadoop-hdfs`, `hadoop-common` and all the jar files in `common/lib` on the classpath.
+ 
+ The following command line will work on most distributions (but will take a little longer to load since it pulls in many unnecessary jars):
+ 
+     LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native" CLASSPATH=$(find /usr/lib/hadoop-hdfs -type f -name '*.jar' | tr '\n' ':'):$(find /usr/lib/hadoop -type f -name '*.jar' | tr '\n' ':'):/etc/hadoop/conf $LOGSTASH_DIR/bin/logstash agent -f logstash.conf
36
+ 
+ 
+ # HDFS Configuration
+ 
+ By default, the plugin loads Hadoop's configuration from the classpath. A configuration option named `hadoop_config_resources` lets you pass one or more classpath resource locations that override this default configuration:
+ 
+     output {
+       hdfs {
+         path => "/path/to/output_file.log"
+         hadoop_config_resources => ['path/to/configuration/on/classpath/hdfs-site.xml', 'path/to/configuration/on/classpath/core-site.xml']
+       }
+     }
+ 
+ 
+ # HDFS Append and rewriting files
+ 
+ Please note: HDFS versions prior to 2.x do not properly support append. See [HADOOP-8230](https://issues.apache.org/jira/browse/HADOOP-8230) for reference.
+ To enable append on HDFS, set _dfs.support.append_ in <tt>hdfs-site.xml</tt> (2.x) or _dfs.support.broken.append_ on 1.x, and use the *enable_append* config option:
+ 
+     output {
+       hdfs {
+         path => "/path/to/output_file.log"
+         enable_append => true
+       }
+     }
+ 
+ If append is not supported and the file already exists, the plugin will cowardly refuse to reopen the file for writing unless *enable_reopen* is set to true.
+ Reopening causes HDFS to truncate the file, so this is probably a very bad idea; you have been warned!
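Putting the options together, a fuller output block might look like this (the field names and values are illustrative only; `message_format` and `flush_interval` are the other plugin options documented in `lib/logstash/outputs/hdfs.rb`):

    output {
      hdfs {
        # Event fields may be used in the path
        path => "/logs/%{type}/output_file.log"
        # Without message_format, the event is written as a JSON line
        message_format => "%{host} %{message}"
        # Seconds between flushes; 0 flushes on every message
        flush_interval => 60
        enable_append => true
      }
    }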
+ 
+ # HDFS Flush
+ 
+ Flush and sync don't actually work as promised on HDFS (see [HDFS-536](https://issues.apache.org/jira/browse/HDFS-536)).
+ In Hadoop 2.x, `hflush` provides flush-like functionality, and the plugin will use `hflush` when it is available.
+ Nevertheless, the flushing code has been left in the plugin in case `flush` and `sync` work on some HDFS implementation.
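The `hflush` detection is a one-time reflection check at class-load time. A minimal pure-Ruby sketch of the same pattern, using a stand-in class instead of `org.apache.hadoop.fs.FSDataOutputStream` (which is only available under JRuby with Hadoop on the classpath):

```ruby
# Stand-in stream: a Hadoop 2.x FSDataOutputStream exposes hflush,
# while a 1.x stream only has flush/sync.
class FakeHadoop2Stream
  def hflush; :hflushed; end
  def flush;  :flushed;  end
end

# Reflection on JRuby-wrapped Java classes is expensive (it locks the
# object), so the plugin performs this check once and caches the symbol.
FLUSH_METHOD =
  FakeHadoop2Stream.instance_methods.include?(:hflush) ? :hflush : :flush

stream = FakeHadoop2Stream.new
result = stream.send(FLUSH_METHOD)
```

With a 1.x-style stream class lacking `hflush`, the same check would fall back to `:flush`.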
+ 
+ # License
+ 
+ The plugin is released under the LGPL v3.
+ 
Rakefile ADDED
@@ -0,0 +1,7 @@
+ @files=[]
+ 
+ task :default do
+   system("rake -T")
+ end
+ 
+ require "logstash/devutils/rake"
lib/logstash/outputs/hdfs.rb ADDED
@@ -0,0 +1,213 @@
+ require "logstash/namespace"
+ require "logstash/outputs/base"
+ 
+ # HDFS output.
+ #
+ # Write events to files on HDFS. You can use fields from the
+ # event as parts of the filename.
+ class LogStash::Outputs::HDFS < LogStash::Outputs::Base
+ 
+   config_name "hdfs"
+   milestone 1
+ 
+   # The path of the file to write to. Event fields can be used here,
+   # like "/var/log/logstash/%{@source_host}/%{application}"
+   config :path, :validate => :string, :required => true
+ 
+   # The format to use when writing events to the file. This value
+   # supports any string and can include %{name} and other dynamic
+   # strings.
+   #
+   # If this setting is omitted, the full json representation of the
+   # event will be written as a single line.
+   config :message_format, :validate => :string
+ 
+   # Flush interval (in seconds) for flushing writes to log files.
+   # 0 will flush on every message. Flush doesn't actually work on most
+   # Hadoop 1.x versions; if you really care about flush, use 2.x.
+   config :flush_interval, :validate => :number, :default => 60
+ 
+   # Enable the use of append. This only works with Hadoop 2.x
+   # (dfs.support.append) or 1.x with dfs.support.broken.append.
+   config :enable_append, :validate => :boolean, :default => false
+ 
+   # Enable re-opening of existing files. This is a really bad idea because
+   # HDFS will truncate the file. Only use it if you know what you're doing.
+   config :enable_reopen, :validate => :boolean, :default => false
+ 
+   # The classpath resource locations of the Hadoop configuration
+   config :hadoop_config_resources, :validate => :array
+ 
+   public
+   def register
+     require "java"
+     java_import "org.apache.hadoop.fs.Path"
+     java_import "org.apache.hadoop.fs.FileSystem"
+     java_import "org.apache.hadoop.conf.Configuration"
+ 
+     @files = {}
+     now = Time.now
+     @last_flush_cycle = now
+     @last_stale_cleanup_cycle = now
+     @flush_interval = @flush_interval.to_i
+     @stale_cleanup_interval = 10
+     conf = Configuration.new
+ 
+     if @hadoop_config_resources
+       @hadoop_config_resources.each { |resource| conf.addResource(resource) }
+     end
+ 
+     @logger.info "Using Hadoop configuration: #{conf.get('fs.defaultFS')}"
+     @hdfs = FileSystem.get(conf)
+   end # def register
+ 
+   public
+   def receive(event)
+     return unless output?(event)
+     out = get_output_stream(event.sprintf(@path))
+ 
+     if @message_format
+       output = event.sprintf(@message_format)
+     else
+       output = event.to_json
+     end
+     output += "\n" unless output.end_with? "\n"
+ 
+     out.write(output)
+ 
+     flush(out)
+     close_stale_files
+   end # def receive
+ 
+   def teardown
+     @logger.debug("Teardown: closing files")
+     @files.each do |path, fd|
+       begin
+         fd.close
+         @logger.debug("Closed file #{path}", :fd => fd)
+       rescue Exception => e
+         @logger.error("Exception while flushing and closing files.", :exception => e)
+       end
+     end
+     finished
+   end
+ 
+   private
+   def get_output_stream(path_string)
+     return @files[path_string] if @files.has_key?(path_string)
+     path = Path.new(path_string)
+     if @hdfs.exists(path)
+       if enable_append
+         begin
+           dfs_data_output_stream = @hdfs.append(path)
+         rescue java.io.IOException => e
+           @logger.error("Error opening path for append, trying to recover lease", :exception => e)
+           recover_lease(path)
+           retry
+         end
+       elsif enable_reopen
+         @logger.warn("Overwriting HDFS file", :path => path_string)
+         dfs_data_output_stream = @hdfs.create(path, true)
+       else
+         raise IOError, "Cowardly refusing to open pre-existing file (#{path_string}) because HDFS will truncate the file!"
+       end
+     else
+       dfs_data_output_stream = @hdfs.create(path)
+     end
+     @files[path_string] = DFSOutputStreamWrapper.new(dfs_data_output_stream)
+   end
+ 
+   def flush(fd)
+     if flush_interval > 0
+       flush_pending_files
+     else
+       fd.flush
+     end
+   end
+ 
+   # Runs every flush_interval seconds or so (triggered by events; if there
+   # are no events there's no point flushing files anyway)
+   def flush_pending_files
+     return unless Time.now - @last_flush_cycle >= flush_interval
+     @logger.debug("Starting flush cycle")
+     @files.each do |path, fd|
+       @logger.debug("Flushing file", :path => path, :fd => fd)
+       fd.flush
+     end
+     @last_flush_cycle = Time.now
+   end
+ 
+   # Runs every 10 seconds or so (triggered by events; if there are no events
+   # there's no point closing files anyway)
+   def close_stale_files
+     now = Time.now
+     return unless now - @last_stale_cleanup_cycle >= @stale_cleanup_interval
+     @logger.info("Starting stale files cleanup cycle", :files => @files)
+     inactive_files = @files.select { |path, file| not file.active }
+     @logger.debug("%d stale files found" % inactive_files.count, :inactive_files => inactive_files)
+     inactive_files.each do |path, file|
+       @logger.info("Closing file %s" % path)
+       file.close
+       @files.delete(path)
+     end
+     # mark all files as inactive; a call to write will mark them as active again
+     @files.each { |path, fd| fd.active = false }
+     @last_stale_cleanup_cycle = now
+   end
+ 
+   def recover_lease(path)
+     is_file_closed_available = @hdfs.respond_to? :isFileClosed
+     start = Time.now
+     first_retry = true
+ 
+     until Time.now - start > 900 # 15 minutes timeout
+       recovered = @hdfs.recoverLease(path)
+       return true if recovered
+       # first retry is fast
+       if first_retry
+         sleep 4
+         first_retry = false
+         next
+       end
+ 
+       # on further retries we back off and spin on isFileClosed in hopes of
+       # catching an early break
+       61.times do
+         return true if is_file_closed_available and @hdfs.isFileClosed(path)
+         sleep 1
+       end
+     end
+     false
+   end
+ 
+   class DFSOutputStreamWrapper
+     # reflection locks java objects, so only do this once
+     if org.apache.hadoop.fs.FSDataOutputStream.instance_methods.include? :hflush
+       # hadoop 2.x uses hflush
+       FLUSH_METHOD = :hflush
+     else
+       FLUSH_METHOD = :flush
+     end
+     attr_accessor :active
+     def initialize(output_stream)
+       @output_stream = output_stream
+     end
+     def close
+       @output_stream.close
+     rescue java.io.IOException
+       # ignore close errors; the stream is being discarded either way
+     end
+     def flush
+       if FLUSH_METHOD == :hflush
+         @output_stream.hflush
+       else
+         @output_stream.flush
+         @output_stream.sync
+       end
+     rescue
+       # flush/sync are best-effort on HDFS; ignore failures here
+     end
+     def write(str)
+       bytes = str.to_java_bytes
+       @output_stream.write(bytes, 0, bytes.length)
+       @active = true
+     end
+   end
+ end # class LogStash::Outputs::HDFS
+ 
logstash-output-hdfs.gemspec ADDED
@@ -0,0 +1,26 @@
+ Gem::Specification.new do |s|
+ 
+   s.name = 'logstash-output-hdfs'
+   s.version = '0.2.0'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = "$summary"
+   s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program"
+   s.authors = ["Avishai Ish-Shalom"]
+   s.email = 'avishai@fewbytes.com'
+   s.homepage = "https://github.com/avishai-ish-shalom/logstash-hdfs"
+   s.require_paths = ["lib"]
+ 
+   # Files
+   s.files = `git ls-files`.split($\) + ::Dir.glob('vendor/*')
+ 
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+ 
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
+ 
+   # Gem dependencies
+   s.add_runtime_dependency 'logstash', '>= 1.4.0', '< 2.0.0'
+ 
+   s.add_development_dependency 'logstash-devutils'
+ end
metadata ADDED
@@ -0,0 +1,85 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-output-hdfs
+ version: !ruby/object:Gem::Version
+   version: 0.2.0
+ platform: ruby
+ authors:
+ - Avishai Ish-Shalom
+ autorequire: 
+ bindir: bin
+ cert_chain: []
+ date: 2014-12-18 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - <
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+   name: logstash
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - <
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-devutils
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program
+ email: avishai@fewbytes.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - .gitignore
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - lib/logstash/outputs/hdfs.rb
+ - logstash-output-hdfs.gemspec
+ homepage: https://github.com/avishai-ish-shalom/logstash-hdfs
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: output
+ post_install_message: 
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project: 
+ rubygems_version: 2.1.9
+ signing_key: 
+ specification_version: 4
+ summary: $summary
+ test_files: []