logstash-output-googlecloudstorage 2.1.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: abbcf0973e9998e05af85da8a3b5a099b59301e85e8a333f797962d9578b5d4b
+   data.tar.gz: 432e21ad4d816480a1b3c16da6c75f265742de83e5d647b791a9be7c189be42e
+ SHA512:
+   metadata.gz: 1474e3e8f52436f67319f2c4ba99d6ad65f7dbf22d44cf5a080a809d0e3935240cf1b5297b6cbc61a8061ce0e4a86d1848c9358e195c825b56da8334c05f8459
+   data.tar.gz: d34f68d51b328de8fb18cda83252d381c917de23b5737dc5e073b0ecc0c3939639c85b20b89dcabd0871343d5b3b7ea6918f2b268412213c951ee70c73a8321c
data/CHANGELOG.md ADDED
@@ -0,0 +1,73 @@
+ ## 4.2.6
+ - Removed JRuby check when using FIFOs [#75](https://github.com/logstash-plugins/logstash-output-file/pull/75)
+
+ ## 4.2.5
+ - Fix a bug introduced in v4.2.4 where events on low-volume pipelines could remain unflushed for long periods when `flush_interval` was non-zero [#70](https://github.com/logstash-plugins/logstash-output-file/pull/70)
+
+ ## 4.2.4
+ - Fix a bug where the flush interval was being called for each event when enabled [#67](https://github.com/logstash-plugins/logstash-output-file/pull/67)
+
+ ## 4.2.3
+ - Docs: Set the default_codec doc attribute.
+
+ ## 4.2.2
+ - Add feature `write_behavior` to the documentation #58
+
+ ## 4.2.1
+ - Bugfix: Move the require of flores into the spec file instead of the main file.rb
+
+ ## 4.2.0
+ - New `write_behavior` feature. Value can be "append" (default) or
+ "overwrite". If "append", events will be appended to the end of the file.
+ If "overwrite", the file will only contain the last event written.
+
+ ## 4.1.2
+ - Update gemspec summary
+
+ ## 4.1.1
+ - Fix some documentation issues
+
+ ## 4.1.0
+ - Remove obsolete option `message_format`
+
+ ## 4.0.1
+ - Move one log message from info to debug to avoid noise
+
+ ## 4.0.0
+ - Make the `message_format` option obsolete
+ - Use the new Logstash 2.4/5.0 APIs for working batchwise and with shared concurrency
+
+ ## 3.0.2
+ - Relax constraint on logstash-core-plugin-api to >= 1.60 <= 2.99
+
+ ## 3.0.1
+ - Republish all the gems under jruby.
+ ## 3.0.0
+ - Update the plugin to version 2.0 of the plugin API; this change is required for Logstash 5.0 compatibility. See https://github.com/elastic/logstash/issues/5141
+ ## 2.2.5
+ - Depend on logstash-core-plugin-api instead of logstash-core, removing the need to mass update plugins on major releases of Logstash
+ ## 2.2.4
+ - New dependency requirements for logstash-core for the 5.0 release
+ ## 2.2.3
+ - Rename Dir.exists? to Dir.exist? to fix deprecation warning
+ - Allow setting dir and file permissions
+
+ ## 2.2.2
+ - Fixed specs to not depend on pipeline ordering
+
+ ## 2.2.1
+ - Fixed Time specs
+
+ ## 2.2.0
+ - Add support for codec, using **json_lines** as the default codec to keep the default behavior.
+ Ref: https://github.com/logstash-plugins/logstash-output-file/pull/9
+
+ ## 2.1.0
+ - Add `create_if_deleted` option to recreate the destination file if it
+ was deleted by another agent on the machine. When set to false,
+ incoming messages are written to the failure file instead.
+
+ ## 2.0.0
+ - Plugins were updated to follow the new shutdown semantics; this mainly allows Logstash to instruct input plugins to terminate gracefully,
+ instead of using Thread.raise on the plugins' threads. Ref: https://github.com/elastic/logstash/pull/3895
+ - Dependency on logstash-core updated to 2.0
data/CONTRIBUTORS ADDED
@@ -0,0 +1,20 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Contributors:
+ * Colin Surprenant (colinsurprenant)
+ * Ivan Babrou (bobrik)
+ * John E. Vincent (lusis)
+ * Jordan Sissel (jordansissel)
+ * Kayla Green (MixMuffins)
+ * Kurt Hurtado (kurtado)
+ * Matt Gray (mattgray)
+ * Pete Fritchman (fetep)
+ * Philippe Weber (wiibaa)
+ * Pier-Hugues Pellerin (ph)
+ * Richard Pijnenburg (electrical)
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/Gemfile ADDED
@@ -0,0 +1,11 @@
+ source 'https://rubygems.org'
+
+ gemspec
+
+ logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
+ use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
+
+ if Dir.exist?(logstash_path) && use_logstash_source
+   gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
+   gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
+ end
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright (c) 2012-2018 Elasticsearch <http://www.elastic.co>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/NOTICE.TXT ADDED
@@ -0,0 +1,5 @@
+ Elasticsearch
+ Copyright 2012-2015 Elasticsearch
+
+ This product includes software developed by The Apache Software
+ Foundation (http://www.apache.org/).
data/README.md ADDED
@@ -0,0 +1,98 @@
+ # Logstash Plugin
+
+ [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-output-file.svg)](https://travis-ci.org/logstash-plugins/logstash-output-file)
+
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+
+ ## Documentation
+
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+ - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
+
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+ ## Developing
+
+ ### 1. Plugin Development and Testing
+
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
+
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+ - Install dependencies
+ ```sh
+ bundle install
+ ```
+
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
+ ```
+
+ - Run tests
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ ### 2. Running your unpublished Plugin in Logstash
+
+ #### 2.1 Run in a local Logstash clone
+
+ - Edit the Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+ ```
+ - Install the plugin
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install --no-verify
+
+ # Prior to Logstash 2.3
+ bin/plugin install --no-verify
+ ```
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
+ ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+ #### 2.2 Run in an installed Logstash
+
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install --no-verify
+
+ # Prior to Logstash 2.3
+ bin/plugin install --no-verify
+ ```
+ - Start Logstash and proceed to test the plugin
+
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+ It is more important to the community that you are able to contribute.
+
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
@@ -0,0 +1,147 @@
+ :plugin: file
+ :type: output
+ :default_codec: json_lines
+
+ ///////////////////////////////////////////
+ START - GENERATED VARIABLES, DO NOT EDIT!
+ ///////////////////////////////////////////
+ :version: %VERSION%
+ :release_date: %RELEASE_DATE%
+ :changelog_url: %CHANGELOG_URL%
+ :include_path: ../../../../logstash/docs/include
+ ///////////////////////////////////////////
+ END - GENERATED VARIABLES, DO NOT EDIT!
+ ///////////////////////////////////////////
+
+ [id="plugins-{type}s-{plugin}"]
+
+ === File output plugin
+
+ include::{include_path}/plugin_header.asciidoc[]
+
+ ==== Description
+
+ This output writes events to files on disk. You can use fields
+ from the event as parts of the filename and/or path.
+
+ By default, this output writes one event per line in **json** format.
+ You can customise the line format using the `line` codec, like so:
+ [source,ruby]
+   output {
+     file {
+       path => ...
+       codec => line { format => "custom format: %{message}"}
+     }
+   }
+
+ [id="plugins-{type}s-{plugin}-options"]
+ ==== File Output Configuration Options
+
+ This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
+
+ [cols="<,<,<",options="header",]
+ |=======================================================================
+ |Setting |Input type|Required
+ | <<plugins-{type}s-{plugin}-create_if_deleted>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-dir_mode>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-file_mode>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-filename_failure>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-flush_interval>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-gzip>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-path>> |<<string,string>>|Yes
+ | <<plugins-{type}s-{plugin}-write_behavior>> |<<string,string>>|No
+ |=======================================================================
+
+ Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
+ output plugins.
+
+ &nbsp;
+
+ [id="plugins-{type}s-{plugin}-create_if_deleted"]
+ ===== `create_if_deleted`
+
+ * Value type is <<boolean,boolean>>
+ * Default value is `true`
+
+ If the configured file is deleted, but an event is handled by the plugin,
+ the plugin will recreate the file.
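+
+ For example, a minimal sketch that disables this behavior (the path value here is illustrative, not a default):
+ [source,ruby]
+   output {
+     file {
+       path => "/var/log/logstash/out.log"
+       create_if_deleted => false
+     }
+   }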
+
+ [id="plugins-{type}s-{plugin}-dir_mode"]
+ ===== `dir_mode`
+
+ * Value type is <<number,number>>
+ * Default value is `-1`
+
+ Dir access mode to use. Note that due to a bug in JRuby the system umask
+ is ignored on Linux: https://github.com/jruby/jruby/issues/3426
+ Setting it to -1 uses the default OS value.
+ Example: `"dir_mode" => 0750`
+
+ [id="plugins-{type}s-{plugin}-file_mode"]
+ ===== `file_mode`
+
+ * Value type is <<number,number>>
+ * Default value is `-1`
+
+ File access mode to use. Note that due to a bug in JRuby the system umask
+ is ignored on Linux: https://github.com/jruby/jruby/issues/3426
+ Setting it to -1 uses the default OS value.
+ Example: `"file_mode" => 0640`
+
+ [id="plugins-{type}s-{plugin}-filename_failure"]
+ ===== `filename_failure`
+
+ * Value type is <<string,string>>
+ * Default value is `"_filepath_failures"`
+
+ If the generated path is invalid, the events will be saved
+ into this file, inside the defined path.
+
+ [id="plugins-{type}s-{plugin}-flush_interval"]
+ ===== `flush_interval`
+
+ * Value type is <<number,number>>
+ * Default value is `2`
+
+ Flush interval (in seconds) for flushing writes to log files.
+ 0 will flush on every message.
+
+ [id="plugins-{type}s-{plugin}-gzip"]
+ ===== `gzip`
+
+ * Value type is <<boolean,boolean>>
+ * Default value is `false`
+
+ Gzip the output stream before writing to disk.
+
+ [id="plugins-{type}s-{plugin}-path"]
+ ===== `path`
+
+ * This is a required setting.
+ * Value type is <<string,string>>
+ * There is no default value for this setting.
+
+ The path to the file to write. Event fields can be used here,
+ like `/var/log/logstash/%{host}/%{application}`.
+ One may also utilize the path option for date-based log
+ rotation via the joda time format. This will use the event
+ timestamp.
+ E.g.: `path => "./test-%{+YYYY-MM-dd}.txt"` to create
+ `./test-2013-05-29.txt`
+
+ If you use an absolute path you cannot start with a dynamic string.
+ E.g.: `/%{myfield}/`, `/test-%{myfield}/` are not valid paths.
+
+ [id="plugins-{type}s-{plugin}-write_behavior"]
+ ===== `write_behavior`
+
+ * Value type is <<string,string>>
+ * Default value is `append`
+
+ If `append`, the file will be opened for appending and each new event will be written at the end of the file.
+ If `overwrite`, the file will be truncated before writing and only the most recent event will appear in the file.
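+
+ As a closing sketch, several of the options above combined into one configuration (all values are illustrative, not defaults):
+ [source,ruby]
+   output {
+     file {
+       path => "/var/log/logstash/%{host}/out-%{+YYYY-MM-dd}.log"
+       write_behavior => "append"
+       flush_interval => 2
+       gzip => false
+       create_if_deleted => true
+     }
+   }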
+
+ [id="plugins-{type}s-{plugin}-common-options"]
+ include::{include_path}/{type}.asciidoc[]
+
+ :default_codec!:
@@ -0,0 +1,475 @@
+ # encoding: utf-8
+ require "logstash/namespace"
+ require "logstash/outputs/base"
+ require "logstash/json"
+ require "logstash/errors"
+ require "zlib"
+
+ # This output writes events to files on disk. You can use fields
+ # from the event as parts of the filename and/or path.
+ #
+ # By default, this output writes one event per line in **json** format.
+ # You can customise the line format using the `line` codec, like so:
+ # [source,ruby]
+ #   output {
+ #     file {
+ #       path => ...
+ #       codec => line { format => "custom format: %{message}"}
+ #     }
+ #   }
+ class LogStash::Outputs::GoogleCloudStorage < LogStash::Outputs::Base
+   concurrency :shared
+
+   FIELD_REF = /%\{[^}]+\}/
+
+   config_name "googlecloudstorage"
+
+   attr_reader :failure_path
+
+   # GCS bucket name, without "gs://" or any other prefix.
+   config :bucket, :validate => :string, :required => true
+
+   # GCS path to private key file.
+   config :key_path, :validate => :string, :required => true
+
+   # GCS private key password.
+   config :key_password, :validate => :string, :default => "notasecret"
+
+   # GCS service account.
+   config :service_account, :validate => :string, :required => true
+
+   # Log file prefix.
+   config :log_file_prefix, :validate => :string, :default => "logstash_gcs"
+
+   # The event format you want to store in files. Defaults to plain text.
+   config :output_format, :validate => [ "json", "plain" ], :default => "plain"
+
+   # Time pattern for log files, defaults to hourly files.
+   # Must be a Time.strftime pattern: www.ruby-doc.org/core-2.0/Time.html#method-i-strftime
+   config :date_pattern, :validate => :string, :default => "%Y-%m-%dT%H:00"
+
+   # Gzip output stream when writing events to log files, set
+   # `Content-Type` to `application/gzip` instead of `text/plain`, and
+   # use file suffix `.log.gz` instead of `.log`.
+   config :gzip, :validate => :boolean, :default => false
+
+   # Gzip output stream when writing events to log files and set
+   # `Content-Encoding` to `gzip`.
+   config :gzip_content_encoding, :validate => :boolean, :default => false
+
+   # Uploader interval when uploading new files to GCS. Adjust time based
+   # on your time pattern (for example, for hourly files, this interval can be
+   # around one hour).
+
+   # The path to the file to write. Event fields can be used here,
+   # like `/var/log/logstash/%{host}/%{application}`
+   # One may also utilize the path option for date-based log
+   # rotation via the joda time format. This will use the event
+   # timestamp.
+   # E.g.: `path => "./test-%{+YYYY-MM-dd}.txt"` to create
+   # `./test-2013-05-29.txt`
+   #
+   # If you use an absolute path you cannot start with a dynamic string.
+   # E.g.: `/%{myfield}/`, `/test-%{myfield}/` are not valid paths
+   config :path, :validate => :string, :required => true
+
+   # Flush interval (in seconds) for flushing writes to log files.
+   # 0 will flush on every message.
+   config :flush_interval, :validate => :number, :default => 100
+
+   # Should the hostname be included in the file name?
+   config :include_hostname, :validate => :boolean, :default => true
+
+   # If the generated path is invalid, the events will be saved
+   # into this file, inside the defined path.
+   config :filename_failure, :validate => :string, :default => '_filepath_failures'
+
+   # If the configured file is deleted, but an event is handled by the plugin,
+   # the plugin will recreate the file.
+   config :create_if_deleted, :validate => :boolean, :default => true
+
+   # Dir access mode to use. Note that due to a bug in JRuby the system umask
+   # is ignored on Linux: https://github.com/jruby/jruby/issues/3426
+   # Setting it to -1 uses the default OS value.
+   # Example: `"dir_mode" => 0750`
+   config :dir_mode, :validate => :number, :default => -1
+
+   # File access mode to use. Note that due to a bug in JRuby the system umask
+   # is ignored on Linux: https://github.com/jruby/jruby/issues/3426
+   # Setting it to -1 uses the default OS value.
+   # Example: `"file_mode" => 0640`
+   config :file_mode, :validate => :number, :default => -1
+
+   # How should the file be written?
+   #
+   # If `append`, the file will be opened for appending and each new event will
+   # be written at the end of the file.
+   #
+   # If `overwrite`, the file will be truncated before writing and only the most
+   # recent event will appear in the file.
+   config :write_behavior, :validate => [ "overwrite", "append" ], :default => "append"
+
+   default :codec, "json_lines"
+
+   public
+   def register
+     require "fileutils" # For mkdir_p
+
+     @files = {}
+     @io_mutex = Mutex.new
+
+     @path = File.expand_path(path)
+
+     validate_path
+
+     # Set up the authenticated GCS client; upload_object depends on it.
+     initialize_google_client
+
+     if path_with_field_ref?
+       @file_root = extract_file_root
+     else
+       @file_root = File.dirname(path)
+     end
+     @failure_path = File.join(@file_root, @filename_failure)
+
+     @flush_interval = @flush_interval.to_i
+     if @flush_interval > 0
+       @flusher = Interval.start(@flush_interval, -> { flush_pending_files })
+     end
+
+     @content_type = @gzip ? 'application/gzip' : 'text/plain'
+     @content_encoding = @gzip_content_encoding ? 'gzip' : 'identity'
+
+     @last_stale_cleanup_cycle = Time.now
+     @stale_cleanup_interval = 10
+   end # def register
+
+   private
+   def validate_path
+     if (root_directory =~ FIELD_REF) != nil
+       @logger.error("GoogleCloudStorage: The starting part of the path should not be dynamic.", :path => @path)
+       raise LogStash::ConfigurationError.new("The starting part of the path should not be dynamic.")
+     end
+   end
+
+   def initialize_google_client
+     require "google/api_client"
+     require "openssl"
+
+     @client = Google::APIClient.new(:application_name => 'Logstash Google Cloud Storage output plugin', :application_version => '0.1')
+     @storage = @client.discovered_api('storage', 'v1')
+
+     key = Google::APIClient::PKCS12.load_key(@key_path, @key_password)
+     service_account = Google::APIClient::JWTAsserter.new(@service_account, 'https://www.googleapis.com/auth/devstorage.read_write', key)
+     @client.authorization = service_account.authorize
+   end
+
+   # Uploads a local file to the configured bucket.
+   def upload_object(filename)
+     begin
+       @logger.debug("GCS: upload object.", :filename => filename)
+
+       media = Google::APIClient::UploadIO.new(filename, @content_type)
+       metadata_insert_result = @client.execute(:api_method => @storage.objects.insert,
+                                                :parameters => {
+                                                  'uploadType' => 'multipart',
+                                                  'bucket' => @bucket,
+                                                  'contentEncoding' => @content_encoding,
+                                                  'name' => File.basename(filename)
+                                                },
+                                                :body_object => {contentType: @content_type},
+                                                :media => media)
+       contents = metadata_insert_result.data
+       @logger.debug("GCS: multipart insert",
+                     :object => contents.name,
+                     :self_link => contents.self_link)
+     rescue => e
+       @logger.error("GCS: failed to upload file", :exception => e)
+       # TODO(rdc): limit retries?
+       sleep 1
+       retry
+     end
+   end
+
+   private
+   def root_directory
+     parts = @path.split(File::SEPARATOR).select { |item| !item.empty? }
+     if Gem.win_platform?
+       # First part is the drive letter
+       parts[1]
+     else
+       parts.first
+     end
+   end
+
+   public
+   def multi_receive_encoded(events_and_encoded)
+     encoded_by_path = Hash.new { |h, k| h[k] = [] }
+
+     events_and_encoded.each do |event, encoded|
+       file_output_path = event_path(event)
+       encoded_by_path[file_output_path] << encoded
+     end
+
+     @io_mutex.synchronize do
+       encoded_by_path.each do |path, chunks|
+         fd = open(path)
+         if @write_behavior == "overwrite"
+           fd.truncate(0)
+           fd.seek(0, IO::SEEK_SET)
+           fd.write(chunks.last)
+         else
+           # append to the file
+           chunks.each { |chunk| fd.write(chunk + "\n") }
+         end
+         fd.flush unless @flusher && @flusher.alive?
+         upload_object(fd.path) unless @flusher && @flusher.alive?
+       end
+
+       close_stale_files
+     end
+   end # def multi_receive_encoded
+
+   public
+   def close
+     @flusher.stop unless @flusher.nil?
+     @io_mutex.synchronize do
+       @logger.debug("Close: closing files")
+
+       @files.each do |path, fd|
+         begin
+           fd.close
+           @logger.debug("Closed file #{path}", :fd => fd)
+         rescue Exception => e
+           @logger.error("Exception while flushing and closing files.", :exception => e)
+         end
+       end
+     end
+   end
+
+   private
+   def inside_file_root?(log_path)
+     target_file = File.expand_path(log_path)
+     return target_file.start_with?("#{@file_root.to_s}/")
+   end
+
+   private
+   def event_path(event)
+     file_output_path = generate_filepath(event)
+     if path_with_field_ref? && !inside_file_root?(file_output_path)
+       @logger.warn("GoogleCloudStorage: the event tried to write outside the files root, writing the event to the failure file", :event => event, :filename => @failure_path)
+       file_output_path = @failure_path
+     elsif !@create_if_deleted && deleted?(file_output_path)
+       file_output_path = @failure_path
+     end
+     @logger.debug("GoogleCloudStorage, writing event to file.", :filename => file_output_path)
+
+     file_output_path
+   end
+
+   private
+   def generate_filepath(event)
+     event.sprintf(@path)
+   end
+
+   private
+   def path_with_field_ref?
+     path =~ FIELD_REF
+   end
+
+   private
+   def extract_file_root
+     parts = File.expand_path(path).split(File::SEPARATOR)
+     parts.take_while { |part| part !~ FIELD_REF }.join(File::SEPARATOR)
+   end
+
+   # the back-bone of @flusher, our periodic-flushing interval.
+   private
+   def flush_pending_files
+     @io_mutex.synchronize do
+       @logger.debug("Starting flush cycle")
+
+       @files.each do |path, fd|
+         @logger.debug("Flushing file", :path => path, :fd => fd)
+         fd.flush
+         upload_object(fd.path)
+       end
+     end
+   rescue => e
+     # squash exceptions caught while flushing after logging them
+     @logger.error("Exception flushing files", :exception => e.message, :backtrace => e.backtrace)
+   end
+
+   # every 10 seconds or so (triggered by events, but if there are no events there's no point closing files anyway)
+   private
+   def close_stale_files
+     now = Time.now
+     return unless now - @last_stale_cleanup_cycle >= @stale_cleanup_interval
+
+     @logger.debug("Starting stale files cleanup cycle", :files => @files)
+     inactive_files = @files.select { |path, fd| not fd.active }
+     @logger.debug("%d stale files found" % inactive_files.count, :inactive_files => inactive_files)
+     inactive_files.each do |path, fd|
+       @logger.info("Closing file %s" % path)
+       fd.close
+       @files.delete(path)
+     end
+     # mark all files as inactive, a call to write will mark them as active again
+     @files.each { |path, fd| fd.active = false }
+     @last_stale_cleanup_cycle = now
+   end
+
+   private
+   def cached?(path)
+     @files.include?(path) && !@files[path].nil?
+   end
+
+   private
+   def deleted?(path)
+     !File.exist?(path)
+   end
+
+   private
+   def open(path)
+     if !deleted?(path) && cached?(path)
+       return @files[path]
+     end
+
+     if deleted?(path)
+       if @create_if_deleted
+         @logger.debug("Required path was deleted, creating the file again", :path => path)
+         @files.delete(path)
+       else
+         return @files[path] if cached?(path)
+       end
+     end
+
+     @logger.info("Opening file", :path => path)
+
+     dir = File.dirname(path)
+     if !Dir.exist?(dir)
+       @logger.info("Creating directory", :directory => dir)
+       if @dir_mode != -1
+         FileUtils.mkdir_p(dir, :mode => @dir_mode)
+       else
+         FileUtils.mkdir_p(dir)
+       end
+     end
+
+     # work around a bug opening fifos (bug JRUBY-6280)
+     stat = File.stat(path) rescue nil
+     if stat && stat.ftype == "fifo"
+       fd = java.io.FileWriter.new(java.io.File.new(path))
+     else
+       if @file_mode != -1
+         fd = File.new(path, "a+", @file_mode)
+       else
+         fd = File.new(path, "a+")
+       end
+     end
+     if gzip
+       fd = Zlib::GzipWriter.new(fd)
+     end
+     @files[path] = IOWriter.new(fd)
+   end
+
+   ##
+   # Bare-bones utility for running a block of code at an interval.
+   #
+   class Interval
+     ##
+     # Initializes a new Interval with the given arguments and starts it before returning it.
+     #
+     # @param interval [Integer] (see: Interval#initialize)
+     # @param procsy [#call] (see: Interval#initialize)
+     #
+     # @return [Interval]
+     #
+     def self.start(interval, procsy)
+       self.new(interval, procsy).tap(&:start)
+     end
+
+     ##
+     # @param interval [Integer]: time in seconds to wait between calling the given proc
+     # @param procsy [#call]: proc or lambda to call periodically; must not raise exceptions.
+     def initialize(interval, procsy)
+       @interval = interval
+       @procsy = procsy
+
+       require 'thread' # Mutex, ConditionVariable, etc.
+       @mutex = Mutex.new
+       @sleeper = ConditionVariable.new
+     end
+
+     ##
+     # Starts the interval, or returns if it has already been started.
+     #
+     # @return [void]
+     def start
+       @mutex.synchronize do
+         return if @thread && @thread.alive?
+
+         @thread = Thread.new { run }
+       end
+     end
+
+     ##
+     # Stop the interval.
+     # Does not interrupt if execution is in-progress.
+     def stop
+       @mutex.synchronize do
+         @stopped = true
+       end
+
+       @thread && @thread.join
+     end
+
+     ##
+     # @return [Boolean]
+     def alive?
+       @thread && @thread.alive?
+     end
+
+     private
+
+     def run
+       @mutex.synchronize do
+         loop do
+           @sleeper.wait(@mutex, @interval)
+           break if @stopped
+
+           @procsy.call
+         end
+       end
+     ensure
+       @sleeper.broadcast
+     end
+   end # class LogStash::Outputs::GoogleCloudStorage::Interval
+ end # class LogStash::Outputs::GoogleCloudStorage
+
+ # wrapper class
+ class IOWriter
+   def initialize(io)
+     @io = io
+   end
+   def write(*args)
+     @io.write(*args)
+     @active = true
+   end
+   def flush
+     @io.flush
+     if @io.class == Zlib::GzipWriter
+       @io.to_io.flush
+     end
+   end
+   def method_missing(method_name, *args, &block)
+     if @io.respond_to?(method_name)
+       @io.send(method_name, *args, &block)
+     else
+       super
+     end
+   end
+   attr_accessor :active
+ end