logstash-output-azureblob 0.9.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 3b36f0e9cef45f58a3ce40f93e9d8808f4acb554cf3eeea2fc20a77b6a62e25a
+   data.tar.gz: 372d3ac19b637d675df14fc9733eb8da90117c41a6aab4b5fa745d31ea46c81c
+ SHA512:
+   metadata.gz: 0bf72cc5ba7923f1c9ba8714c33ca1b226ef40c04fe611ece979f254e65f44a9cee95c332734e05678ad3dc6c13c59be1278b987145cfd3d6bf633c2a2e58863
+   data.tar.gz: 9aad5f4b182892d3700a09b3e58a95846ca235f14075aef507898bff50cc36965d7e0a73c3b21c90641da8dab06051143cd949b130eeeb857d8de762c38d5e4f
data/CHANGELOG.md ADDED
@@ -0,0 +1,2 @@
+ ## 0.9.0
+ - Initial release to support the latest libraries and JRuby versions
data/CONTRIBUTORS ADDED
@@ -0,0 +1,16 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Current Contributors:
+ * Sean Stark - sean.stark@microsoft.com
+
+ Original Contributors - no longer maintainers
+ * Tuffk - tuffkmulhall@gmail.com
+ * BrunoLerner - bru.lerner@gmail.com
+ * Alex-Tsyganok -
+ * Charlie Zha - zysimplelife@gmail.com
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/DEVELOPER.md ADDED
@@ -0,0 +1,2 @@
+ # logstash-output-Logstash_Azure_Blob_Output
+ Example output plugin. This should help bootstrap your effort to write your own output plugin!
data/Gemfile ADDED
@@ -0,0 +1,3 @@
+ source 'https://rubygems.org'
+ gemspec
+ # gem "logstash", :github => "elastic/logstash", :branch => "main"
data/LICENSE ADDED
@@ -0,0 +1,11 @@
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/README.md ADDED
@@ -0,0 +1,102 @@
+
+ # Logstash Output Plugin for Azure Blob Storage
+
+ This is an output plugin for [Logstash](https://github.com/elastic/logstash). It is fully free and open source, licensed under Apache 2.0. This plugin was forked from https://github.com/tuffk/Logstash-output-to-Azure-Blob and updated to use the latest Azure Storage Ruby SDK.
+
+ - This plugin uses the https://github.com/Azure/azure-storage-ruby library
+ - The class documentation is here: https://www.rubydoc.info/gems/azure-storage-blob
+
+ ## Disclaimers
+
+ I am not a Ruby developer and may not be able to respond efficiently to issues or bugs. Please take this into consideration when using this plugin.
+
+ - Azure Data Lake Storage Gen2 accounts are not currently supported.
+ - Managed Identities and Service Principals are currently not supported for auth.
+
+ ## Requirements
+ - Logstash version 8.6+ [Installation instructions](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html)
+   - Tested on 8.6.2
+ - Azure Storage Account
+ - Azure Storage Account Access Key(s)
+
+ ## Installation
+ ```sh
+ bin/logstash-plugin install logstash-output-azureblob
+ ```
+ > On Ubuntu the default path is /usr/share/logstash/bin/
+
+ ## Configuration
+
+ Information about configuring Logstash can be found in the [Logstash configuration guide](https://www.elastic.co/guide/en/logstash/current/configuration.html).
+
+ You will need to configure this plugin before sending events from Logstash to an Azure Storage Account. The following example shows the minimum you need to provide:
+
+ ```ruby
+ output {
+   azure_blob {
+     storage_account_name => "my-azure-account"    # required
+     storage_access_key => "my-super-secret-key"   # required
+     container_name => "my-container"              # required
+     size_file => 5242880                          # optional, in bytes (5 MB; config values must be literals)
+     time_file => 10                               # optional
+     restore => true                               # optional
+     temporary_directory => "path/to/directory"    # optional
+     prefix => "a_prefix"                          # optional
+     upload_queue_size => 2                        # optional
+     upload_workers_count => 1                     # optional
+     rotation_strategy_val => "size_and_time"      # optional
+     tags => []                                    # optional
+     encoding => "none"                            # optional
+   }
+ }
+ ```
+
+ ### Example with syslog
+
+ ```ruby
+ input {
+   syslog {
+     port => "5514"
+     type => "syslog"
+     codec => cef
+   }
+ }
+
+ output {
+   azure_blob {
+     storage_account_name => "<account-name>"
+     storage_access_key => "<access-key>"
+     container_name => "<container-name>"
+   }
+ }
+ ```
+
+ ## Development
+
+ - Docker Image - [cameronkerrnz/logstash-plugin-dev:7.17](https://hub.docker.com/r/cameronkerrnz/logstash-plugin-dev)
+   - https://github.com/cameronkerrnz/logstash-plugin-dev
+ - jruby 9.2.20.1 (2.5.8)
+ - Logstash Version 8.6.2+
+
+ 1. Install Dependencies
+ ```shell
+ rake vendor
+ bundle install
+ ```
+ 2. Build the plugin
+ ```shell
+ gem build logstash-output-azureblob.gemspec
+ ```
+ 3. Install Locally
+ ```shell
+ /usr/share/logstash/bin/logstash-plugin install /usr/share/logstash/logstash-output-azureblob-0.9.0.gem
+ ```
+ 4. Test with configuration file
+ ```shell
+ /usr/share/logstash/bin/logstash -f blob.conf
+ ```
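+
+ Step 4 assumes a `blob.conf` in the current directory; a minimal example (placeholder values, not shipped with the gem, adjust to your own account) might look like:
+ ```ruby
+ input {
+   stdin { }
+ }
+
+ output {
+   azure_blob {
+     storage_account_name => "<account-name>"
+     storage_access_key => "<access-key>"
+     container_name => "<container-name>"
+   }
+ }
+ ```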
+
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, and complaints. For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
@@ -0,0 +1,235 @@
+ require 'logstash/outputs/base'
+ require 'logstash/namespace'
+ require 'azure/storage/blob'
+ require 'azure/storage/common'
+ require 'tmpdir'
+
+ class LogStash::Outputs::LogstashAzureBlobOutput < LogStash::Outputs::Base
+   # name for the namespace under output for logstash configuration
+   config_name 'azure_blob'
+   default :codec, "line"
+
+   require 'logstash/outputs/blob/writable_directory_validator'
+   require 'logstash/outputs/blob/path_validator'
+   require 'logstash/outputs/blob/size_rotation_policy'
+   require 'logstash/outputs/blob/time_rotation_policy'
+   require 'logstash/outputs/blob/size_and_time_rotation_policy'
+   require 'logstash/outputs/blob/temporary_file'
+   require 'logstash/outputs/blob/temporary_file_factory'
+   require 'logstash/outputs/blob/uploader'
+   require 'logstash/outputs/blob/file_repository'
+
+   PREFIX_KEY_NORMALIZE_CHARACTER = '_'.freeze
+   PERIODIC_CHECK_INTERVAL_IN_SECONDS = 15
+   CRASH_RECOVERY_THREADPOOL = Concurrent::ThreadPoolExecutor.new(min_threads: 1,
+                                                                  max_threads: 2,
+                                                                  fallback_policy: :caller_runs)
+
+   # azure storage account name
+   config :storage_account_name, validate: :string, required: false
+
+   # azure storage access key
+   config :storage_access_key, validate: :string, required: false
+
+   # container name
+   config :container_name, validate: :string, required: false
+
+   # file rotation and upload tuning
+   config :size_file, validate: :number, default: 1024 * 1024 * 5
+   config :time_file, validate: :number, default: 15
+   config :restore, validate: :boolean, default: true
+   config :temporary_directory, validate: :string, default: File.join(Dir.tmpdir, 'logstash')
+   config :prefix, validate: :string, default: ''
+   config :upload_queue_size, validate: :number, default: 2 * (Concurrent.processor_count * 0.25).ceil
+   config :upload_workers_count, validate: :number, default: (Concurrent.processor_count * 0.5).ceil
+   config :rotation_strategy_val, validate: %w[size_and_time size time], default: 'size_and_time'
+   config :tags, validate: :array, default: []
+   config :encoding, validate: %w[none gzip], default: 'none'
+
+   attr_accessor :storage_account_name, :storage_access_key, :container_name,
+                 :size_file, :time_file, :restore, :temporary_directory, :prefix, :upload_queue_size,
+                 :upload_workers_count, :rotation_strategy_val, :tags, :encoding
+
+   # initializes the +LogstashAzureBlobOutput+ instance,
+   # validates all config parameters,
+   # and initializes the uploader
+   def register
+     unless @prefix.empty?
+       unless PathValidator.valid?(prefix)
+         raise LogStash::ConfigurationError.new("Prefix must not contain: #{PathValidator::INVALID_CHARACTERS}")
+       end
+     end
+
+     unless WritableDirectoryValidator.valid?(@temporary_directory)
+       raise LogStash::ConfigurationError.new("Logstash must have the permissions to write to the temporary directory: #{@temporary_directory}")
+     end
+
+     if (@time_file.nil? && @size_file.nil?) || (@size_file.zero? && @time_file.zero?)
+       raise LogStash::ConfigurationError.new('At least one of time_file or size_file must be set to a value greater than 0')
+     end
+
+     @file_repository = FileRepository.new(@tags, @encoding, @temporary_directory)
+
+     @rotation = rotation_strategy
+
+     executor = Concurrent::ThreadPoolExecutor.new(min_threads: 1,
+                                                   max_threads: @upload_workers_count,
+                                                   max_queue: @upload_queue_size,
+                                                   fallback_policy: :caller_runs)
+
+     @uploader = Uploader.new(blob_container_resource, container_name, @logger, executor)
+
+     restore_from_crash if @restore
+     start_periodic_check if @rotation.needs_periodic?
+   end
+
+   # Receives multiple events, writes them to temporary files,
+   # and checks whether any of the written-to prefixes need rotation
+   # @param events_and_encoded [Object]
+   def multi_receive_encoded(events_and_encoded)
+     prefix_written_to = Set.new
+
+     events_and_encoded.each do |event, encoded|
+       prefix_key = normalize_key(event.sprintf(@prefix))
+       prefix_written_to << prefix_key
+
+       begin
+         @file_repository.get_file(prefix_key) { |file| file.write(encoded) }
+       # The output should stop accepting new events coming in, since it cannot do anything with them anymore.
+       # Log the error and rethrow it.
+       rescue Errno::ENOSPC => e
+         @logger.error('Azure: No space left in temporary directory', temporary_directory: @temporary_directory)
+         raise e
+       end
+     end
+
+     # Groups IO calls to optimize fstat checks
+     rotate_if_needed(prefix_written_to)
+   end
+
+   # closes the temporary files and uploads their content to Azure
+   def close
+     stop_periodic_check if @rotation.needs_periodic?
+
+     @logger.debug('Uploading current workspace')
+
+     # The plugin has stopped receiving new events, but we may still have
+     # data on disk; let's make sure it gets to Azure Blob.
+     # If Logstash gets interrupted, the `restore_from_crash` method (when `restore` is set to true)
+     # will pick up the content in the temporary directory and upload it.
+     # This will block the shutdown until all uploads are done or the user force quits.
+     @file_repository.each_files do |file|
+       upload_file(file)
+     end
+
+     @file_repository.shutdown
+
+     @uploader.stop # wait until all the current uploads are complete
+     @crash_uploader.stop if @restore # we might still have recovery work to do, so wait until we are done
+   end
+
+   # Validates and normalizes the prefix key
+   # @param prefix_key [String]
+   def normalize_key(prefix_key)
+     prefix_key.gsub(PathValidator.matches_re, PREFIX_KEY_NORMALIZE_CHARACTER)
+   end
+
+   # periodically checks the temporary files and rotates them if needed
+   def start_periodic_check
+     @logger.debug('Start periodic rotation check')
+
+     @periodic_check = Concurrent::TimerTask.new(execution_interval: PERIODIC_CHECK_INTERVAL_IN_SECONDS) do
+       @logger.debug('Periodic check for stale files')
+
+       rotate_if_needed(@file_repository.keys)
+     end
+
+     @periodic_check.execute
+   end
+
+   def stop_periodic_check
+     @periodic_check.shutdown
+   end
+
+   # logs in to Azure using the storage blob client and creates the container if it doesn't exist
+   # @return [Object] the azure_blob_service object, which is the endpoint to the azure gem
+   def blob_container_resource
+     blob_client = Azure::Storage::Blob::BlobService.create(
+       storage_account_name: storage_account_name,
+       storage_access_key: storage_access_key
+     )
+     list = blob_client.list_containers
+     list.each do |item|
+       @container = item if item.name == container_name
+     end
+
+     blob_client.create_container(container_name) unless @container
+     blob_client
+   end
+
+   # checks each prefix against the rotation policy and rotates it if needed
+   # @param prefixes [Enumerable<String>]
+   def rotate_if_needed(prefixes)
+     prefixes.each do |prefix|
+       # Each file access is thread safe,
+       # until the rotation is done then only
+       # one thread has access to the resource.
+       @file_repository.get_factory(prefix) do |factory|
+         temp_file = factory.current
+
+         if @rotation.rotate?(temp_file)
+           @logger.debug('Rotate file',
+                         strategy: @rotation.class.name,
+                         key: temp_file.key,
+                         path: temp_file.path)
+
+           upload_file(temp_file)
+           factory.rotate!
+         end
+       end
+     end
+   end
+
+   # uploads the file using the +Uploader+
+   def upload_file(temp_file)
+     @logger.debug('Queue for upload', path: temp_file.path)
+
+     # if the queue is full the calling thread will be used to upload
+     temp_file.close # make sure the content is on disk
+     unless temp_file.empty? # rubocop:disable GuardClause
+       @uploader.upload_async(temp_file,
+                              on_complete: method(:clean_temporary_file))
+     end
+   end
+
+   # creates an instance of the configured rotation strategy
+   def rotation_strategy
+     case @rotation_strategy_val
+     when 'size'
+       SizeRotationPolicy.new(size_file)
+     when 'time'
+       TimeRotationPolicy.new(time_file)
+     when 'size_and_time'
+       SizeAndTimeRotationPolicy.new(size_file, time_file)
+     end
+   end
+
+   # cleans up a temporary file after it has been uploaded to Azure Blob
+   def clean_temporary_file(file)
+     @logger.debug('Removing temporary file', file: file.path)
+     file.delete!
+   end
+
+   # uploads leftover files if there was a crash before
+   def restore_from_crash
+     @crash_uploader = Uploader.new(blob_container_resource, container_name, @logger, CRASH_RECOVERY_THREADPOOL)
+
+     temp_folder_path = Pathname.new(@temporary_directory)
+     Dir.glob(::File.join(@temporary_directory, '**/*'))
+        .select { |file| ::File.file?(file) }
+        .each do |file|
+       temp_file = TemporaryFile.create_from_existing_file(file, temp_folder_path)
+       @logger.debug('Recovering from crash and uploading', file: temp_file.path)
+       @crash_uploader.upload_async(temp_file, on_complete: method(:clean_temporary_file))
+     end
+   end
+ end
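The prefix sanitization performed by `normalize_key` can be seen in isolation. A standalone sketch (plain Ruby, outside of Logstash, with the `PathValidator` constants inlined; illustrative only, not plugin API):

```ruby
# Standalone sketch of normalize_key: every character the path validator
# rejects is replaced with the normalize character '_'.
INVALID_CHARACTERS = "\^`><".freeze
MATCHES_RE = /[#{Regexp.escape(INVALID_CHARACTERS)}]/

def normalize_key(prefix_key)
  prefix_key.gsub(MATCHES_RE, '_')
end

puts normalize_key('logs/%{type}^2023')  # '^' becomes '_'
```

Because the prefix supports `event.sprintf` interpolation, field values can inject characters that are invalid in a blob path; this step makes the resulting key safe.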
@@ -0,0 +1,138 @@
+
+ require 'java'
+ require 'concurrent'
+ require 'concurrent/timer_task'
+ require 'logstash/util'
+
+ ConcurrentHashMap = java.util.concurrent.ConcurrentHashMap
+
+ module LogStash
+   module Outputs
+     class LogstashAzureBlobOutput
+       # sub class for +LogstashAzureBlobOutput+
+       # this class manages the temporary directory for the temporary files
+       class FileRepository
+         DEFAULT_STATE_SWEEPER_INTERVAL_SECS = 60
+         DEFAULT_STALE_TIME_SECS = 15 * 60
+         # Ensure that all access or work done
+         # on a factory is threadsafe
+         class PrefixedValue
+           # initialize the factory
+           def initialize(file_factory, stale_time)
+             @file_factory = file_factory
+             @lock = Mutex.new
+             @stale_time = stale_time
+           end
+
+           # run the given block while holding the lock
+           def with_lock
+             @lock.synchronize do
+               yield @file_factory
+             end
+           end
+
+           # true if the current file is empty and older than the stale time
+           def stale?
+             with_lock { |factory| factory.current.size.zero? && (Time.now - factory.current.ctime > @stale_time) }
+           end
+
+           # return this class
+           def apply(_prefix)
+             self
+           end
+
+           # delete the factory's current file
+           def delete!
+             with_lock { |factory| factory.current.delete! }
+           end
+         end
+
+         # class for initializing the repo manager
+         class FactoryInitializer
+           # initializes the class
+           def initialize(tags, encoding, temporary_directory, stale_time)
+             @tags = tags
+             @encoding = encoding
+             @temporary_directory = temporary_directory
+             @stale_time = stale_time
+           end
+
+           # creates a +PrefixedValue+ for the prefix key
+           def apply(prefix_key)
+             PrefixedValue.new(TemporaryFileFactory.new(prefix_key, @tags, @encoding, @temporary_directory), @stale_time)
+           end
+         end
+
+         # initializes the repository
+         def initialize(tags, encoding, temporary_directory,
+                        stale_time = DEFAULT_STALE_TIME_SECS,
+                        sweeper_interval = DEFAULT_STATE_SWEEPER_INTERVAL_SECS)
+           # The path needs to contain the prefix so when we start
+           # Logstash after a crash we keep the remote structure
+           @prefixed_factories = ConcurrentHashMap.new
+
+           @sweeper_interval = sweeper_interval
+
+           @factory_initializer = FactoryInitializer.new(tags, encoding, temporary_directory, stale_time)
+
+           start_stale_sweeper
+         end
+
+         # gets the key set
+         def keys
+           @prefixed_factories.keySet
+         end
+
+         # yields each file while holding its lock
+         def each_files
+           @prefixed_factories.elements.each do |prefixed_file|
+             prefixed_file.with_lock { |factory| yield factory.current }
+           end
+         end
+
+         # returns the file factory for the prefix key, creating it if needed
+         def get_factory(prefix_key)
+           @prefixed_factories.computeIfAbsent(prefix_key, @factory_initializer).with_lock { |factory| yield factory }
+         end
+
+         # gets the file for the prefix_key
+         def get_file(prefix_key)
+           get_factory(prefix_key) { |factory| yield factory.current }
+         end
+
+         # shuts down the repository by stopping the stale sweeper
+         def shutdown
+           stop_stale_sweeper
+         end
+
+         # gets the number of prefix factories
+         def size
+           @prefixed_factories.size
+         end
+
+         # removes the given key and value if the value is stale
+         def remove_stale(k, v)
+           if v.stale? # rubocop:disable Style/GuardClause
+             @prefixed_factories.remove(k, v)
+             v.delete!
+           end
+         end
+
+         # starts the stale sweeper
+         def start_stale_sweeper
+           @stale_sweeper = Concurrent::TimerTask.new(execution_interval: @sweeper_interval) do
+             LogStash::Util.set_thread_name('LogstashAzureBlobOutput, Stale factory sweeper')
+
+             @prefixed_factories.forEach { |k, v| remove_stale(k, v) }
+           end
+
+           @stale_sweeper.execute
+         end
+
+         # stops the stale sweeper
+         def stop_stale_sweeper
+           @stale_sweeper.shutdown
+         end
+       end
+     end
+   end
+ end
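The repository above leans on JRuby's `java.util.concurrent.ConcurrentHashMap#computeIfAbsent` plus a per-value `Mutex`. A plain-Ruby sketch of the same pattern (a `Hash` guarded by a `Mutex` stands in for the Java map; all names here are illustrative, not part of the plugin):

```ruby
# Plain-Ruby sketch of FileRepository's per-prefix pattern: lazily create one
# lock-guarded entry per prefix key, and always yield the payload under that
# entry's own lock.
class TinyRepository
  Entry = Struct.new(:lock, :lines)

  def initialize
    @map = {}
    @map_lock = Mutex.new
  end

  # find-or-create the entry for prefix, then yield its payload under its lock
  def get(prefix)
    entry = @map_lock.synchronize { @map[prefix] ||= Entry.new(Mutex.new, []) }
    entry.lock.synchronize { yield entry.lines }
  end

  def size
    @map_lock.synchronize { @map.size }
  end
end

repo = TinyRepository.new
repo.get('a_prefix') { |lines| lines << 'event 1' }
repo.get('a_prefix') { |lines| lines << 'event 2' }
repo.get('other')    { |lines| lines << 'event 3' }
```

The two-level locking mirrors the real class: the outer lock only protects entry creation, so writers to different prefixes never contend with each other.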
@@ -0,0 +1,20 @@
+ module LogStash
+   module Outputs
+     class LogstashAzureBlobOutput
+       # a sub class of +LogstashAzureBlobOutput+
+       # validates the path for the temporary directory
+       class PathValidator
+         INVALID_CHARACTERS = "\^`><".freeze
+         # boolean method to check if a name is valid
+         def self.valid?(name)
+           name.match(matches_re).nil?
+         end
+
+         # regex matching the invalid characters that shouldn't be in the path name
+         def self.matches_re
+           /[#{Regexp.escape(INVALID_CHARACTERS)}]/
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,28 @@
+ require 'logstash/outputs/blob/size_rotation_policy'
+ require 'logstash/outputs/blob/time_rotation_policy'
+
+ module LogStash
+   module Outputs
+     class LogstashAzureBlobOutput
+       # a sub class of +LogstashAzureBlobOutput+
+       # rotates when either the size policy or the time policy says so
+       class SizeAndTimeRotationPolicy
+         # initialize the class
+         def initialize(file_size, time_file)
+           @size_strategy = SizeRotationPolicy.new(file_size)
+           @time_strategy = TimeRotationPolicy.new(time_file)
+         end
+
+         # check if it is time to rotate
+         def rotate?(file)
+           @size_strategy.rotate?(file) || @time_strategy.rotate?(file)
+         end
+
+         # this policy requires periodic checks
+         def needs_periodic?
+           true
+         end
+       end
+     end
+   end
+ end
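The composed policy simply ORs its two strategies. A minimal standalone sketch of that decision (stub objects stand in for `TemporaryFile`; `TimeRotationPolicy` is not shown in this changeset, so the time arithmetic here, minutes since file creation, is an assumption):

```ruby
# Standalone sketch of the size-OR-time rotation decision (illustrative;
# the unit of `time_file` is assumed to be minutes).
require 'ostruct'

size_file = 1024 # bytes
time_file = 15   # minutes

rotate = lambda do |file|
  file.size >= size_file || (Time.now - file.ctime) / 60.0 >= time_file
end

fresh_and_small = OpenStruct.new(size: 10,   ctime: Time.now)
too_big         = OpenStruct.new(size: 4096, ctime: Time.now)
too_old         = OpenStruct.new(size: 10,   ctime: Time.now - 3600)
```

Because the time half can trigger without new writes, this is the policy whose `needs_periodic?` returns true: the plugin's timer task has to re-check idle files.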
@@ -0,0 +1,29 @@
+ module LogStash
+   module Outputs
+     class LogstashAzureBlobOutput
+       # a sub class of +LogstashAzureBlobOutput+
+       # sets the rotation policy by size
+       class SizeRotationPolicy
+         attr_reader :size_file
+         # initialize the class
+         def initialize(size_file)
+           if size_file <= 0
+             raise LogStash::ConfigurationError.new('`size_file` needs to be greater than 0')
+           end
+
+           @size_file = size_file
+         end
+
+         # boolean method to check if it is time to rotate
+         def rotate?(file)
+           file.size >= size_file
+         end
+
+         # this policy does not require periodic checks
+         def needs_periodic?
+           false
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,81 @@
+ require 'thread'
+ require 'forwardable'
+ require 'fileutils'
+
+ module LogStash
+   module Outputs
+     class LogstashAzureBlobOutput
+       # a sub class of +LogstashAzureBlobOutput+
+       # Wraps the actual file descriptor in a utility class.
+       # It makes it more OOP and easier to reason about the paths.
+       class TemporaryFile
+         extend Forwardable
+
+         def_delegators :@fd, :path, :write, :close, :fsync
+
+         attr_reader :fd
+
+         # initialize the class
+         def initialize(key, fd, temp_path)
+           @fd = fd
+           @key = key
+           @temp_path = temp_path
+           @created_at = Time.now
+         end
+
+         # gets the created at time
+         def ctime
+           @created_at
+         end
+
+         # gets the path to the temporary directory
+         attr_reader :temp_path
+
+         # gets the size of the file
+         def size
+           # Use the fd size to get an accurate result,
+           # so we don't have to deal with fsync;
+           # if the file is closed we use File::size
+           @fd.size
+         rescue IOError
+           ::File.size(path)
+         end
+
+         # gets the key
+         def key
+           @key.gsub(/^\//, '')
+         end
+
+         # Each temporary file is made inside a directory named with a UUID;
+         # instead of deleting the file directly and risking deleting other files,
+         # we delete the root of the UUID. Using a UUID also removes the risk of
+         # deleting unwanted files; it acts as a sandbox.
+         def delete!
+           begin
+             @fd.close
+           rescue IOError
+             # the file descriptor may already be closed
+           end
+           FileUtils.rm_r(@temp_path, secure: true)
+         end
+
+         # boolean method to determine if the file is empty
+         def empty?
+           size.zero?
+         end
+
+         # creates a temporary file from an existing file in the temporary directory
+         # @param file_path [String] path to the file
+         # @param temporary_folder [String] path to the temporary folder
+         def self.create_from_existing_file(file_path, temporary_folder)
+           key_parts = Pathname.new(file_path).relative_path_from(temporary_folder).to_s.split(::File::SEPARATOR)
+
+           TemporaryFile.new(key_parts.slice(1, key_parts.size).join('/'),
+                             ::File.open(file_path, 'r'),
+                             ::File.join(temporary_folder, key_parts.slice(0, 1)))
+         end
+       end
+     end
+   end
+ end
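`create_from_existing_file` splits the path relative to the temporary folder: the first component is the per-file UUID sandbox directory, and the remaining components become the blob key. A standalone sketch of that derivation (the paths are made up for illustration):

```ruby
# Standalone sketch of the key/sandbox split done by create_from_existing_file.
require 'pathname'

temporary_folder = '/tmp/logstash'
file_path = '/tmp/logstash/0cdc8f42-aaaa-bbbb-cccc-111122223333/a_prefix/ls.log'

key_parts = Pathname.new(file_path)
                    .relative_path_from(Pathname.new(temporary_folder))
                    .to_s.split(File::SEPARATOR)

uuid_dir = key_parts[0]                 # the per-file sandbox directory
blob_key = key_parts[1..-1].join('/')   # what becomes the blob name
```

This is why `delete!` can safely `rm_r` the sandbox root: each file lives alone under its own UUID directory, so removing that directory cannot touch another file's data.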