logstash-output-kusto 0.1.6

checksums.yaml ADDED
@@ -0,0 +1,7 @@
1
+ ---
2
+ SHA256:
3
+ metadata.gz: 89aa0d2c0229ca92f8d1dbcc5ba749ae8b6afbbfff8aab57ab862ddc5ea2d329
4
+ data.tar.gz: 7023796abee3ebae86b8987406917beeb5ccfa9c8e2d26d8d56c77091626c411
5
+ SHA512:
6
+ metadata.gz: 4bfacbb9d6965366a2f1ce4e0bf031f3d100225da8d7da76d431a28cb82ff40c9e33ffce9a54a69bc45bc89bae86951f03ed0fb3eed7021df5534e0c234f52b2
7
+ data.tar.gz: e670720f4b5c39a5ac574ca760141009544475c74b3fa5cd7faf8625db9d951592e4673f9a8c6c973fbe5b41a2a47f3cae81fac99b5799d2b42eb6c9900ef27d
data/CHANGELOG.md ADDED
@@ -0,0 +1,4 @@
1
+ ## 0.1.0
2
+ - Plugin created with the logstash plugin generator
3
+ ## 0.1.6
4
+ - Plugin published to the public. Supports JSON events without dynamic routing, meaning one output must be defined per target Kusto table
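Because this release has no dynamic event routing, events destined for different Kusto tables need separate `kusto` output blocks. The sketch below shows one way to wire that up in a Logstash pipeline; the option names come from the plugin's `config` declarations, while the cluster URL, credentials, database, table, and mapping names are placeholders only.

```
output {
  if [type] == "metrics" {
    kusto {
      path       => "/tmp/kusto/metrics-%{+YYYY-MM-dd-HH-mm}.txt"  # local staging file before ingestion
      ingest_url => "https://ingest-<cluster>.kusto.windows.net/"
      app_id     => "<aad application id>"
      app_key    => "<aad application key>"
      app_tenant => "<aad tenant id>"
      database   => "MyDatabase"
      table      => "Metrics"
      mapping    => "metrics_json_mapping"
    }
  } else {
    kusto {
      path       => "/tmp/kusto/logs-%{+YYYY-MM-dd-HH-mm}.txt"
      ingest_url => "https://ingest-<cluster>.kusto.windows.net/"
      app_id     => "<aad application id>"
      app_key    => "<aad application key>"
      app_tenant => "<aad tenant id>"
      database   => "MyDatabase"
      table      => "Logs"
      mapping    => "logs_json_mapping"
    }
  }
}
```

Each block stages events in its own local file under `path` and ingests that file into the table named in the block.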
data/CONTRIBUTORS ADDED
@@ -0,0 +1,10 @@
1
+ The following is a list of people who have contributed ideas, code, bug
2
+ reports, or in general have helped logstash along its way.
3
+
4
+ Contributors:
5
+ * Tamir Kamara - tamir.kamara@microsoft.com
6
+
7
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
8
+ Logstash, and you aren't on the list above and want to be, please let us know
9
+ and we'll make sure you're here. Contributions from folks like you are what make
10
+ open source awesome.
data/Gemfile ADDED
@@ -0,0 +1,12 @@
1
+ source 'https://rubygems.org'
2
+ gemspec
3
+
4
+
5
+
6
+ logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
7
+ use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
8
+
9
+ if Dir.exist?(logstash_path) && use_logstash_source
10
+ gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
11
+ gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
12
+ end
data/LICENSE ADDED
@@ -0,0 +1,11 @@
1
+ Licensed under the Apache License, Version 2.0 (the "License");
2
+ you may not use this file except in compliance with the License.
3
+ You may obtain a copy of the License at
4
+
5
+ http://www.apache.org/licenses/LICENSE-2.0
6
+
7
+ Unless required by applicable law or agreed to in writing, software
8
+ distributed under the License is distributed on an "AS IS" BASIS,
9
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10
+ See the License for the specific language governing permissions and
11
+ limitations under the License.
data/README.md ADDED
@@ -0,0 +1,86 @@
1
+ # Logstash Plugin
2
+
3
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
4
+
5
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are free to use it however you want.
6
+
7
+ ## Documentation
8
+
9
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will first be converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
10
+
11
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
12
+ - For more asciidoc formatting tips, see the excellent reference at https://github.com/elastic/docs#asciidoc-guide
13
+
14
+ ## Need Help?
15
+
16
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
17
+
18
+ ## Developing
19
+
20
+ ### 1. Plugin Development and Testing
21
+
22
+ #### Code
23
+ - To get started, you'll need JRuby with the Bundler gem installed.
24
+
25
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
26
+
27
+ - Install dependencies
28
+ ```sh
29
+ bundle install
30
+ ```
31
+
32
+ #### Test
33
+
34
+ - Update your dependencies
35
+
36
+ ```sh
37
+ bundle install
38
+ ```
39
+
40
+ - Run tests
41
+
42
+ ```sh
43
+ bundle exec rspec
44
+ ```
45
+
46
+ ### 2. Running your unpublished Plugin in Logstash
47
+
48
+ #### 2.1 Run in a local Logstash clone
49
+
50
+ - Edit Logstash `Gemfile` and add the local plugin path, for example:
51
+ ```ruby
52
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
53
+ ```
54
+ - Install plugin
55
+ ```sh
56
+ bin/logstash-plugin install --no-verify
57
+ ```
58
+ - Run Logstash with your plugin
59
+ ```sh
60
+ bin/logstash -e 'filter {awesome {}}'
61
+ ```
62
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
63
+
64
+ #### 2.2 Run in an installed Logstash
65
+
66
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
67
+
68
+ - Build your plugin gem
69
+ ```sh
70
+ gem build logstash-filter-awesome.gemspec
71
+ ```
72
+ - Install the plugin from the Logstash home
73
+ ```sh
74
+ bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
75
+ ```
76
+ - Start Logstash and proceed to test the plugin
77
+
78
+ ## Contributing
79
+
80
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
81
+
82
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
83
+
84
+ It is more important to the community that you are able to contribute.
85
+
86
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/kusto/KustoClient-0.1.6.jar ADDED
Binary file
data/lib/logstash/outputs/kusto.rb ADDED
@@ -0,0 +1,400 @@
1
+ # encoding: utf-8
2
+
3
+ require 'logstash/outputs/base'
4
+ require 'logstash/namespace'
5
+ require 'logstash/errors'
6
+
7
+ require 'logstash/outputs/kusto/ingestor'
8
+ require 'logstash/outputs/kusto/interval'
9
+
10
+ ##
11
+ # This plugin sends messages to Azure Kusto in batches.
12
+ #
13
+ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
14
+ config_name 'kusto'
15
+ concurrency :shared
16
+
17
+ FIELD_REF = /%\{[^}]+\}/
18
+
19
+ attr_reader :failure_path
20
+
21
+ # The path to the file to write. Event fields can be used here,
22
+ # like `/var/log/logstash/%{host}/%{application}`
23
+ # One may also utilize the path option for date-based log
24
+ # rotation via the joda time format. This will use the event
25
+ # timestamp.
26
+ # E.g.: `path => "./test-%{+YYYY-MM-dd}.txt"` to create
27
+ # `./test-2013-05-29.txt`
28
+ #
29
+ # If you use an absolute path you cannot start with a dynamic string.
30
+ # E.g: `/%{myfield}/`, `/test-%{myfield}/` are not valid paths
31
+ config :path, validate: :string, required: true
32
+
33
+ # Flush interval (in seconds) for flushing writes to log files.
34
+ # 0 will flush on every message.
35
+ config :flush_interval, validate: :number, default: 2
36
+
37
+ # If the generated path is invalid, the events will be saved
38
+ # into this file and inside the defined path.
39
+ config :filename_failure, validate: :string, default: '_filepath_failures'
40
+
41
+ # If the configured file is deleted, but an event is handled by the plugin,
42
+ # the plugin will recreate the file. Default => true
43
+ config :create_if_deleted, validate: :boolean, default: true
44
+
45
+ # Dir access mode to use. Note that due to a bug in JRuby, the system umask
46
+ # is ignored on linux: https://github.com/jruby/jruby/issues/3426
47
+ # Setting it to -1 uses default OS value.
48
+ # Example: `"dir_mode" => 0750`
49
+ config :dir_mode, validate: :number, default: -1
50
+
51
+ # File access mode to use. Note that due to a bug in JRuby, the system umask
52
+ # is ignored on linux: https://github.com/jruby/jruby/issues/3426
53
+ # Setting it to -1 uses default OS value.
54
+ # Example: `"file_mode" => 0640`
55
+ config :file_mode, validate: :number, default: -1
56
+
57
+ # How should the file be written?
58
+ #
59
+ # If `append`, the file will be opened for appending and each new event will
60
+ # be written at the end of the file.
61
+ #
62
+ # If `overwrite`, the file will be truncated before writing and only the most
63
+ # recent event will appear in the file.
64
+ config :write_behavior, validate: %w[overwrite append], default: 'append'
65
+
66
+ # TODO: fix the interval type...
67
+ config :stale_cleanup_interval, validate: :number, default: 10
68
+ config :stale_cleanup_type, validate: %w[events interval], default: 'events'
69
+
70
+ # Should the plugin recover from failure?
71
+ #
72
+ # If `true`, the plugin will look for temp files from past runs within the
73
+ # path (before any dynamic pattern is added) and try to process them
74
+ #
75
+ # If `false`, the plugin will disregard temp files found
76
+ config :recovery, validate: :boolean, default: true
77
+
78
+ # Kusto configuration
79
+ config :ingest_url, validate: :string, required: true
80
+ config :app_id, validate: :string, required: true
81
+ config :app_key, validate: :string, required: true
82
+ config :app_tenant, validate: :string, default: nil
83
+
84
+ config :database, validate: :string, required: true
85
+ config :table, validate: :string, required: true
86
+ config :mapping, validate: :string
87
+
88
+ # Determines if local files used for temporary storage will be deleted
89
+ # after upload is successful
90
+ config :delete_temp_files, validate: :boolean, default: true
91
+
92
+ # TODO: will be used to route events to many tables according to event properties
93
+ config :dynamic_event_routing, validate: :boolean, default: false
94
+
95
+ # Specify how many files can be uploaded concurrently
96
+ config :upload_concurrent_count, validate: :number, default: 3
97
+
98
+ # Specify how many files can be kept in the upload queue before the main process
99
+ # starts processing them in the main thread (not healthy)
100
+ config :upload_queue_size, validate: :number, default: 30
101
+
102
+ default :codec, 'json_lines'
103
+
104
+ def register
105
+ require 'fileutils' # For mkdir_p
106
+
107
+ @files = {}
108
+ @io_mutex = Mutex.new
109
+
110
+ # TODO: add id to the tmp path to support multiple outputs of the same type
111
+ # add fields from the meta that will note the destination of the events in the file
112
+ @path = if dynamic_event_routing
113
+ File.expand_path("#{path}.kusto.%{[@metadata][database]}.%{[@metadata][table]}.%{[@metadata][mapping]}")
114
+ else
115
+ File.expand_path("#{path}.kusto")
116
+ end
117
+
118
+ validate_path
119
+
120
+ @file_root = if path_with_field_ref?
121
+ extract_file_root
122
+ else
123
+ File.dirname(path)
124
+ end
125
+ @failure_path = File.join(@file_root, @filename_failure)
126
+
127
+ executor = Concurrent::ThreadPoolExecutor.new(min_threads: 1,
128
+ max_threads: upload_concurrent_count,
129
+ max_queue: upload_queue_size,
130
+ fallback_policy: :caller_runs)
131
+
132
+ @ingestor = Ingestor.new(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_temp_files, @logger, executor)
133
+
134
+ @flush_interval = @flush_interval.to_i
135
+ if @flush_interval > 0
136
+ @flusher = Interval.start(@flush_interval, -> { flush_pending_files })
137
+ end
138
+
139
+ if (@stale_cleanup_type == 'interval') && (@stale_cleanup_interval > 0)
140
+ @cleaner = Interval.start(stale_cleanup_interval, -> { close_stale_files })
141
+ end
142
+
143
+ @last_stale_cleanup_cycle = Time.now
144
+
145
+ # send existing files
146
+ recover_past_files if recovery
147
+ end
148
+
149
+ private
150
+ def validate_path
151
+ if (root_directory =~ FIELD_REF) != nil
152
+ @logger.error('File: The starting part of the path should not be dynamic.', path: @path)
153
+ raise LogStash::ConfigurationError.new('The starting part of the path should not be dynamic.')
154
+ end
155
+ end
156
+
157
+ private
158
+ def root_directory
159
+ parts = @path.split(File::SEPARATOR).reject(&:empty?)
160
+ if Gem.win_platform?
161
+ # First part is the drive letter
162
+ parts[1]
163
+ else
164
+ parts.first
165
+ end
166
+ end
167
+
168
+ public
169
+ def multi_receive_encoded(events_and_encoded)
170
+ encoded_by_path = Hash.new { |h, k| h[k] = [] }
171
+
172
+ events_and_encoded.each do |event, encoded|
173
+ file_output_path = event_path(event)
174
+ encoded_by_path[file_output_path] << encoded
175
+ end
176
+
177
+ @io_mutex.synchronize do
178
+ encoded_by_path.each do |path, chunks|
179
+ fd = open(path)
180
+ if @write_behavior == 'overwrite'
181
+ fd.truncate(0)
182
+ fd.seek(0, IO::SEEK_SET)
183
+ fd.write(chunks.last)
184
+ else
185
+ # append to the file
186
+ chunks.each { |chunk| fd.write(chunk) }
187
+ end
188
+ fd.flush unless @flusher && @flusher.alive?
189
+ end
190
+
191
+ close_stale_files if @stale_cleanup_type == 'events'
192
+ end
193
+ end
194
+
195
+ def close
196
+ @flusher.stop unless @flusher.nil?
197
+ @cleaner.stop unless @cleaner.nil?
198
+ @io_mutex.synchronize do
199
+ @logger.debug('Close: closing files')
200
+
201
+ @files.each do |path, fd|
202
+ begin
203
+ fd.close
204
+ @logger.debug("Closed file #{path}", fd: fd)
205
+
206
+ kusto_send_file(path)
207
+ rescue Exception => e
208
+ @logger.error('Exception while flushing and closing files.', exception: e)
209
+ end
210
+ end
211
+ end
212
+
213
+ @ingestor.stop
214
+ end
215
+
216
+ private
217
+ def inside_file_root?(log_path)
218
+ target_file = File.expand_path(log_path)
219
+ return target_file.start_with?("#{@file_root}/")
220
+ end
221
+
222
+ private
223
+ def event_path(event)
224
+ file_output_path = generate_filepath(event)
225
+ if path_with_field_ref? && !inside_file_root?(file_output_path)
226
+ @logger.warn('The event tried to write outside the files root, writing the event to the failure file', event: event, filename: @failure_path)
227
+ file_output_path = @failure_path
228
+ elsif !@create_if_deleted && deleted?(file_output_path)
229
+ file_output_path = @failure_path
230
+ end
231
+ @logger.debug('Writing event to tmp file.', filename: file_output_path)
232
+
233
+ file_output_path
234
+ end
235
+
236
+ private
237
+ def generate_filepath(event)
238
+ event.sprintf(@path)
239
+ end
240
+
241
+ private
242
+ def path_with_field_ref?
243
+ path =~ FIELD_REF
244
+ end
245
+
246
+ private
247
+ def extract_file_root
248
+ parts = File.expand_path(path).split(File::SEPARATOR)
249
+ parts.take_while { |part| part !~ FIELD_REF }.join(File::SEPARATOR)
250
+ end
251
+
252
+ # the back-bone of @flusher, our periodic-flushing interval.
253
+ private
254
+ def flush_pending_files
255
+ @io_mutex.synchronize do
256
+ @logger.debug('Starting flush cycle')
257
+
258
+ @files.each do |path, fd|
259
+ @logger.debug('Flushing file', path: path, fd: fd)
260
+ fd.flush
261
+ end
262
+ end
263
+ rescue Exception => e
264
+ # squash exceptions caught while flushing after logging them
265
+ @logger.error('Exception flushing files', exception: e.message, backtrace: e.backtrace)
266
+ end
267
+
268
+ # every 10 seconds or so (triggered by events, but if there are no events there's no point closing files anyway)
269
+ private
270
+ def close_stale_files
271
+ now = Time.now
272
+ return unless now - @last_stale_cleanup_cycle >= @stale_cleanup_interval
273
+
274
+ @logger.debug('Starting stale files cleanup cycle', files: @files)
275
+ inactive_files = @files.select { |path, fd| not fd.active }
276
+ @logger.debug("#{inactive_files.count} stale files found", inactive_files: inactive_files)
277
+ inactive_files.each do |path, fd|
278
+ @logger.info("Closing file #{path}")
279
+ fd.close
280
+ @files.delete(path)
281
+
282
+ kusto_send_file(path)
283
+ end
284
+ # mark all files as inactive, a call to write will mark them as active again
285
+ @files.each { |path, fd| fd.active = false }
286
+ @last_stale_cleanup_cycle = now
287
+ end
288
+
289
+ private
290
+ def cached?(path)
291
+ @files.include?(path) && !@files[path].nil?
292
+ end
293
+
294
+ private
295
+ def deleted?(path)
296
+ !File.exist?(path)
297
+ end
298
+
299
+ private
300
+ def open(path)
301
+ return @files[path] if !deleted?(path) && cached?(path)
302
+
303
+ if deleted?(path)
304
+ if @create_if_deleted
305
+ @logger.debug('Required file does not exist, creating it.', path: path)
306
+ @files.delete(path)
307
+ else
308
+ return @files[path] if cached?(path)
309
+ end
310
+ end
311
+
312
+ @logger.info('Opening file', path: path)
313
+
314
+ dir = File.dirname(path)
315
+ if !Dir.exist?(dir)
316
+ @logger.info('Creating directory', directory: dir)
317
+ if @dir_mode != -1
318
+ FileUtils.mkdir_p(dir, mode: @dir_mode)
319
+ else
320
+ FileUtils.mkdir_p(dir)
321
+ end
322
+ end
323
+
324
+ # work around a bug opening fifos (bug JRUBY-6280)
325
+ stat = begin
326
+ File.stat(path)
327
+ rescue
328
+ nil
329
+ end
330
+ fd = if stat && stat.ftype == 'fifo' && LogStash::Environment.jruby?
331
+ java.io.FileWriter.new(java.io.File.new(path))
332
+ elsif @file_mode != -1
333
+ File.new(path, 'a+', @file_mode)
334
+ else
335
+ File.new(path, 'a+')
336
+ end
337
+ # fd = if @file_mode != -1
338
+ # File.new(path, 'a+', @file_mode)
339
+ # else
340
+ # File.new(path, 'a+')
341
+ # end
342
+ # end
343
+ @files[path] = IOWriter.new(fd)
344
+ end
345
+
346
+ private
347
+ def kusto_send_file(file_path)
348
+ @ingestor.upload_async(file_path, delete_temp_files)
349
+ end
350
+
351
+ private
352
+ def recover_past_files
353
+ require 'find'
354
+
355
+ # we need to find the last "regular" part in the path before any dynamic vars
356
+ path_last_char = @path.length - 1
357
+
358
+ pattern_start = @path.index('%') || path_last_char
359
+ last_folder_before_pattern = @path.rindex('/', pattern_start) || path_last_char
360
+ new_path = path[0..last_folder_before_pattern]
361
+ @logger.info("Going to recover old files in path #{@new_path}")
362
+
363
+ begin
364
+ old_files = Find.find(new_path).select { |p| /.*\.kusto$/ =~ p }
365
+ @logger.info("Found #{old_files.length} old file(s), sending them now...")
366
+
367
+ old_files.each do |file|
368
+ kusto_send_file(file)
369
+ end
370
+ rescue Errno::ENOENT => e
371
+ @logger.warn('No such file or directory', exception: e.class, message: e.message, path: new_path, backtrace: e.backtrace)
372
+ end
373
+ end
374
+ end
375
+
376
+ # wrapper class
377
+ class IOWriter
378
+ def initialize(io)
379
+ @io = io
380
+ end
381
+
382
+ def write(*args)
383
+ @io.write(*args)
384
+ @active = true
385
+ end
386
+
387
+ def flush
388
+ @io.flush
389
+ end
390
+
391
+ def method_missing(method_name, *args, &block)
392
+ if @io.respond_to?(method_name)
393
+
394
+ @io.send(method_name, *args, &block)
395
+ else
396
+ super
397
+ end
398
+ end
399
+ attr_accessor :active
400
+ end
data/lib/logstash/outputs/kusto/ingestor.rb ADDED
@@ -0,0 +1,94 @@
1
+ # encoding: utf-8
2
+
3
+ require 'logstash/outputs/base'
4
+ require 'logstash/namespace'
5
+ require 'logstash/errors'
6
+
7
+ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
8
+ ##
9
+ # This handles the overall logic and communication with Kusto
10
+ #
11
+ class Ingestor
12
+ require 'kusto/KustoClient-0.1.6.jar'
13
+
14
+ RETRY_DELAY_SECONDS = 3
15
+ DEFAULT_THREADPOOL = Concurrent::ThreadPoolExecutor.new(
16
+ min_threads: 1,
17
+ max_threads: 8,
18
+ max_queue: 1,
19
+ fallback_policy: :caller_runs
20
+ )
21
+ LOW_QUEUE_LENGTH = 3
22
+
23
+ def initialize(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger, threadpool = DEFAULT_THREADPOOL)
24
+ @workers_pool = threadpool
25
+ @logger = logger
26
+
27
+ @logger.debug('Preparing Kusto resources.')
28
+ kusto_connection_string = Java::KustoConnectionStringBuilder.createWithAadApplicationCredentials(ingest_url, app_id, app_key, app_tenant)
29
+
30
+ @kusto_client = Java::KustoIngestClient.new(kusto_connection_string)
31
+
32
+ @ingestion_properties = Java::KustoIngestionProperties.new(database, table)
33
+ @ingestion_properties.setJsonMappingName(mapping)
34
+
35
+ @delete_local = delete_local
36
+
37
+ @logger.debug('Kusto resources are ready.')
38
+ end
39
+
40
+ def upload_async(path, delete_on_success)
41
+ if @workers_pool.remaining_capacity <= LOW_QUEUE_LENGTH
42
+ @logger.warn("Ingestor queue capacity is running low with #{@workers_pool.remaining_capacity} free slots.")
43
+ end
44
+
45
+ @workers_pool.post do
46
+ LogStash::Util.set_thread_name("Kusto to ingest file: #{path}")
47
+ upload(path, delete_on_success)
48
+ end
49
+ rescue Exception => e
50
+ @logger.error('StandardError.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
51
+ raise e
52
+ end
53
+
54
+ def upload(path, delete_on_success)
55
+ file_size = File.size(path)
56
+ @logger.debug("Sending file to kusto: #{path}. size: #{file_size}")
57
+
58
+ # TODO: dynamic routing
59
+ # file_metadata = path.partition('.kusto.').last
60
+ # file_metadata_parts = file_metadata.split('.')
61
+
62
+ # if file_metadata_parts.length == 3
63
+ # # this is the number we expect - database, table, mapping
64
+ # database = file_metadata_parts[0]
65
+ # table = file_metadata_parts[1]
66
+ # mapping = file_metadata_parts[2]
67
+
68
+ # local_ingestion_properties = Java::KustoIngestionProperties.new(database, table)
69
+ # local_ingestion_properties.addJsonMappingName(mapping)
70
+ # end
71
+
72
+ @kusto_client.ingestFromSingleFile(path, @ingestion_properties)
73
+
74
+ File.delete(path) if delete_on_success
75
+
76
+ @logger.debug("File #{path} sent to kusto.")
77
+ rescue Java::JavaNioFile::NoSuchFileException => e
78
+ @logger.error("File doesn't exist! Unrecoverable error.", exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
79
+ rescue => e
80
+ # When the retry limit is reached or another error happen we will wait and retry.
81
+ #
82
+ # Thread might be stuck here, but I think its better than losing anything
83
+ # its either a transient errors or something bad really happened.
84
+ @logger.error('Uploading failed, retrying.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
85
+ sleep RETRY_DELAY_SECONDS
86
+ retry
87
+ end
88
+
89
+ def stop
90
+ @workers_pool.shutdown
91
+ @workers_pool.wait_for_termination(nil) # block until its done
92
+ end
93
+ end
94
+ end
data/lib/logstash/outputs/kusto/interval.rb ADDED
@@ -0,0 +1,81 @@
1
+ # encoding: utf-8
2
+
3
+ require 'logstash/outputs/base'
4
+ require 'logstash/namespace'
5
+ require 'logstash/errors'
6
+
7
+ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
8
+ ##
9
+ # Bare-bones utility for running a block of code at an interval.
10
+ #
11
+ class Interval
12
+ ##
13
+ # Initializes a new Interval with the given arguments and starts it
14
+ # before returning it.
15
+ #
16
+ # @param interval [Integer] (see: Interval#initialize)
17
+ # @param procsy [#call] (see: Interval#initialize)
18
+ #
19
+ # @return [Interval]
20
+ #
21
+ def self.start(interval, procsy)
22
+ new(interval, procsy).tap(&:start)
23
+ end
24
+
25
+ ##
26
+ # @param interval [Integer]: time in seconds to wait between calling the given proc
27
+ # @param procsy [#call]: proc or lambda to call periodically; must not raise exceptions.
28
+ def initialize(interval, procsy)
29
+ @interval = interval
30
+ @procsy = procsy
31
+
32
+ # Mutex, ConditionVariable, etc.
33
+ @mutex = Mutex.new
34
+ @sleeper = ConditionVariable.new
35
+ end
36
+
37
+ ##
38
+ # Starts the interval, or returns if it has already been started.
39
+ #
40
+ # @return [void]
41
+ def start
42
+ @mutex.synchronize do
43
+ return if @thread && @thread.alive?
44
+
45
+ @thread = Thread.new { run }
46
+ end
47
+ end
48
+
49
+ ##
50
+ # Stop the interval.
51
+ # Does not interrupt if execution is in-progress.
52
+ def stop
53
+ @mutex.synchronize do
54
+ @stopped = true
55
+ end
56
+
57
+ @thread && @thread.join
58
+ end
59
+
60
+ ##
61
+ # @return [Boolean]
62
+ def alive?
63
+ @thread && @thread.alive?
64
+ end
65
+
66
+ private
67
+
68
+ def run
69
+ @mutex.synchronize do
70
+ loop do
71
+ @sleeper.wait(@mutex, @interval)
72
+ break if @stopped
73
+
74
+ @procsy.call
75
+ end
76
+ end
77
+ ensure
78
+ @sleeper.broadcast
79
+ end
80
+ end
81
+ end
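The `Interval` class above is what drives the output's periodic flush (`Interval.start(@flush_interval, -> { flush_pending_files })`) and stale-file cleanup. A minimal standalone usage sketch in Ruby, assuming the class is loaded as part of the plugin; the interval value and the lambda body are illustrative only:

```ruby
require 'logstash/outputs/kusto/interval'

# Illustrative only: mirrors how kusto.rb wires up its flusher.
ticker = LogStash::Outputs::Kusto::Interval.start(2, -> { puts 'flush pending files' })

sleep 5             # the lambda fires roughly every 2 seconds on a background thread
puts ticker.alive?  # => true while the background thread is running

# Signals the loop to stop and joins the thread; may block for up to one interval,
# and never interrupts an in-progress call of the lambda.
ticker.stop
```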
data/logstash-output-kusto.gemspec ADDED
@@ -0,0 +1,32 @@
1
+ Gem::Specification.new do |s|
2
+ s.name = 'logstash-output-kusto'
3
+ s.version = '0.1.6'
4
+ s.licenses = ['Apache-2.0']
5
+ s.summary = 'Writes events to Azure KustoDB'
6
+ s.description = 'This is a logstash output plugin used to write events to an Azure KustoDB instance'
7
+ s.homepage = 'https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash'
8
+ s.authors = ['Tamir Kamara']
9
+ s.email = 'tamir.kamara@microsoft.com'
10
+ s.require_paths = ['lib']
11
+
12
+ # Files
13
+ s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
14
+
15
+ # Tests
16
+ s.test_files = s.files.grep(%r{^(test|spec|features)/})
17
+
18
+ # Special flag to let us know this is actually a logstash plugin
19
+ s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
20
+
21
+ # Gem dependencies
22
+ s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
23
+ s.add_runtime_dependency 'logstash-codec-json_lines'
24
+ s.add_runtime_dependency 'logstash-codec-line'
25
+
26
+ s.add_development_dependency "logstash-devutils"
27
+ s.add_development_dependency 'flores'
28
+ s.add_development_dependency 'logstash-input-generator'
29
+
30
+ # Jar dependencies
31
+ s.add_runtime_dependency 'jar-dependencies'
32
+ end
data/spec/outputs/kusto_spec.rb ADDED
@@ -0,0 +1,22 @@
1
+ # encoding: utf-8
2
+ require "logstash/devutils/rspec/spec_helper"
3
+ require "logstash/outputs/kusto"
4
+ require "logstash/codecs/plain"
5
+ require "logstash/event"
6
+
7
+ describe LogStash::Outputs::Kusto do
8
+ let(:sample_event) { LogStash::Event.new }
9
+ let(:output) { LogStash::Outputs::Kusto.new }
10
+
11
+ before do
12
+ output.register
13
+ end
14
+
15
+ describe "receive message" do
16
+ subject { output.receive(sample_event) }
17
+
18
+ it "returns a string" do
19
+ expect(subject).to eq("Event received")
20
+ end
21
+ end
22
+ end
metadata ADDED
@@ -0,0 +1,156 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: logstash-output-kusto
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.1.6
5
+ platform: ruby
6
+ authors:
7
+ - Tamir Kamara
8
+ autorequire:
9
+ bindir: bin
10
+ cert_chain: []
11
+ date: 2018-08-09 00:00:00.000000000 Z
12
+ dependencies:
13
+ - !ruby/object:Gem::Dependency
14
+ requirement: !ruby/object:Gem::Requirement
15
+ requirements:
16
+ - - "~>"
17
+ - !ruby/object:Gem::Version
18
+ version: '2.0'
19
+ name: logstash-core-plugin-api
20
+ prerelease: false
21
+ type: :runtime
22
+ version_requirements: !ruby/object:Gem::Requirement
23
+ requirements:
24
+ - - "~>"
25
+ - !ruby/object:Gem::Version
26
+ version: '2.0'
27
+ - !ruby/object:Gem::Dependency
28
+ requirement: !ruby/object:Gem::Requirement
29
+ requirements:
30
+ - - ">="
31
+ - !ruby/object:Gem::Version
32
+ version: '0'
33
+ name: logstash-codec-json_lines
34
+ prerelease: false
35
+ type: :runtime
36
+ version_requirements: !ruby/object:Gem::Requirement
37
+ requirements:
38
+ - - ">="
39
+ - !ruby/object:Gem::Version
40
+ version: '0'
41
+ - !ruby/object:Gem::Dependency
42
+ requirement: !ruby/object:Gem::Requirement
43
+ requirements:
44
+ - - ">="
45
+ - !ruby/object:Gem::Version
46
+ version: '0'
47
+ name: logstash-codec-line
48
+ prerelease: false
49
+ type: :runtime
50
+ version_requirements: !ruby/object:Gem::Requirement
51
+ requirements:
52
+ - - ">="
53
+ - !ruby/object:Gem::Version
54
+ version: '0'
55
+ - !ruby/object:Gem::Dependency
56
+ requirement: !ruby/object:Gem::Requirement
57
+ requirements:
58
+ - - ">="
59
+ - !ruby/object:Gem::Version
60
+ version: '0'
61
+ name: logstash-devutils
62
+ prerelease: false
63
+ type: :development
64
+ version_requirements: !ruby/object:Gem::Requirement
65
+ requirements:
66
+ - - ">="
67
+ - !ruby/object:Gem::Version
68
+ version: '0'
69
+ - !ruby/object:Gem::Dependency
70
+ requirement: !ruby/object:Gem::Requirement
71
+ requirements:
72
+ - - ">="
73
+ - !ruby/object:Gem::Version
74
+ version: '0'
75
+ name: flores
76
+ prerelease: false
77
+ type: :development
78
+ version_requirements: !ruby/object:Gem::Requirement
79
+ requirements:
80
+ - - ">="
81
+ - !ruby/object:Gem::Version
82
+ version: '0'
83
+ - !ruby/object:Gem::Dependency
84
+ requirement: !ruby/object:Gem::Requirement
85
+ requirements:
86
+ - - ">="
87
+ - !ruby/object:Gem::Version
88
+ version: '0'
89
+ name: logstash-input-generator
90
+ prerelease: false
91
+ type: :development
92
+ version_requirements: !ruby/object:Gem::Requirement
93
+ requirements:
94
+ - - ">="
95
+ - !ruby/object:Gem::Version
96
+ version: '0'
97
+ - !ruby/object:Gem::Dependency
98
+ requirement: !ruby/object:Gem::Requirement
99
+ requirements:
100
+ - - ">="
101
+ - !ruby/object:Gem::Version
102
+ version: '0'
103
+ name: jar-dependencies
104
+ prerelease: false
105
+ type: :runtime
106
+ version_requirements: !ruby/object:Gem::Requirement
107
+ requirements:
108
+ - - ">="
109
+ - !ruby/object:Gem::Version
110
+ version: '0'
111
+ description: This is a logstash output plugin used to write events to an Azure KustoDB
112
+ instance
113
+ email: tamir.kamara@microsoft.com
114
+ executables: []
115
+ extensions: []
116
+ extra_rdoc_files: []
117
+ files:
118
+ - CHANGELOG.md
119
+ - CONTRIBUTORS
120
+ - Gemfile
121
+ - LICENSE
122
+ - README.md
123
+ - lib/kusto/KustoClient-0.1.6.jar
124
+ - lib/logstash/outputs/kusto.rb
125
+ - lib/logstash/outputs/kusto/ingestor.rb
126
+ - lib/logstash/outputs/kusto/interval.rb
127
+ - logstash-output-kusto.gemspec
128
+ - spec/outputs/kusto_spec.rb
129
+ homepage: https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash
130
+ licenses:
131
+ - Apache-2.0
132
+ metadata:
133
+ logstash_plugin: 'true'
134
+ logstash_group: output
135
+ post_install_message:
136
+ rdoc_options: []
137
+ require_paths:
138
+ - lib
139
+ required_ruby_version: !ruby/object:Gem::Requirement
140
+ requirements:
141
+ - - ">="
142
+ - !ruby/object:Gem::Version
143
+ version: '0'
144
+ required_rubygems_version: !ruby/object:Gem::Requirement
145
+ requirements:
146
+ - - ">="
147
+ - !ruby/object:Gem::Version
148
+ version: '0'
149
+ requirements: []
150
+ rubyforge_project:
151
+ rubygems_version: 2.7.6
152
+ signing_key:
153
+ specification_version: 4
154
+ summary: Writes events to Azure KustoDB
155
+ test_files:
156
+ - spec/outputs/kusto_spec.rb