logstash-output-kusto 0.1.6 → 0.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 89aa0d2c0229ca92f8d1dbcc5ba749ae8b6afbbfff8aab57ab862ddc5ea2d329
-   data.tar.gz: 7023796abee3ebae86b8987406917beeb5ccfa9c8e2d26d8d56c77091626c411
+   metadata.gz: 606d298b80da2887ad2223db406fd23e38351b46ef7ec87861468828f1b3ec24
+   data.tar.gz: b7f420f9f84c886460c145874625400198acbcde1aa05cafac9ac4a03e7da008
  SHA512:
-   metadata.gz: 4bfacbb9d6965366a2f1ce4e0bf031f3d100225da8d7da76d431a28cb82ff40c9e33ffce9a54a69bc45bc89bae86951f03ed0fb3eed7021df5534e0c234f52b2
-   data.tar.gz: e670720f4b5c39a5ac574ca760141009544475c74b3fa5cd7faf8625db9d951592e4673f9a8c6c973fbe5b41a2a47f3cae81fac99b5799d2b42eb6c9900ef27d
+   metadata.gz: a4e96bcb287915d805ef9fddc71f63399f4d9299888ac4da8f06d5baf6d7594825ec503b657ddae2812daa581cdc7465c2882575a236f0b9e28cd339dae2702a
+   data.tar.gz: 17041bdad667f37658a9246bc2068f5dcf53ec3cd41c6a665e9b100727194e7cf81baaa6d2927b942c1592cb6b3f15b7382f43e17669ccdbd039a1ad2c85b132
@@ -1,4 +1,8 @@
  ## 0.1.0
  - Plugin created with the logstash plugin generator
  ## 0.1.6
- - plugin published to the public. supports json events without dynamic routing meaning one output defined per target kusto table
+ - plugin published to the public. supports ingesting json events into a specific table/database (without dynamic routing currently)
+ ## 0.1.7
+ - fixed app_key (password) bug; includes 0.1.7 of the kusto-java-sdk to allow working through a proxy
+ ## 0.2.0
+ - move to version 1.0.0-BETA-01 of the azure-kusto-java sdk
data/README.md CHANGED
@@ -1,86 +1,62 @@
- # Logstash Plugin
+ # Logstash Output Plugin for Azure Data Explorer (Kusto)
 
- This is a plugin for [Logstash](https://github.com/elastic/logstash).
-
- It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
-
- ## Documentation
-
- Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation are placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
-
- - For formatting code or config example, you can use the asciidoc `[source,ruby]` directive
- - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
-
- ## Need Help?
-
- Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
-
- ## Developing
+ master: [![Build Status](https://travis-ci.org/Azure/logstash-output-kusto.svg)](https://travis-ci.org/Azure/logstash-output-kusto)
+ dev: [![Build Status](https://travis-ci.org/Azure/logstash-output-kusto.svg?branch=dev)](https://travis-ci.org/Azure/logstash-output-kusto)
 
- ### 1. Plugin Developement and Testing
-
- #### Code
- - To get started, you'll need JRuby with the Bundler gem installed.
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
 
- - Create a new plugin or clone and existing from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+ It is fully free and open source. The license is Apache 2.0.
 
- - Install dependencies
- ```sh
- bundle install
- ```
+ This Azure Data Explorer (ADX) Logstash plugin enables you to process events from Logstash into an **Azure Data Explorer** database for later analysis.
 
- #### Test
+ ## Requirements
 
- - Update your dependencies
-
- ```sh
- bundle install
- ```
+ - Logstash version 6+. [Installation instructions](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html)
+ - Azure Data Explorer cluster with a database. Read [Create a cluster and database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal) for more information.
+ - AAD Application credentials with permission to ingest data into Azure Data Explorer. Read [Creating an AAD Application](https://docs.microsoft.com/en-us/azure/kusto/management/access-control/how-to-provision-aad-app) for more information.
 
- - Run tests
+ ## Installation
 
+ To make the Azure Data Explorer plugin available in your Logstash environment, run the following command:
  ```sh
- bundle exec rspec
+ bin/logstash-plugin install logstash-output-kusto
  ```
 
- ### 2. Running your unpublished Plugin in Logstash
+ ## Configuration
 
- #### 2.1 Run in a local Logstash clone
+ Configure the plugin before sending events from Logstash to Azure Data Explorer. The following example shows the minimum settings required, which should be enough for most use cases:
 
- - Edit Logstash `Gemfile` and add the local plugin path, for example:
  ```ruby
- gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
- ```
- - Install plugin
- ```sh
- bin/logstash-plugin install --no-verify
- ```
- - Run Logstash with your plugin
- ```sh
- bin/logstash -e 'filter {awesome {}}'
- ```
- At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
-
- #### 2.2 Run in an installed Logstash
-
- You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory or you can build the gem and install it using:
-
- - Build your plugin gem
- ```sh
- gem build logstash-filter-awesome.gemspec
- ```
- - Install the plugin from the Logstash home
- ```sh
- bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
- ```
- - Start Logstash and proceed to test the plugin
+ output {
+   kusto {
+     path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"
+     ingest_url => "https://ingest-<cluster-name>.kusto.windows.net/"
+     app_id => "<application id>"
+     app_key => "<application key/secret>"
+     app_tenant => "<tenant id>"
+     database => "<database name>"
+     table => "<target table>"
+     mapping => "<mapping name>"
+   }
+ }
+ ```
+
+ ### Available Configuration Keys
+
+ | Parameter Name | Description | Notes |
+ | --- | --- | --- |
+ | **path** | The plugin writes events to temporary files before sending them to ADX. This parameter holds the path where files should be written, plus a time expression for file rotation; each rotation triggers an upload to the ADX service. The example above rotates the files every minute; see the Logstash docs for more information on time expressions. | Required |
+ | **ingest_url** | The Kusto endpoint for ingestion-related communication. You can find it in the Azure Portal. | Required |
+ | **app_id, app_key, app_tenant** | Credentials required to connect to the ADX service. Be sure to use an application with 'ingest' privileges. | Required |
+ | **database** | Database name to place events in | Required |
+ | **table** | Target table name to place events in | Required |
+ | **mapping** | Mapping is used to map an incoming event JSON string into the correct row format (which property goes into which column) | Required |
+ | **recovery** | If set to true (default), the plugin will attempt to resend pre-existing temp files found in the path upon startup | |
+ | **delete_temp_files** | Determines if temp files will be deleted after a successful upload (true is default; set to false for debugging purposes only) | |
+ | **flush_interval** | The time (in seconds) for flushing writes to temporary files. The default is 2 seconds; 0 will flush on every event. Increase this value to reduce IO calls, but keep in mind that events in the buffer may be lost in case of abrupt failure. | |
 
  ## Contributing
 
- All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
-
- Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
-
- It is more important to the community that you are able to contribute.
-
+ All contributions are welcome: ideas, patches, documentation, bug reports, and complaints.
+ Programming is not a required skill. It is more important to the community that you are able to contribute.
  For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
@@ -8,7 +8,7 @@ require 'logstash/outputs/kusto/ingestor'
  require 'logstash/outputs/kusto/interval'
 
  ##
- # This plugin send messages to Azure Kusto in batches.
+ # This plugin sends messages to Azure Kusto in batches.
  #
  class LogStash::Outputs::Kusto < LogStash::Outputs::Base
    config_name 'kusto'
@@ -30,8 +30,9 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
    # E.g: `/%{myfield}/`, `/test-%{myfield}/` are not valid paths
    config :path, validate: :string, required: true
 
-   # Flush interval (in seconds) for flushing writes to log files.
-   # 0 will flush on every message.
+   # Flush interval (in seconds) for flushing writes to files.
+   # 0 will flush on every message. Increase this value to reduce IO calls but keep
+   # in mind that events buffered before flush can be lost in case of abrupt failure.
    config :flush_interval, validate: :number, default: 2
 
    # If the generated path is invalid, the events will be saved
@@ -54,15 +55,6 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
    # Example: `"file_mode" => 0640`
    config :file_mode, validate: :number, default: -1
 
-   # How should the file be written?
-   #
-   # If `append`, the file will be opened for appending and each new event will
-   # be written at the end of the file.
-   #
-   # If `overwrite`, the file will be truncated before writing and only the most
-   # recent event will appear in the file.
-   config :write_behavior, validate: %w[overwrite append], default: 'append'
-
    # TODO: fix the interval type...
    config :stale_cleanup_interval, validate: :number, default: 10
    config :stale_cleanup_type, validate: %w[events interval], default: 'events'
@@ -75,16 +67,27 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
    # If `false`, the plugin will disregard temp files found
    config :recovery, validate: :boolean, default: true
 
-   # Kusto configuration
+
+   # The Kusto endpoint for ingestion-related communication. You can see it on the Azure Portal.
    config :ingest_url, validate: :string, required: true
+
+   # The following are the credentials used to connect to the Kusto service
+   # application id
    config :app_id, validate: :string, required: true
-   config :app_key, validate: :string, required: true
+   # application key (secret)
+   config :app_key, validate: :password, required: true
+   # aad tenant id
    config :app_tenant, validate: :string, default: nil
 
+   # The following are the data settings that impact where events are written to
+   # Database name
    config :database, validate: :string, required: true
+   # Target table name
    config :table, validate: :string, required: true
+   # Mapping name - used by kusto to map an incoming event to the right row format (what value goes into which column)
    config :mapping, validate: :string
 
+
    # Determines if local files used for temporary storage will be deleted
    # after upload is successful
    config :delete_temp_files, validate: :boolean, default: true
@@ -149,9 +152,14 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
    private
    def validate_path
      if (root_directory =~ FIELD_REF) != nil
-       @logger.error('File: The starting part of the path should not be dynamic.', path: @path)
+       @logger.error('The starting part of the path should not be dynamic.', path: @path)
        raise LogStash::ConfigurationError.new('The starting part of the path should not be dynamic.')
      end
+
+     if !path_with_field_ref?
+       @logger.error('Path should include some time-related fields to allow for file rotation.', path: @path)
+       raise LogStash::ConfigurationError.new('Path should include some time-related fields to allow for file rotation.')
+     end
    end
 
    private
@@ -177,14 +185,8 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
      @io_mutex.synchronize do
        encoded_by_path.each do |path, chunks|
          fd = open(path)
-         if @write_behavior == 'overwrite'
-           fd.truncate(0)
-           fd.seek(0, IO::SEEK_SET)
-           fd.write(chunks.last)
-         else
-           # append to the file
-           chunks.each { |chunk| fd.write(chunk) }
-         end
+         # append to the file
+         chunks.each { |chunk| fd.write(chunk) }
          fd.flush unless @flusher && @flusher.alive?
        end
 
@@ -210,7 +212,7 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
        end
      end
 
-     @ingestor.stop
+     @ingestor.stop unless @ingestor.nil?
    end
 
    private
@@ -1,94 +1,116 @@
- # encoding: utf-8
-
- require 'logstash/outputs/base'
- require 'logstash/namespace'
- require 'logstash/errors'
-
- class LogStash::Outputs::Kusto < LogStash::Outputs::Base
-   ##
-   # This handles the overall logic and communication with Kusto
-   #
-   class Ingestor
-     require 'kusto/KustoClient-0.1.6.jar'
-
-     RETRY_DELAY_SECONDS = 3
-     DEFAULT_THREADPOOL = Concurrent::ThreadPoolExecutor.new(
-       min_threads: 1,
-       max_threads: 8,
-       max_queue: 1,
-       fallback_policy: :caller_runs
-     )
-     LOW_QUEUE_LENGTH = 3
-
-     def initialize(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger, threadpool = DEFAULT_THREADPOOL)
-       @workers_pool = threadpool
-       @logger = logger
-
-       @logger.debug('Preparing Kusto resources.')
-       kusto_connection_string = Java::KustoConnectionStringBuilder.createWithAadApplicationCredentials(ingest_url, app_id, app_key, app_tenant)
-
-       @kusto_client = Java::KustoIngestClient.new(kusto_connection_string)
-
-       @ingestion_properties = Java::KustoIngestionProperties.new(database, table)
-       @ingestion_properties.setJsonMappingName(mapping)
-
-       @delete_local = delete_local
-
-       @logger.debug('Kusto resources are ready.')
-     end
-
-     def upload_async(path, delete_on_success)
-       if @workers_pool.remaining_capacity <= LOW_QUEUE_LENGTH
-         @logger.warn("Ingestor queue capacity is running low with #{@workers_pool.remaining_capacity} free slots.")
-       end
-
-       @workers_pool.post do
-         LogStash::Util.set_thread_name("Kusto to ingest file: #{path}")
-         upload(path, delete_on_success)
-       end
-     rescue Exception => e
-       @logger.error('StandardError.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
-       raise e
-     end
-
-     def upload(path, delete_on_success)
-       file_size = File.size(path)
-       @logger.debug("Sending file to kusto: #{path}. size: #{file_size}")
-
-       # TODO: dynamic routing
-       # file_metadata = path.partition('.kusto.').last
-       # file_metadata_parts = file_metadata.split('.')
-
-       # if file_metadata_parts.length == 3
-       #   # this is the number we expect - database, table, mapping
-       #   database = file_metadata_parts[0]
-       #   table = file_metadata_parts[1]
-       #   mapping = file_metadata_parts[2]
-
-       #   local_ingestion_properties = Java::KustoIngestionProperties.new(database, table)
-       #   local_ingestion_properties.addJsonMappingName(mapping)
-       # end
-
-       @kusto_client.ingestFromSingleFile(path, @ingestion_properties)
-
-       File.delete(path) if delete_on_success
-
-       @logger.debug("File #{path} sent to kusto.")
-     rescue Java::JavaNioFile::NoSuchFileException => e
-       @logger.error("File doesn't exist! Unrecoverable error.", exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
-     rescue => e
-       # When the retry limit is reached or another error happen we will wait and retry.
-       #
-       # Thread might be stuck here, but I think its better than losing anything
-       # its either a transient errors or something bad really happened.
-       @logger.error('Uploading failed, retrying.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
-       sleep RETRY_DELAY_SECONDS
-       retry
-     end
-
-     def stop
-       @workers_pool.shutdown
-       @workers_pool.wait_for_termination(nil) # block until its done
-     end
-   end
- end
+ # encoding: utf-8
+
+ require 'logstash/outputs/base'
+ require 'logstash/namespace'
+ require 'logstash/errors'
+
+ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
+   ##
+   # This handles the overall logic and communication with Kusto
+   #
+   class Ingestor
+     require 'kusto/kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar'
+
+     RETRY_DELAY_SECONDS = 3
+     DEFAULT_THREADPOOL = Concurrent::ThreadPoolExecutor.new(
+       min_threads: 1,
+       max_threads: 8,
+       max_queue: 1,
+       fallback_policy: :caller_runs
+     )
+     LOW_QUEUE_LENGTH = 3
+     FIELD_REF = /%\{[^}]+\}/
+
+     def initialize(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger, threadpool = DEFAULT_THREADPOOL)
+       @workers_pool = threadpool
+       @logger = logger
+
+       validate_config(database, table, mapping)
+
+       @logger.debug('Preparing Kusto resources.')
+
+       kusto_connection_string = Java::com.microsoft.azure.kusto.data.ConnectionStringBuilder.createWithAadApplicationCredentials(ingest_url, app_id, app_key.value, app_tenant)
+
+       @kusto_client = Java::com.microsoft.azure.kusto.ingest.IngestClientFactory.createClient(kusto_connection_string)
+
+       @ingestion_properties = Java::com.microsoft.azure.kusto.ingest.IngestionProperties.new(database, table)
+       @ingestion_properties.setJsonMappingName(mapping)
+
+       @delete_local = delete_local
+
+       @logger.debug('Kusto resources are ready.')
+     end
+
+     def validate_config(database, table, mapping)
+       if database =~ FIELD_REF
+         @logger.error('database config value should not be dynamic.', database)
+         raise LogStash::ConfigurationError.new('database config value should not be dynamic.')
+       end
+
+       if table =~ FIELD_REF
+         @logger.error('table config value should not be dynamic.', table)
+         raise LogStash::ConfigurationError.new('table config value should not be dynamic.')
+       end
+
+       if mapping =~ FIELD_REF
+         @logger.error('mapping config value should not be dynamic.', mapping)
+         raise LogStash::ConfigurationError.new('mapping config value should not be dynamic.')
+       end
+     end
+
+     def upload_async(path, delete_on_success)
+       if @workers_pool.remaining_capacity <= LOW_QUEUE_LENGTH
+         @logger.warn("Ingestor queue capacity is running low with #{@workers_pool.remaining_capacity} free slots.")
+       end
+
+       @workers_pool.post do
+         LogStash::Util.set_thread_name("Kusto to ingest file: #{path}")
+         upload(path, delete_on_success)
+       end
+     rescue Exception => e
+       @logger.error('StandardError.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+       raise e
+     end
+
+     def upload(path, delete_on_success)
+       file_size = File.size(path)
+       @logger.debug("Sending file to kusto: #{path}. size: #{file_size}")
+
+       # TODO: dynamic routing
+       # file_metadata = path.partition('.kusto.').last
+       # file_metadata_parts = file_metadata.split('.')
+
+       # if file_metadata_parts.length == 3
+       #   # this is the number we expect - database, table, mapping
+       #   database = file_metadata_parts[0]
+       #   table = file_metadata_parts[1]
+       #   mapping = file_metadata_parts[2]
+
+       #   local_ingestion_properties = Java::KustoIngestionProperties.new(database, table)
+       #   local_ingestion_properties.addJsonMappingName(mapping)
+       # end
+
+       file_source_info = Java::com.microsoft.azure.kusto.ingest.source.FileSourceInfo.new(path, 0) # 0 - let the sdk figure out the size of the file
+       @kusto_client.ingestFromFile(file_source_info, @ingestion_properties)
+
+       File.delete(path) if delete_on_success
+
+       @logger.debug("File #{path} sent to kusto.")
+     rescue Java::JavaNioFile::NoSuchFileException => e
+       @logger.error("File doesn't exist! Unrecoverable error.", exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+     rescue => e
+       # When the retry limit is reached or another error happens, we will wait and retry.
+       #
+       # The thread might be stuck here, but I think it's better than losing anything;
+       # it's either a transient error or something really bad happened.
+       @logger.error('Uploading failed, retrying.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+       sleep RETRY_DELAY_SECONDS
+       retry
+     end
+
+     def stop
+       @workers_pool.shutdown
+       @workers_pool.wait_for_termination(nil) # block until it's done
+     end
+   end
+ end
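The new Ingestor rejects `database`, `table`, and `mapping` values that contain a Logstash field reference, using a simple regex. A standalone sketch of that check, runnable outside Logstash (the helper name `dynamic_value?` is illustrative, not from the plugin):

```ruby
# Matches Logstash field references such as %{name} (same pattern as FIELD_REF above)
FIELD_REF = /%\{[^}]+\}/

# Returns true if the given config value contains a field reference,
# i.e. would be treated as "dynamic" and rejected by validate_config
def dynamic_value?(value)
  !(value =~ FIELD_REF).nil?
end

puts dynamic_value?('mytable')        # false: plain table name
puts dynamic_value?('table-%{name}')  # true: contains %{name}
```

Rejecting dynamic values here makes sense because the ingestion properties are built once at startup, so a per-event database or table could not be honored.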
@@ -1,81 +1,81 @@
- # encoding: utf-8
-
- require 'logstash/outputs/base'
- require 'logstash/namespace'
- require 'logstash/errors'
-
- class LogStash::Outputs::Kusto < LogStash::Outputs::Base
-   ##
-   # Bare-bones utility for running a block of code at an interval.
-   #
-   class Interval
-     ##
-     # Initializes a new Interval with the given arguments and starts it
-     # before returning it.
-     #
-     # @param interval [Integer] (see: Interval#initialize)
-     # @param procsy [#call] (see: Interval#initialize)
-     #
-     # @return [Interval]
-     #
-     def self.start(interval, procsy)
-       new(interval, procsy).tap(&:start)
-     end
-
-     ##
-     # @param interval [Integer]: time in seconds to wait between calling the given proc
-     # @param procsy [#call]: proc or lambda to call periodically; must not raise exceptions.
-     def initialize(interval, procsy)
-       @interval = interval
-       @procsy = procsy
-
-       # Mutex, ConditionVariable, etc.
-       @mutex = Mutex.new
-       @sleeper = ConditionVariable.new
-     end
-
-     ##
-     # Starts the interval, or returns if it has already been started.
-     #
-     # @return [void]
-     def start
-       @mutex.synchronize do
-         return if @thread && @thread.alive?
-
-         @thread = Thread.new { run }
-       end
-     end
-
-     ##
-     # Stop the interval.
-     # Does not interrupt if execution is in-progress.
-     def stop
-       @mutex.synchronize do
-         @stopped = true
-       end
-
-       @thread && @thread.join
-     end
-
-     ##
-     # @return [Boolean]
-     def alive?
-       @thread && @thread.alive?
-     end
-
-     private
-
-     def run
-       @mutex.synchronize do
-         loop do
-           @sleeper.wait(@mutex, @interval)
-           break if @stopped
-
-           @procsy.call
-         end
-       end
-     ensure
-       @sleeper.broadcast
-     end
-   end
- end
+ # encoding: utf-8
+
+ require 'logstash/outputs/base'
+ require 'logstash/namespace'
+ require 'logstash/errors'
+
+ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
+   ##
+   # Bare-bones utility for running a block of code at an interval.
+   #
+   class Interval
+     ##
+     # Initializes a new Interval with the given arguments and starts it
+     # before returning it.
+     #
+     # @param interval [Integer] (see: Interval#initialize)
+     # @param procsy [#call] (see: Interval#initialize)
+     #
+     # @return [Interval]
+     #
+     def self.start(interval, procsy)
+       new(interval, procsy).tap(&:start)
+     end
+
+     ##
+     # @param interval [Integer]: time in seconds to wait between calling the given proc
+     # @param procsy [#call]: proc or lambda to call periodically; must not raise exceptions.
+     def initialize(interval, procsy)
+       @interval = interval
+       @procsy = procsy
+
+       # Mutex, ConditionVariable, etc.
+       @mutex = Mutex.new
+       @sleeper = ConditionVariable.new
+     end
+
+     ##
+     # Starts the interval, or returns if it has already been started.
+     #
+     # @return [void]
+     def start
+       @mutex.synchronize do
+         return if @thread && @thread.alive?
+
+         @thread = Thread.new { run }
+       end
+     end
+
+     ##
+     # Stop the interval.
+     # Does not interrupt if execution is in-progress.
+     def stop
+       @mutex.synchronize do
+         @stopped = true
+       end
+
+       @thread && @thread.join
+     end
+
+     ##
+     # @return [Boolean]
+     def alive?
+       @thread && @thread.alive?
+     end
+
+     private
+
+     def run
+       @mutex.synchronize do
+         loop do
+           @sleeper.wait(@mutex, @interval)
+           break if @stopped
+
+           @procsy.call
+         end
+       end
+     ensure
+       @sleeper.broadcast
+     end
+   end
+ end
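The Interval class above uses a Mutex/ConditionVariable pair so the worker thread sleeps between ticks and can be stopped cleanly rather than killed. A dependency-free sketch of the same pattern (class and names are illustrative, not part of the plugin; unlike the plugin's Interval, this sketch also broadcasts in `stop` so the sleeper wakes immediately):

```ruby
# Minimal interval runner: calls a block every `interval` seconds until stopped.
class TinyInterval
  def initialize(interval, &block)
    @interval = interval
    @block = block
    @mutex = Mutex.new
    @sleeper = ConditionVariable.new
    @stopped = false
  end

  def start
    @thread = Thread.new do
      @mutex.synchronize do
        loop do
          # wait releases the mutex; wakes on timeout or broadcast
          @sleeper.wait(@mutex, @interval)
          break if @stopped
          @block.call
        end
      end
    end
    self
  end

  def stop
    # broadcast wakes the sleeping worker so join returns promptly
    @mutex.synchronize { @stopped = true; @sleeper.broadcast }
    @thread && @thread.join
  end
end

counter = 0
t = TinyInterval.new(0.05) { counter += 1 }.start
sleep 0.3
t.stop
puts counter >= 1  # the block ran at least once
```

The timed `wait` is what makes `stop` safe: the worker never blocks indefinitely, so shutdown is bounded by one interval even without the broadcast.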
@@ -1,17 +1,17 @@
  Gem::Specification.new do |s|
    s.name = 'logstash-output-kusto'
-   s.version = '0.1.6'
+   s.version = '0.2.0'
    s.licenses = ['Apache-2.0']
-   s.summary = 'Writes events to Azure KustoDB'
-   s.description = 'This is a logstash output plugin used to write events to an Azure KustoDB instance'
-   s.homepage = 'https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash'
+   s.summary = 'Writes events to Azure Data Explorer (Kusto)'
+   s.description = 'This is a logstash output plugin used to write events to Azure Data Explorer (a.k.a. Kusto)'
+   s.homepage = 'https://github.com/Azure/logstash-output-kusto'
    s.authors = ['Tamir Kamara']
-   s.email = 'tamir.kamara@microsoft.com'
+   s.email = 'nugetkusto@microsoft.com'
    s.require_paths = ['lib']
 
    # Files
-   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
-
+   s.files = Dir['lib/**/*', 'spec/**/*', 'vendor/**/*', '*.gemspec', '*.md', 'CONTRIBUTORS', 'Gemfile', 'LICENSE', 'NOTICE.TXT']
+
    # Tests
    s.test_files = s.files.grep(%r{^(test|spec|features)/})
 
@@ -19,7 +19,7 @@ Gem::Specification.new do |s|
    s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
 
    # Gem dependencies
-   s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
+   s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2.0'
    s.add_runtime_dependency 'logstash-codec-json_lines'
    s.add_runtime_dependency 'logstash-codec-line'
 
@@ -0,0 +1,109 @@
+ # encoding: utf-8
+ require_relative "../../spec_helpers.rb"
+ require 'logstash/outputs/kusto'
+ require 'logstash/outputs/kusto/ingestor'
+
+ describe LogStash::Outputs::Kusto::Ingestor do
+
+   let(:ingest_url) { "mycluster" }
+   let(:app_id) { "myid" }
+   let(:app_key) { LogStash::Util::Password.new("mykey") }
+   let(:app_tenant) { "mytenant" }
+   let(:database) { "mydatabase" }
+   let(:table) { "mytable" }
+   let(:mapping) { "mymapping" }
+   let(:delete_local) { false }
+   let(:logger) { spy('logger') }
+
+   describe '#initialize' do
+
+     it 'does not throw an error when initializing' do
+       # note that this will cause an internal error since a connection is being attempted.
+       # however we still want to test that all the java stuff is working as expected
+       expect {
+         ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger)
+         ingestor.stop
+       }.not_to raise_error
+     end
+
+     dynamic_name_array = ['/a%{name}/', '/a %{name}/', '/a- %{name}/', '/a- %{name}']
+
+     context 'doesnt allow database to have some dynamic part' do
+       dynamic_name_array.each do |test_database|
+         it "with database: #{test_database}" do
+           expect {
+             ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, test_database, table, mapping, delete_local, logger)
+             ingestor.stop
+           }.to raise_error(LogStash::ConfigurationError)
+         end
+       end
+     end
+
+     context 'doesnt allow table to have some dynamic part' do
+       dynamic_name_array.each do |test_table|
+         it "with table: #{test_table}" do
+           expect {
+             ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, test_table, mapping, delete_local, logger)
+             ingestor.stop
+           }.to raise_error(LogStash::ConfigurationError)
+         end
+       end
+     end
+
+     context 'doesnt allow mapping to have some dynamic part' do
+       dynamic_name_array.each do |test_mapping|
+         it "with mapping: #{test_mapping}" do
+           expect {
+             ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, table, test_mapping, delete_local, logger)
+             ingestor.stop
+           }.to raise_error(LogStash::ConfigurationError)
+         end
+       end
+     end
+
+   end
+
+   # describe 'receiving events' do
+
+   #   context 'with non-zero flush interval' do
+   #     let(:temporary_output_file) { Stud::Temporary.pathname }
+
+   #     let(:event_count) { 100 }
+   #     let(:flush_interval) { 5 }
+
+   #     let(:events) do
+   #       event_count.times.map do |idx|
+   #         LogStash::Event.new('subject' => idx)
+   #       end
+   #     end
+
+   #     let(:output) { described_class.new(options.merge( {'path' => temporary_output_file, 'flush_interval' => flush_interval, 'delete_temp_files' => false } )) }
+
+   #     before(:each) { output.register }
+
+   #     after(:each) do
+   #       output.close
+   #       File.exist?(temporary_output_file) && File.unlink(temporary_output_file)
+   #       File.exist?(temporary_output_file + '.kusto') && File.unlink(temporary_output_file + '.kusto')
+   #     end
+
+   #     it 'eventually flushes without receiving additional events' do
+   #       output.multi_receive_encoded(events)
+
+   #       # events should not all be flushed just yet...
+   #       expect(File.read(temporary_output_file)).to satisfy("have less than #{event_count} lines") do |contents|
+   #         contents && contents.lines.count < event_count
+   #       end
+
+   #       # wait for the flusher to run...
+   #       sleep(flush_interval + 1)
+
+   #       # events should all be flushed
+   #       expect(File.read(temporary_output_file)).to satisfy("have exactly #{event_count} lines") do |contents|
+   #         contents && contents.lines.count == event_count
+   #       end
+   #     end
+   #   end
+
+   # end
+ end
@@ -1,22 +1,54 @@
  # encoding: utf-8
- require "logstash/devutils/rspec/spec_helper"
- require "logstash/outputs/kusto"
- require "logstash/codecs/plain"
- require "logstash/event"
+ require 'logstash/devutils/rspec/spec_helper'
+ require 'logstash/outputs/kusto'
+ require 'logstash/codecs/plain'
+ require 'logstash/event'

  describe LogStash::Outputs::Kusto do
- let(:sample_event) { LogStash::Event.new }
- let(:output) { LogStash::Outputs::Kusto.new }

- before do
- output.register
- end
+ let(:options) { { "path" => "./kusto_tst/%{+YYYY-MM-dd-HH-mm}",
+ "ingest_url" => "mycluster",
+ "app_id" => "myid",
+ "app_key" => "mykey",
+ "app_tenant" => "mytenant",
+ "database" => "mydatabase",
+ "table" => "mytable",
+ "mapping" => "mymapping"
+ } }
+
+ describe '#register' do
+
+ it 'doesnt allow the path to start with a dynamic string' do
+ kusto = described_class.new(options.merge( {'path' => '/%{name}'} ))
+ expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+ kusto.close
+ end
+
+ it 'path must include a dynamic string to allow file rotation' do
+ kusto = described_class.new(options.merge( {'path' => '/{name}'} ))
+ expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+ kusto.close
+ end
+

- describe "receive message" do
- subject { output.receive(sample_event) }
+ dynamic_name_array = ['/a%{name}/', '/a %{name}/', '/a- %{name}/', '/a- %{name}']

- it "returns a string" do
- expect(subject).to eq("Event received")
+ context 'doesnt allow the root directory to have some dynamic part' do
+ dynamic_name_array.each do |test_path|
+ it "with path: #{test_path}" do
+ kusto = described_class.new(options.merge( {'path' => test_path} ))
+ expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+ kusto.close
+ end
+ end
  end
+
+ it 'allow to have dynamic part after the file root' do
+ kusto = described_class.new(options.merge({'path' => '/tmp/%{name}'}))
+ expect { kusto.register }.not_to raise_error
+ kusto.close
+ end
+
  end
+
  end
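The path rules exercised by the `#register` specs above can be summarized as: the directory portion of `path` must be static and non-root, while the path itself must contain a dynamic `%{...}` token so files can rotate. A minimal sketch of that rule follows; `valid_kusto_path?` is a hypothetical helper inferred from the specs, not the plugin's actual implementation:

```ruby
# Hypothetical helper illustrating the path rule the specs encode.
def valid_kusto_path?(path)
  dir = File.dirname(path)
  # directory part must be static (no %{...}) and not the filesystem root,
  # and the path must contain a dynamic token for file rotation
  dir != '/' && dir != '.' && !dir.include?('%{') && path.include?('%{')
end

valid_kusto_path?('/tmp/%{name}')  # accepted: static root, dynamic file part
valid_kusto_path?('/%{name}')      # rejected: dynamic part in the root
valid_kusto_path?('/{name}')       # rejected: no dynamic %{...} token
```

This mirrors why `'/tmp/%{name}'` registers cleanly while every entry of `dynamic_name_array` raises `LogStash::ConfigurationError`.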
@@ -0,0 +1,5 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/logging/logger"
+
+ LogStash::Logging::Logger::configure_logging("debug")
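For context, the option names exercised by the spec suite map one-to-one onto a Logstash pipeline configuration. A hedged sketch with placeholder values (the endpoint and credential values below are illustrative, not taken from this diff):

```
output {
  kusto {
    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"
    ingest_url => "<cluster ingestion endpoint>"
    app_id => "<AAD application id>"
    app_key => "<AAD application key>"
    app_tenant => "<AAD tenant id>"
    database => "mydatabase"
    table => "mytable"
    mapping => "mymapping"
  }
}
```

Note that `path` keeps a static root directory with a dynamic date token, matching the validation rules tested above.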
metadata CHANGED
@@ -1,116 +1,116 @@
  --- !ruby/object:Gem::Specification
  name: logstash-output-kusto
  version: !ruby/object:Gem::Version
- version: 0.1.6
+ version: 0.2.0
  platform: ruby
  authors:
  - Tamir Kamara
- autorequire:
+ autorequire:
  bindir: bin
  cert_chain: []
- date: 2018-08-09 00:00:00.000000000 Z
+ date: 2019-01-16 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
+ name: logstash-core-plugin-api
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
- name: logstash-core-plugin-api
- prerelease: false
  type: :runtime
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
  - !ruby/object:Gem::Dependency
+ name: logstash-codec-json_lines
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: logstash-codec-json_lines
- prerelease: false
  type: :runtime
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
+ name: logstash-codec-line
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: logstash-codec-line
- prerelease: false
  type: :runtime
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
+ name: logstash-devutils
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: logstash-devutils
- prerelease: false
  type: :development
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
+ name: flores
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: flores
- prerelease: false
  type: :development
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
+ name: logstash-input-generator
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: logstash-input-generator
- prerelease: false
  type: :development
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
+ name: jar-dependencies
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- name: jar-dependencies
- prerelease: false
  type: :runtime
+ prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- description: This is a logstash output plugin used to write events to an Azure KustoDB
- instance
- email: tamir.kamara@microsoft.com
+ description: This is a logstash output plugin used to write events to an Azure Data
+ Explorer (a.k.a Kusto)
+ email: nugetkusto@microsoft.com
  executables: []
  extensions: []
  extra_rdoc_files: []
@@ -120,19 +120,21 @@ files:
  - Gemfile
  - LICENSE
  - README.md
- lib/kusto/KustoClient-0.1.6.jar
+ lib/kusto/kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar
  - lib/logstash/outputs/kusto.rb
  - lib/logstash/outputs/kusto/ingestor.rb
  - lib/logstash/outputs/kusto/interval.rb
  - logstash-output-kusto.gemspec
+ - spec/outputs/kusto/ingestor_spec.rb
  - spec/outputs/kusto_spec.rb
- homepage: https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash
+ - spec/spec_helpers.rb
+ homepage: https://github.com/Azure/logstash-output-kusto
  licenses:
  - Apache-2.0
  metadata:
  logstash_plugin: 'true'
  logstash_group: output
- post_install_message:
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -147,10 +149,11 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.6
- signing_key:
+ rubygems_version: 3.0.2
+ signing_key:
  specification_version: 4
- summary: Writes events to Azure KustoDB
+ summary: Writes events to Azure Data Explorer (Kusto)
  test_files:
+ - spec/outputs/kusto/ingestor_spec.rb
  - spec/outputs/kusto_spec.rb
+ - spec/spec_helpers.rb