sentinelblue-logstash-output-azure-loganalytics 1.1.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 41e20d6292a1fe6e9b70ce0860be3f92dd47b6a90ddbb6d5ee87f7335fb30270
+   data.tar.gz: bbfd6a8f894905f0f2e2e04fdcb3daa034f04047c5f847e26817670ee8cf58f3
+ SHA512:
+   metadata.gz: 269d4146de6a9bdbd206f76c2d8c2563093cfb71c43a5d1660e429816d5e6577e4610987ce9b2c63f545efe385beaa1b394b0f26f7017a6ca43e9ca6e95195fa
+   data.tar.gz: 78984fe0392d030f17f1842738371c875fc7478db115fc1f0f8a42769210ce0a7e5f277aab641f63e06ab267db55b1016b6d280bcd9c21e6c2c90e887d88dc5d
data/CHANGELOG.md ADDED
@@ -0,0 +1,7 @@
+ ## 1.0.0
+
+ * Initial release of the Logstash output plugin for Azure Sentinel (Log Analytics)
+
+ ## 1.1.1
+
+ * Updated strings to reference Sentinel Blue
data/Gemfile ADDED
@@ -0,0 +1,2 @@
+ source 'https://rubygems.org'
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright 2022 Sentinel Blue
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/README.md ADDED
@@ -0,0 +1,167 @@
+ # Sentinel Blue Azure Log Analytics output plugin for Logstash
+
+ Azure Sentinel provides an output plugin for Logstash. Using this output plugin, you can send any log you want through Logstash to your Azure Sentinel/Log Analytics workspace.
+ Logs are sent to custom log tables that you define in the output plugin.
+ [Getting started with Logstash](<https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html>)
+
+ The Azure Sentinel output plugin uses the Log Analytics REST API to ingest logs into custom log tables. [What are custom logs tables](<https://docs.microsoft.com/azure/azure-monitor/platform/data-sources-custom-logs>)
+
+ This plugin is based on the original plugin provided by the Azure Sentinel team. View the original plugin here: <https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics>
+
+ ```text
+ Plugin version: v1.1.1
+ Released on: 2022-10-20
+ ```
+
+ This plugin is currently in development and is free to use. We welcome contributions from the open source community on this project, and we request and appreciate feedback from users.
+
+ ## Support
+
+ For issues regarding the output plugin, please open a support issue here. Create a new issue describing the problem so that we can assist you.
+
+ ## Installation
+
+ Azure Sentinel provides Logstash an output plugin to the Log Analytics workspace.
+ To install the sentinelblue-logstash-output-azure-loganalytics plugin, follow the [Logstash Working with plugins](<https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html>) documentation.
+ For an offline setup, follow the [Logstash Offline Plugin Management instructions](<https://www.elastic.co/guide/en/logstash/current/offline-plugins.html>).
+
+ Required Logstash version: 7.0+
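+
+ For a standard online installation, the plugin can be installed with the bundled logstash-plugin utility (a sketch; the exact binary path depends on where Logstash is installed):
+
+ ```text
+ bin/logstash-plugin install sentinelblue-logstash-output-azure-loganalytics
+ ```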
+
+ ## Configuration
+
+ In your Logstash configuration file, add the Azure Sentinel output plugin with the following values:
+
+ - workspace_id
+   - your workspace ID guid
+ - workspace_key (primary key)
+   - your workspace primary key guid. You can find your workspace key and ID at the following path: Home > Log Analytics workspace > Advanced settings
+ - custom_log_table_name
+   - table name in which the logs will be ingested, limited to one table per output block. The log table will appear in the logs blade under the custom logs label, with a _CL suffix.
+   - custom_log_table_name must be either a static name consisting only of numbers, letters, and underscores OR a dynamic table name in the format used by logstash (e.g. `%{field_name}`, `%{[nested][field]}`); see the examples after this list.
+ - endpoint
+   - optional field, set to the Log Analytics endpoint by default.
+ - time_generated_field
+   - optional field, used to override the default TimeGenerated field in Log Analytics. Populate this property with the name of the sent data's time field.
+ - key_names
+   - list of Log Analytics output schema fields.
+ - plugin_flush_interval
+   - optional field, defines the maximum time difference (in seconds) between sending two messages to Log Analytics.
+ - max_items
+   - optional field, 2000 by default. This parameter controls the maximum batch size. The value is resized automatically unless you set "amount_resizing => false" in the configuration.
+
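+ For example, the following custom_log_table_name values are all valid (the field names in the dynamic forms are illustrative; substitute fields that actually exist on your events):
+
+ ```text
+ custom_log_table_name => "logstashCustomTableName"
+ custom_log_table_name => "%{log_type}"
+ custom_log_table_name => "%{[event][name]}"
+ ```
+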
+ Note: See the GitHub repository to learn more about message configuration, performance settings, and the sending mechanism.
+
+ Security notice: For security reasons, we recommend not stating the workspace_id and workspace_key explicitly in your Logstash configuration.
+ It is best to store this sensitive information in a Logstash keystore, as described here: <https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-logstash-user.html>
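+
+ As a minimal sketch (the key names WS_ID and WS_KEY are arbitrary), create a keystore, add the secrets, and then reference them as ${WS_ID} and ${WS_KEY} in the output configuration:
+
+ ```text
+ bin/logstash-keystore create
+ bin/logstash-keystore add WS_ID
+ bin/logstash-keystore add WS_KEY
+ ```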
+
+ ## Tests
+
+ Here is an example configuration that parses incoming Syslog data into a custom table named "logstashCustomTableName".
+
+ ### Example Configurations
+
+ #### Basic configuration
+
+ - Using the filebeat input pipe
+
+ ```text
+ input {
+     beats {
+         port => "5044"
+     }
+ }
+ filter {
+ }
+ output {
+     sentinelblue-logstash-output-azure-loganalytics {
+         workspace_id => "4g5tad2b-a4u4-147v-a4r7-23148a5f2c21" # <your workspace id>
+         workspace_key => "u/saRtY0JGHJ4Ce93g5WQ3Lk50ZnZ8ugfd74nk78RPLPP/KgfnjU5478Ndh64sNfdrsMni975HJP6lp==" # <your workspace key>
+         custom_log_table_name => "tableName"
+     }
+ }
+ ```
+
+ - Or using the tcp input pipe
+
+ ```text
+ input {
+     tcp {
+         port => "514"
+         type => syslog # optional, will affect the log type in the table
+     }
+ }
+ filter {
+ }
+ output {
+     sentinelblue-logstash-output-azure-loganalytics {
+         workspace_id => "4g5tad2b-a4u4-147v-a4r7-23148a5f2c21" # <your workspace id>
+         workspace_key => "u/saRtY0JGHJ4Ce93g5WQ3Lk50ZnZ8ugfd74nk78RPLPP/KgfnjU5478Ndh64sNfdrsMni975HJP6lp==" # <your workspace key>
+         custom_log_table_name => "tableName"
+     }
+ }
+ ```
+
+ #### Advanced Configuration
+
+ ```text
+ input {
+     tcp {
+         port => 514
+         type => syslog
+     }
+ }
+
+ filter {
+     grok {
+         match => { "message" => "<%{NUMBER:PRI}>1 (?<TIME_TAG>[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2})[^ ]* (?<HOSTNAME>[^ ]*) %{GREEDYDATA:MSG}" }
+     }
+ }
+
+ output {
+     sentinelblue-logstash-output-azure-loganalytics {
+         workspace_id => "<WS_ID>"
+         workspace_key => "${WS_KEY}"
+         custom_log_table_name => "logstashCustomTableName"
+         key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
+         plugin_flush_interval => 5
+     }
+ }
+ ```
+
+ The same output can also use a dynamic table name taken from an event field:
+
+ ```text
+ filter {
+     grok {
+         match => { "message" => "<%{NUMBER:PRI}>1 (?<TIME_TAG>[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2})[^ ]* (?<HOSTNAME>[^ ]*) %{GREEDYDATA:MSG}" }
+     }
+ }
+
+ output {
+     sentinelblue-logstash-output-azure-loganalytics {
+         workspace_id => "<WS_ID>"
+         workspace_key => "${WS_KEY}"
+         custom_log_table_name => "%{[event][name]}"
+         key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
+         plugin_flush_interval => 5
+     }
+ }
+ ```
+
+ Now you can run Logstash with the example configuration and send mock data using the 'logger' command.
+
+ For example:
+
+ ```text
+ logger -p local4.warn -t CEF: "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example" -P 514 -d -n 127.0.0.1
+ ```
+
+ Note: this format of pushing logs is not tested. You can tail a file for similar results.
+
+ ```text
+ logger -p local4.warn -t JSON: '{"event":{"name":"logstashCustomTableName"},"purpose":"testplugin"}'
+ ```
+
+ Alternatively, you can use netcat to test your configuration:
+
+ ```text
+ echo "test string" | netcat localhost 514
+ ```
data/VERSION ADDED
@@ -0,0 +1 @@
+ 1.1.1
data/lib/logstash/logAnalyticsClient/logAnalyticsClient.rb ADDED
@@ -0,0 +1,72 @@
+ # encoding: utf-8
+ require "logstash/logAnalyticsClient/logstashLoganalyticsConfiguration"
+ require 'rest-client'
+ require 'json'
+ require 'openssl'
+ require 'base64'
+ require 'time'
+
+ class LogAnalyticsClient
+   API_VERSION = '2016-04-01'.freeze
+
+   def initialize(logstashLoganalyticsConfiguration)
+     @logstashLoganalyticsConfiguration = logstashLoganalyticsConfiguration
+     set_proxy(@logstashLoganalyticsConfiguration.proxy)
+     @uri = sprintf("https://%s.%s/api/logs?api-version=%s", @logstashLoganalyticsConfiguration.workspace_id, @logstashLoganalyticsConfiguration.endpoint, API_VERSION)
+   end # def initialize
+
+   # Post the given JSON to Azure Log Analytics
+   def post_data(body, custom_table_name)
+     raise ConfigError, 'no json_records' if body.empty?
+     # Create the REST request header
+     header = get_header(body.bytesize, custom_table_name)
+     # Post the REST request
+     response = RestClient.post(@uri, body, header)
+
+     return response
+   end # def post_data
+
+   private
+
+   # Create a header for the given body length
+   def get_header(body_bytesize_length, custom_table_name)
+     # Each request is sent with the current time
+     date = rfc1123date()
+
+     return {
+       'Content-Type' => 'application/json',
+       'Authorization' => signature(date, body_bytesize_length),
+       'Log-Type' => custom_table_name,
+       'x-ms-date' => date,
+       'time-generated-field' => @logstashLoganalyticsConfiguration.time_generated_field,
+       'x-ms-AzureResourceId' => @logstashLoganalyticsConfiguration.azure_resource_id
+     }
+   end # def get_header
+
+   # Set the proxy for the REST client.
+   # Falls back to the http_proxy environment variable when no proxy is configured.
+   def set_proxy(proxy='')
+     RestClient.proxy = proxy.empty? ? ENV['http_proxy'] : proxy
+   end # def set_proxy
+
+   # Return the current date in RFC 1123 format
+   def rfc1123date()
+     current_time = Time.now
+
+     return current_time.httpdate()
+   end # def rfc1123date
+
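+   # The Authorization header follows the Data Collector API SharedKey scheme:
+   # an HMAC-SHA256 over "POST\n<content-length>\napplication/json\nx-ms-date:<date>\n/api/logs",
+   # keyed with the Base64-decoded workspace key, then Base64-encoded and paired
+   # with the workspace ID.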
+   def signature(date, body_bytesize_length)
+     sigs = sprintf("POST\n%d\napplication/json\nx-ms-date:%s\n/api/logs", body_bytesize_length, date)
+     utf8_sigs = sigs.encode('utf-8')
+     decoded_shared_key = Base64.decode64(@logstashLoganalyticsConfiguration.workspace_key)
+     hmac_sha256_sigs = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), decoded_shared_key, utf8_sigs)
+     encoded_hash = Base64.encode64(hmac_sha256_sigs)
+     authorization = sprintf("SharedKey %s:%s", @logstashLoganalyticsConfiguration.workspace_id, encoded_hash)
+
+     return authorization
+   end # def signature
+
+ end # end of class
data/lib/logstash/logAnalyticsClient/logStashAutoResizeBuffer.rb ADDED
@@ -0,0 +1,143 @@
+ # encoding: utf-8
+ require "stud/buffer"
+ require "logstash/logAnalyticsClient/logAnalyticsClient"
+ require "logstash/logAnalyticsClient/logstashLoganalyticsConfiguration"
+
+ # LogStashAutoResizeBuffer sets up a resizable buffer which is flushed periodically.
+ # The buffer resizes itself according to Azure Log Analytics and configuration limitations.
+ class LogStashAutoResizeBuffer
+   include Stud::Buffer
+
+   def initialize(logstashLoganalyticsConfiguration, custom_table_name)
+     @logstashLoganalyticsConfiguration = logstashLoganalyticsConfiguration
+     @logger = @logstashLoganalyticsConfiguration.logger
+     @custom_log_table_name = custom_table_name
+     @client = LogAnalyticsClient::new(logstashLoganalyticsConfiguration)
+     buffer_initialize(
+       :max_items => logstashLoganalyticsConfiguration.max_items,
+       :max_interval => logstashLoganalyticsConfiguration.plugin_flush_interval,
+       :logger => @logstashLoganalyticsConfiguration.logger
+     )
+   end # initialize
+
+   # Public methods
+   public
+
+   # Add an event document to the buffer
+   def add_event_document(event_document)
+     buffer_receive(event_document)
+   end # def add_event_document
+
+   # Flush all buffer content to Azure Log Analytics.
+   # Called from Stud::Buffer#buffer_flush when there are events to flush
+   def flush(documents, close=false)
+     # Skip in case there are no candidate documents to deliver
+     if documents.length < 1
+       @logger.warn("No documents in batch for log type #{@custom_log_table_name}. Skipping")
+       return
+     end
+
+     # We send JSON in the REST request
+     documents_json = documents.to_json
+     # When resizing is enabled, the max batch size is adjusted dynamically
+     if @logstashLoganalyticsConfiguration.amount_resizing == true
+       # Resize the batch limit according to the size and number of messages received
+       change_message_limit_size(documents.length, documents_json.bytesize)
+     end
+     send_message_to_loganalytics(documents_json, documents.length)
+   end # def flush
+
+   # Private methods
+   private
+
+   # Send documents_json to Azure Log Analytics
+   def send_message_to_loganalytics(documents_json, amount_of_documents)
+     begin
+       @logger.debug("Posting log batch (log count: #{amount_of_documents}) as log type #{@custom_log_table_name} to DataCollector API.")
+       response = @client.post_data(documents_json, @custom_log_table_name)
+       if is_successfully_posted(response)
+         @logger.info("Successfully posted #{amount_of_documents} logs into custom log analytics table[#{@custom_log_table_name}].")
+       else
+         @logger.error("DataCollector API request failure: error code: #{response.code}, data=>#{documents_json}")
+         resend_message(documents_json, amount_of_documents, @logstashLoganalyticsConfiguration.retransmission_time)
+       end
+     rescue Exception => ex
+       @logger.error("Exception in posting data to Azure Loganalytics.\n[Exception: '#{ex}']")
+       @logger.trace("Exception in posting data to Azure Loganalytics.[amount_of_documents=#{amount_of_documents} documents=#{documents_json}]")
+       resend_message(documents_json, amount_of_documents, @logstashLoganalyticsConfiguration.retransmission_time)
+     end
+   end # end send_message_to_loganalytics
+
+   # If sending the message to Azure Log Analytics fails, we retry
+   # until the retransmission duration is exhausted.
+   def resend_message(documents_json, amount_of_documents, remaining_duration)
+     if remaining_duration > 0
+       @logger.info("Resending #{amount_of_documents} documents as log type #{@custom_log_table_name} to DataCollector API in #{@logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY} seconds.")
+       sleep @logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY
+       begin
+         response = @client.post_data(documents_json, @custom_log_table_name)
+         if is_successfully_posted(response)
+           @logger.info("Successfully sent #{amount_of_documents} logs into custom log analytics table[#{@custom_log_table_name}] after resending.")
+         else
+           @logger.debug("Resending #{amount_of_documents} documents failed, will try to resend for #{(remaining_duration - @logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY)} more seconds")
+           resend_message(documents_json, amount_of_documents, (remaining_duration - @logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY))
+         end
+       rescue Exception => ex
+         @logger.debug("Resending #{amount_of_documents} documents failed, will try to resend for #{(remaining_duration - @logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY)} more seconds")
+         resend_message(documents_json, amount_of_documents, (remaining_duration - @logstashLoganalyticsConfiguration.RETRANSMISSION_DELAY))
+       end
+     else
+       @logger.error("Could not resend #{amount_of_documents} documents, message is dropped.")
+       @logger.trace("Documents (#{amount_of_documents}) dropped. [documents_json=#{documents_json}]")
+     end
+   end # def resend_message
+
+   # Change the max number of messages in the buffer (change_max_size)
+   # according to the Azure Log Analytics limits and the number of messages
+   # inserted into the buffer in one sending window.
+   # If we reached the max amount we increase it;
+   # otherwise we decrease it (to reduce latency for messages).
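+   # Example (hypothetical numbers): with max_items = 2000 and an average
+   # document size of 500 bytes, doubling to 4000 items predicts ~2 MB per
+   # post, well under MAX_SIZE_BYTES, so the window doubles. If doubling
+   # would exceed the limit, the window is instead capped just below it.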
+   def change_message_limit_size(amount_of_documents, documents_byte_size)
+     new_buffer_size = @logstashLoganalyticsConfiguration.max_items
+     average_document_size = documents_byte_size / amount_of_documents
+     # If the window is full we need to increase it.
+     # "amount_of_documents" can be greater since the buffer is not synchronized,
+     # meaning that a flush can occur after the limit was reached.
+     if amount_of_documents >= @logstashLoganalyticsConfiguration.max_items
+       # If doubling the size wouldn't exceed the API limit
+       if ((2 * @logstashLoganalyticsConfiguration.max_items) * average_document_size) < @logstashLoganalyticsConfiguration.MAX_SIZE_BYTES
+         new_buffer_size = 2 * @logstashLoganalyticsConfiguration.max_items
+       else
+         new_buffer_size = (@logstashLoganalyticsConfiguration.MAX_SIZE_BYTES / average_document_size) - 1000
+       end
+
+     # Decrease the window, but not below MIN_MESSAGE_AMOUNT.
+     # We decrease it slowly to still send as many messages as we can in one window.
+     elsif amount_of_documents < @logstashLoganalyticsConfiguration.max_items and @logstashLoganalyticsConfiguration.max_items != [(@logstashLoganalyticsConfiguration.max_items - @logstashLoganalyticsConfiguration.decrease_factor), @logstashLoganalyticsConfiguration.MIN_MESSAGE_AMOUNT].max
+       new_buffer_size = [(@logstashLoganalyticsConfiguration.max_items - @logstashLoganalyticsConfiguration.decrease_factor), @logstashLoganalyticsConfiguration.MIN_MESSAGE_AMOUNT].max
+     end
+
+     change_buffer_size(new_buffer_size)
+   end # def change_message_limit_size
+
+   # Receives new_size as the new max buffer size.
+   # Changes the buffer and the configuration, logging as necessary.
+   def change_buffer_size(new_size)
+     # Change the buffer size only if it is actually different
+     if @buffer_config[:max_items] != new_size
+       old_buffer_size = @buffer_config[:max_items]
+       @buffer_config[:max_items] = new_size
+       @logstashLoganalyticsConfiguration.max_items = new_size
+       @logger.info("Changing buffer size.[configuration='#{old_buffer_size}' , new_size='#{new_size}']")
+     else
+       @logger.info("Buffer size wasn't changed.[configuration='#{@buffer_config[:max_items]}' , new_size='#{new_size}']")
+     end
+   end # def change_buffer_size
+
+   # Return whether the response indicates a successful post
+   def is_successfully_posted(response)
+     return response.code == 200
+   end # def is_successfully_posted
+
+ end # LogStashAutoResizeBuffer
data/lib/logstash/logAnalyticsClient/logstashLoganalyticsConfiguration.rb ADDED
@@ -0,0 +1,147 @@
+ # encoding: utf-8
+ class LogstashLoganalyticsOutputConfiguration
+   def initialize(workspace_id, workspace_key, logger)
+     @workspace_id = workspace_id
+     @workspace_key = workspace_key
+     @logger = logger
+
+     # Delay between each resending of a message
+     @RETRANSMISSION_DELAY = 2
+     @MIN_MESSAGE_AMOUNT = 100
+     # Maximum of 30 MB per post to the Log Analytics Data Collector API.
+     # This is a size limit for a single post.
+     # If the data of a single post exceeds 30 MB, it should be split.
+     @loganalytics_api_data_limit = 30 * 1000 * 1000
+
+     # Leaving a ~10 KB safety buffer
+     @MAX_SIZE_BYTES = @loganalytics_api_data_limit - 10000
+   end
+
+   def validate_configuration()
+     if @retransmission_time < 0
+       raise ArgumentError, "retransmission_time, the time spent resending each failed message, must be a positive integer. [retransmission_time=#{@retransmission_time}]."
+
+     elsif @max_items < @MIN_MESSAGE_AMOUNT
+       raise ArgumentError, "max_items must be greater than #{@MIN_MESSAGE_AMOUNT}."
+
+     elsif @workspace_id.empty? or @workspace_key.empty?
+       raise ArgumentError, "Malformed configuration, the following arguments cannot be null or empty: [workspace_id=#{@workspace_id} , workspace_key=#{@workspace_key}]"
+
+     elsif @key_names.length > 500
+       raise ArgumentError, 'Azure Loganalytics limits the amount of columns to 500 in each table.'
+     end
+
+     @logger.info("Azure Loganalytics configuration was found valid.")
+
+     # If all validations pass then the configuration is valid
+     return true
+   end # def validate_configuration
+
+   def azure_resource_id
+     @azure_resource_id
+   end
+
+   def RETRANSMISSION_DELAY
+     @RETRANSMISSION_DELAY
+   end
+
+   def MAX_SIZE_BYTES
+     @MAX_SIZE_BYTES
+   end
+
+   def amount_resizing
+     @amount_resizing
+   end
+
+   def retransmission_time
+     @retransmission_time
+   end
+
+   def proxy
+     @proxy
+   end
+
+   def logger
+     @logger
+   end
+
+   def decrease_factor
+     @decrease_factor
+   end
+
+   def workspace_id
+     @workspace_id
+   end
+
+   def workspace_key
+     @workspace_key
+   end
+
+   def endpoint
+     @endpoint
+   end
+
+   def time_generated_field
+     @time_generated_field
+   end
+
+   def key_names
+     @key_names
+   end
+
+   def max_items
+     @max_items
+   end
+
+   def plugin_flush_interval
+     @plugin_flush_interval
+   end
+
+   def MIN_MESSAGE_AMOUNT
+     @MIN_MESSAGE_AMOUNT
+   end
+
+   def max_items=(new_max_items)
+     @max_items = new_max_items
+   end
+
+   def endpoint=(new_endpoint)
+     @endpoint = new_endpoint
+   end
+
+   def time_generated_field=(new_time_generated_field)
+     @time_generated_field = new_time_generated_field
+   end
+
+   def key_names=(new_key_names)
+     @key_names = new_key_names
+   end
+
+   def plugin_flush_interval=(new_plugin_flush_interval)
+     @plugin_flush_interval = new_plugin_flush_interval
+   end
+
+   def decrease_factor=(new_decrease_factor)
+     @decrease_factor = new_decrease_factor
+   end
+
+   def amount_resizing=(new_amount_resizing)
+     @amount_resizing = new_amount_resizing
+   end
+
+   def azure_resource_id=(new_azure_resource_id)
+     @azure_resource_id = new_azure_resource_id
+   end
+
+   def proxy=(new_proxy)
+     @proxy = new_proxy
+   end
+
+   def retransmission_time=(new_retransmission_time)
+     @retransmission_time = new_retransmission_time
+   end
+ end
data/lib/logstash/outputs/sentinelblue-logstash-output-azure-loganalytics.rb ADDED
@@ -0,0 +1,185 @@
+ # encoding: utf-8
+ require "logstash/outputs/base"
+ require "logstash/namespace"
+ require "stud/buffer"
+ require "logstash/logAnalyticsClient/logStashAutoResizeBuffer"
+ require "logstash/logAnalyticsClient/logstashLoganalyticsConfiguration"
+
+ class LogStash::Outputs::AzureLogAnalytics < LogStash::Outputs::Base
+
+   config_name "sentinelblue-logstash-output-azure-loganalytics"
+
+   # Stating that the output plugin will run in concurrent mode
+   concurrency :shared
+
+   # Your Operations Management Suite workspace ID
+   config :workspace_id, :validate => :string, :required => true
+
+   # The primary or the secondary key used for authentication, required by the Azure Loganalytics REST API
+   config :workspace_key, :validate => :string, :required => true
+
+   # The name of the event type that is being submitted to Log Analytics.
+   # It must contain only alpha characters, numbers and underscores,
+   # and must not exceed 100 characters.
+   # Table name under custom logs in which the data will be inserted.
+   config :custom_log_table_name, :validate => :string, :required => true
+
+   # The service endpoint (Default: ods.opinsights.azure.com)
+   config :endpoint, :validate => :string, :default => 'ods.opinsights.azure.com'
+
+   # The name of the time generated field.
+   # Be careful that the value of the field should strictly follow the ISO 8601 format (YYYY-MM-DDThh:mm:ssZ)
+   config :time_generated_field, :validate => :string, :default => ''
+
+   # Subset of keys to send to the Azure Loganalytics workspace
+   config :key_names, :validate => :array, :default => []
+
+   # # Max number of items to buffer before flushing. Default 50.
+   # config :flush_items, :validate => :number, :default => 50
+
+   # Max number of seconds to wait between flushes. Default 5
+   config :plugin_flush_interval, :validate => :number, :default => 5
+
+   # Factor for decreasing the amount of messages sent per batch
+   config :decrease_factor, :validate => :number, :default => 100
+
+   # This will trigger message amount resizing in a REST request to LA
+   config :amount_resizing, :validate => :boolean, :default => true
+
+   # Setting the default amount of messages sent.
+   # If this is set with amount_resizing => false, each batch holds exactly max_items messages.
+   config :max_items, :validate => :number, :default => 2000
+
+   # Setting the proxy to be used for the Azure Loganalytics REST client
+   config :proxy, :validate => :string, :default => ''
+
+   # This will set the amount of time given for retransmitting messages once sending has failed
+   config :retransmission_time, :validate => :number, :default => 10
+
+   # Optional, to override the resource ID field on the workspace table.
+   # The resource ID provided must be a valid resource ID on Azure.
+   config :azure_resource_id, :validate => :string, :default => ''
+
+   public
+   def register
+     @logstash_configuration = build_logstash_configuration()
+     # Validate configuration correctness
+     @logstash_configuration.validate_configuration()
+     @logger.info("Logstash Sentinel Blue Azure Log Analytics output plugin configuration was found valid")
+
+     # Initialize the logstash resizable buffer
+     # This buffer will increase and decrease size according to the amount of messages inserted.
+     # If the buffer reached the max amount of messages the amount will be increased until the limit
+     # @logstash_resizable_event_buffer=LogStashAutoResizeBuffer::new(@logstash_configuration)
+
+   end # def register
+
+   def multi_receive(events)
+
+     # Create a hash table of buffers. A buffer is needed per table used
+     buffers = Hash.new
+
+     events.each do |event|
+       # Create a document from the event
+       document = create_event_document(event)
+       # Skip if the document doesn't contain any items
+       next if (document.keys).length < 1
+
+       # Get the custom table name
+       custom_table_name = ""
+
+       # Check if the table name is static or dynamic
+       if @custom_log_table_name.match(/^[[:alpha:][:digit:]_]+$/)
+         # Table name is static.
+         custom_table_name = @custom_log_table_name
+
+       # /^%\{((([a-zA-Z]|[0-9]|_)+)|(\[((([a-zA-Z]|[0-9]|_|@)+))\])+)\}$/
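+       # Matches the Logstash field-reference syntax, e.g. %{field_name} or %{[nested][field]}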
+       elsif @custom_log_table_name.match(/^%{((\[[\w_\-@]*\])*)([\w_\-@]*)}$/)
+         # Table name is dynamic
+         custom_table_name = event.sprintf(@custom_log_table_name)
+
+       else
+         # Incorrect format
+         @logger.warn("custom_log_table_name must be either a static name consisting only of numbers, letters, and underscores OR a dynamic table name in the format used by logstash (e.g. %{field_name}, %{[nested][field]}).")
+         break
+
+       end
+
+       # Check that the table name is a string, exists, and is less than 100 characters
+       if !custom_table_name.is_a?(String)
+         @logger.warn("The custom table name must be a string. If you used a dynamic name from one of the log fields, make sure it is a string.")
+         break
+
+       elsif custom_table_name.empty? or custom_table_name.nil?
+         @logger.warn("The custom table name is empty. If you used a dynamic name from one of the log fields, check that it exists.")
+         break
+
+       elsif custom_table_name.length > 100
+         @logger.warn("The custom table name must not exceed 100 characters")
+         break
+
+       end
+
+       @logger.info("Custom table name #{custom_table_name} is valid")
+
+       # Determine if there is already a buffer for the given table
+       if buffers.keys.include?(custom_table_name)
+         @logger.trace("Adding event document - " + event.to_s)
+         buffers[custom_table_name].add_event_document(document)
+
+       else
+         # If the buffer doesn't exist for the table, create one and add the document
+         buffers[custom_table_name] = LogStashAutoResizeBuffer::new(@logstash_configuration, custom_table_name)
+         @logger.trace("Adding event document - " + event.to_s)
+         buffers[custom_table_name].add_event_document(document)
+
+       end
+
+     end # events.each do
+   end # def multi_receive
+
+   private
+
+   # If the user has defined key_names, meaning they want only a subset of the data,
+   # we insert only those keys.
+   # If no keys were defined we send all the data.
+   def create_event_document(event)
+     document = {}
+     event_hash = event.to_hash()
+     if @key_names.length > 0
+       # Get the intersection of key_names and the keys of event_hash
+       keys_intersection = @key_names & event_hash.keys
+       keys_intersection.each do |key|
+         document[key] = event_hash[key]
+       end
+       if document.keys.length < 1
+         @logger.warn("No keys found, message is dropped. Plugin keys: #{@key_names}, Event keys: #{event_hash}. The event message does not match the expected structure. Please edit the key_names section in the output plugin and try again.")
+       end
+     else
+       document = event_hash
+     end
+
+     return document
+   end # def create_event_document
+
+   # Build the logstash object configuration from the output configuration provided by the user.
+   # Returns a LogstashLoganalyticsOutputConfiguration populated with the configuration values.
+   def build_logstash_configuration()
+     logstash_configuration = LogstashLoganalyticsOutputConfiguration::new(@workspace_id, @workspace_key, @logger)
+     logstash_configuration.endpoint = @endpoint
+     logstash_configuration.time_generated_field = @time_generated_field
+     logstash_configuration.key_names = @key_names
+     logstash_configuration.plugin_flush_interval = @plugin_flush_interval
+     logstash_configuration.decrease_factor = @decrease_factor
+     logstash_configuration.amount_resizing = @amount_resizing
+     logstash_configuration.max_items = @max_items
+     logstash_configuration.azure_resource_id = @azure_resource_id
+     logstash_configuration.proxy = @proxy
+     logstash_configuration.retransmission_time = @retransmission_time
+
+     return logstash_configuration
+   end # def build_logstash_configuration
+
+ end # class LogStash::Outputs::AzureLogAnalytics
data/sentinelblue-logstash-output-azure-loganalytics.gemspec ADDED
@@ -0,0 +1,25 @@
+ Gem::Specification.new do |s|
+   s.name = 'sentinelblue-logstash-output-azure-loganalytics'
+   s.version = File.read("VERSION").strip
+   s.authors = ["Sentinel Blue"]
+   s.email = "info@sentinelblue.com"
+   s.summary = %q{Sentinel Blue provides an Azure Sentinel output plugin for Logstash. Using this output plugin, you will be able to send any log you want using Logstash to the Azure Sentinel/Log Analytics workspace. You can utilize a dynamic table name during output to simplify complex table schemes.}
+   s.description = s.summary
+   s.homepage = "https://github.com/sentinelblue/sentinelblue-logstash-output-azure-loganalytics"
+   s.licenses = ['Apache License (2.0)']
+   s.require_paths = ["lib"]
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT', 'VERSION']
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
+
+   # Gem dependencies
+   s.add_runtime_dependency "rest-client", ">= 1.8.0"
+   s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
+   s.add_runtime_dependency "logstash-codec-plain"
+   s.add_development_dependency "logstash-devutils"
+ end
data/spec/outputs/azure_loganalytics_spec.rb ADDED
@@ -0,0 +1,78 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/outputs/sentinelblue-logstash-output-azure-loganalytics"
+ require "logstash/codecs/plain"
+ require "logstash/event"
+
+ describe LogStash::Outputs::AzureLogAnalytics do
+
+   let(:workspace_id) { '<Workspace ID identifying your workspace>' }
+   let(:workspace_key) { '<Primary Key for the Azure log analytics workspace>' }
+   let(:custom_log_table_name) { 'ApacheAccessLog' }
+   let(:key_names) { ['logid','date','processing_time','remote','user','method','status','agent','eventtime'] }
+   let(:time_generated_field) { 'eventtime' }
+   let(:amount_resizing) { false }
+
+   # 1 second flush interval
+   let(:plugin_flush_interval) { 1 }
+
+   let(:azure_loganalytics_config) {
+     {
+       "workspace_id" => workspace_id,
+       "workspace_key" => workspace_key,
+       "custom_log_table_name" => custom_log_table_name,
+       "key_names" => key_names,
+       "time_generated_field" => time_generated_field,
+       "plugin_flush_interval" => plugin_flush_interval,
+       "amount_resizing" => amount_resizing
+     }
+   }
+
+   let(:azure_loganalytics) { LogStash::Outputs::AzureLogAnalytics.new(azure_loganalytics_config) }
+
+   before do
+     azure_loganalytics.register
+   end
+
+   describe "#flush" do
+     it "Should successfully send the event to Azure Log Analytics" do
+       events = []
+       log1 = {
+         :logid => "5cdad72f-c848-4df0-8aaa-ffe033e75d57",
+         :date => "2017-04-22 09:44:32 JST",
+         :processing_time => "372",
+         :remote => "101.202.74.59",
+         :user => "-",
+         :method => "GET / HTTP/1.1",
+         :status => "304",
+         :size => "-",
+         :referer => "-",
+         :agent => "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:27.0) Gecko/20100101 Firefox/27.0",
+         :eventtime => "2017-04-22T01:44:32Z"
+       }
+
+       log2 = {
+         :logid => "7260iswx-8034-4cc3-uirtx-f068dd4cd659",
+         :date => "2017-04-22 09:45:14 JST",
+         :processing_time => "105",
+         :remote => "201.78.74.59",
+         :user => "-",
+         :method => "GET /manager/html HTTP/1.1",
+         :status => "200",
+         :size => "-",
+         :referer => "-",
+         :agent => "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
+         :eventtime => "2017-04-22T01:45:14Z"
+       }
+
+       event1 = LogStash::Event.new(log1)
+       event2 = LogStash::Event.new(log2)
+       events.push(event1)
+       events.push(event2)
+       expect { azure_loganalytics.multi_receive(events) }.to_not raise_error
+       # Wait for the data to be sent
+       sleep(plugin_flush_interval + 2)
+     end
+   end
+
+ end
metadata ADDED
@@ -0,0 +1,124 @@
+ --- !ruby/object:Gem::Specification
+ name: sentinelblue-logstash-output-azure-loganalytics
+ version: !ruby/object:Gem::Version
+   version: 1.1.1
+ platform: ruby
+ authors:
+ - Sentinel Blue
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2022-10-20 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rest-client
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.8.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.8.0
+ - !ruby/object:Gem::Dependency
+   name: logstash-core-plugin-api
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+ - !ruby/object:Gem::Dependency
+   name: logstash-codec-plain
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-devutils
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: Sentinel Blue provides an Azure Sentinel output plugin for Logstash.
+   Using this output plugin, you will be able to send any log you want using Logstash
+   to the Azure Sentinel/Log Analytics workspace. You can utilize a dynamic table name
+   during output to simplify complex table schemes.
+ email: info@sentinelblue.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - Gemfile
+ - LICENSE
+ - README.md
+ - VERSION
+ - lib/logstash/logAnalyticsClient/logAnalyticsClient.rb
+ - lib/logstash/logAnalyticsClient/logStashAutoResizeBuffer.rb
+ - lib/logstash/logAnalyticsClient/logstashLoganalyticsConfiguration.rb
+ - lib/logstash/outputs/sentinelblue-logstash-output-azure-loganalytics.rb
+ - sentinelblue-logstash-output-azure-loganalytics.gemspec
+ - spec/outputs/azure_loganalytics_spec.rb
+ homepage: https://github.com/sentinelblue/sentinelblue-logstash-output-azure-loganalytics
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: output
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.3.7
+ signing_key:
+ specification_version: 4
+ summary: Sentinel Blue provides an Azure Sentinel output plugin for Logstash. Using
+   this output plugin, you will be able to send any log you want using Logstash to
+   the Azure Sentinel/Log Analytics workspace. You can utilize a dynamic table name
+   during output to simplify complex table schemes.
+ test_files:
+ - spec/outputs/azure_loganalytics_spec.rb