microsoft-sentinel-logstash-output 1.2.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: f3314cd9fbec5bd71add1de243e0474739dfe43aa1bd7a498c316066681306d8
+   data.tar.gz: 8ddab05b2acceb5d15a051b5134ccb7ea12b6efb22fb1712192d1a00bfac60ff
+ SHA512:
+   metadata.gz: efafed55777a2757fb3c1416b8bbf6e4e8653fd45cd08a26bda22d997a7dd9aac4624b7d5854df24e4de9f50deed4d1d1c8cadf69aab9e4197d8219615416db6
+   data.tar.gz: 0b75c805e3e640df6cfcdcdc6646ee2e158b9c2c5c0c5caa1b24f2823270d70b0e148df76207d94508440f0877aa5b6c2a9111a6a8a79b7a3e166eab94106661
data/CHANGELOG.md ADDED
@@ -0,0 +1,17 @@
+ ## 1.0.0
+ * Initial release of the output plugin for Logstash to Microsoft Sentinel, using the Log Analytics DCR-based API.
+
+ ## 1.1.0
+ * Increase timeout for read/open connections to 120 seconds.
+ * Add error handling for connection timeouts.
+ * Upgrade the rest-client dependency minimum version to 2.1.0.
+ * Allow setting different proxy values for API connections.
+ * Upgrade the ingestion API version to 2023-01-01.
+ * Rename the plugin to microsoft-sentinel-log-analytics-logstash-output-plugin.
+
+ ## 1.1.1
+ * Support China and US Government Azure sovereign clouds.
+ * Increase timeout for read/open connections to 240 seconds.
+
+ ## 1.2.0
+ * Added support for Managed Identity authentication on both Azure VMs and Azure Arc connected machines.
data/Gemfile ADDED
@@ -0,0 +1,2 @@
+ source 'https://rubygems.org'
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation. All rights reserved.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,278 @@
+ # Microsoft Sentinel output plugin for Logstash
+
+ Microsoft Sentinel provides an output plugin for Logstash. Use this output plugin to send any log via Logstash to the Microsoft Sentinel/Log Analytics workspace. Logs are sent using the Log Analytics DCR-based API.
+ You may send logs to custom or standard tables.
+
+ Plugin version: v1.2.0
+ Released on: 2024-02-23
+
+ This plugin is currently in development and is free to use. We welcome contributions from the open source community on this project, and we request and appreciate feedback from users.
+
+ ## Steps to implement the output plugin
+ 1) Install the plugin
+ 2) Create a sample file
+ 3) Create the required DCR-related resources
+ 4) Configure the Logstash configuration file
+ 5) Basic logs transmission
+
+ ## 1. Install Logstash and the plugin
+
+ Microsoft Sentinel provides a Logstash output plugin that sends logs to a Log Analytics workspace using the DCR-based Logs Ingestion API.
+
+ Microsoft Sentinel's Logstash output plugin supports the following Logstash versions:
+ - 7.0 - 7.17.13
+ - 8.0 - 8.9
+ - 8.11
+
+ ```
+ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/elastic.gpg >/dev/null
+ echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list >/dev/null
+ sudo apt-get update && sudo apt-get install logstash=1:8.8.1-1
+ ```
+
+ To make sure Logstash isn't automatically updated to a newer version, put its package on hold for automatic updates:
+
+ ```
+ sudo apt-mark hold logstash
+ ```
+
+ Please note that when using Logstash 8, it is recommended to disable ECS in the pipeline. For more information, refer to the [Logstash documentation.](<https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html>)
+
+ To install the microsoft-sentinel-log-analytics-logstash-output-plugin, you can use the published gem on rubygems.org:
+
+ ```
+ sudo /usr/share/logstash/bin/logstash-plugin install microsoft-sentinel-log-analytics-logstash-output-plugin
+ ```
+
+ If your machine doesn't have an active Internet connection, or you want to install the plugin manually, you can download the plugin files and perform an offline installation. See the [Logstash Offline Plugin Management instructions](<https://www.elastic.co/guide/en/logstash/current/offline-plugins.html>).
+
+ If you already have the plugin installed, you can check which version you have by running:
+
+ ```
+ sudo /usr/share/logstash/bin/logstash-plugin list --verbose microsoft-sentinel-log-analytics-logstash-output-plugin
+ ```
+
+ ## 2. Create a sample file
+ To create a sample file, follow these steps:
+ 1) Copy the output plugin configuration below to your Logstash configuration file:
+ ```
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       create_sample_file => true
+       sample_file_path => "<enter the path to the file in which the sample data will be written>" # for example: "c:\\temp" (for Windows) or "/var/log" (for Linux).
+     }
+ }
+ ```
+ Note: make sure that the path exists before creating the sample file.
+ 2) Start Logstash. The plugin will collect up to 10 records into a sample.
+ 3) A file named "sampleFile<epoch seconds>.json" (for example: "c:\temp\sampleFile1648453501.json") will be created in the configured path once there are 10 events to sample or when the Logstash process exits gracefully.
+
+
+ ### Configurations:
+ The following parameters are optional and should be used to create a sample file.
+ - **create_sample_file** - Boolean, false by default. When enabled, up to 10 events will be written to a sample JSON file.
+ - **sample_file_path** - String, empty by default. Required when create_sample_file is enabled. Should be a valid path in which to place the generated sample file.
+
+ ### Complete example
+ 1. Set the pipeline.conf with the following configuration:
+ ```
+ input {
+     generator {
+       lines => [ "This is a test log message" ]
+       count => 10
+     }
+ }
+
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       create_sample_file => true
+       sample_file_path => "<enter the path to the file in which the sample data will be written>" # for example: "c:\\temp" (for Windows) or "/var/log" (for Linux).
+     }
+ }
+ ```
+
+ 2. The following sample file will be generated:
+ ```
+ [
+     {
+         "host": "logstashMachine",
+         "sequence": 0,
+         "message": "This is a test log message",
+         "ls_timestamp": "2022-10-29T13:19:28.116Z",
+         "ls_version": "1"
+     },
+     ...
+ ]
+ ```
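The sample file name above embeds the creation time as epoch seconds. A minimal Ruby sketch of how such a name can be formed (the `sample_file_name` helper is illustrative, not part of the plugin):

```ruby
# Build a sample file name of the form "sampleFile<epoch seconds>.json"
# under the configured sample_file_path.
def sample_file_name(path, now = Time.now)
  File.join(path, "sampleFile#{now.to_i}.json")
end

sample_file_name("/var/log", Time.at(1648453501))
# -> "/var/log/sampleFile1648453501.json"
```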
+
+ ## 3. Create the required DCR-related resources
+ To configure the Microsoft Sentinel Logstash plugin, you first need to create the DCR-related resources. To create these resources, follow one of the following tutorials:
+ 1) To ingest the data into a custom table, use the [Tutorial - Send custom logs to Azure Monitor Logs (preview) - Azure Monitor | Microsoft Docs](<https://docs.microsoft.com/azure/azure-monitor/logs/tutorial-custom-logs>) tutorial. Note that as part of creating the table and the DCR you will need to provide the sample file that you've created in the previous section.
+ 2) To ingest the data into a standard table like Syslog or CommonSecurityLog, use [Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates - Azure Monitor | Microsoft Docs](<https://docs.microsoft.com/azure/azure-monitor/logs/tutorial-custom-logs-api>).
+
+
+ ## 4. Configure the Logstash configuration file
+
+ Use the tutorial from the previous section to retrieve the following attributes:
+ - **client_app_Id** - String, the 'Application (client) ID' value created in step #3 of the "Configure Application" section of the tutorial you used in the previous step.
+ - **client_app_secret** - String, the value of the client secret created in step #5 of the "Configure Application" section of the tutorial you used in the previous step.
+ - **tenant_id** - String, your subscription's tenant ID. You can find it under: Home -> Microsoft Entra ID -> Overview, under 'Basic Information'.
+ - **data_collection_endpoint** - String, the value of the logsIngestion URI (see step #3 of the "Create data collection endpoint" section in [Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates - Azure Monitor | Microsoft Docs](<https://docs.microsoft.com/azure/azure-monitor/logs/tutorial-custom-logs-api#create-data-collection-endpoint>)).
+ - **dcr_immutable_id** - String, the value of the DCR immutableId (see the "Collect information from DCR" section in [Tutorial - Send custom logs to Azure Monitor Logs (preview) - Azure Monitor | Microsoft Docs](<https://docs.microsoft.com/azure/azure-monitor/logs/tutorial-custom-logs#collect-information-from-dcr>)).
+ - **dcr_stream_name** - String, the name of the data stream. Go to the JSON view of the DCR as explained in the "Collect information from DCR" section in [Tutorial - Send custom logs to Azure Monitor Logs (preview) - Azure Monitor | Microsoft Docs](<https://docs.microsoft.com/azure/azure-monitor/logs/tutorial-custom-logs#collect-information-from-dcr>) and copy the value of the "dataFlows -> streams" property.
+
+ After retrieving the required values, replace the output section of the Logstash configuration file created in the previous steps with the example below. Then, replace the strings in the brackets below with the corresponding values. Make sure you change the "create_sample_file" attribute to false.
+
+ Here is an example for the output plugin configuration section:
+
+ ```
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       client_app_Id => "<enter your client_app_id value here>"
+       client_app_secret => "<enter your client_app_secret value here>"
+       tenant_id => "<enter your tenant id here>"
+       data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
+       dcr_immutable_id => "<enter your DCR immutableId here>"
+       dcr_stream_name => "<enter your stream name here>"
+       create_sample_file => false
+       sample_file_path => "c:\\temp"
+     }
+ }
+ ```
+
+ ### Optional configuration
+
+ - **managed_identity** - Boolean, false by default. Set to `true` if you wish to authenticate using a Managed Identity. Managed Identities provide a "passwordless" authentication solution, which means that providing `client_app_id`, `client_app_secret` and `tenant_id` is no longer required. [Learn more about using Managed Identities](<https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview>).
+
+ **Using Managed Identities over app registrations is highly recommended!**
+
+ If your machine resides outside of Azure, please make sure the machine is onboarded into Azure Arc. [Learn more about Azure Arc](<https://learn.microsoft.com/en-us/azure/azure-arc/servers/overview#next-steps>)
+ - **key_names** - Array of strings, use if you wish to send a subset of the columns to Log Analytics.
+ - **plugin_flush_interval** - Number, 5 by default. Defines the maximum time difference (in seconds) between sending two messages to Log Analytics.
+ - **retransmission_time** - Number, 10 by default. Sets the amount of time in seconds allowed for retransmitting messages after sending has failed.
+ - **compress_data** - Boolean, false by default. When this field is true, the event data is compressed before using the API. Recommended for high-throughput pipelines.
+ - **proxy** - String, empty by default. Specify which proxy URL to use for API calls for all of the communications with Azure.
+ - **proxy_aad** - String, empty by default. Specify which proxy URL to use for API calls to the Microsoft Entra ID service. Overrides the proxy setting.
+ - **proxy_endpoint** - String, empty by default. Specify which proxy URL to use when sending log data to the endpoint. Overrides the proxy setting.
+ - **azure_cloud** - String, empty by default. Used to specify the name of the Azure cloud being used; AzureCloud is the default. Available values are: AzureCloud, AzureChinaCloud and AzureUSGovernment.
+
+ Here is an example for the output plugin configuration section using a Managed Identity:
+
+ ```
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       managed_identity => true
+       data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
+       dcr_immutable_id => "<enter your DCR immutableId here>"
+       dcr_stream_name => "<enter your stream name here>"
+     }
+ }
+ ```
+
+ #### Note: Setting an empty string as a value for a proxy setting will unset any system-wide proxy setting.
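The proxy settings resolve in a fixed precedence order: the specific setting (`proxy_aad` or `proxy_endpoint`) wins over the general `proxy`, which wins over the system-wide `http_proxy` environment variable, and an empty string counts as a deliberate value, which is why it unsets the system proxy. A minimal Ruby sketch of that resolution (the `resolve_proxy` helper name is illustrative, not part of the plugin's API):

```ruby
# Resolve the proxy for a given connection type: the specific setting
# (proxy_aad / proxy_endpoint) wins over the general `proxy`, which wins
# over the system-wide http_proxy environment variable. An empty string
# is truthy in Ruby, so it survives the fallback chain and disables the
# system proxy.
def resolve_proxy(specific, general, env_proxy)
  specific || general || env_proxy
end

resolve_proxy("http://aad-proxy:8080", "http://gen:8080", ENV["http_proxy"])
# -> "http://aad-proxy:8080"
resolve_proxy(nil, "http://gen:8080", "http://sys:3128")
# -> "http://gen:8080"
resolve_proxy("", "http://gen:8080", "http://sys:3128")
# -> "" (empty string unsets any system-wide proxy)
```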
+
+ Security notice: For security reasons, we recommend not to explicitly state client_app_Id, client_app_secret, tenant_id, data_collection_endpoint, and dcr_immutable_id in your Logstash configuration.
+ It is best to store this sensitive information in a Logstash keystore, as described in ['Secrets keystore'](<https://www.elastic.co/guide/en/logstash/current/keystore.html>).
+
+
+ ## 5. Basic logs transmission
+
+ Here is an example configuration that parses Syslog incoming data into a custom stream named "Custom-MyTableRawData".
+
+ ### Example Configuration
+
+ - Using filebeat input pipe
+
+ ```
+ input {
+     beats {
+       port => "5044"
+     }
+ }
+ filter {
+ }
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       client_app_Id => "619c1731-15ca-4403-9c61-xxxxxxxxxxxx"
+       client_app_secret => "xxxxxxxxxxxxxxxx"
+       tenant_id => "72f988bf-86f1-41af-91ab-xxxxxxxxxxxx"
+       data_collection_endpoint => "https://my-customlogsv2-test-jz2a.eastus2-1.ingest.monitor.azure.com"
+       dcr_immutable_id => "dcr-xxxxxxxxxxxxxxxxac23b8978251433a"
+       dcr_stream_name => "Custom-MyTableRawData"
+       proxy_aad => "http://proxy.example.com"
+     }
+ }
+ ```
+ - Or using the tcp input pipe
+
+ ```
+ input {
+     tcp {
+       port => "514"
+       type => syslog # optional, will affect the log type in the table
+     }
+ }
+ filter {
+ }
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       client_app_Id => "619c1731-15ca-4403-9c61-xxxxxxxxxxxx"
+       client_app_secret => "xxxxxxxxxxxxxxxx"
+       tenant_id => "72f988bf-86f1-41af-91ab-xxxxxxxxxxxx"
+       data_collection_endpoint => "https://my-customlogsv2-test-jz2a.eastus2-1.ingest.monitor.azure.com"
+       dcr_immutable_id => "dcr-xxxxxxxxxxxxxxxxac23b8978251433a"
+       dcr_stream_name => "Custom-MyTableRawData"
+     }
+ }
+ ```
+
+ <u>Advanced Configuration</u>
+ ```
+ input {
+     syslog {
+       port => 514
+     }
+ }
+
+ output {
+     microsoft-sentinel-log-analytics-logstash-output-plugin {
+       client_app_Id => "${CLIENT_APP_ID}"
+       client_app_secret => "${CLIENT_APP_SECRET}"
+       tenant_id => "${TENANT_ID}"
+       data_collection_endpoint => "${DATA_COLLECTION_ENDPOINT}"
+       dcr_immutable_id => "${DCR_IMMUTABLE_ID}"
+       dcr_stream_name => "Custom-MyTableRawData"
+       key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
+     }
+ }
+ ```
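The `key_names` option above restricts each event to the listed columns before it is sent. A minimal Ruby sketch of the effect (not the plugin's actual implementation; `select_keys` is an illustrative name):

```ruby
# Keep only the configured key_names from an event hash, mirroring the
# effect of the key_names option on the columns sent to Log Analytics.
def select_keys(event, key_names)
  event.select { |key, _value| key_names.include?(key) }
end

select_keys({ "PRI" => 164, "MSG" => "test", "host" => "127.0.0.1" },
            ["PRI", "MSG"])
# -> { "PRI" => 164, "MSG" => "test" }
```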
+
+ Now you are able to run Logstash with the example configuration and send mock data using the 'logger' command.
+
+ For example:
+ ```
+ logger -p local4.warn --rfc3164 --tcp -t CEF "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example" -P 514 -d -n 127.0.0.1
+ ```
+
+ This will produce the following content in the sample file:
+
+ ```
+ [
+     {
+         "logsource": "logstashMachine",
+         "facility": 20,
+         "severity_label": "Warning",
+         "severity": 4,
+         "timestamp": "Apr 7 08:26:04",
+         "program": "CEF:",
+         "host": "127.0.0.1",
+         "facility_label": "local4",
+         "priority": 164,
+         "message": "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example",
+         "ls_timestamp": "2022-04-07T08:26:04.000Z",
+         "ls_version": "1"
+     }
+ ]
+ ```
@@ -0,0 +1,132 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity and expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official email address,
+ posting via an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ [INSERT CONTACT METHOD].
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the
+ reporter of any incident.
+
+ ## Enforcement Guidelines
70
+
71
+ Community leaders will follow these Community Impact Guidelines in determining
72
+ the consequences for any action they deem in violation of this Code of Conduct:
73
+
74
+ ### 1. Correction
75
+
76
+ **Community Impact**: Use of inappropriate language or other behavior deemed
77
+ unprofessional or unwelcome in the community.
78
+
79
+ **Consequence**: A private, written warning from community leaders, providing
80
+ clarity around the nature of the violation and an explanation of why the
81
+ behavior was inappropriate. A public apology may be requested.
82
+
83
+ ### 2. Warning
84
+
85
+ **Community Impact**: A violation through a single incident or series of
86
+ actions.
87
+
88
+ **Consequence**: A warning with consequences for continued behavior. No
89
+ interaction with the people involved, including unsolicited interaction with
90
+ those enforcing the Code of Conduct, for a specified period of time. This
91
+ includes avoiding interactions in community spaces as well as external channels
92
+ like social media. Violating these terms may lead to a temporary or permanent
93
+ ban.
94
+
95
+ ### 3. Temporary Ban
96
+
97
+ **Community Impact**: A serious violation of community standards, including
98
+ sustained inappropriate behavior.
99
+
100
+ **Consequence**: A temporary ban from any sort of interaction or public
101
+ communication with the community for a specified period of time. No public or
102
+ private interaction with the people involved, including unsolicited interaction
103
+ with those enforcing the Code of Conduct, is allowed during this period.
104
+ Violating these terms may lead to a permanent ban.
105
+
106
+ ### 4. Permanent Ban
107
+
108
+ **Community Impact**: Demonstrating a pattern of violation of community
109
+ standards, including sustained inappropriate behavior, harassment of an
110
+ individual, or aggression toward or disparagement of classes of individuals.
111
+
112
+ **Consequence**: A permanent ban from any sort of public interaction within the
113
+ community.
114
+
115
+ ## Attribution
116
+
117
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118
+ version 2.1, available at
119
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
120
+
121
+ Community Impact Guidelines were inspired by
122
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
123
+
124
+ For answers to common questions about this code of conduct, see the FAQ at
125
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
126
+ [https://www.contributor-covenant.org/translations][translations].
127
+
128
+ [homepage]: https://www.contributor-covenant.org
129
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
130
+ [Mozilla CoC]: https://github.com/mozilla/diversity
131
+ [FAQ]: https://www.contributor-covenant.org/faq
132
+ [translations]: https://www.contributor-covenant.org/translations
@@ -0,0 +1,119 @@
+ # encoding: utf-8
+ require "logstash/outputs/base"
+ require "logstash/namespace"
+ require "logstash/sentinel_la/logstashLoganalyticsConfiguration"
+ require "logstash/sentinel_la/sampleFileCreator"
+ require "logstash/sentinel_la/logsSender"
+
+
+ class LogStash::Outputs::MicrosoftSentinelOutput < LogStash::Outputs::Base
+
+   config_name "microsoft-sentinel-log-analytics-logstash-output-plugin"
+
+   # Stating that the output plugin will run in concurrent mode
+   concurrency :shared
+
+   # If Managed Identity is used, the plugin will use the managed identity to authenticate with Microsoft Entra ID
+   config :managed_identity, :validate => :boolean, :default => false
+
+   # Your registered app ID
+   config :client_app_Id, :validate => :string
+
+   # The registered app's secret, required by the Azure Log Analytics REST API
+   config :client_app_secret, :validate => :string
+
+   # Your Operations Management Suite Tenant ID
+   config :tenant_id, :validate => :string
+
+   # Your data collection rule endpoint
+   config :data_collection_endpoint, :validate => :string
+
+   # Your data collection rule ID
+   config :dcr_immutable_id, :validate => :string
+
+   # Your DCR data stream name
+   config :dcr_stream_name, :validate => :string
+
+   # Subset of keys to send to the Azure Log Analytics workspace
+   config :key_names, :validate => :array, :default => []
+
+   # Max number of seconds to wait between flushes. Default 5
+   config :plugin_flush_interval, :validate => :number, :default => 5
+
+   # Factor for adjusting the amount of messages sent
+   config :decrease_factor, :validate => :number, :default => 100
+
+   # This will trigger message amount resizing in a REST request to LA
+   config :amount_resizing, :validate => :boolean, :default => true
+
+   # Setting the default amount of messages sent.
+   # If this is set with amount_resizing=false, each message will have max_items
+   config :max_items, :validate => :number, :default => 2000
+
+   # Setting the default proxy to be used for all communication with Azure
+   config :proxy, :validate => :string
+
+   # Setting proxy_aad to be used for communicating with the Microsoft Entra ID service
+   config :proxy_aad, :validate => :string
+
+   # Setting the proxy to be used for the Log Analytics endpoint REST client
+   config :proxy_endpoint, :validate => :string
+
+   # This will set the amount of time given for retransmitting messages after sending has failed
+   config :retransmission_time, :validate => :number, :default => 10
+
+   # Compress the message body before sending to LA
+   config :compress_data, :validate => :boolean, :default => false
+
+   # Generate a sample file from incoming events
+   config :create_sample_file, :validate => :boolean, :default => false
+
+   # Path in which to place the created sample file
+   config :sample_file_path, :validate => :string
+
+   # Used to specify the name of the Azure cloud that is being used. By default, the value is set to "AzureCloud", which
+   # is the public Azure cloud. However, you can specify a different Azure cloud if you are
+   # using a different environment, such as Azure Government or Azure China.
+   config :azure_cloud, :validate => :string
+
+   public
+   def register
+     @logstash_configuration = build_logstash_configuration()
+
+     # Validate configuration correctness
+     @logstash_configuration.validate_configuration()
+
+     @events_handler = @logstash_configuration.create_sample_file ?
+       LogStash::Outputs::MicrosoftSentinelOutputInternal::SampleFileCreator::new(@logstash_configuration) :
+       LogStash::Outputs::MicrosoftSentinelOutputInternal::LogsSender::new(@logstash_configuration)
+   end # def register
+
+   def multi_receive(events)
+     @events_handler.handle_events(events)
+   end # def multi_receive
+
+   def close
+     @events_handler.close
+   end
+
+   private
+
+   # Build the Logstash configuration object from the output configuration provided by the user.
+   # Returns a LogstashLoganalyticsOutputConfiguration populated with the configuration values.
+   def build_logstash_configuration()
+     logstash_configuration = LogStash::Outputs::MicrosoftSentinelOutputInternal::LogstashLoganalyticsOutputConfiguration::new(@client_app_Id, @client_app_secret, @tenant_id, @data_collection_endpoint, @dcr_immutable_id, @dcr_stream_name, @compress_data, @create_sample_file, @sample_file_path, @logger, @managed_identity)
+     logstash_configuration.key_names = @key_names
+     logstash_configuration.plugin_flush_interval = @plugin_flush_interval
+     logstash_configuration.decrease_factor = @decrease_factor
+     logstash_configuration.amount_resizing = @amount_resizing
+     logstash_configuration.max_items = @max_items
+     logstash_configuration.proxy_aad = @proxy_aad || @proxy || ENV['http_proxy']
+     logstash_configuration.proxy_endpoint = @proxy_endpoint || @proxy || ENV['http_proxy']
+     logstash_configuration.retransmission_time = @retransmission_time
+     logstash_configuration.azure_cloud = @azure_cloud || "AzureCloud"
+
+     return logstash_configuration
+   end # def build_logstash_configuration
+
+ end # class LogStash::Outputs::MicrosoftSentinelOutput