logstash-input-okta_system_log 0.9.0
- checksums.yaml +7 -0
- data/CHANGELOG.md +4 -0
- data/CONTRIBUTORS +10 -0
- data/DEVELOPER.md +1 -0
- data/Gemfile +3 -0
- data/LICENSE +11 -0
- data/README.md +90 -0
- data/lib/logstash/inputs/okta_system_log.rb +1051 -0
- data/logstash-input-okta_system_log.gemspec +35 -0
- data/spec/inputs/okta_system_log_spec.rb +753 -0
- metadata +221 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA1:
  metadata.gz: 0f5737bcc3a684163066307d22818a14ea946d56
  data.tar.gz: 24e2853bc64ff945c4aa27b71fdd19dd85ba0758
SHA512:
  metadata.gz: dbdb05c6a4ac1f1c75b92e47ede1ca5cc755b7df6e8e35021fab79d5f98026aebd9d96f0bf83073c17e0f3eae3f94bfa34565ba2f491c423d93f42f168be8164
  data.tar.gz: 02905ed6be4f62bfed6a5b09049382fbb7725e98e128a8d3e8de018bc3871399475d4bf22e61eee706252f0ed2a9a0b79dde0381dda73e3160e597804e89cc72
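For orientation: a `.gem` file is a tar archive containing `metadata.gz` and `data.tar.gz`, and `checksums.yaml` records the digests of those two members. A minimal, self-contained Ruby sketch of how such digests are produced (the byte string below is a stand-in, not the real archive contents):

```ruby
require "digest"

# For a real gem you would unpack it first (e.g. `tar -xf <name>.gem`),
# which yields metadata.gz and data.tar.gz, and digest those files.
# A stand-in string is used here so the sketch runs on its own.
archive_bytes = "stand-in for data.tar.gz contents"

sha1   = Digest::SHA1.hexdigest(archive_bytes)    # 40 hex chars
sha512 = Digest::SHA512.hexdigest(archive_bytes)  # 128 hex chars

puts "SHA1:   #{sha1}"
puts "SHA512: #{sha512}"
```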
data/CHANGELOG.md
ADDED
data/CONTRIBUTORS
ADDED
@@ -0,0 +1,10 @@
The following is a list of people who have contributed ideas, code, bug
reports, or in general have helped logstash along its way.

Contributors:
* Security Risk Advisors

Note: If you've sent us patches, bug reports, or otherwise contributed to
Logstash, and you aren't on the list above and want to be, please let us know
and we'll make sure you're here. Contributions from folks like you are what make
open source awesome.
data/DEVELOPER.md
ADDED
@@ -0,0 +1 @@
# logstash-input-okta_system_log
data/Gemfile
ADDED
data/LICENSE
ADDED
@@ -0,0 +1,11 @@
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
data/README.md
ADDED
@@ -0,0 +1,90 @@
## Looking for the docs?

You can find them here: [docs](docs/index.asciidoc)

# Logstash Plugin

This is a plugin for [Logstash](https://github.com/elastic/logstash).

It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.

## Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).

- For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
- For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide

## Need Help?

Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

## Developing

### 1. Plugin Development and Testing

#### Code
- To get started, you'll need JRuby with the Bundler gem installed.

- Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).

- Install dependencies
```sh
bundle install
```

#### Test

- Update your dependencies

```sh
bundle install
```

- Run tests

```sh
bundle exec rspec
```

### 2. Running your unpublished Plugin in Logstash

#### 2.1 Run in a local Logstash clone

- Edit Logstash `Gemfile` and add the local plugin path, for example:
```ruby
gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
```
- Install plugin
```sh
bin/logstash-plugin install --no-verify
```
- Run Logstash with your plugin
```sh
bin/logstash -e 'filter {awesome {}}'
```
At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.

#### 2.2 Run in an installed Logstash

You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:

- Build your plugin gem
```sh
gem build logstash-filter-awesome.gemspec
```
- Install the plugin from the Logstash home
```sh
bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
```
- Start Logstash and proceed to test the plugin

## Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
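The README defers usage details to the asciidoc docs. As a rough sketch only (the hostname and keystore entry name here are illustrative, and the option values are taken from the config declarations in okta_system_log.rb), a Logstash pipeline using this input might look like:

```
input {
  okta_system_log {
    schedule => { "every" => "30s" }
    hostname => "dev-instance.oktapreview.com"   # illustrative Okta host
    limit => 1000
    auth_token_key => "${OKTA_API_TOKEN}"        # resolved from the Logstash keystore
  }
}
output {
  stdout { codec => rubydebug }
}
```

Note that exactly one of `hostname` or `custom_url` must be set, and the token should come from the keystore rather than being written into the config file.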
@@ -0,0 +1,1051 @@
|
|
1
|
+
# encoding: utf-8
|
2
|
+
require "logstash/inputs/base"
|
3
|
+
require "logstash/namespace"
|
4
|
+
require "rufus/scheduler"
|
5
|
+
require "socket" # for Socket.gethostname
|
6
|
+
require "logstash/plugin_mixins/http_client"
|
7
|
+
require "manticore"
|
8
|
+
require "uri"
|
9
|
+
|
10
|
+
|
11
|
+
class LogStash::Inputs::OktaSystemLog < LogStash::Inputs::Base
|
12
|
+
include LogStash::PluginMixins::HttpClient
|
13
|
+
|
14
|
+
MAX_MMAP_FILE_SIZE = 1 * 2**10
|
15
|
+
OKTA_EVENT_LOG_PATH = "/api/v1/logs"
|
16
|
+
AUTH_TEST_URL = "?limit=1#auth-test"
|
17
|
+
|
18
|
+
HTTP_OK_200 = 200
|
19
|
+
HTTP_BAD_REQUEST_400 = 400
|
20
|
+
HTTP_UNAUTHORIZED_401 = 401
|
21
|
+
|
22
|
+
# Sleep Timers
|
23
|
+
SLEEP_API_RATE_LIMIT = 1
|
24
|
+
SLEEP_STATE_FILE_RETRY = 0.25
|
25
|
+
|
26
|
+
config_name "okta_system_log"
|
27
|
+
|
28
|
+
# If undefined, Logstash will complain, even if codec is unused.
|
29
|
+
default :codec, "json"
|
30
|
+
|
31
|
+
# Schedule of when to periodically poll from the url
|
32
|
+
# Format: A hash with
|
33
|
+
# + key: "cron" | "every" | "in" | "at"
|
34
|
+
# + value: string
|
35
|
+
# Examples:
|
36
|
+
# a) { "every" => "1h" }
|
37
|
+
# b) { "cron" => "* * * * * UTC" }
|
38
|
+
# See: rufus/scheduler for details about different schedule options and value string format
|
39
|
+
# See here for rate limits: https://developer.okta.com/docs/api/resources/system_log#rate-limits
|
40
|
+
config :schedule, :validate => :hash, :required => true
|
41
|
+
|
42
|
+
# The Okta host which you would like to use
|
43
|
+
# The system log path will be appended onto this host
|
44
|
+
# Ex: dev-instance.oktapreview.com
|
45
|
+
# Ex: org-name.okta.com
|
46
|
+
#
|
47
|
+
# Format: Hostname
|
48
|
+
config :hostname, :validate => :string
|
49
|
+
|
50
|
+
# The date and time after which to fetch events
|
51
|
+
# NOTE: By default the API will only fetch events seven days before time of the first call
|
52
|
+
# To get more data, please select the desired date to start fetching data
|
53
|
+
# Docs: https://developer.okta.com/docs/api/resources/system_log#request-parameters
|
54
|
+
# Okta log retention by default is 90 days, it is suggested to set the date accordingly
|
55
|
+
#
|
56
|
+
# Format: string with a RFC 3339 formatted date (e.g. 2016-10-09T22:25:06-07:00)
|
57
|
+
config :since, :validate => :string
|
58
|
+
|
59
|
+
# Set how many messages you want to pull with each request
|
60
|
+
# The default, `1000`, means to fetch 1000 events at a time.
|
61
|
+
#
|
62
|
+
# Format: Number between 1 and 1000
|
63
|
+
# Default: 1000
|
64
|
+
config :limit, :validate => :number, :default => 1000
|
65
|
+
|
66
|
+
# The free form filter to use to filter data to requirements.
|
67
|
+
# Docs: https://developer.okta.com/docs/api/resources/system_log#expression-filter
|
68
|
+
# The filter will be URL encoded by the plugin
|
69
|
+
# The plugin will not validate the filter.
|
70
|
+
# Use single quotes in the config file,
|
71
|
+
# e.g. 'published gt "2017-01-01T00:00:00.000Z"'
|
72
|
+
#
|
73
|
+
# Format: Plain text filter field.
|
74
|
+
config :filter, :validate => :string
|
75
|
+
|
76
|
+
# Filters the log events results by one or more exact keywords in a list
|
77
|
+
# Docs: https://developer.okta.com/docs/api/resources/system_log#keyword-filter
|
78
|
+
# Documentation bug: https://github.com/okta/okta.github.io/issues/2500
|
79
|
+
# The plugin will URL encode the list
|
80
|
+
# The query cannot have more than ten items
|
81
|
+
# Query items cannot have a space
|
82
|
+
# Query items cannot be longer than 40 chars
|
83
|
+
#
|
84
|
+
# Format: A list with the items to query on
|
85
|
+
# Ex. ["foo", "bar"]
|
86
|
+
# Ex. ["new", "york"]
|
87
|
+
config :q, :validate => :string, :list => true
|
88
|
+
|
89
|
+
# The file in which the auth_token for Okta will be contained.
|
90
|
+
# This will contain the auth_token which can have a lot access to your Okta instance.
|
91
|
+
# It cannot be stressed enough how important it is to protect this file.
|
92
|
+
# NOTE: This option is deprecated and will be removed in favor of the secrets store.
|
93
|
+
#
|
94
|
+
# Format: File path
|
95
|
+
config :auth_token_file, :validate => :path, :deprecated => true
|
96
|
+
|
97
|
+
# The auth token used to authenticate to Okta.
|
98
|
+
# NOTE: Avoid storing the auth_token directly in the config file.
|
99
|
+
# This method is provided solely to add the auth_token via secrets store.
|
100
|
+
# Docs: https://www.elastic.co/guide/en/logstash/current/keystore.html
|
101
|
+
# WARNING: This will contain the auth_token which can have a lot access to your Okta instance.
|
102
|
+
#
|
103
|
+
# Format: File path
|
104
|
+
config :auth_token_key, :validate => :password
|
105
|
+
|
106
|
+
# Path to the state file (keeps track of the current position
|
107
|
+
# of the API) that will be written to disk.
|
108
|
+
# The default will write state files to `<path.data>/plugins/inputs/okta_system_log`
|
109
|
+
# NOTE: it must be a file path and not a directory path
|
110
|
+
#
|
111
|
+
# Format: Filepath
|
112
|
+
config :state_file_path, :validate => :string
|
113
|
+
|
114
|
+
# Option to cause a fatal error if the state file can't update
|
115
|
+
# Normal operation will generate an error when state file update fails
|
116
|
+
# However, it will continue pull events from API
|
117
|
+
# This option will reverse that paradigm and exit if a failure occurs
|
118
|
+
#
|
119
|
+
# Format: Boolean
|
120
|
+
config :state_file_fatal_falure, :validate => :boolean, :default => false
|
121
|
+
|
122
|
+
# If you'd like to work with the request/response metadata.
|
123
|
+
# Set this value to the name of the field you'd like to store a nested
|
124
|
+
# hash of metadata.
|
125
|
+
config :metadata_target, :validate => :string, :default => '@metadata'
|
126
|
+
|
127
|
+
# Define the target field for placing the received data.
|
128
|
+
# If this setting is omitted
|
129
|
+
# the data will be stored at the root (top level) of the event.
|
130
|
+
#
|
131
|
+
# Format: String
|
132
|
+
config :target, :validate => :string
|
133
|
+
|
134
|
+
# The URL for the Okta instance to access
|
135
|
+
# NOTE: This is useful for an iPaaS instance
|
136
|
+
#
|
137
|
+
# Format: URI
|
138
|
+
config :custom_url, :validate => :uri, :required => false
|
139
|
+
|
140
|
+
# Custom authorization header to be added instead of default header
|
141
|
+
# This is useful for an iPaaS only
|
142
|
+
# Example: Basic dXNlcjpwYXNzd29yZA==
|
143
|
+
# This will be added to the authorization header accordingly
|
144
|
+
# Authorization: Basic dXNlcjpwYXNzd29yZA==
|
145
|
+
# NOTE: It is suggested to use the secrets store to store the header
|
146
|
+
# It is an error to set both this and the auth_token
|
147
|
+
#
|
148
|
+
# Format: string
|
149
|
+
config :custom_auth_header, :validate => :password, :required => false
|
150
|
+
|
151
|
+
# This option is obsoleted in favor of hostname or custom_url.
|
152
|
+
# THe URL for the Okta instance to access
|
153
|
+
#
|
154
|
+
# Format: URI
|
155
|
+
config :url, :validate => :uri,
|
156
|
+
:obsolete => "url is obsolete, please use hostname or custom_url instead"
|
157
|
+
|
158
|
+
# This option is obsolete
|
159
|
+
# The throttle value to use for noisy log lines (at the info level)
|
160
|
+
# Currently just one log statement (successful HTTP connects)
|
161
|
+
# The value is used to mod a counter, so set it appropriately for log levels
|
162
|
+
# NOTE: This value will be ignored when the log level is debug or trace
|
163
|
+
#
|
164
|
+
# Format: Integer
|
165
|
+
config :log_throttle, :validate => :number,
|
166
|
+
:obsolete => "Log throttling is longer required"
|
167
|
+
|
168
|
+
# This option is obsoleted in favor of limit.
|
169
|
+
# Set how many messages you want to pull with each request
|
170
|
+
#
|
171
|
+
# The default, `1000`, means to fetch 1000 events at a time.
|
172
|
+
# Any value less than 1 will fetch all possible events.
|
173
|
+
config :chunk_size, :validate => :number,
|
174
|
+
:obsolete => "chunk_size is obsolete, please use limit instead"
|
175
|
+
|
176
|
+
# This option is obsoleted in favor of since.
|
177
|
+
# The date and time after which to fetch events
|
178
|
+
#
|
179
|
+
# Format: string with a RFC 3339 formatted date
|
180
|
+
# Ex. 2016-10-09T22:25:06-07:00
|
181
|
+
config :start_date, :validate => :string,
|
182
|
+
:obsolete => "start_date is obsolete, please use since instead"
|
183
|
+
|
184
|
+
# This option is obsoleted in favor of auth_token_key.
|
185
|
+
# The auth token used to authenticate to Okta.
|
186
|
+
# WARNING: Avoid storing the auth_token directly in this file.
|
187
|
+
# This method is provided solely to add the auth_token via environment variable.
|
188
|
+
# This will contain the auth_token which can have a lot access to your Okta instance.
|
189
|
+
#
|
190
|
+
# Format: File path
|
191
|
+
config :auth_token_env, :validate => :string,
|
192
|
+
:obsolete => "auth_token_env is obsolete, please use auth_token_key instead"
|
193
|
+
|
194
|
+
# This option is obsoleted in favor of state_file_path.
|
195
|
+
# The base filename to store the pointer to the current location in the logs
|
196
|
+
# This file will be renamed with each new reference to limit loss of this data
|
197
|
+
# The location will need at least write and execute privs for the logstash user
|
198
|
+
#
|
199
|
+
# Format: Filepath
|
200
|
+
# This is not the filepath of the file itself, but to generate the file.
|
201
|
+
config :state_file_base, :validate => :string,
|
202
|
+
:obsolete => "state_file_base is obsolete, use state_file_path instead"
|
203
|
+
|
204
|
+
public
|
205
|
+
Schedule_types = %w(cron every at in)
|
206
|
+
def register
|
207
|
+
|
208
|
+
@trace_log_method = detect_trace_log_method()
|
209
|
+
|
210
|
+
if (@limit < 1 or @limit > 1000 or !@limit.integer?)
|
211
|
+
@logger.fatal("Invalid `limit` value: #{@limit}. " +
|
212
|
+
"Config limit should be an integer between 1 and 1000.")
|
213
|
+
raise LogStash::ConfigurationError, "Invalid `limit` value: #{@limit}. " +
|
214
|
+
"Config limit should be an integer between 1 and 1000."
|
215
|
+
end
|
216
|
+
|
217
|
+
unless (@hostname.nil? ^ @custom_url.nil?)
|
218
|
+
@logger.fatal("Please configure the hostname " +
|
219
|
+
"or the custom_url to use.")
|
220
|
+
raise LogStash::ConfigurationError, "Please configure the hostname " +
|
221
|
+
"or the custom_url to use."
|
222
|
+
end
|
223
|
+
|
224
|
+
if (@hostname)
|
225
|
+
begin
|
226
|
+
url_obj = URI::HTTPS.build(
|
227
|
+
:host => @hostname,
|
228
|
+
:path => OKTA_EVENT_LOG_PATH)
|
229
|
+
rescue URI::InvalidComponentError
|
230
|
+
@logger.fatal("Invalid hostname, " +
|
231
|
+
"could not configure URL. hostname = #{@hostname}.")
|
232
|
+
raise LogStash::ConfigurationError, "Invalid hostname, " +
|
233
|
+
"could not configure URL. hostname = #{@hostname}."
|
234
|
+
end
|
235
|
+
end
|
236
|
+
if (@custom_url)
|
237
|
+
begin
|
238
|
+
# The URL comes in as a SafeURI object which doesn't get parsed nicely.
|
239
|
+
# Cast to string helps with that
|
240
|
+
# Really only happens during tests and not during normal operations
|
241
|
+
url_obj = URI.parse(@custom_url.to_s)
|
242
|
+
rescue URI::InvalidURIError
|
243
|
+
@logger.fatal("Invalid custom_url, " +
|
244
|
+
"please verify the URL. custom_url = #{@custom_url}")
|
245
|
+
raise LogStash::ConfigurationError, "Invalid custom_url, " +
|
246
|
+
"please verify the URL. custom_url = #{@custom_url}"
|
247
|
+
end
|
248
|
+
|
249
|
+
end
|
250
|
+
|
251
|
+
if (@since)
|
252
|
+
begin
|
253
|
+
@since = DateTime.parse(@since).rfc3339(0)
|
254
|
+
rescue ArgumentError => e
|
255
|
+
@logger.fatal("since must be of the form " +
|
256
|
+
"yyyy-MM-dd’‘T’‘HH:mm:ssZZ, e.g. 2013-01-01T12:00:00-07:00.")
|
257
|
+
raise LogStash::ConfigurationError, "since must be of the form " +
|
258
|
+
"yyyy-MM-dd’‘T’‘HH:mm:ssZZ, e.g. 2013-01-01T12:00:00-07:00."
|
259
|
+
end
|
260
|
+
end
|
261
|
+
|
262
|
+
if (@q)
|
263
|
+
if (@q.length > 10)
|
264
|
+
msg = "q cannot have more than 10 terms. " +
|
265
|
+
"Use the `filter` to limit the query."
|
266
|
+
@logger.fatal(msg)
|
267
|
+
raise LogStash::ConfigurationError, msg
|
268
|
+
end
|
269
|
+
space_errors = []
|
270
|
+
length_errors = []
|
271
|
+
for item in @q
|
272
|
+
if (item.include? " ")
|
273
|
+
space_errors.push(item)
|
274
|
+
elsif (item.length > 40)
|
275
|
+
length_errors.push(item)
|
276
|
+
end
|
277
|
+
end
|
278
|
+
if (space_errors.length > 0)
|
279
|
+
@logger.fatal("q items cannot contain a space. " +
|
280
|
+
"Items: #{space_errors.join(" ")}.")
|
281
|
+
raise LogStash::ConfigurationError, "q items cannot contain a space. " +
|
282
|
+
"Items: #{space_errors.join(" ")}."
|
283
|
+
end
|
284
|
+
if (length_errors.length > 0)
|
285
|
+
msg = "q items cannot contain be longer than 40 characters. " +
|
286
|
+
"Items: #{length_errors.join(" ")}."
|
287
|
+
@logger.fatal(msg)
|
288
|
+
raise LogStash::ConfigurationError, msg
|
289
|
+
end
|
290
|
+
end
|
291
|
+
|
292
|
+
if (@custom_auth_header)
|
293
|
+
if (@auth_token_key or @auth_token_file)
|
294
|
+
@logger.fatal("If custom_auth_header is used " +
|
295
|
+
"you cannot set auth_token_key or auth_token_file")
|
296
|
+
raise LogStash::ConfigurationError, "If custom_auth_header is used " +
|
297
|
+
"you cannot set auth_token_key or auth_token_file"
|
298
|
+
end
|
299
|
+
else
|
300
|
+
unless (@auth_token_key.nil? ^ @auth_token_file.nil?)
|
301
|
+
auth_message = "Set only the auth_token_key or auth_token_file."
|
302
|
+
@logger.fatal(auth_message)
|
303
|
+
raise LogStash::ConfigurationError, auth_message
|
304
|
+
end
|
305
|
+
|
306
|
+
if (@auth_token_file)
|
307
|
+
begin
|
308
|
+
auth_file_size = File.size(@auth_token_file)
|
309
|
+
if (auth_file_size > MAX_MMAP_FILE_SIZE)
|
310
|
+
@logger.fatal("The auth_token file " +
|
311
|
+
"is too large to map")
|
312
|
+
raise LogStash::ConfigurationError, "The auth_token file " +
|
313
|
+
"is too large to map"
|
314
|
+
else
|
315
|
+
@auth_token = LogStash::Util::Password.new(
|
316
|
+
File.read(@auth_token_file, auth_file_size).chomp)
|
317
|
+
@logger.info("Successfully opened auth_token_file",
|
318
|
+
:auth_token_file => @auth_token_file)
|
319
|
+
end
|
320
|
+
rescue LogStash::ConfigurationError
|
321
|
+
raise
|
322
|
+
rescue => e
|
323
|
+
# This is a bug in older versions of logstash, confirmed here:
|
324
|
+
# https://discuss.elastic.co/t/logstash-configurationerror-but-configurationok-logstash-2-4-0/65727/2
|
325
|
+
@logger.fatal(e.inspect)
|
326
|
+
raise LogStash::ConfigurationError, e.inspect
|
327
|
+
end
|
328
|
+
else
|
329
|
+
@auth_token = @auth_token_key
|
330
|
+
end
|
331
|
+
|
332
|
+
if (@auth_token)
|
333
|
+
begin
|
334
|
+
response = client.get(
|
335
|
+
url_obj.to_s+AUTH_TEST_URL,
|
336
|
+
headers: {'Authorization' => "SSWS #{@auth_token.value}"},
|
337
|
+
request_timeout: 2,
|
338
|
+
connect_timeout: 2,
|
339
|
+
socket_timeout: 2)
|
340
|
+
if (response.code == HTTP_UNAUTHORIZED_401)
|
341
|
+
@logger.fatal("The auth_code provided " +
|
342
|
+
"was not valid, please check the input")
|
343
|
+
raise LogStash::ConfigurationError, "The auth_code provided " +
|
344
|
+
"was not valid, please check the input"
|
345
|
+
end
|
346
|
+
rescue LogStash::ConfigurationError
|
347
|
+
raise
|
348
|
+
rescue Manticore::ManticoreException => m
|
349
|
+
msg = "There was a connection error verifying the auth_token, " +
|
350
|
+
"continuing without verification"
|
351
|
+
@logger.error(msg, :client_error => m.inspect)
|
352
|
+
rescue => e
|
353
|
+
@logger.fatal("Could not verify auth_token, " +
|
354
|
+
"error: #{e.inspect}")
|
355
|
+
raise LogStash::ConfigurationError, "Could not verify auth_token, " +
|
356
|
+
"error: #{e.inspect}"
|
357
|
+
end
|
358
|
+
end
|
359
|
+
end
|
360
|
+
|
361
|
+
params_event = Hash.new
|
362
|
+
params_event[:limit] = @limit if @limit > 0
|
363
|
+
params_event[:since] = @since if @since
|
364
|
+
params_event[:filter] = @filter if @filter
|
365
|
+
params_event[:q] = @q.join(" ") if @q
|
366
|
+
url_obj.query = URI.encode_www_form(params_event)
|
367
|
+
|
368
|
+
|
369
|
+
# This check is Logstash 5 specific. If the class does not exist, and it
|
370
|
+
# won't in older versions of Logstash, then we need to set it to nil.
|
371
|
+
settings = defined?(LogStash::SETTINGS) ? LogStash::SETTINGS : nil
|
372
|
+
|
373
|
+
if (@state_file_path.nil?)
|
374
|
+
begin
|
375
|
+
base_state_file_path = build_state_file_base(settings)
|
376
|
+
rescue LogStash::ConfigurationError
|
377
|
+
raise
|
378
|
+
rescue => e
|
379
|
+
@logger.fatal("Could not set up state file", :exception => e.inspect)
|
380
|
+
raise LogStash::ConfigurationError, e.inspect
|
381
|
+
end
|
382
|
+
file_prefix = "#{@hostname}_system_log_state"
|
383
|
+
case Dir[File.join(base_state_file_path,"#{file_prefix}*")].size
|
384
|
+
when 0
|
385
|
+
# Build a file name randomly
|
386
|
+
@state_file_path = File.join(
|
387
|
+
base_state_file_path,
|
388
|
+
rand_filename("#{file_prefix}"))
|
389
|
+
@logger.info('No state_file_path set, generating one based on the ' +
|
390
|
+
'"hostname" setting',
|
391
|
+
:state_file_path => @state_file_path.to_s,
|
392
|
+
:hostname => @hostname)
|
393
|
+
when 1
|
394
|
+
@state_file_path = Dir[File.join(base_state_file_path,"#{file_prefix}*")].last
|
395
|
+
@logger.info('Found state file based on the "hostname" setting',
|
396
|
+
:state_file_path => @state_file_path.to_s,
|
397
|
+
:hostname => @hostname)
|
398
|
+
else
|
399
|
+
msg = "There is more than one file" +
|
400
|
+
"in the state file base dir (possibly an error?)." +
|
401
|
+
"Please keep the latest/most relevant file.\n" +
|
402
|
+
"Directory: #{base_state_file_path}"
|
403
|
+
@logger.fatal(msg)
|
404
|
+
raise LogStash::ConfigurationError, msg
|
405
|
+
end
|
406
|
+
|
407
|
+
else
|
408
|
+
@state_file_path = File.path(@state_file_path)
|
409
|
+
if (File.directory?(@state_file_path))
|
410
|
+
@logger.fatal("The `state_file_path` argument must point to a file, " +
|
411
|
+
"received a directory: #{@state_file_path}")
|
412
|
+
raise LogStash::ConfigurationError, "The `state_file_path` argument " +
|
413
|
+
"must point to a file, received a directory: #{@state_file_path}"
|
414
|
+
end
|
415
|
+
end
|
416
|
+
begin
|
417
|
+
@state_file_stat = detect_state_file_mode(@state_file_path)
|
418
|
+
rescue => e
|
419
|
+
@logger.fatal("Error getting state file info. " +
|
420
|
+
"Exception: #{e.inspect}")
|
421
|
+
raise LogStash::ConfigurationError, "Error getting state file info. " +
|
422
|
+
"Exception: #{e.inspect}"
|
423
|
+
end
|
424
|
+
|
425
|
+
@write_method = detect_write_method(@state_file_path)
|
426
|
+
|
427
|
+
begin
|
428
|
+
state_file_size = File.size(@state_file_path)
|
429
|
+
if (state_file_size > 0)
|
430
|
+
if (state_file_size > MAX_MMAP_FILE_SIZE)
|
431
|
+
@logger.fatal("The state file: " +
|
432
|
+
"#{@state_file_path} is too large to map")
|
433
|
+
raise LogStash::ConfigurationError, "The state file: " +
|
434
|
+
"#{@state_file_path} is too large to map"
|
435
|
+
end
|
436
|
+
state_url = File.read(@state_file_path, state_file_size).chomp
|
437
|
+
if (state_url.length > 0)
|
438
|
+
state_url_obj = URI.parse(state_url)
|
439
|
+
@logger.info(
|
440
|
+
"Successfully opened state_file_path",
|
441
|
+
:state_url => state_url_obj.to_s,
|
442
|
+
:state_file_path => @state_file_path)
|
443
|
+
if (@custom_url)
|
444
|
+
unless (url_obj.hostname == state_url_obj.hostname)
|
445
|
+
@logger.fatal("The state URL " +
|
446
|
+
"does not match configured URL. ",
|
447
|
+
:configured_url => url_obj.to_s,
|
448
|
+
:state_url => state_url_obj.to_s)
|
449
|
+
raise LogStash::ConfigurationError, "The state URL " +
|
450
|
+
"does not match configured URL. " +
|
451
|
+
"Configured url: #{url_obj.to_s}, state_url: #{state_url_obj.to_s}"
|
452
|
+
end
|
453
|
+
else
|
454
|
+
unless (state_url_obj.hostname == @hostname and
|
455
|
+
state_url_obj.path == OKTA_EVENT_LOG_PATH)
|
456
|
+
@logger.fatal("The state URL " +
|
457
|
+
"does not match configured URL. " +
|
458
|
+
:configured_url => url_obj.to_s,
|
459
|
+
:state_url => state_url_obj.to_s)
|
460
|
+
raise LogStash::ConfigurationError, "The state URL " +
|
461
|
+
"does not match configured URL. " +
|
462
|
+
"Configured url: #{url_obj.to_s}, state_url: #{state_url_obj.to_s}"
|
463
|
+
end
|
464
|
+
end
|
465
|
+
url_obj = state_url_obj
|
466
|
+
end
|
467
|
+
end
|
468
|
+
rescue LogStash::ConfigurationError
|
469
|
+
raise
|
470
|
+
rescue URI::InvalidURIError => e
|
471
|
+
@logger.fatal("Could not parse url " +
|
472
|
+
"from state_file_path. URL: #{state_url}. Error: #{e.inspect}.")
|
473
|
+
raise LogStash::ConfigurationError, "Could not parse url " +
|
474
|
+
"from state_file_path. URL: #{state_url}. Error: #{e.inspect}."
|
475
|
+
rescue => e
|
476
|
+
@logger.fatal(e.inspect)
|
477
|
+
raise LogStash::ConfigurationError, e.inspect
|
478
|
+
end
|
479
|
+
|
480
|
+
@url = url_obj.to_s
|
481
|
+
|
482
|
+
@logger.info("Created initial URL to call", :url => @url)
|
483
|
+
@host = Socket.gethostname.force_encoding(Encoding::UTF_8)
|
484
|
+
|
485
|
+
if (@metadata_target)
|
486
|
+
@metadata_function = method(:apply_metadata)
|
487
|
+
else
|
488
|
+
@metadata_function = method(:noop)
|
489
|
+
end
|
490
|
+
|
491
|
+
if (@state_file_fatal_falure)
|
492
|
+
@state_file_failure_function = method(:fatal_state_file)
|
493
|
+
else
|
494
|
+
@state_file_failure_function = method(:error_state_file)
|
495
|
+
end
|
496
|
+
|
497
|
+
end # def register
|
498
|
+
|
499
|
+
|
500
|
+
def run(queue)
|
501
|
+
|
502
|
+
msg_invalid_schedule = "Invalid config. schedule hash must contain " +
|
503
|
+
"exactly one of the following keys - cron, at, every or in"
|
504
|
+
|
505
|
+
@logger.fatal(msg_invalid_schedule) if @schedule.keys.length !=1
|
506
|
+
raise LogStash::ConfigurationError, msg_invalid_schedule if @schedule.keys.length !=1
|
507
|
+
schedule_type = @schedule.keys.first
|
508
|
+
schedule_value = @schedule[schedule_type]
|
509
|
+
@logger.fatal(msg_invalid_schedule) unless Schedule_types.include?(schedule_type)
|
510
|
+
raise LogStash::ConfigurationError, msg_invalid_schedule unless Schedule_types.include?(schedule_type)
|
511
|
+
@scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
|
512
|
+
|
513
|
+
#as of v3.0.9, :first_in => :now doesn't work. Use the following workaround instead
|
514
|
+
opts = schedule_type == "every" ? { :first_in => 0.01 } : {}
|
515
|
+
opts[:overlap] = false;
|
516
|
+
|
517
|
+
@logger.info("Starting event stream with the configured URL.",
|
518
|
+
:url => @url)
|
519
|
+
@scheduler.send(schedule_type, schedule_value, opts) { run_once(queue) }
|
520
|
+
|
521
|
+
@scheduler.join
|
522
|
+
|
523
|
+
end # def run
|
524
|
+
|
525
|
+
private
|
526
|
+
def run_once(queue)
|
527
|
+
|
528
|
+
request_async(queue)
|
529
|
+
|
530
|
+
end # def run_once
|
531
|
+
|
532
|
+
private
|
533
|
+
def request_async(queue)
|
534
|
+
|
535
|
+
@continue = true
|
536
|
+
|
537
|
+
header_hash = {
|
538
|
+
"Accept" => "application/json",
|
539
|
+
"Content-Type" => "application/json"
|
540
|
+
}
|
541
|
+
|
542
|
+
if (@auth_token)
|
543
|
+
header_hash["Authorization"] = "SSWS #{@auth_token.value}"
|
544
|
+
elsif (@custom_auth_header)
|
545
|
+
header_hash["Authorization"] = @custom_auth_header.value
|
546
|
+
end
|
547
|
+
|
548
|
+
begin
|
549
|
+
while @continue and !stop?
|
550
|
+
@logger.debug("Calling URL",
|
551
|
+
:url => @url,
|
552
|
+
:token_set => !@auth_token.nil?)
|
553
|
+
|
554
|
+
started = Time.now
|
555
|
+
|
556
|
+
client.async.get(@url.to_s, headers: header_hash).
|
557
|
+
on_success { |response| handle_success(queue, response, @url, Time.now - started) }.
|
558
|
+
on_failure { |exception| handle_failure(queue, exception, @url, Time.now - started) }
|
559
|
+
|
560
|
+
client.execute!
|
561
|
+
end
|
562
|
+
rescue => e
|
563
|
+
@logger.fatal(e.inspect)
|
564
|
+
raise e
|
565
|
+
ensure
|
566
|
+
update_state_file()
|
567
|
+
end
|
568
|
+
end # def request_async
|
569
|
+
|
570
|
+
private
|
571
|
+
def update_state_file()
|
572
|
+
for i in 1..3
|
573
|
+
@trace_log_method.call("Starting state file update",
|
574
|
+
:state_file_path => @state_file_path,
|
575
|
+
:url => @url,
|
576
|
+
:attempt_num => i)
|
577
|
+
|
578
|
+
begin
|
579
|
+
@write_method.call(@state_file_path, @url)
|
580
|
+
rescue => e
|
581
|
+
@logger.warn("Could not save state, retrying",
|
582
|
+
:state_file_path => @state_file_path,
|
583
|
+
:url => @url,
|
584
|
+
:exception => e.inspect)
|
585
|
+
|
586
|
+
sleep SLEEP_STATE_FILE_RETRY
|
587
|
+
next
|
588
|
+
end
|
589
|
+
@logger.debug("Successfully wrote the state file",
|
590
|
+
:state_file_path => @state_file_path,
|
591
|
+
:url => @url,
|
592
|
+
:attempts => i)
|
593
|
+
# Break out of the loop once you're done
|
594
|
+
return nil
|
595
|
+
end
|
596
|
+
@state_file_failure_function.call()
|
597
|
+
end # def update_state_file
|
598
|
+
|
599
|
+
  private
  def handle_success(queue, response, requested_url, exec_time)

    @continue = false

    case response.code
    when HTTP_OK_200
      ## Some benchmarking code for the reasoning behind the methods used below.
      ## They aren't great benchmarks, but basic ones that proved a point.
      ## If anyone has better/contradicting results, let me know.
      #
      ## Some system info on which these tests were run:
      #$ cat /proc/cpuinfo | grep -i "model name" | uniq -c
      #      4 model name : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
      #
      #$ free -m
      #              total        used        free      shared  buff/cache   available
      #Mem:           1984         925         372           8         686         833
      #Swap:          2047           0        2047
      #
      #str = '<https://dev-instance.oktapreview.com/api/v1/events?after=tevHLxinRbATJeKgKjgXGXy0Q1479278142000&limit=1000>; rel="next"'
      #require "benchmark"
      #
      #n = 50000000
      #
      #Benchmark.bm do |x|
      #  x.report { n.times { str.include?('rel="next"') } }  # (2) 23.008853sec @50000000 times
      #  x.report { n.times { str.end_with?('rel="next"') } } # (1) 16.894623sec @50000000 times
      #  x.report { n.times { str =~ /rel="next"$/ } }        # (3) 30.757554sec @50000000 times
      #end
      #
      #Benchmark.bm do |x|
      #  x.report { n.times { str.match(/<([^>]+)>/).captures[0] } } # (2) 262.166085sec @50000000 times
      #  x.report { n.times { str.split(';')[0][1...-1] } }          # (1) 31.673270sec @50000000 times
      #end

      # Store the next URL to call from the header
      next_url = nil
      Array(response.headers["link"]).each do |link_header|
        if link_header.end_with?('rel="next"')
          next_url = link_header.split(';')[0][1...-1]
        end
      end

      if (response.body.length > 0)
        @codec.decode(response.body) do |decoded|
          @logger.debug("Pushing event to queue")
          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
          @metadata_function.call(event, requested_url, response, exec_time)
          decorate(event)
          queue << event
        end
      else
        @codec.decode("{}") do |decoded|
          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
          @metadata_function.call(event, requested_url, response, exec_time)
          decorate(event)
          queue << event
        end
      end

      if (!next_url.nil? and next_url != @url)
        @url = next_url
        @continue = true
        @logger.debug("Continue status", :continue => @continue)
        # Add a sleep since we're going to hit the API again
        sleep SLEEP_API_RATE_LIMIT
      end

      @trace_log_method.call("Response body", :body => response.body)

    when HTTP_UNAUTHORIZED_401
      @codec.decode(response.body) do |decoded|
        event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
        @metadata_function.call(event, requested_url, response, exec_time)
        event.set("okta_response_error", {
          "okta_plugin_status" => "Auth_token supplied is not valid, " +
            "validate the auth_token and update the plugin config.",
          "http_code" => 401
        })
        event.tag("_okta_response_error")
        decorate(event)
        queue << event
      end

      @logger.error("Authentication required, check auth_token",
        :code => response.code,
        :headers => response.headers)
      @trace_log_method.call("Authentication failed body", :body => response.body)

    when HTTP_BAD_REQUEST_400
      if (response.body.include?("E0000031"))
        @codec.decode(response.body) do |decoded|
          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
          @metadata_function.call(event, requested_url, response, exec_time)
          event.set("okta_response_error", {
            "okta_plugin_status" => "Filter string was not valid.",
            "http_code" => 400
          })
          event.tag("_okta_response_error")
          decorate(event)
          queue << event
        end

        @logger.error("Filter string was not valid",
          :response_code => response.code,
          :okta_error => "E0000031",
          :filter_string => @filter)

        @logger.debug("Filter string error response",
          :response_body => response.body,
          :response_headers => response.headers)

      elsif (response.body.include?("E0000030"))

        @codec.decode(response.body) do |decoded|
          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
          @metadata_function.call(event, requested_url, response, exec_time)
          event.set("okta_response_error", {
            "okta_plugin_status" => "The since date was not valid.",
            "http_code" => 400
          })
          event.tag("_okta_response_error")
          decorate(event)
          queue << event
        end

        @logger.error("Date was not formatted correctly",
          :response_code => response.code,
          :okta_error => "E0000030",
          :date_string => @since)

        @logger.debug("Start date error response",
          :response_body => response.body,
          :response_headers => response.headers)

      ## If the Okta error code does not match known codes,
      ## process it as a generic error.
      else
        handle_unknown_okta_code(queue, response, requested_url, exec_time)
      end
    else
      handle_unknown_http_code(queue, response, requested_url, exec_time)
    end

  end # def handle_success

  private
  def handle_unknown_okta_code(queue, response, requested_url, exec_time)
    @codec.decode(response.body) do |decoded|
      event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
      @metadata_function.call(event, requested_url, response, exec_time)
      event.set("okta_response_error", {
        "okta_plugin_status" => "Unknown error code from Okta",
        "http_code" => response.code
      })
      event.tag("_okta_response_error")
      decorate(event)
      queue << event
    end

    @logger.error("Okta API Error",
      :http_code => response.code,
      :body => response.body,
      :headers => response.headers)

  end # def handle_unknown_okta_code

  private
  def handle_unknown_http_code(queue, response, requested_url, exec_time)
    @codec.decode(response.body) do |decoded|
      event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
      @metadata_function.call(event, requested_url, response, exec_time)

      event.set("http_response_error", {
        "okta_plugin_status" => "Unknown HTTP code, review HTTP errors",
        "http_code" => response.code,
        "http_headers" => response.headers
      })
      event.tag("_http_response_error")
      decorate(event)
      queue << event
    end

    @logger.error("HTTP Error",
      :http_code => response.code,
      :body => response.body,
      :headers => response.headers)
  end # def handle_unknown_http_code

  private
  def handle_failure(queue, exception, requested_url, exec_time)

    @continue = false
    @logger.error("Client Connection Error",
      :exception => exception.inspect)

    event = LogStash::Event.new
    @metadata_function.call(event, requested_url, nil, exec_time)
    event.set("http_request_error", {
      "okta_plugin_status" => "Client Connection Error",
      "connect_error" => exception.message,
      "backtrace" => exception.backtrace
    })
    event.tag("_http_request_error")
    decorate(event)
    queue << event

  end # def handle_failure

  private
  def apply_metadata(event, requested_url, response=nil, exec_time=nil)

    m = {
      "host" => @host,
      "url" => requested_url
    }

    if exec_time
      m["runtime_seconds"] = exec_time.round(3)
    end

    if response
      m["code"] = response.code
      m["response_headers"] = response.headers
      m["response_message"] = response.message
      m["retry_count"] = response.times_retried
    end

    event.set(@metadata_target, m)

  end

  # Dummy function to handle noops
  private
  def noop(*args)
    return
  end

  private
  def fatal_state_file()
    @logger.fatal("Unable to save state file after retrying. Exiting...",
      :url => @url,
      :state_file_path => @state_file_path)

    @logger.fatal("Unable to save state_file_path, " +
      "#{@state_file_path} after retrying.")
    raise LogStash::EnvironmentError, "Unable to save state_file_path, " +
      "#{@state_file_path} after retrying."
  end

  private
  def error_state_file()
    @logger.error("Unable to save state_file_path after retrying three times",
      :url => @url,
      :state_file_path => @state_file_path)
  end

  # based on code from logstash-input-file
  private
  def atomic_write(path, content)
    write_atomically(path) do |io|
      io.write("#{content}\n")
    end
  end

  private
  def non_atomic_write(path, content)
    IO.open(IO.sysopen(path, "w+")) do |io|
      io.write("#{content}\n")
    end
  end


  # Write to a file atomically. Useful for situations where you don't
  # want other processes or threads to see half-written files.
  #
  #   write_atomically('important.file') do |file|
  #     file.write('hello')
  #   end
  private
  def write_atomically(file_name)

    # Create a temporary file with identical permissions
    begin
      temp_file = File.new(rand_filename(file_name), "w", @state_file_stat.mode)
      temp_file.binmode
      return_val = yield temp_file
    ensure
      temp_file.close unless temp_file.nil?
    end

    # Overwrite the original file with the temp file
    File.rename(temp_file.path, file_name)

    # Unable to get permissions of the original file => return
    return return_val if @state_file_mode.nil?

    # Set the correct uid/gid on the new file
    File.chown(@state_file_stat.uid, @state_file_stat.gid, file_name) unless @state_file_stat.nil?

    return return_val
  end

  private
  def rand_filename(prefix) #:nodoc:
    [ prefix, Thread.current.object_id, Process.pid, rand(1000000) ].join('.')
  end

  ## Not used -- but keeping it in case I need to use it at some point
  ## Private utility method.
  #private
  #def probe_stat_in(dir) #:nodoc:
  #  begin
  #    basename = rand_filename(".permissions_check")
  #    file_name = File.join(dir, basename)
  #    #FileUtils.touch(file_name)
  #    # 'touch' a file to keep the conditional from happening later
  #    File.open(file_name, "w") {}
  #    File.stat(file_name)
  #  rescue
  #    # ...
  #  ensure
  #    File.delete(file_name) if File.exist?(file_name)
  #  end
  #end

  private
  def build_state_file_base(settings) #:nodoc:
    if (settings.nil?)
      @logger.warn("Attempting to use LOGSTASH_HOME. Note that this method is deprecated. " \
        "Consider upgrading or using the state_file_path config option instead.")
      # This section is going to be deprecated eventually, as path.data will be
      # the default, not an environment variable (SINCEDB_DIR or LOGSTASH_HOME)
      # NOTE: I don't have an answer for this right now, but this raise needs to be moved to `register`
      if ENV["LOGSTASH_HOME"].nil?
        @logger.error("No settings or LOGSTASH_HOME environment variable set, I don't know where " +
          "to keep track of the files I'm watching. " +
          "Set state_file_path in your Logstash config for this input with " +
          "state_file_path '#{@state_file_path.inspect}'")
        raise LogStash::ConfigurationError, 'The "state_file_path" setting ' +
          'was not given and the environment variable "LOGSTASH_HOME" ' +
          'is not set, so we cannot build a file path for the state_file_path.'
      end
      logstash_data_path = File.path(ENV["LOGSTASH_HOME"])
    else
      logstash_data_path = settings.get_value("path.data")
    end
    File.join(logstash_data_path, "plugins", "inputs", "okta_system_log").tap do |path|
      # Ensure that the filepath exists before writing, since it's deeply nested.
      nested_dir_create(path)
    end
  end

  private
  def nested_dir_create(path) # :nodoc:
    dirs = []
    until File.directory?(path)
      dirs.push path
      path = File.dirname(path)
    end

    dirs.reverse_each do |dir|
      Dir.mkdir(dir)
    end
  end

  private
  def log_trace(message, vars = {})
    @logger.trace(message, vars)
  end

  private
  def log_debug(message, vars = {})
    @logger.debug(message, vars)
  end

  private
  def detect_trace_log_method() #:nodoc:
    begin
      if (@logger.trace?)
        return method(:log_trace)
      end
    rescue NoMethodError
      @logger.info("Using debug instead of trace due to lack of support " +
        "in this version.")
      return method(:log_debug)
    end
    return method(:log_trace)
  end

  private
  def is_defined(str) #:nodoc:
    return !(str.nil? or str.length == 0)
  end

  def detect_write_method(path)
    if (LogStash::Environment.windows? ||
        File.chardev?(path) ||
        File.blockdev?(path) ||
        File.socket?(path))
      @logger.info("State file cannot be updated using an atomic write, " +
        "using non-atomic write", :state_file_path => path)
      return method(:non_atomic_write)
    else
      return method(:atomic_write)
    end
  end

  def detect_state_file_mode(path)
    if (File.exist?(path))
      old_stat = File.stat(path)
    else
      # We need to create a file anyway, so check it with the file created.
      # # If not possible, probe which are the default permissions in the
      # # destination directory.
      # old_stat = probe_stat_in(File.dirname(@state_file_path))

      # 'touch' a file
      File.open(path, "w") {}
      old_stat = File.stat(path)
    end

    return old_stat
  end

  public
  def stop
    # Nothing else to do in this case, so it is not necessary to define stop.
    # Examples of common "stop" tasks:
    #  * close sockets (unblocking blocking reads/accepts)
    #  * cleanup temporary files
    #  * terminate spawned threads
    begin
      @scheduler.stop
    rescue NoMethodError => e
      unless (e.message == "undefined method `stop' for nil:NilClass")
        raise
      end
    rescue => e
      @logger.warn("Undefined error", :exception => e.inspect)
      raise
    ensure
      if (is_defined(@url))
        update_state_file()
      end
    end
  end # def stop
end # class LogStash::Inputs::OktaSystemLog
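The pagination in `handle_success` hinges on the HTTP `Link` response header: the entry ending in `rel="next"` carries the next page URL between angle brackets, and the benchmarks in the source favor `end_with?` plus `split` over regex matching. A minimal standalone sketch of that same extraction (the header values below are made-up examples, not real Okta responses):

```ruby
# Extract the "next" page URL from an array of Link header values,
# using the end_with?/split approach benchmarked in handle_success.
def next_link(link_headers)
  next_url = nil
  Array(link_headers).each do |link_header|
    if link_header.end_with?('rel="next"')
      # '<https://host/path>; rel="next"' -> take the part before ';',
      # then strip the surrounding angle brackets with [1...-1]
      next_url = link_header.split(';')[0][1...-1]
    end
  end
  next_url
end

headers = [
  '<https://example.oktapreview.com/api/v1/events?after=abc&limit=1000>; rel="self"',
  '<https://example.oktapreview.com/api/v1/events?after=xyz&limit=1000>; rel="next"'
]
puts next_link(headers)
```

When no `rel="next"` entry is present, `next_link` returns `nil`, which is the condition the plugin uses to stop paging.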