logstash-input-okta_enterprise 0.1.0
- checksums.yaml +7 -0
- data/CHANGELOG.md +2 -0
- data/CONTRIBUTORS +10 -0
- data/DEVELOPER.md +2 -0
- data/Gemfile +3 -0
- data/LICENSE +11 -0
- data/README.md +86 -0
- data/lib/logstash/inputs/okta_enterprise.rb +619 -0
- data/logstash-input-okta_enterprise.gemspec +33 -0
- data/spec/inputs/okta_enterprise_spec.rb +498 -0
- metadata +207 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA1:
+  metadata.gz: 8d9e8104fd93142a38dda0c356c17c2683f7c786
+  data.tar.gz: 62d0cf3957e8b6e8402b2b47464aae208a159501
+SHA512:
+  metadata.gz: 6ec0cd5914f7af3ebb110ea5890caef8582575521a9c29bd77f3ec49a38c97d344a31d28a602c4cd6ad9d1e40915a6557a8d2f4e5ae413596eee4c8cadc49492
+  data.tar.gz: 1840ab41b09a997ea52bb6ffd503f94990f5066a5ddfb66c73ca6a2c8e36b2052685689953a73ba76d9cf086af93249f56a88bfa6ba2d88955f96415d48d8a2c
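The checksums above are the SHA1 and SHA512 digests of the gem's internal `metadata.gz` and `data.tar.gz` archives. A minimal sketch of computing such digests with Ruby's standard library (the file path in the usage comment is hypothetical):

```ruby
require "digest"

# Compute the SHA1 and SHA512 hex digests of a file, the same digests
# rubygems.org records in checksums.yaml for a gem's internal archives.
def file_digests(path)
  data = File.binread(path)
  {
    :sha1   => Digest::SHA1.hexdigest(data),
    :sha512 => Digest::SHA512.hexdigest(data)
  }
end

# Example usage (path is hypothetical):
#   digests = file_digests("data.tar.gz")
#   digests[:sha1] can then be compared against the value in checksums.yaml
```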
data/CHANGELOG.md
ADDED
data/CONTRIBUTORS
ADDED
@@ -0,0 +1,10 @@
+The following is a list of people who have contributed ideas, code, bug
+reports, or in general have helped logstash along its way.
+
+Contributors:
+* - Security Risk Advisors
+
+Note: If you've sent us patches, bug reports, or otherwise contributed to
+Logstash, and you aren't on the list above and want to be, please let us know
+and we'll make sure you're here. Contributions from folks like you are what make
+open source awesome.
data/DEVELOPER.md
ADDED
data/Gemfile
ADDED
data/LICENSE
ADDED
@@ -0,0 +1,11 @@
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
data/README.md
ADDED
@@ -0,0 +1,86 @@
+# Logstash Plugin
+
+This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+
+## Documentation
+
+Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will be converted first into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+
+- For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+- For more asciidoc formatting tips, see the excellent reference at https://github.com/elastic/docs#asciidoc-guide
+
+## Need Help?
+
+Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+## Developing
+
+### 1. Plugin Development and Testing
+
+#### Code
+- To get started, you'll need JRuby with the Bundler gem installed.
+
+- Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+- Install dependencies
+```sh
+bundle install
+```
+
+#### Test
+
+- Update your dependencies
+
+```sh
+bundle install
+```
+
+- Run tests
+
+```sh
+bundle exec rspec
+```
+
+### 2. Running your unpublished Plugin in Logstash
+
+#### 2.1 Run in a local Logstash clone
+
+- Edit Logstash `Gemfile` and add the local plugin path, for example:
+```ruby
+gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+```
+- Install plugin
+```sh
+bin/logstash-plugin install --no-verify
+```
+- Run Logstash with your plugin
+```sh
+bin/logstash -e 'filter {awesome {}}'
+```
+At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+#### 2.2 Run in an installed Logstash
+
+You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+- Build your plugin gem
+```sh
+gem build logstash-filter-awesome.gemspec
+```
+- Install the plugin from the Logstash home
+```sh
+bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
+```
+- Start Logstash and proceed to test the plugin
+
+## Contributing
+
+All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+It is more important to the community that you are able to contribute.
+
+For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/logstash/inputs/okta_enterprise.rb
ADDED
@@ -0,0 +1,619 @@
+# encoding: utf-8
+require "logstash/inputs/base"
+require "logstash/namespace"
+require "rufus/scheduler"
+require "socket" # for Socket.gethostname
+require "logstash/plugin_mixins/http_client"
+require "manticore"
+require "base64"
+require "cgi"
+
+MAX_AUTH_TOKEN_FILE_SIZE = 1 * 2**10
+
+# This Logstash input plugin allows you to call the Okta HTTP API and ship the events to other SIEMs.
+# This plugin is based on the http_poller plugin; however, this plugin needed to retain state.
+# It does that, and can be used as a basis for similar web-API-style loggers.
+# The plugin supports rufus-style scheduling.
+# It can also be used with a custom CA or a self-signed cert (see below).
+# ==== Example
+# This is a basic configuration. The API key is passed through using an environment variable.
+# While it is possible to put the API key directly into the file, it is NOT recommended.
+#
+# [source,ruby]
+# ----------------------------------
+# input {
+#   okta_enterprise {
+#     schedule => { every => "30s" }
+#     chunk_size => 1000
+#     auth_token_env => "${OKTA_API_KEY}"
+#     url => "https://uri.okta.com/api/v1/events"
+#   }
+# }
+# output {
+#   stdout {
+#     codec => rubydebug
+#   }
+# }
+# ----------------------------------
+#
+#
+# It is possible to save the application state, so if the plugin is stopped it won't have to pull
+# all the data again.
+# Currently Linux ONLY.
+# The state file base is added to the config, and will be used to store the state of the event query.
+# The directory in which the state file exists should have rwx permissions for the logstash user.
+# As such it should not be the primary logstash config directory.
+#
+# [source,ruby]
+# ----------------------------------
+# input {
+#   okta_enterprise {
+#     schedule => { every => "30s" }
+#     state_file_base => "/etc/logstash/state_file/okta_base_"
+#     # A file can also be used instead of an environment variable.
+#     auth_token_file => "/path/to/security/creds"
+#     url => "https://uri.okta.com/api/v1/events"
+#     # Metadata can be stored in the same way as the http_poller
+#     metadata_target => "metadata"
+#     # Data can be stored in any arbitrary key
+#     target => "target"
+#   }
+# }
+#
+# output {
+#   stdout {
+#     codec => rubydebug
+#   }
+# }
+# ----------------------------------
+#
+# If you have a self-signed cert you will need to convert your server's certificate to a valid `.jks` or `.p12` file. An easy way to do it is to run the following one-liner, substituting your server's URL for the placeholders `MYURL` and `MYPORT`.
+#
+# [source,ruby]
+# ----------------------------------
+# openssl s_client -showcerts -connect MYURL:MYPORT </dev/null 2>/dev/null|openssl x509 -outform PEM > downloaded_cert.pem; keytool -import -alias test -file downloaded_cert.pem -keystore downloaded_truststore.jks
+# ----------------------------------
+#
+# The above snippet will create two files, `downloaded_cert.pem` and `downloaded_truststore.jks`. You will be prompted to set a password for the `jks` file during this process. To configure logstash, use a config like the one that follows.
+#
+#
+# [source,ruby]
+# ----------------------------------
+# input {
+#   okta_enterprise {
+#     ...
+#     truststore => "/path/to/downloaded_truststore.jks"
+#     truststore_password => "mypassword"
+#
+#   }
+# }
+# ----------------------------------
+
+class LogStash::Inputs::OktaEnterprise < LogStash::Inputs::Base
+  include LogStash::PluginMixins::HttpClient
+
+  config_name "okta_enterprise"
+
+  # If undefined, Logstash will complain, even if codec is unused.
+  default :codec, "json"
+
+  # Set how many messages you want to pull with each request
+  #
+  # The default, `1000`, means to fetch 1000 events at a time.
+  # Any value less than 1 will fetch all possible events.
+  config :chunk_size, :validate => :number, :default => 1000
+
+  # Schedule of when to periodically poll from the url
+  # Format: A hash with
+  #   + key: "cron" | "every" | "in" | "at"
+  #   + value: string
+  # Examples:
+  #   a) { "every" => "1h" }
+  #   b) { "cron" => "* * * * * UTC" }
+  # See: rufus/scheduler for details about different schedule options and value string format
+  config :schedule, :validate => :hash, :required => true
+
+  # The URL of the Okta instance to access
+  #
+  # Format: URI
+  config :url, :validate => :uri, :required => true
+
+  # The date and time after which to fetch events
+  #
+  # Format: string with an RFC 3339 formatted date (e.g. 2016-10-09T22:25:06-07:00)
+  config :start_date, :validate => :string
+
+  # The free-form filter to use to filter data to requirements.
+  # The spec can be found at the link below:
+  # http://developer.okta.com/docs/api/resources/events.html#filters
+  # The filter will be URL encoded by the plugin.
+  # The plugin will not validate the filter.
+  # Use single quotes in the config file,
+  # e.g. 'published gt "2017-01-01T00:00:00.000Z"'
+  #
+  # Format: Plain text filter field.
+  config :filter, :validate => :string
+
+  # The file in which the auth_token for Okta will be contained.
+  # WARNING: This file should be VERY carefully monitored.
+  # It will contain the auth_token, which can have a lot of access to your Okta instance.
+  # It cannot be stressed enough how important it is to protect this file.
+  #
+  # Format: File path
+  config :auth_token_file, :validate => :path
+
+  # The auth token used to authenticate to Okta.
+  # WARNING: Avoid storing the auth_token directly in this file.
+  # This method is provided solely to add the auth_token via environment variable.
+  # The token can have a lot of access to your Okta instance.
+  #
+  # Format: string
+  config :auth_token_env, :validate => :string
+
+  # The base filename used to store the pointer to the current location in the logs.
+  # This file will be renamed with each new reference to limit loss of this data.
+  # The location will need at least write and execute privs for the logstash user.
+  # This parameter is not required; however, without it, on start logstash will ship all logs to your SIEM.
+  #
+  # Format: Filepath
+  # This is not the filepath of the file itself, but rather a base name used to generate the file.
+  config :state_file_base, :validate => :string
+
+  # If you'd like to work with the request/response metadata,
+  # set this value to the name of the field in which you'd like to store a nested
+  # hash of metadata.
+  config :metadata_target, :validate => :string, :default => '@metadata'
+
+  # Define the target field for placing the received data.
+  # If this setting is omitted, the data will be stored at the root (top level) of the event.
+  config :target, :validate => :string
+
+  public
+  Schedule_types = %w(cron every at in)
+  def register
+
+    if (@auth_token_env and @auth_token_file)
+      raise LogStash::ConfigurationError, "auth_token_file and auth_token_env " +
+        "cannot both be set. Please select one for use."
+    end
+
+    unless (@auth_token_env or @auth_token_file)
+      auth_message = "Both auth_token_file and auth_token_env cannot be empty. " +
+        "Please select one for use."
+      raise LogStash::ConfigurationError, auth_message
+    end
+
+    if (@auth_token_file)
+      begin
+        if (File.size(@auth_token_file) > MAX_AUTH_TOKEN_FILE_SIZE)
+          raise LogStash::ConfigurationError, "The auth_token file is too large to map"
+        else
+          @auth_token = File.read(@auth_token_file).chomp
+          @logger.info("Successfully opened auth_token_file", :auth_token_file => @auth_token_file)
+        end
+      rescue LogStash::ConfigurationError
+        raise
+      # Some clean up magic to cover the stuff below.
+      # This will keep me from stomping on signal interrupts and ctrl+c
+      rescue SignalException
+        raise
+      rescue Exception => e
+        # This is currently a bug in logstash, confirmed here:
+        # https://discuss.elastic.co/t/logstash-configurationerror-but-configurationok-logstash-2-4-0/65727/2
+        # Will need to determine the best way to handle this.
+        # Rather than testing all error conditions, this can just display them.
+        # Should figure out a way to display this in a better fashion.
+        raise LogStash::ConfigurationError, e.inspect
+      end
+    elsif (@auth_token_env)
+      @auth_token = @auth_token_env
+    end
+
+    unless (@auth_token.index(/[^A-Za-z0-9-]/).nil?)
+      raise LogStash::ConfigurationError, "The auth_token should be " +
+        "alpha-numeric characters only; please check the token to ensure it is correct."
+    end
+
+    if (@start_date and @filter)
+      raise LogStash::ConfigurationError, "You can only set either " +
+        "start_date or filter."
+    end
+
+    if (@start_date)
+      begin
+        @start_date = DateTime.parse(@start_date).rfc3339(3)
+      rescue ArgumentError => e
+        raise LogStash::ConfigurationError, "start_date must be of the form " +
+          "yyyy-MM-dd'T'HH:mm:ss.SSSZZ, e.g. 2013-01-01T12:00:00.000-07:00."
+      end
+      @start_date = CGI.escape(@start_date)
+    end
+
+    if (@filter)
+      @filter = CGI.escape(@filter)
+    end
+
+    if (@state_file_base)
+      dir_name = File.dirname(@state_file_base)
+      ## Generally the state file directory will have the correct permissions,
+      ## so check for that case first.
+      if (File.readable?(dir_name) and File.executable?(dir_name) and
+          File.writable?(dir_name))
+
+        if (Dir[@state_file_base + "*"].size > 1)
+          raise LogStash::ConfigurationError, "There is more than one file " +
+            "in the state file base dir (possibly an error?). " +
+            "Please keep the latest/most relevant file."
+        end
+
+        @state_file = Dir[@state_file_base + "*"].last
+      else
+        ## Build one message for the rest of the issues
+        access_message = "Could not access the state file dir " +
+          "#{dir_name} for the following reasons: "
+
+        unless (File.readable?(dir_name))
+          access_message << "Cannot read #{dir_name}. "
+        end
+
+        unless (File.executable?(dir_name))
+          access_message << "Cannot list directory or perform special " +
+            "operations on #{dir_name}. "
+        end
+
+        unless (File.writable?(dir_name))
+          access_message << "Cannot write to #{dir_name}. "
+        end
+
+        access_message << "Please provide the appropriate permissions."
+
+        raise LogStash::ConfigurationError, access_message
+
+      end
+
+      if (@state_file)
+        ## Only want to pull the base64 encoded url out of the filename
+        unless (@state_file.eql?(@state_file_base + "start"))
+          regex_state_file = %r{(?<state_file>#{@state_file_base})
+            (?<state>(?:[A-Za-z0-9_-]{4})+(?:[A-Za-z0-9_-]{2}==|[A-Za-z0-9_-]{3}=)?)}x
+          state_url = Base64.urlsafe_decode64(@state_file.slice(regex_state_file, 'state'))
+          unless (state_url =~ /^#{@url}.*/)
+            raise LogStash::ConfigurationError, "State file does not match #{@url}. " +
+              "Please ensure the state file is correct: #{state_url}."
+          end
+          @url = state_url
+        end
+
+      else
+
+        begin
+          @state_file = @state_file_base + "start"
+          # 'touch' a file to keep the conditional from happening later
+          File.open(@state_file, "w") {}
+          @logger.info("Created base state_file", :state_file => @state_file)
+        rescue Exception => e
+          raise LogStash::ConfigurationError, "Could not create #{@state_file}. " +
+            "Error: #{e.inspect}."
+        end
+      end
+    end
+
+    params_event = Hash.new
+    params_event[:limit] = @chunk_size if @chunk_size > 0
+    params_event[:startDate] = @start_date if @start_date
+    params_event[:filter] = @filter if @filter
+
+    if (!@url.to_s.include?('?') and params_event.count > 0)
+      @url = "#{@url}?" + params_event.to_a.map { |arr| "#{arr[0]}=#{arr[1]}" }.join('&')
+    end
+
+    @logger.debug("Created initial URL to call", :url => @url)
+    @host = Socket.gethostname
+
+  end # def register
+
+
+  def run(queue)
+
+    msg_invalid_schedule = "Invalid config. schedule hash must contain " +
+      "exactly one of the following keys - cron, at, every or in"
+
+    raise LogStash::ConfigurationError, msg_invalid_schedule if @schedule.keys.length != 1
+    schedule_type = @schedule.keys.first
+    schedule_value = @schedule[schedule_type]
+    raise LogStash::ConfigurationError, msg_invalid_schedule unless Schedule_types.include?(schedule_type)
+    @scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
+
+    # As of v3.0.9, :first_in => :now doesn't work. Use the following workaround instead.
+    opts = schedule_type == "every" ? { :first_in => 0.01 } : {}
+    opts[:overlap] = false
+
+    @scheduler.send(schedule_type, schedule_value, opts) { run_once(queue) }
+
+    @scheduler.join
+
+  end # def run
+
+  private
+  def run_once(queue)
+
+    request_async(queue)
+
+  end # def run_once
+
+  private
+  def request_async(queue)
+
+    @continue = true
+
+    accept = "application/json"
+    content_type = "application/json"
+
+    begin
+      while @continue and !stop?
+        @logger.debug("Calling URL",
+          :url => @url,
+          :token_set => @auth_token.length > 0,
+          :accept => accept,
+          :content_type => content_type)
+
+        started = Time.now
+
+        client.async.get(@url.to_s, headers:
+          {"Authorization" => "SSWS #{@auth_token}",
+           "Accept" => accept,
+           "Content-Type" => content_type }).
+          on_success { |response| handle_success(queue, response, @url, Time.now - started) }.
+          on_failure { |exception| handle_failure(queue, exception, @url, Time.now - started) }
+
+        client.execute!
+      end
+    rescue Exception => e
+      raise e
+    ensure
+      if (@state_file_base)
+        new_file = @state_file_base + Base64.urlsafe_encode64(@url)
+        if (@state_file != new_file)
+          begin
+            File.rename(@state_file, new_file)
+          rescue SignalException
+            raise
+          rescue Exception => e
+            @logger.fatal("Could not rename file",
+              :old_file => @state_file,
+              :new_file => new_file,
+              :exception => e.inspect)
+            raise
+          end
+
+          @state_file = new_file
+        end
+      end
+    end
+  end # def request_async
+
+  private
+  def handle_success(queue, response, requested_url, exec_time)
+
+    @continue = false
+
+    case response.code
+    when 200
+      ## Some benchmarking code and the reasoning behind the methods used.
+      ## These aren't great benchmarks, but basic ones that proved a point.
+      ## If anyone has better/contradicting results, let me know.
+      #
+      ## Some system info on which these tests were run:
+      # $ cat /proc/cpuinfo | grep -i "model name" | uniq -c
+      #     4 model name : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
+      #
+      # $ free -m
+      #               total        used        free      shared  buff/cache   available
+      # Mem:           1984         925         372           8         686         833
+      # Swap:          2047           0        2047
+      #
+      # str = '<https://dev-instance.oktapreview.com/api/v1/events?after=tevHLxinRbATJeKgKjgXGXy0Q1479278142000&limit=1000>; rel="next"'
+      # require "benchmark"
+      #
+      # n = 50000000
+      #
+      # Benchmark.bm do |x|
+      #   x.report { n.times { str.include?('rel="next"') } }  # (2) 23.008853sec @50000000 times
+      #   x.report { n.times { str.end_with?('rel="next"') } } # (1) 16.894623sec @50000000 times
+      #   x.report { n.times { str =~ /rel="next"$/ } }        # (3) 30.757554sec @50000000 times
+      # end
+      #
+      # Benchmark.bm do |x|
+      #   x.report { n.times { str.match(/<([^>]+)>/).captures[0] } } # (2) 262.166085sec @50000000 times
+      #   x.report { n.times { str.split(';')[0][1...-1] } }          # (1) 31.673270sec @50000000 times
+      # end
+
+      ## This feels like gross code
+      Array(response.headers["link"]).each do |link_header|
+        if link_header.end_with?('rel="next"')
+          @url = link_header.split(';')[0][1...-1]
+        end
+      end
+
+      @codec.decode(response.body) do |decoded|
+        event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
+        apply_metadata(event, requested_url, response, exec_time)
+        decorate(event)
+        queue << event
+      end
+
+      if (Array(response.headers["link"]).count > 1)
+        @continue = true
+      end
+
+      @logger.info("Successful response returned", :code => response.code, :headers => response.headers)
+      @logger.debug("Response body", :body => response.body)
+
+    when 401
+      @codec.decode(response.body) do |decoded|
+        event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
+        apply_metadata(event, requested_url, response, exec_time)
+        event["Okta-Plugin-Status"] = "Auth_token supplied is not valid, " +
+          "validate the auth_token and update the plugin config."
+        event["HTTP-Code"] = 401
+        event.tag("_okta_response_error")
+        decorate(event)
+        queue << event
+      end
+
+      @logger.error("Authentication required, check auth_token",
+        :code => response.code,
+        :headers => response.headers)
+      @logger.debug("Authentication failed body", :body => response.body)
+
+    when 400
+      if (response.body.include?("E0000031"))
+        @codec.decode(response.body) do |decoded|
+          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
+          apply_metadata(event, requested_url, response, exec_time)
+          event["Okta-Plugin-Status"] = "Filter string was not valid."
+          event["HTTP-Code"] = 400
+          event.tag("_okta_response_error")
+          decorate(event)
+          queue << event
+        end
+
+        @logger.error("Filter string was not valid",
+          :response_code => response.code,
+          :okta_error => "E0000031",
+          :filter_string => @filter)
+
+        @logger.debug("Filter string error response",
+          :response_body => response.body,
+          :response_headers => response.headers)
+
+      elsif (response.body.include?("E0000030"))
+
+        @codec.decode(response.body) do |decoded|
+          event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
+          apply_metadata(event, requested_url, response, exec_time)
+          event["Okta-Plugin-Status"] = "Date was not formatted correctly."
+          event["HTTP-Code"] = 400
+          event.tag("_okta_response_error")
+          decorate(event)
+          queue << event
+        end
+
+        @logger.error("Date was not formatted correctly",
+          :response_code => response.code,
+          :okta_error => "E0000030",
+          :date_string => @start_date)
+
+        @logger.debug("Start date error response",
+          :response_body => response.body,
+          :response_headers => response.headers)
+
+      ## If the Okta error code does not match known codes,
+      ## process it as a generic error.
+      else
+        handle_unknown_http_code(queue, response, requested_url, exec_time)
+      end
+    else
+      handle_unknown_http_code(queue, response, requested_url, exec_time)
+    end
+
+  end # def handle_success
+
+  private
+  def handle_unknown_http_code(queue, response, requested_url, exec_time)
+    @codec.decode(response.body) do |decoded|
+      event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded
+      apply_metadata(event, requested_url, response, exec_time)
+      event["Okta-Plugin-Status"] = "Unknown error, see Okta error"
+      event["HTTP-Code"] = response.code
+      event.tag("_okta_response_error")
+      decorate(event)
+      queue << event
+    end
+
+    @logger.error("Okta API Error",
+      :http_code => response.code,
+      :body => response.body,
+      :headers => response.headers)
+  end # def handle_unknown_http_code
+
+  private
+  def handle_failure(queue, exception, requested_url, exec_time)
+
+    @continue = false
+    @logger.warn("Client Connection Error",
+      :exception => exception.inspect)
+
+    event = LogStash::Event.new
+    apply_metadata(event, requested_url, nil, exec_time)
+    event["http_request_failure"] = {
+      "Okta-Plugin-Status" => "Client Connection Error",
+      "Connection-Error" => exception.message,
+      "backtrace" => exception.backtrace
+    }
+    event.tag("_http_request_failure")
+    decorate(event)
+    queue << event
+
+  end # def handle_failure
+
+  private
+  def apply_metadata(event, requested_url, response=nil, exec_time=nil)
+    return unless @metadata_target
+
+    m = {
+      "host" => @host,
+      "url" => requested_url,
+      "runtime_seconds" => exec_time
+    }
+
+    if response
+      m["code"] = response.code
+      m["response_headers"] = response.headers
+      m["response_message"] = response.message
+      m["retry_count"] = response.times_retried
+    end
+
+    event[@metadata_target] = m
+
+  end
+
+  public
+  def stop
+    # Examples of common "stop" tasks:
+    #  * close sockets (unblocking blocking reads/accepts)
+    #  * cleanup temporary files
+    #  * terminate spawned threads
+    begin
+      @scheduler.stop
+    rescue NoMethodError => e
+      unless (e.message == "undefined method `stop' for nil:NilClass")
+        raise
+      end
+    rescue Exception => e
+      @logger.warn("Undefined error", :exception => e.inspect)
+      raise
+    ensure
+      if (@state_file_base)
+        new_file = @state_file_base + Base64.urlsafe_encode64(@url)
+        if (@state_file != new_file)
+          begin
+            File.rename(@state_file, new_file)
+          rescue SignalException
+            raise
+          rescue Exception => e
+            @logger.fatal("Could not rename file",
+              :old_file => @state_file,
+              :new_file => new_file,
+              :exception => e.inspect)
+            raise
+          end
+          @state_file = new_file
+        end
+      end
+    end
+  end # def stop
+end # class LogStash::Inputs::OktaEnterprise
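The pagination in `handle_success` above keeps the loop going by pulling the next page's URL out of Okta's `Link` response headers (the `split(';')[0][1...-1]` approach the in-code benchmarks favored). A standalone sketch of that parsing, with illustrative header values:

```ruby
# Extract the next-page URL from Okta-style Link headers.
# Okta returns header values such as:
#   <https://example.okta.com/api/v1/events?after=abc&limit=1000>; rel="next"
# When rel="next" is present, the URL between the angle brackets is the
# next page to request; otherwise there are no more pages.
def next_page_url(link_headers)
  url = nil
  Array(link_headers).each do |link_header|
    if link_header.end_with?('rel="next"')
      # Drop everything after the first ';', then strip the surrounding <>
      url = link_header.split(';')[0][1...-1]
    end
  end
  url
end

headers = [
  '<https://example.okta.com/api/v1/events?limit=1000>; rel="self"',
  '<https://example.okta.com/api/v1/events?after=abc123&limit=1000>; rel="next"'
]
next_page_url(headers)
# => "https://example.okta.com/api/v1/events?after=abc123&limit=1000"
```

This mirrors why the plugin checks `Array(response.headers["link"]).count > 1`: a response with only a `rel="self"` link means the last page was reached and polling can stop until the next scheduled run.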