logstash-input-elf-se 0.1.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: b13ec6985e00a56f332af3df61d47279cd5c0964
+   data.tar.gz: a91038100b51d6e6f4247a3ca40c0affc3b586f5
+ SHA512:
+   metadata.gz: c989d2e3d6fd0c4f61f88034b6124a1459fcbeec89e339d10b815e5c02b867c195c8047f6c410a95b051aa5919fe765a23f7e928d7f7e3959c0cbb8b0593abfa
+   data.tar.gz: 1943274c8b29246d09f39d46029a9378afd287f5d96e82decb2dc693e56f11baa2907c9fabdfd2741c0d6e701b75087a35ec78b45a940fbfcc4f469db5914482
data/CHANGELOG.md ADDED
@@ -0,0 +1,2 @@
+ ## 0.1.0
+ - Plugin created with the logstash plugin generator
data/CONTRIBUTORS ADDED
@@ -0,0 +1,10 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Contributors:
+ * Sid - siddharatha.n@gmail.com
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/DEVELOPER.md ADDED
@@ -0,0 +1,2 @@
+ # logstash-input-elf-se
+ Example input plugin. This should help bootstrap your effort to write your own input plugin!
data/Gemfile ADDED
@@ -0,0 +1,3 @@
+ source 'https://rubygems.org'
+ gemspec
+
data/LICENSE ADDED
@@ -0,0 +1,11 @@
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/README.md ADDED
@@ -0,0 +1,86 @@
+ # Logstash Plugin
+
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+
+ ## Documentation
+
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code are first converted into asciidoc and then into HTML. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive (see the sketch after this list)
+ - For more asciidoc formatting tips, see the excellent reference here: https://github.com/elastic/docs#asciidoc-guide
+
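+ For example, plugin documentation using the directive might look like this (a minimal sketch; the config shown is illustrative only):
+
+ ```asciidoc
+ [source,ruby]
+ ----
+ input {
+   sfdc_elf {
+     username => "user@example.com"
+   }
+ }
+ ----
+ ```
+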
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+ ## Developing
+
+ ### 1. Plugin Development and Testing
+
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
+
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+ - Install dependencies
+ ```sh
+ bundle install
+ ```
+
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
+ ```
+
+ - Run tests
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ ### 2. Running your unpublished Plugin in Logstash
+
+ #### 2.1 Run in a local Logstash clone
+
+ - Edit Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+ ```
+ - Install plugin
+ ```sh
+ bin/logstash-plugin install --no-verify
+ ```
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
+ ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+ #### 2.2 Run in an installed Logstash
+
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
+ ```
+ - Start Logstash and proceed to test the plugin (see the example below)
+
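+ As a quick smoke test for this particular plugin, you might run Logstash with an inline config (a sketch; every credential value is a placeholder you must replace with your own):
+
+ ```sh
+ bin/logstash -e 'input { sfdc_elf { username => "user@example.com" password => "secret" client_id => "consumer-key" client_secret => "consumer-secret" } }'
+ ```
+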
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+ It is more important to the community that you are able to contribute.
+
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/logstash/inputs/elf-se.rb ADDED
@@ -0,0 +1,150 @@
+ # encoding: utf-8
+ require 'logstash/inputs/base'
+ require 'logstash/namespace'
+ require_relative 'sfdc_elf/client_with_streaming_support'
+ require_relative 'sfdc_elf/queue_util'
+ require_relative 'sfdc_elf/state_persistor'
+ require_relative 'sfdc_elf/scheduler'
+
+ # This plugin enables Salesforce customers to load EventLogFile (ELF) data from their Force.com orgs. The plugin
+ # handles downloading the ELF CSV files, parsing them, and handling any schema changes transparently.
+ class LogStash::Inputs::SfdcElf < LogStash::Inputs::Base
+   LOG_KEY        = 'SFDC'
+   RETRY_ATTEMPTS = 3
+
+   config_name 'sfdc_elf'
+   default :codec, 'plain'
+
+   # Username to your Force.com organization.
+   config :username, validate: :string, required: true
+
+   # Password to your Force.com organization.
+   config :password, validate: :password, required: true
+
+   # Client id to your Force.com organization.
+   config :client_id, validate: :password, required: true
+
+   # Client secret to your Force.com organization.
+   config :client_secret, validate: :password, required: true
+
+   # The host to use for OAuth2 authentication.
+   config :host, validate: :string, default: 'login.salesforce.com'
+
+   # Comma-separated list of quoted EventType names to query for.
+   config :eventtypesstring, validate: :string, default: "'ApexExecution','ApexSoap','API','BulkApi','Dashboard','LightningError','LightningInteraction','LightningPageView','LightningPerformance','LoginAs','Login','Logout','MetadataApiOperation','Report','RestApi','URI','VisualforceRequest','WaveChange','WaveInteraction','WavePerformance'"
+
+   # Only needed when your Force.com organization requires it.
+   # Security token to your Force.com organization. Go to My Settings > Personal > Reset My Security Token, then on
+   # the "Reset My Security Token" page click the "Reset Security Token" button. The token will be emailed to you.
+   config :security_token, validate: :password, default: ''
+
+   # The path used to store the .sfdc_info_logstash state persistor file, for example `~/SomeDirectory`. Paths must be
+   # absolute and cannot be relative.
+   config :path, validate: :string, default: Dir.home
+
+   # Specify how often the plugin should grab new data, in minutes.
+   config :poll_interval_in_minutes, validate: [*1..(24 * 60)], default: (24 * 60)
+
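+   # An example pipeline configuration using this input (a minimal sketch; every
+   # credential value below is a placeholder, not a working default):
+   #
+   #   input {
+   #     sfdc_elf {
+   #       username                 => "user@example.com"
+   #       password                 => "my-password"
+   #       client_id                => "connected-app-consumer-key"
+   #       client_secret            => "connected-app-consumer-secret"
+   #       poll_interval_in_minutes => 60
+   #     }
+   #   }
+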
+   # The first stage of the Logstash pipeline is register, where all instance variables are initialized.
+
+   public
+   def register
+     # Initialize the client.
+     @client = ClientWithStreamingSupport.new
+     @client.client_id = @client_id.value
+     @client.client_secret = @client_secret.value
+     @client.host = @host
+     @client.version = '46.0'
+
+     # Authenticate the client.
+     @logger.info("#{LOG_KEY}: trying to authenticate client")
+     @client.retryable_authenticate(username: @username,
+                                    password: @password.value + @security_token.value,
+                                    retry_attempts: RETRY_ATTEMPTS)
+     @logger.info("#{LOG_KEY}: authentication succeeded")
+
+     # Save org id to distinguish between multiple orgs.
+     @org_id = @client.query('select id from Organization')[0]['Id']
+
+     # Set up the time interval for the forever loop.
+     @poll_interval_in_seconds = @poll_interval_in_minutes * 60
+
+     # Handle the @path config passed by the user. If the path does not exist, set @path to the home directory.
+     verify_path
+
+     # Handles parsing the data into event objects and enqueuing them to the queue.
+     @queue_util = QueueUtil.new
+
+     # Handles when to schedule the next run based on the @poll_interval_in_minutes config.
+     @scheduler = Scheduler.new(@poll_interval_in_seconds)
+
+     # Handles the state of the plugin based on reads and writes of LogDates to the .sfdc_info_logstash file.
+     @state_persistor = StatePersistor.new(@path, @org_id)
+
+     # Grab the last indexed log date.
+     @last_indexed_log_date = @state_persistor.get_last_indexed_log_date
+     @logger.info("#{LOG_KEY}: @last_indexed_log_date = #{@last_indexed_log_date}")
+   end # def register
+
+
+   # The second stage of the Logstash pipeline is run, where the data is parsed into event objects and then passed
+   # into the queue to be used in the rest of the pipeline.
+
+   public
+   def run(queue)
+     @scheduler.schedule do
+       # Line for readable log statements.
+       @logger.info('---------------------------------------------------')
+
+       # Grab a list of SObjects, specifically EventLogFiles.
+       soql_expr = "SELECT Id, EventType, LogFile, LogDate, LogFileLength, LogFileFieldTypes
+                    FROM EventLogFile
+                    WHERE LogDate > #{@last_indexed_log_date} AND EventType IN (#{@eventtypesstring})
+                      AND Sequence > 0 AND Interval = 'Hourly'
+                    ORDER BY LogDate ASC"
+
+       query_result_list = @client.retryable_query(username: @username,
+                                                   password: @password.value + @security_token.value,
+                                                   retry_attempts: RETRY_ATTEMPTS,
+                                                   soql_expr: soql_expr)
+
+       @logger.info("#{LOG_KEY}: query result size = #{query_result_list.size}")
+
+       unless query_result_list.empty?
+         # query_result_list is in ascending order based on the LogDate, so grab the last one of the list and save its
+         # LogDate to @last_indexed_log_date and .sfdc_info_logstash.
+         @last_indexed_log_date = query_result_list.last.LogDate.strftime('%FT%T.%LZ')
+
+         # TODO: grab tempfiles here!!
+
+         # Overwrite the .sfdc_info_logstash file with the @last_indexed_log_date.
+         # Note: we currently do not support deduplication, but will implement it soon.
+         # TODO: need to implement deduplication
+         # TODO: might have to move this after enqueue_events(), in case of a crash in between.
+         # TODO: can do all @state_persistor calls after the if statement
+         @state_persistor.update_last_indexed_log_date(@last_indexed_log_date)
+
+         # Create events from query_result_list, then append them to the queue.
+         @queue_util.enqueue_events(query_result_list, queue, @client)
+       end
+     end # do loop
+   end # def run
+
+
+   # Handle the @path variable passed by the user. If the path does not exist, set @path to the home directory.
+
+   private
+   def verify_path
+     # Check if the path exists; if not, set @path to the home directory.
+     unless File.directory?(@path)
+       @logger.warn("#{LOG_KEY}: provided path does not exist or is invalid. path=#{@path}")
+       @path = Dir.home
+     end
+     @logger.info("#{LOG_KEY}: path = #{@path}")
+   end
+ end # class LogStash::Inputs::SfdcElf
data/lib/logstash/inputs/sfdc_elf/client_with_streaming_support.rb ADDED
@@ -0,0 +1,61 @@
+ # encoding: utf-8
+ require 'databasedotcom'
+
+ # This class subclasses the Databasedotcom Client object, adding streaming
+ # download support as well as retryable authentication and retryable query.
+ class ClientWithStreamingSupport < Databasedotcom::Client
+   # Constants
+   # LOG_KEY = 'SFDC - ClientWithStreamingSupport'
+   #
+   # def initialize
+   #   @logger = Cabin::Channel.get(LogStash)
+   # end
+
+   def streaming_download(path, output_stream)
+     connection = Net::HTTP.new(URI.parse(instance_url).host, 443)
+     connection.use_ssl = true
+     encoded_path = URI.escape(path)
+
+     req = Net::HTTP::Get.new(encoded_path, 'Authorization' => "OAuth #{oauth_token}")
+     connection.request(req) do |response|
+       raise SalesForceError.new(response) unless response.is_a?(Net::HTTPSuccess)
+       response.read_body do |chunk|
+         output_stream.write chunk
+       end
+     end
+   end
+
+   # This helper method is called whenever we need to initialize the client
+   # object or whenever the client token expires. It will attempt authentication
+   # up to retry_attempts times, with a 30 second delay between attempts. If the
+   # final attempt fails, the exception is raised.
+
+   def retryable_authenticate(options = {})
+     1.upto(options[:retry_attempts]) do |count|
+       begin
+         # If no exception is thrown, break out of the loop.
+         authenticate(username: options[:username], password: options[:password])
+         break
+       rescue StandardError => e
+         # Sleep 30 seconds between attempts. On the final attempt, raise the exception instead of sleeping.
+         if count == options[:retry_attempts]
+           raise e
+         else
+           sleep(30)
+         end
+       end
+     end
+   end # def retryable_authenticate
+
+   def retryable_query(options = {})
+     query(options[:soql_expr])
+   rescue Databasedotcom::SalesForceError => e
+     # Session has expired. Re-authenticate, then re-issue the query once.
+     raise e unless e.message == 'Session expired or invalid'
+     retryable_authenticate(options)
+     query(options[:soql_expr])
+   end # def retryable_query
+ end # ClientWithStreamingSupport
data/lib/logstash/inputs/sfdc_elf/queue_util.rb ADDED
@@ -0,0 +1,176 @@
+ # encoding: utf-8
+ require 'csv'
+ require 'resolv'
+ require 'tempfile'
+
+ # Handles parsing data into event objects and then enqueuing all of the events to the queue.
+ class QueueUtil
+   # Constants
+   LOG_KEY    = 'SFDC - QueueUtil'
+   SEPARATOR  = ','
+   QUOTE_CHAR = '"'
+
+   # Zip up the tempfile, which is a CSV file, and the field types, so that when parsing the CSV file we can accurately
+   # convert each field to its respective type, like Number and Boolean.
+   EventLogFile = Struct.new(:field_types, :temp_file, :event_type)
+
+   def initialize
+     @logger = Cabin::Channel.get(LogStash)
+   end
+
+   # Given a list of query results, iterate through it and grab the CSV file associated with each. Then parse the CSV
+   # file line by line, generate an event object for each line, and enqueue it.
+
+   public
+   def enqueue_events(query_result_list, queue, client)
+     @logger.info("#{LOG_KEY}: enqueue events")
+
+     # Grab a list of Tempfiles that contain the CSV file data.
+     event_log_file_records = get_event_log_file_records(query_result_list, client)
+
+     # Iterate through each record.
+     event_log_file_records.each do |elf|
+       begin
+         # Create a local variable to simplify and make the code more readable.
+         tmp = elf.temp_file
+
+         # Get the schema from the first line in the tempfile. It will be in CSV format, so we parse it, and it will
+         # return an array.
+         schema = CSV.parse_line(tmp.readline, col_sep: SEPARATOR, quote_char: QUOTE_CHAR)
+
+         # Loop through the tempfile, line by line.
+         tmp.each_line do |line|
+           # Parse the current line; it will return a string array.
+           string_array = CSV.parse_line(line, col_sep: SEPARATOR, quote_char: QUOTE_CHAR)
+
+           # Convert the string array into its corresponding type array.
+           data = string_to_type_array(string_array, elf.field_types)
+
+           # create_event will return an event object.
+           queue << create_event(schema, data, elf.event_type)
+         end
+       ensure
+         # Close the tmp file and unlink it; doing this deletes the actual tempfile.
+         tmp.close
+         tmp.unlink
+       end
+     end # do loop, event_log_file_records
+   end # def enqueue_events
+
+   # Convert the given string array to its corresponding type array and return it.
+
+   private
+   def string_to_type_array(string_array, field_types)
+     data = []
+
+     field_types.each_with_index do |type, i|
+       case type
+       when 'Number'
+         data[i] = string_array[i].empty? ? nil : string_array[i].to_f
+       when 'Boolean'
+         data[i] = string_array[i].empty? ? nil : (string_array[i] == '0')
+       when 'IP'
+         data[i] = valid_ip(string_array[i]) ? string_array[i] : nil
+       else # 'String', 'Id', 'EscapedString', 'Set'
+         data[i] = string_array[i].empty? ? nil : string_array[i]
+       end
+     end # do loop
+
+     data
+   end # def string_to_type_array
+
+   # Check whether the given string is truly an IPv4 address.
+
+   private
+   def valid_ip(ip)
+     ip =~ Resolv::IPv4::Regex ? true : false
+   end
+
+   # Based on the schema and data, create the event object. At any point, if a data value is nil we simply don't add
+   # it to the event object. Special handling is needed when the schema name 'TIMESTAMP' occurs: the data associated
+   # with it needs to be converted into a LogStash::Timestamp.
+
+   private
+   def create_event(schema, data, event_type)
+     # Initialize the event to be used. @timestamp and @version are added automatically.
+     event = LogStash::Event.new
+
+     # Set the event's type to the EventType, so Elasticsearch indices can have their document type set to it.
+     event['type'] = event_type.downcase
+
+     # Add column-data pairs to the event.
+     data.each_index do |i|
+       # Grab the current key.
+       schema_name = schema[i]
+
+       # Handle when the field name is 'TIMESTAMP': change the @timestamp field to the actual time on the CSV file,
+       # converted to ISO 8601.
+       if schema_name == 'TIMESTAMP'
+         epoch_seconds = DateTime.parse(data[i]).to_time.to_f
+         event.timestamp = LogStash::Timestamp.at(epoch_seconds)
+       end
+
+       # Add the schema-data pair to the event object.
+       event[schema_name] = data[i] unless data[i].nil?
+     end
+
+     # Return the event.
+     event
+   end # def create_event
+
+   # This helper method takes as input a list/collection of SObjects, each of which contains a path to its respective
+   # CSV file. The path is stored in the LogFile field. Using that path, we are able to grab the actual CSV file via
+   # the client.streaming_download method.
+   #
+   # After grabbing each CSV file we store it using the standard Tempfile library. Tempfile creates a unique file
+   # each time, using 'sfdc_elf_tempfile' as the prefix, and finally we return a list of EventLogFile objects. The
+   # caller can read each Tempfile, then close it and unlink it, which deletes the file.
+
+   public
+   def get_event_log_file_records(query_result_list, client)
+     @logger.info("#{LOG_KEY}: generating tempfile list")
+     result = []
+     query_result_list.each do |event_log_file|
+       # Get the path of the CSV file from the LogFile field, then stream the data to the Tempfile's write method.
+       tmp = Tempfile.new('sfdc_elf_tempfile')
+       client.streaming_download(event_log_file.LogFile, tmp)
+
+       # Flushing will write the buffer into the Tempfile itself.
+       tmp.flush
+
+       # Rewind will move the file pointer from the end to the beginning of the file, so that callers can simply
+       # call the read method.
+       tmp.rewind
+
+       # Append the EventLogFile object to the result list.
+       field_types = event_log_file.LogFileFieldTypes.split(',')
+       result << EventLogFile.new(field_types, tmp, event_log_file.EventType)
+
+       # Log the info from the event_log_file object.
+       @logger.info("  #{LOG_KEY}: Id = #{event_log_file.Id}")
+       @logger.info("  #{LOG_KEY}: EventType = #{event_log_file.EventType}")
+       @logger.info("  #{LOG_KEY}: LogFile = #{event_log_file.LogFile}")
+       @logger.info("  #{LOG_KEY}: LogDate = #{event_log_file.LogDate}")
+       @logger.info("  #{LOG_KEY}: LogFileLength = #{event_log_file.LogFileLength}")
+       @logger.info("  #{LOG_KEY}: LogFileFieldTypes = #{event_log_file.LogFileFieldTypes}")
+       @logger.info('  ......................................')
+     end
+     result
+   end # def get_event_log_file_records
+ end # QueueUtil
data/lib/logstash/inputs/sfdc_elf/scheduler.rb ADDED
@@ -0,0 +1,73 @@
+ # encoding: utf-8
+
+ # Handles when to schedule the next run based on the poll interval specified. The poll interval provided has to be
+ # in seconds.
+ class Scheduler
+   LOG_KEY = 'SFDC - Scheduler'
+
+   def initialize(poll_interval_in_seconds)
+     @logger = Cabin::Channel.get(LogStash)
+     @poll_interval_in_seconds = poll_interval_in_seconds
+   end
+
+   # In a forever loop, run the block provided, then sleep based on the poll interval and repeat.
+
+   public
+   def schedule(&block)
+     # Grab the current time and add one poll interval to it, so the loop knows when it needs to run again.
+     next_schedule_time = Time.now + @poll_interval_in_seconds
+
+     # Run the block, then sleep until the next schedule time.
+     loop do
+       block.call
+
+       # Depending on next_schedule_time and the time taken to run the block above,
+       # sleep this loop and adjust next_schedule_time.
+       @logger.info("#{LOG_KEY}: next_schedule_time = #{next_schedule_time}")
+       next_schedule_time = stall_schedule(next_schedule_time)
+     end
+   end
+
+   # Given the next schedule time as input, stall_schedule() decides whether we need to sleep until the next
+   # schedule time, or skip sleeping because we already missed the next schedule time.
+   #
+   # For both examples, the time interval is 1 hour.
+   # Example 1:
+   #   started time       = 1:00pm
+   #   next_schedule_time = 2:00pm
+   #   current_time       = 1:30pm
+   # In this example we sleep for 30 mins, so we stay on schedule.
+   #
+   # Example 2:
+   #   started time       = 1:00pm
+   #   next_schedule_time = 2:00pm
+   #   current_time       = 2:30pm
+   # In this example we do not sleep at all, and proceed to run again immediately, since we missed the
+   # schedule time.
+
+   public
+   def stall_schedule(next_schedule_time)
+     current_time = Time.now
+     @logger.info("#{LOG_KEY}: time before sleep = #{current_time}")
+
+     # Example 2 case from above.
+     if current_time > next_schedule_time
+       @logger.info("#{LOG_KEY}: missed next schedule time, proceeding to next task without sleeping")
+       next_schedule_time += @poll_interval_in_seconds while current_time > next_schedule_time
+
+     # Example 1 case from above.
+     else
+       @logger.info("#{LOG_KEY}: sleeping for #{(next_schedule_time - current_time)} seconds")
+       sleep(next_schedule_time - current_time)
+       next_schedule_time += @poll_interval_in_seconds
+     end
+     @logger.info("#{LOG_KEY}: time after sleep = #{Time.now}")
+     next_schedule_time
+   end # def stall_schedule
+ end # Scheduler
data/lib/logstash/inputs/sfdc_elf/state_persistor.rb ADDED
@@ -0,0 +1,49 @@
+ # encoding: utf-8
+
+ # Handles what the next procedure should be based on the .sfdc_info_logstash file. State proceeds via reading and
+ # writing LogDates to the .sfdc_info_logstash file.
+ class StatePersistor
+   LOG_KEY      = 'SFDC - StatePersistor'
+   FILE_PREFIX  = 'sfdc_info_logstash'
+   DEFAULT_TIME = '0001-01-01T00:00:00Z'
+
+   def initialize(base_path, org_id)
+     @logger = Cabin::Channel.get(LogStash)
+     @path_with_file_name = "#{base_path}/.#{FILE_PREFIX}_#{org_id}"
+   end
+
+   # Read the last indexed LogDate from the .sfdc_info_logstash file and return it. If the .sfdc_info_logstash file
+   # does not exist, create the file and write DEFAULT_TIME to it via the update_last_indexed_log_date() method.
+
+   public
+   def get_last_indexed_log_date
+     # Read from .sfdc_info_logstash if it exists, otherwise load @last_read_log_date with DEFAULT_TIME.
+     if File.exist?(@path_with_file_name)
+       # Load the last read LogDate from .sfdc_info_logstash.
+       @logger.info("#{LOG_KEY}: #{@path_with_file_name} exists, reading and returning the time in it.")
+       File.read(@path_with_file_name)
+     else
+       # Load the default time to ensure getting all possible EventLogFiles, from oldest to current. Also
+       # create the .sfdc_info_logstash file.
+       @logger.info("#{LOG_KEY}: .sfdc_info_logstash does not exist, loaded DEFAULT_TIME to @last_read_log_date")
+       update_last_indexed_log_date(DEFAULT_TIME)
+       DEFAULT_TIME
+     end
+   end
+
+   # Take as input a date string in ISO 8601 format, then overwrite .sfdc_info_logstash with the date string;
+   # opening the file in 'w' mode truncates it, so the new date replaces any previous contents.
+
+   public
+   def update_last_indexed_log_date(date)
+     @logger.info("#{LOG_KEY}: overwriting #{@path_with_file_name} with #{date}")
+     File.open(@path_with_file_name, 'w') do |f|
+       f.write(date)
+       f.flush
+     end
+   end
+ end
data/logstash-input-elf-se.gemspec ADDED
@@ -0,0 +1,27 @@
+ Gem::Specification.new do |s|
+   s.name          = 'logstash-input-elf-se'
+   s.version       = '0.1.0'
+   s.licenses      = ['Apache-2.0']
+   s.summary       = 'A Logstash plugin that receives events from Salesforce EventLogFile'
+   s.description   = 'This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline
+ using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program. The changes made are specific to Schneider Electric; we removed some log types to support our use case.'
+   s.homepage      = 'https://github.com/siddharatha/logstash-input-elf-se'
+   s.authors       = ['Sid']
+   s.email         = 'siddharatha.n@gmail.com'
+   s.require_paths = ['lib']
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
+
+   # Gem dependencies
+   s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
+   s.add_runtime_dependency 'logstash-codec-plain'
+   s.add_runtime_dependency 'stud', '>= 0.0.22'
+   s.add_runtime_dependency 'databasedotcom', '~> 1.3', '>= 1.3.3'
+   s.add_development_dependency 'logstash-devutils', '>= 0.0.16'
+ end
data/spec/inputs/elf-se_spec.rb ADDED
@@ -0,0 +1,11 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/elf-se"
+
+ describe LogStash::Inputs::SfdcElf do
+
+   it_behaves_like "an interruptible input plugin" do
+     let(:config) { { "poll_interval_in_minutes" => 100 } }
+   end
+
+ end
metadata ADDED
@@ -0,0 +1,137 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-input-elf-se
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Sid
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2019-10-23 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: logstash-core-plugin-api
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-codec-plain
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: stud
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.0.22
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.0.22
+ - !ruby/object:Gem::Dependency
+   name: databasedotcom
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.3'
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.3.3
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.3'
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.3.3
+ - !ruby/object:Gem::Dependency
+   name: logstash-devutils
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.0.16
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.0.16
+ description: |-
+   This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline
+   using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program. The changes made are specific to Schneider Electric; we removed some log types to support our use case.
+ email: siddharatha.n@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - CONTRIBUTORS
+ - DEVELOPER.md
+ - Gemfile
+ - LICENSE
+ - README.md
+ - lib/logstash/inputs/elf-se.rb
+ - lib/logstash/inputs/sfdc_elf/client_with_streaming_support.rb
+ - lib/logstash/inputs/sfdc_elf/queue_util.rb
+ - lib/logstash/inputs/sfdc_elf/scheduler.rb
+ - lib/logstash/inputs/sfdc_elf/state_persistor.rb
+ - logstash-input-elf-se.gemspec
+ - spec/inputs/elf-se_spec.rb
+ homepage: https://github.com/siddharatha/logstash-input-elf-se
+ licenses:
+ - Apache-2.0
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: input
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.5.2.3
+ signing_key:
+ specification_version: 4
+ summary: A Logstash plugin that receives events from Salesforce EventLogFile
+ test_files:
+ - spec/inputs/elf-se_spec.rb