logstash-input-azuretable 0.1.4 → 0.1.6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: 58645fe766010e590cc5e044198542ce888c4fa7
-   data.tar.gz: 90bb575fc879d670f7878d9c66ad7d1787234cc0
+ SHA256:
+   metadata.gz: 707e6d1d36113d66ba03e7964ec23d9e3c2cbe20f08854707090f1a363e9d3ac
+   data.tar.gz: 61fed6a42d9533b149b9a967630e8bcf3fb9d158f3ba60703f9d2c5801938d45
  SHA512:
-   metadata.gz: 4ed43180c6e7a54a54357ef630c110865acff4f438d4371d61600389e8cc4091a91cd933509ed1b9fb165d74e975d1d244b5f1b33bf6d5089b9da175be7013f3
-   data.tar.gz: 8644c0761223551e1ae116e774e34c3f6122c064f57482a53302daa2a6ea375074cf3293cb301a4bae5e34920c77b13dd07ba1b92940c7b1892d93b47e9f6d05
+   metadata.gz: 3984e0a7ebee987ec6e2c994200e79474e137fb9484f13ede1bac3e357fe637f236ee8d34ebf147914170613fa359b3aa20ce81f9533cf3b0ae573850091910e
+   data.tar.gz: 646b8dd9acc72aad69bdb51ec01df5167c0950e193f366c71fee798e7e2cc34dbe56aba2d9b339cf5ebd99e843dcb06d36db41cd22916eb85995e5a80a3cf202
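The checksum change above moves the published digests from SHA1 to SHA256 (alongside the existing SHA512 entries). As a hedged illustration only, not part of the gem itself, a downloaded artifact could be checked against such a digest with Ruby's standard `digest` library (the file path is hypothetical):

```ruby
require "digest"

# Compare a file's SHA256 digest against the hex value published in
# checksums.yaml. Returns true when the artifact matches.
def verify_sha256(path, expected_hex)
  Digest::SHA256.file(path).hexdigest == expected_hex
end

# The same API works on in-memory strings:
Digest::SHA256.hexdigest("abc")
# => "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
```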
data/CHANGELOG.md CHANGED
@@ -1,2 +1,7 @@
- ## 0.1.0
- - Plugin created with the logstash plugin generator
+ ## 2016.06.27
+ * Added support for setting the Azure service endpoint in the configuration.
+
+ ## 2016.05.02
+ * Made the plugin respect the Logstash shutdown signal.
+ * Updated the *logstash-core* runtime dependency requirement to '~> 2.0'.
+ * Updated the *logstash-devutils* development dependency requirement to '>= 0.0.16'.
data/Gemfile CHANGED
@@ -1,3 +1,2 @@
  source 'https://rubygems.org'
  gemspec
-
data/LICENSE CHANGED
@@ -1,11 +1,17 @@
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
 
- http://www.apache.org/licenses/LICENSE-2.0
+ Copyright (c) Microsoft. All rights reserved.
+ Microsoft would like to thank its contributors, a list
+ of whom are at http://aka.ms/entlib-contributors
+
+ Licensed under the Apache License, Version 2.0 (the "License"); you
+ may not use this file except in compliance with the License. You may
+ obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
 
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied. See the License for the specific language governing permissions
+ and limitations under the License.
+
data/README.md CHANGED
@@ -1,86 +1,67 @@
- # Logstash Plugin
+ # Logstash input plugin for Azure diagnostics data from Storage Tables
 
- This is a plugin for [Logstash](https://github.com/elastic/logstash).
+ ## Summary
+ This plugin reads Azure diagnostics data from the specified Azure Storage Table and parses the data for output.
 
- It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
-
- ## Documentation
-
- Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+ ## Installation
+ You can install this plugin using the Logstash "plugin" or "logstash-plugin" (for newer versions of Logstash) command:
+ ```sh
+ logstash-plugin install logstash-input-azurewadtable
+ ```
+ For more information, see the Logstash reference [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html).
 
- - For formatting code or config example, you can use the asciidoc `[source,ruby]` directive
- - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
+ ## Configuration
+ ### Required Parameters
+ __*account_name*__
 
- ## Need Help?
+ The Azure Storage account name.
 
- Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+ __*access_key*__
 
- ## Developing
+ The access key to the storage account.
 
- ### 1. Plugin Development and Testing
+ __*table_name*__
 
- #### Code
- - To get started, you'll need JRuby with the Bundler gem installed.
+ The storage table to pull data from.
 
- - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+ ### Optional Parameters
+ __*entity_count_to_process*__
 
- - Install dependencies
- ```sh
- bundle install
- ```
+ The plugin queries and processes table entities in a loop; this parameter specifies the maximum number of entities it should query and process per loop. The default value is 100.
 
- #### Test
+ __*collection_start_time_utc*__
 
- - Update your dependencies
+ Specifies the point in time after which created entities should be included in the query results. The default value is the time when the plugin gets initialized:
 
- ```sh
- bundle install
+ ```ruby
+ Time.now.utc.iso8601
  ```
+ __*etw_pretty_print*__
 
- - Run tests
+ True to pretty-print ETW files, otherwise False. The default value is False.
 
- ```sh
- bundle exec rspec
- ```
+ __*idle_delay_seconds*__
 
- ### 2. Running your unpublished Plugin in Logstash
+ Specifies the number of seconds to wait between each processing loop. The default value is 15.
 
- #### 2.1 Run in a local Logstash clone
+ __*endpoint*__
 
- - Edit Logstash `Gemfile` and add the local plugin path, for example:
- ```ruby
- gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
- ```
- - Install plugin
- ```sh
- bin/logstash-plugin install --no-verify
- ```
- - Run Logstash with your plugin
- ```sh
- bin/logstash -e 'filter {awesome {}}'
- ```
- At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
-
- #### 2.2 Run in an installed Logstash
-
- You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+ Specifies the endpoint of the Azure environment. The default value is "core.windows.net".
 
- - Build your plugin gem
- ```sh
- gem build logstash-filter-awesome.gemspec
+ ### Examples
  ```
- - Install the plugin from the Logstash home
- ```sh
- bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
+ input
+ {
+     azurewadtable
+     {
+         account_name => "mystorageaccount"
+         access_key => "VGhpcyBpcyBhIGZha2Uga2V5Lg=="
+         table_name => "WADWindowsEventLogsTable"
+     }
+ }
  ```
- - Start Logstash and proceed to test the plugin
-
- ## Contributing
-
- All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
-
- Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
 
- It is more important to the community that you are able to contribute.
+ ## More information
+ The source code of this plugin is hosted in the GitHub repo [Microsoft Azure Diagnostics with ELK](https://github.com/Azure/azure-diagnostics-tools). We welcome you to provide feedback and/or contribute to the project.
 
- For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
+ Please also see [Analyze Diagnostics Data with ELK template](https://github.com/Azure/azure-quickstart-templates/tree/master/diagnostics-with-elk) for quick deployment of ELK to Azure.
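The optional parameters documented above describe a poll-and-sleep loop: fetch up to `entity_count_to_process` entities per pass, and back off for `idle_delay_seconds` only when nothing new arrives. A minimal Ruby sketch of that control flow, for illustration only (the `fetch` callable and all names here are hypothetical, not the plugin's API):

```ruby
# Sketch of the documented poll loop: fetch a bounded batch each pass,
# sleep only when the batch comes back empty.
def poll(fetch, entity_count_to_process: 100, idle_delay_seconds: 15, max_loops: 3)
  processed = []
  max_loops.times do
    batch = fetch.call(entity_count_to_process)
    if batch.empty?
      sleep(idle_delay_seconds)   # no new entities: wait before the next pass
    else
      processed.concat(batch)     # new entities: process with no delay
    end
  end
  processed
end

# Two non-empty batches followed by an empty result:
batches = [[1, 2], [3], []]
result = poll(->(_top) { batches.shift || [] }, idle_delay_seconds: 0)
# result == [1, 2, 3]
```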
data/lib/logstash/inputs/{azuretable.rb → azurewadtable.rb} RENAMED
@@ -1,175 +1,170 @@
- # encoding: utf-8
- require "logstash/inputs/base"
- require "logstash/namespace"
- require "stud/interval"
- require "socket" # for Socket.gethostname
- require "time"
- require "azure"
-
- class LogStash::Inputs::Azuretable < LogStash::Inputs::Base
-   class Interrupred < StandardError; end
-
-   config_name "azuretable"
-
-   # If undefined, Logstash will complain, even if codec is unused.
-   default :codec, "plain"
-
-   # The message string to use in the event.
-   config :storage_sas_token, :validate => :string
-   config :table_name, :validate => :string
-   config :entity_count_to_process, :validate => :string, :default => 100
-   config :collection_start_time_utc, :validate => :string, :default => Time.now.utc.iso8601
-   config :etw_pretty_print, :validate => :boolean, :default => false
-   config :idle_delay_seconds, :validate => :number, :default => 15
-   config :endpoint, :validate => :string, :default => "core.windows.net"
-
-   # Default 1 minute delay to ensure all data is published to the table before querying.
-   # See issue #23 for more: https://github.com/Azure/azure-diagnostics-tools/issues/23
-   config :data_latency_minutes, :validate => :number, :default => 1
-
-   public
-   def register
-     @host = Socket.gethostname
-
-     Azure.config.storage_sas_token = @storage_sas_token
-     #Azure.configure do |config|
-     #  config.storage_sas_token = @storage_sas_token
-     #end
-     @azure_table_service = Azure::Table::TableService.new
-     @last_timestamp = @collection_start_time_utc
-     @idle_delay = @idle_delay_seconds
-     @continuation_token = nil
-   end # def register
-
-   def run(queue)
-     while !stop?
-       @logger.debug("Starting process method @" + Time.now.to_s);
-       process(output_queue)
-       @logger.debug("Starting delay of: " + @idle_delay.to_s + " seconds @" + Time.now.to_s);
-       sleep @idle_delay
-     end # while
-   end # def run
-
-   def stop
-     # nothing to do in this case so it is not necessary to define stop
-     # examples of common "stop" tasks:
-     #  * close sockets (unblocking blocking reads/accepts)
-     #  * cleanup temporary files
-     #  * terminate spawned threads
-   end
-
-   def build_latent_query
-     @logger.debug("from #{@last_timestamp} to #{@until_timestamp}")
-     query_filter = "(PartitionKey gt '#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{partitionkey_from_datetime(@until_timestamp)}')"
-     for i in 0..99
-       query_filter << " or (PartitionKey gt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@until_timestamp)}')"
-     end # for block
-     query_filter = query_filter.gsub('"','')
-     query_filter
-   end
-
-   def build_zero_latency_query
-     @logger.debug("from #{@last_timestamp} to most recent data")
-     # query data using start_from_time
-     query_filter = "(PartitionKey gt '#{partitionkey_from_datetime(@last_timestamp)}')"
-     for i in 0..99
-       query_filter << " or (PartitionKey gt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{i.to_s.rjust(19, '0')}___9999999999999999999')"
-     end # for block
-     query_filter = query_filter.gsub('"','')
-     query_filter
-   end
-
-   def process(output_queue)
-     if @data_latency_minutes > 0
-       @until_timestamp = (Time.now - (60 * @data_latency_minutes)).iso8601 unless @continuation_token
-       query_filter = build_latent_query
-     else
-       query_filter = build_zero_latency_query
-     end
-     @logger.debug("Query filter: " + query_filter)
-     query = { :top => @entity_count_to_process, :filter => query_filter, :continuation_token => @continuation_token }
-     result = @azure_table_service.query_entities(@table_name, query)
-     @continuation_token = result.continuation_token
-
-     if result and result.length > 0
-       @logger.debug("#{result.length} results found.")
-       last_good_timestamp = nil
-       result.each do |entity|
-         event = LogStash::Event.new(entity.properties)
-         event.set("type", @table_name)
-
-         # Help pretty print etw files
-         if (@etw_pretty_print && !event.get("EventMessage").nil? && !event.get("Message").nil?)
-           @logger.debug("event: " + event.to_s)
-           eventMessage = event.get("EventMessage").to_s
-           message = event.get("Message").to_s
-           @logger.debug("EventMessage: " + eventMessage)
-           @logger.debug("Message: " + message)
-           if (eventMessage.include? "%")
-             @logger.debug("starting pretty print")
-             toReplace = eventMessage.scan(/%\d+/)
-             payload = message.scan(/(?<!\\S)([a-zA-Z]+)=(\"[^\"]*\")(?!\\S)/)
-             # Split up the format string to separate all of the numbers
-             toReplace.each do |key|
-               @logger.debug("Replacing key: " + key.to_s)
-               index = key.scan(/\d+/).join.to_i
-               newValue = payload[index - 1][1]
-               @logger.debug("New Value: " + newValue)
-               eventMessage[key] = newValue
-             end # do block
-             event.set("EventMessage", eventMessage)
-             @logger.debug("pretty print end. result: " + event.get("EventMessage").to_s)
-           end
-         end
-         decorate(event)
-         if event.get('PreciseTimeStamp').is_a?(Time)
-           event.set('PreciseTimeStamp', LogStash::Timestamp.new(event.get('PreciseTimeStamp')))
-         end
-         theTIMESTAMP = event.get('TIMESTAMP')
-         if theTIMESTAMP.is_a?(LogStash::Timestamp)
-           last_good_timestamp = theTIMESTAMP.to_iso8601
-         elsif theTIMESTAMP.is_a?(Time)
-           last_good_timestamp = theTIMESTAMP.iso8601
-           event.set('TIMESTAMP', LogStash::Timestamp.new(theTIMESTAMP))
-         else
-           @logger.warn("Found result with invalid TIMESTAMP. " + event.to_hash.to_s)
-         end
-         output_queue << event
-       end # each block
-       @idle_delay = 0
-       if (!last_good_timestamp.nil?)
-         @last_timestamp = last_good_timestamp unless @continuation_token
-       end
-     else
-       @logger.debug("No new results found.")
-       @idle_delay = @idle_delay_seconds
-     end # if block
-
-   rescue => e
-     @logger.error("Oh My, An error occurred.", :exception => e)
-     raise
-   end # process
-
-   # Windows Azure Diagnostic's algorithm for determining the partition key based on time is as follows:
-   # 1. Take time in UTC without seconds.
-   # 2. Convert it into .net ticks
-   # 3. add a '0' prefix.
-   def partitionkey_from_datetime(time_string)
-     collection_time = Time.parse(time_string)
-     if collection_time
-       @logger.debug("collection time parsed successfully #{collection_time}")
-     else
-       raise(ArgumentError, "Could not parse the time_string")
-     end # if else block
-
-     collection_time -= collection_time.sec
-     ticks = to_ticks(collection_time)
-     "0#{ticks}"
-   end # partitionkey_from_datetime
-
-   # Convert time to ticks
-   def to_ticks(time_to_convert)
-     @logger.debug("Converting time to ticks")
-     time_to_convert.to_i * 10000000 - TICKS_SINCE_EPOCH
-   end # to_ticks
- end # class LogStash::Inputs::Azuretable
+ # encoding: utf-8
+ require "logstash/inputs/base"
+ require "logstash/namespace"
+ require "time"
+ require "azure/storage"
+
+ class LogStash::Inputs::AzureWADTable < LogStash::Inputs::Base
+   class Interrupted < StandardError; end
+
+   config_name "azurewadtable"
+   milestone 1
+
+   config :storage_account_name, :validate => :string
+   config :storage_sas_token, :validate => :string
+   config :table_name, :validate => :string
+   config :entity_count_to_process, :validate => :string, :default => 100
+   config :collection_start_time_utc, :validate => :string, :default => Time.now.utc.iso8601
+   config :etw_pretty_print, :validate => :boolean, :default => false
+   config :idle_delay_seconds, :validate => :number, :default => 15
+
+   # Default 1 minute delay to ensure all data is published to the table before querying.
+   # See issue #23 for more: https://github.com/Azure/azure-diagnostics-tools/issues/23
+   config :data_latency_minutes, :validate => :number, :default => 1
+
+   TICKS_SINCE_EPOCH = Time.utc(0001, 01, 01).to_i * 10000000
+
+   def initialize(*args)
+     super(*args)
+   end # initialize
+
+   public
+   def register
+     client = Azure::Storage::Client.create(:storage_account_name => @storage_account_name, :storage_sas_token => @storage_sas_token)
+     @azure_table_service = client.table_client
+
+     @last_timestamp = @collection_start_time_utc
+     @idle_delay = @idle_delay_seconds
+     @continuation_token = nil
+   end # register
+
+   public
+   def run(output_queue)
+     while !stop?
+       @logger.debug("Starting process method @" + Time.now.to_s);
+       process(output_queue)
+       @logger.debug("Starting delay of: " + @idle_delay.to_s + " seconds @" + Time.now.to_s);
+       sleep @idle_delay
+     end # while
+   end # run
+
+   public
+   def teardown
+   end
+
+   def build_latent_query
+     @logger.debug("from #{@last_timestamp} to #{@until_timestamp}")
+     query_filter = "(PartitionKey gt '#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{partitionkey_from_datetime(@until_timestamp)}')"
+     for i in 0..99
+       query_filter << " or (PartitionKey gt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@until_timestamp)}')"
+     end # for block
+     query_filter = query_filter.gsub('"','')
+     query_filter
+   end
+
+   def build_zero_latency_query
+     @logger.debug("from #{@last_timestamp} to most recent data")
+     # query data using start_from_time
+     query_filter = "(PartitionKey gt '#{partitionkey_from_datetime(@last_timestamp)}')"
+     for i in 0..99
+       query_filter << " or (PartitionKey gt '#{i.to_s.rjust(19, '0')}___#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{i.to_s.rjust(19, '0')}___9999999999999999999')"
+     end # for block
+     query_filter = query_filter.gsub('"','')
+     query_filter
+   end
+
+   def process(output_queue)
+     if @data_latency_minutes > 0
+       @until_timestamp = (Time.now - (60 * @data_latency_minutes)).iso8601 unless @continuation_token
+       query_filter = build_latent_query
+     else
+       query_filter = build_zero_latency_query
+     end
+     @logger.debug("Query filter: " + query_filter)
+     query = { :top => @entity_count_to_process, :filter => query_filter, :continuation_token => @continuation_token }
+     result = @azure_table_service.query_entities(@table_name, query)
+     @continuation_token = result.continuation_token
+
+     if result and result.length > 0
+       @logger.debug("#{result.length} results found.")
+       last_good_timestamp = nil
+       result.each do |entity|
+         event = LogStash::Event.new(entity.properties)
+         event.set("type", @table_name)
+
+         # Help pretty print etw files
+         if (@etw_pretty_print && !event.get("EventMessage").nil? && !event.get("Message").nil?)
+           @logger.debug("event: " + event.to_s)
+           eventMessage = event.get("EventMessage").to_s
+           message = event.get("Message").to_s
+           @logger.debug("EventMessage: " + eventMessage)
+           @logger.debug("Message: " + message)
+           if (eventMessage.include? "%")
+             @logger.debug("starting pretty print")
+             toReplace = eventMessage.scan(/%\d+/)
+             payload = message.scan(/(?<!\\S)([a-zA-Z]+)=(\"[^\"]*\")(?!\\S)/)
+             # Split up the format string to separate all of the numbers
+             toReplace.each do |key|
+               @logger.debug("Replacing key: " + key.to_s)
+               index = key.scan(/\d+/).join.to_i
+               newValue = payload[index - 1][1]
+               @logger.debug("New Value: " + newValue)
+               eventMessage[key] = newValue
+             end # do block
+             event.set("EventMessage", eventMessage)
+             @logger.debug("pretty print end. result: " + event.get("EventMessage").to_s)
+           end
+         end
+         decorate(event)
+         if event.get('PreciseTimeStamp').is_a?(Time)
+           event.set('PreciseTimeStamp', LogStash::Timestamp.new(event.get('PreciseTimeStamp')))
+         end
+         theTIMESTAMP = event.get('TIMESTAMP')
+         if theTIMESTAMP.is_a?(LogStash::Timestamp)
+           last_good_timestamp = theTIMESTAMP.to_iso8601
+         elsif theTIMESTAMP.is_a?(Time)
+           last_good_timestamp = theTIMESTAMP.iso8601
+           event.set('TIMESTAMP', LogStash::Timestamp.new(theTIMESTAMP))
+         else
+           @logger.warn("Found result with invalid TIMESTAMP. " + event.to_hash.to_s)
+         end
+         output_queue << event
+       end # each block
+       @idle_delay = 0
+       if (!last_good_timestamp.nil?)
+         @last_timestamp = last_good_timestamp unless @continuation_token
+       end
+     else
+       @logger.debug("No new results found.")
+       @idle_delay = @idle_delay_seconds
+     end # if block
+
+   rescue => e
+     @logger.error("Oh My, An error occurred.", :exception => e)
+     raise
+   end # process
+
+   # Windows Azure Diagnostic's algorithm for determining the partition key based on time is as follows:
+   # 1. Take time in UTC without seconds.
+   # 2. Convert it into .net ticks
+   # 3. add a '0' prefix.
+   def partitionkey_from_datetime(time_string)
+     collection_time = Time.parse(time_string)
+     if collection_time
+       @logger.debug("collection time parsed successfully #{collection_time}")
+     else
+       raise(ArgumentError, "Could not parse the time_string")
+     end # if else block
+
+     collection_time -= collection_time.sec
+     ticks = to_ticks(collection_time)
+     "0#{ticks}"
+   end # partitionkey_from_datetime
+
+   # Convert time to ticks
+   def to_ticks(time_to_convert)
+     @logger.debug("Converting time to ticks")
+     time_to_convert.to_i * 10000000 - TICKS_SINCE_EPOCH
+   end # to_ticks
+
+ end # LogStash::Inputs::AzureWADTable
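The `partitionkey_from_datetime` comment above describes WAD's scheme: take the UTC time with seconds dropped, convert it to .NET ticks (100-nanosecond intervals since 0001-01-01), and prefix the result with '0'. A standalone sketch of that arithmetic, mirroring the plugin code minus logging:

```ruby
require "time"

# .NET ticks count 100-ns intervals since 0001-01-01 UTC; this constant
# shifts Ruby's Unix-epoch seconds onto that scale, as in the plugin.
TICKS_SINCE_EPOCH = Time.utc(1, 1, 1).to_i * 10_000_000

def partitionkey_from_datetime(time_string)
  t = Time.parse(time_string)
  t -= t.sec                                   # WAD keys have minute granularity
  "0#{t.to_i * 10_000_000 - TICKS_SINCE_EPOCH}"
end

partitionkey_from_datetime("1970-01-01T00:00:30Z")
# => "0621355968000000000" (the .NET tick count at the Unix epoch)
```

Because the seconds are stripped, any timestamp within the same minute maps to the same partition key, which is why the range filters above compare with `gt`/`lt` rather than equality.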
data/logstash-input-azurewadtable.gemspec ADDED
@@ -0,0 +1,25 @@
+ Gem::Specification.new do |s|
+   s.name = 'logstash-input-azuretable'
+   s.version = '0.1.6'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = "CLONED!!! - This plugin collects Microsoft Azure Diagnostics data from Azure Storage Tables."
+   s.description = "CLONED!!! - This gem is a Logstash plugin. It reads and parses diagnostics data from Azure Storage Tables."
+   s.authors = ["Microsoft Corporation"]
+   s.email = ''
+   s.homepage = "https://github.com/chris-evans/azure-diagnostics-tools"
+   s.require_paths = ["lib"]
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','Gemfile','LICENSE']
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
+
+   # Gem dependencies
+   s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
+   s.add_runtime_dependency 'azure-storage', '~> 0.12.1.preview'
+   s.add_development_dependency 'logstash-devutils', '>= 1.1.0'
+ end
+
data/spec/inputs/azurewadtable_spec.rb ADDED
@@ -0,0 +1 @@
+ require "logstash/devutils/rspec/spec_helper"
metadata CHANGED
@@ -1,10 +1,10 @@
  --- !ruby/object:Gem::Specification
  name: logstash-input-azuretable
  version: !ruby/object:Gem::Version
-   version: 0.1.4
+   version: 0.1.6
  platform: ruby
  authors:
- - ''
+ - Microsoft Corporation
  autorequire:
  bindir: bin
  cert_chain: []
@@ -12,66 +12,44 @@ date: 2017-06-29 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: '2.0'
-   name: logstash-core-plugin-api
-   prerelease: false
-   type: :runtime
-   version_requirements: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: '2.0'
- - !ruby/object:Gem::Dependency
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - ">="
-       - !ruby/object:Gem::Version
-         version: '0'
-   name: logstash-codec-plain
-   prerelease: false
-   type: :runtime
-   version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '0'
- - !ruby/object:Gem::Dependency
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - ">="
+         version: '1.60'
+     - - "<="
        - !ruby/object:Gem::Version
-         version: 0.0.22
-   name: stud
+         version: '2.99'
+   name: logstash-core-plugin-api
    prerelease: false
    type: :runtime
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 0.0.22
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: 0.7.9
-   name: azure
+         version: 0.12.1.preview
+   name: azure-storage
    prerelease: false
    type: :runtime
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: 0.7.9
+         version: 0.12.1.preview
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 0.0.16
+         version: 1.1.0
    name: logstash-devutils
    prerelease: false
    type: :development
@@ -79,23 +57,22 @@ dependencies:
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 0.0.16
- description: WADLogsTable Logging
+         version: 1.1.0
+ description: CLONED!!! - This gem is a Logstash plugin. It reads and parses diagnostics
+   data from Azure Storage Tables.
  email: ''
  executables: []
  extensions: []
  extra_rdoc_files: []
  files:
  - CHANGELOG.md
- - CONTRIBUTORS
- - DEVELOPER.md
  - Gemfile
  - LICENSE
  - README.md
- - lib/logstash/inputs/azuretable.rb
- - logstash-input-azuretable.gemspec
- - spec/inputs/azuretable_spec.rb
- homepage: ''
+ - lib/logstash/inputs/azurewadtable.rb
+ - logstash-input-azurewadtable.gemspec
+ - spec/inputs/azurewadtable_spec.rb
+ homepage: https://github.com/chris-evans/azure-diagnostics-tools
  licenses:
  - Apache License (2.0)
  metadata:
@@ -117,9 +94,10 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.4
+ rubygems_version: 2.6.11
  signing_key:
  specification_version: 4
- summary: WADLogsTable Logging
+ summary: CLONED!!! - This plugin collects Microsoft Azure Diagnostics data from Azure
+   Storage Tables.
  test_files:
- - spec/inputs/azuretable_spec.rb
+ - spec/inputs/azurewadtable_spec.rb
data/CONTRIBUTORS DELETED
@@ -1,10 +0,0 @@
- The following is a list of people who have contributed ideas, code, bug
- reports, or in general have helped logstash along its way.
-
- Contributors:
- * -
-
- Note: If you've sent us patches, bug reports, or otherwise contributed to
- Logstash, and you aren't on the list above and want to be, please let us know
- and we'll make sure you're here. Contributions from folks like you are what make
- open source awesome.
data/DEVELOPER.md DELETED
@@ -1,2 +0,0 @@
- # logstash-input-azuretable
- Example input plugin. This should help bootstrap your effort to write your own input plugin!
data/logstash-input-azuretable.gemspec DELETED
@@ -1,26 +0,0 @@
- Gem::Specification.new do |s|
-   s.name = 'logstash-input-azuretable'
-   s.version = '0.1.4'
-   s.licenses = ['Apache License (2.0)']
-   s.summary = 'WADLogsTable Logging'
-   s.description = 'WADLogsTable Logging'
-   s.homepage = ''
-   s.authors = ['']
-   s.email = ''
-   s.require_paths = ['lib']
-
-   # Files
-   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
-   # Tests
-   s.test_files = s.files.grep(%r{^(test|spec|features)/})
-
-   # Special flag to let us know this is actually a logstash plugin
-   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
-
-   # Gem dependencies
-   s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
-   s.add_runtime_dependency 'logstash-codec-plain'
-   s.add_runtime_dependency 'stud', '>= 0.0.22'
-   s.add_runtime_dependency 'azure', '>= 0.7.9'
-   s.add_development_dependency 'logstash-devutils', '>= 0.0.16'
- end
data/spec/inputs/azuretable_spec.rb DELETED
@@ -1,11 +0,0 @@
- # encoding: utf-8
- require "logstash/devutils/rspec/spec_helper"
- require "logstash/inputs/azuretable"
-
- describe LogStash::Inputs::Azuretable do
-
-   it_behaves_like "an interruptible input plugin" do
-     let(:config) { { "interval" => 100 } }
-   end
-
- end