logstash-input-azure_blob_storage 0.11.5 → 0.12.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 3d446aed971a95e6e17a27ed1e9ec8b141f939b53697fb9c332cfb130404745a
- data.tar.gz: 4a1321f6c6a30f6787d2133642ca23840371d6f4e18102cb775d345b09eb176a
+ metadata.gz: 202f43b2d085a872fef61594e05dbc3eb19188863a20b5f4ffece2ef66608d15
+ data.tar.gz: 3b13f7a610c7b541ee7265340103011998d5c1799dfc7a1057add7307f1550dc
  SHA512:
- metadata.gz: b4f48a0bebcd6e3594584a4473b223838359d44e9ef591f958aa4c80c4c22953f6b0f708b19faeaf0517c66f47185bda4de75ab4e3618b23e2e7f23f71cb4bee
- data.tar.gz: 508cd39ea159a4655e590f46ad0108c3b6e6de95ed575c4456da0230bae73fb384ecb7697ed710e7afb1542fe01cbd8a62130acedcbf0ba9c3040ace1f9d76d0
+ metadata.gz: 0af0f2e0f0f955849840190ebfd60185ae5b0465e4d509f40b99bf075da44ed50e0120c8d781d2455b0b16969cc4c75f8a3ff9e71431e73e0bfaa03dc284b708
+ data.tar.gz: 3e7907fe4700bfe3640b5783d63b17d0f153061b0a57bb61964ca08f83332e616e158f754904807a1b804be0ba7a64be4c6d92aa7ee9393e42ca33a8ac9e54a4
data/CHANGELOG.md CHANGED
@@ -1,6 +1,22 @@
+ ## 0.12.0
+ - version 2 of azure-storage
+ - saving current files registry, not keeping historical files
+
+ ## 0.11.7
+ - implemented skip_learning
+ - start ignoring failed files and not retry
+
+ ## 0.11.6
+ - fix in json head and tail learning the max_results
+ - broke out connection setup in order to call it again if connection exceptions come
+ - deal better with skipping of empty files.
+
  ## 0.11.5
- - Added optional filename into the message
- - plumbing for emulator, start_over not learning from registry
+ - added optional addfilename to add filename in message
+ - NSGFLOWLOG version 2 uses 0 as value instead of NULL in src and dst values
+ - added connection exception handling when full_read files
+ - rewritten json header footer learning to ignore learning from registry
+ - plumbing for emulator

  ## 0.11.4
  - fixed listing 3 times, rather than retrying to list max 3 times
data/README.md CHANGED
@@ -1,30 +1,34 @@
- # Logstash Plugin
+ # Logstash

- This is a plugin for [Logstash](https://github.com/elastic/logstash).
+ This is a plugin for [Logstash](https://github.com/elastic/logstash). It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way. All logstash plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/). Need generic logstash help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

- It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+ For problems or feature requests with this specific plugin, raise a github issue [GITHUB/janmg/logstash-input-azure_blob_storage/](https://github.com/janmg/logstash-input-azure_blob_storage). Pull requests are also welcome after discussion through an issue.

- ## Documentation
-
- All logstash plugin documentation are placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+ ## Purpose
+ This plugin can read from Azure Storage Blobs, for instance JSON diagnostics logs for NSG flow logs or line-based access logs from App Services.
+ [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)

- ## Need Help?
+ The plugin depends on the [Ruby library azure-storage-blob](https://rubygems.org/gems/azure-storage-blob/versions/1.1.0) from Microsoft, which depends on Faraday for the HTTPS connection to Azure.

- Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum. For real problems or feature requests, raise a github issue [GITHUB/janmg/logstash-input-azure_blob_storage/](https://github.com/janmg/logstash-input-azure_blob_storage). Pull requests will ionly be merged after discussion through an issue.
+ The plugin executes the following steps (a sketch of the registry delta appears after this list)
+ 1. List all the files in the azure storage account where the path of the files matches the prefix
+ 2. Filter on path_filters to only include files that match the directory and file glob (e.g. **/*.json)
+ 3. Save the listed files in a registry of known files and filesizes (data/registry.dat on azure, or in a file on the logstash instance)
+ 4. List all the files again, compare the registry with the new filelist and put the delta in a worklist
+ 5. Process the worklist and put all events in the logstash queue.
+ 6. If there is time left, sleep to complete the interval. If processing takes more than an interval, save the registry and continue processing.
+ 7. If logstash is stopped, a stop signal will try to finish the current file, save the registry and then quit
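A minimal Ruby sketch of steps 3-5 above (the registry delta and worklist). The names `registry`, `newreg` and `worklist` mirror the plugin code shown further down in this diff; the blob names, sizes and offsets are illustrative:

```
# registry: blob name => { :offset => bytes already processed, :length => last known size }
registry = {
  "resourceId=/EXAMPLE/PT1H.json" => { :offset => 2576, :length => 2576 }
}

# a fresh listing of the container (hypothetical sizes)
listing = {
  "resourceId=/EXAMPLE/PT1H.json" => 4096,   # grew since the last interval
  "resourceId=/EXAMPLE/PT2H.json" => 1024    # new file
}

# merge the listing into a new registry, keeping the offsets of already known files
newreg = {}
listing.each do |name, length|
  offset = registry.key?(name) ? registry[name][:offset] : 0
  newreg[name] = { :offset => offset, :length => length }
end

# the worklist is every blob whose size exceeds the already processed offset
worklist = newreg.select { |_name, file| file[:offset] < file[:length] }
# => both blobs end up on the worklist; fully processed blobs are skipped
```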
 
- ## Purpose
- This plugin can read from Azure Storage Blobs, for instance diagnostics logs for NSG flow logs or accesslogs from App Services.
- [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
- This
  ## Installation
  This plugin can be installed through logstash-plugin
  ```
- logstash-plugin install logstash-input-azure_blob_storage
+ /usr/share/logstash/bin/logstash-plugin install logstash-input-azure_blob_storage
  ```

  ## Minimal Configuration
  The minimum configuration required as input is storageaccount, access_key and container.

+ /etc/logstash/conf.d/test.conf
  ```
  input {
  azure_blob_storage {
@@ -36,27 +40,29 @@ input {
  ```
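For reference, a minimal sketch of such a configuration with placeholder values (the block above is truncated in this diff view; the placeholder values are the same ones used in the larger example further down):

```
input {
    azure_blob_storage {
        storageaccount => "yourstorageaccountname"
        access_key => "Ba5e64c0d3=="
        container => "insights-logs-networksecuritygroupflowevent"
    }
}
```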

  ## Additional Configuration
- The registry_create_policy is used when the pipeline is started to either resume from the last known unprocessed file, or to start_fresh ignoring old files or start_over to process all the files from the beginning.
+ The registry keeps track of the files in the storage account, their size and how many bytes have been processed. Files can grow and the added part will be processed as a partial file. The registry is saved to disk every interval.

- interval defines the minimum time the registry should be saved to the registry file (by default 'data/registry.dat'), this is only needed in case the pipeline dies unexpectedly. During a normal shutdown the registry is also saved.
+ The registry_create_policy determines at the start of the pipeline whether processing should resume from the last known unprocessed file, start_fresh to ignore old files and only process events that arrive after the pipeline has started, or start_over to process all the files and ignore the registry.

- When registry_local_path is set to a directory, the registry is save on the logstash server in that directory. The filename is the pipe.id
+ interval defines the minimum time the registry should be saved to the registry file (by default 'data/registry.dat'); this is only needed in case the pipeline dies unexpectedly. During a normal shutdown the registry is also saved.

- with registry_create_policy set to resume and the registry_local_path set to a directory where the registry isn't yet created, should load from the storage account and save the registry on the local server
+ When registry_local_path is set to a directory, the registry is saved on the logstash server in that directory. The filename is the pipe.id.

- During the pipeline start for JSON codec, the plugin uses one file to learn how the JSON header and tail look like, they can also be configured manually.
+ With registry_create_policy set to resume and registry_local_path set to a directory where the registry isn't yet created, the plugin will load the registry from the storage account and save it on the local server. This allows for a migration to local storage.
+
+ For pipelines that use the JSON codec or the JSON_LINE codec, the plugin uses one file to learn what the JSON header and tail look like; they can also be configured manually, as in the sketch below. Learning can be disabled with skip_learning.
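A hedged configuration sketch combining the registry and JSON-learning options described above; the values are illustrative, and the ']}' default for file_tail is an assumption based on the comments in the plugin source:

```
input {
    azure_blob_storage {
        storageaccount => "yourstorageaccountname"
        access_key => "Ba5e64c0d3=="
        container => "insights-logs-networksecuritygroupflowevent"
        codec => "json"
        registry_create_policy => "resume"
        registry_local_path => "/usr/share/logstash/plugin"
        interval => 300
        skip_learning => true
        file_head => '{"records":['
        file_tail => ']}'
    }
}
```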
 
  ## Running the pipeline
  The pipeline can be started in several ways.
  - On the commandline
  ```
- /usr/share/logstash/bin/logtash -f /etc/logstash/pipeline.d/test.yml
+ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
  ```
  - In the pipeline.yml
  ```
  /etc/logstash/pipeline.yml
  pipe.id = test
- pipe.path = /etc/logstash/pipeline.d/test.yml
+ pipe.path = /etc/logstash/conf.d/test.conf
  ```
  - As managed pipeline from Kibana
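For reference, in a stock Logstash install the file is usually /etc/logstash/pipelines.yml, and the equivalent stanza would look roughly like this (assuming the standard pipeline.id and path.config keys):

```
- pipeline.id: test
  path.config: "/etc/logstash/conf.d/test.conf"
```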
 
@@ -95,7 +101,9 @@ The log level of the plugin can be put into DEBUG through
  curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'{"logger.logstash.inputs.azureblobstorage" : "DEBUG"}'
  ```

- because debug also makes logstash chatty, there are also debug_timer and debug_until that can be used to print additional informantion on what the pipeline is doing and how long it takes. debug_until is for the number of events until debug is disabled.
+ Because logstash debug makes logstash very chatty, the option debug_until keeps the extra logging on for a number of processed events and then stops debugging. One file can easily contain thousands of events. debug_until is useful to monitor the start of the plugin and the processing of the first files.
+
+ debug_timer will show detailed information on how much time the listing of files took and how long the plugin will sleep to fill the interval before the listing and processing starts again.
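To turn the extra logging off again, the same logging API can set the logger back to INFO, for example:

```
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'{"logger.logstash.inputs.azureblobstorage" : "INFO"}'
```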

  ## Other Configuration Examples
  For nsgflowlogs, a simple configuration looks like this
@@ -121,6 +129,10 @@ filter {
  }
  }

+ output {
+ stdout { }
+ }
+
  output {
  elasticsearch {
  hosts => "elasticsearch"
@@ -128,21 +140,35 @@ output {
  }
  }
  ```
-
+ A more elaborate input configuration example
  ```
  input {
  azure_blob_storage {
+ codec => "json"
  storageaccount => "yourstorageaccountname"
  access_key => "Ba5e64c0d3=="
  container => "insights-logs-networksecuritygroupflowevent"
- codec => "json"
  logtype => "nsgflowlog"
  prefix => "resourceId=/"
+ path_filters => ['**/*.json']
+ addfilename => true
  registry_create_policy => "resume"
+ registry_local_path => "/usr/share/logstash/plugin"
  interval => 300
+ debug_timer => true
+ debug_until => 100
+ }
+ }
+
+ output {
+ elasticsearch {
+ hosts => "elasticsearch"
+ index => "nsg-flow-logs-%{+xxxx.ww}"
  }
  }
  ```
+ The configuration documentation is in the first 100 lines of the code
+ [GITHUB/janmg/logstash-input-azure_blob_storage/blob/master/lib/logstash/inputs/azure_blob_storage.rb](https://github.com/janmg/logstash-input-azure_blob_storage/blob/master/lib/logstash/inputs/azure_blob_storage.rb)

  For WAD IIS and App Services the HTTP AccessLogs can be retrieved from a storage account as line based events and parsed through GROK. The date stamp can also be parsed with %{TIMESTAMP_ISO8601:log_timestamp}. For WAD IIS logfiles the container is wad-iis-logfiles. In the future grokking may happen already by the plugin.
  ```
lib/logstash/inputs/azure_blob_storage.rb CHANGED
@@ -61,7 +61,9 @@ config :registry_create_policy, :validate => ['resume','start_over','start_fresh
  # Z00000000000000000000000000000000 2 ]}
  config :interval, :validate => :number, :default => 60

+ # add the filename into the events
  config :addfilename, :validate => :boolean, :default => false, :required => false
+
  # debug_until will for a maximum amount of processed messages shows 3 types of log printouts including processed filenames. This is a lightweight alternative to switching the loglevel from info to debug or even trace
  config :debug_until, :validate => :number, :default => 0, :required => false

@@ -71,6 +73,9 @@ config :debug_timer, :validate => :boolean, :default => false, :required => fals
  # WAD IIS Grok Pattern
  #config :grokpattern, :validate => :string, :required => false, :default => '%{TIMESTAMP_ISO8601:log_timestamp} %{NOTSPACE:instanceId} %{NOTSPACE:instanceId2} %{IPORHOST:ServerIP} %{WORD:httpMethod} %{URIPATH:requestUri} %{NOTSPACE:requestQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:httpVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:host} %{NUMBER:httpStatus} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:sentBytes:int} %{NUMBER:receivedBytes:int} %{NUMBER:timeTaken:int}'

+ # skip learning if you use json and don't want to learn the head and tail, but use either the defaults or configure them.
+ config :skip_learning, :validate => :boolean, :default => false, :required => false
+
  # The string that starts the JSON. Only needed when the codec is JSON. When partial file are read, the result will not be valid JSON unless the start and end are put back. the file_head and file_tail are learned at startup, by reading the first file in the blob_list and taking the first and last block, this would work for blobs that are appended like nsgflowlogs. The configuration can be set to override the learning. In case learning fails and the option is not set, the default is to use the 'records' as set by nsgflowlogs.
  config :file_head, :validate => :string, :required => false, :default => '{"records":['
  # The string that ends the JSON
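As an illustration of why the head and tail matter for partial reads, a minimal Ruby sketch; the partial content and the ']}' tail are illustrative assumptions:

```
require 'json'

head = '{"records":['   # default file_head
tail = ']}'             # assumed default file_tail

# a partial read only returns the records appended since the last offset (hypothetical content)
partial = '{"flow":"record-42"},{"flow":"record-43"}'

# on its own the partial chunk is not valid JSON; wrapping it with head and tail makes it parseable
records = JSON.parse(head + partial + tail)["records"]
puts records.length   # => 2
```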
@@ -113,34 +118,7 @@ def run(queue)
  @processed = 0
  @regsaved = @processed

- # Try in this order to access the storageaccount
- # 1. storageaccount / sas_token
- # 2. connection_string
- # 3. storageaccount / access_key
-
- unless connection_string.nil?
- conn = connection_string.value
- end
- unless sas_token.nil?
- unless sas_token.value.start_with?('?')
- conn = "BlobEndpoint=https://#{storageaccount}.#{dns_suffix};SharedAccessSignature=#{sas_token.value}"
- else
- conn = sas_token.value
- end
- end
- unless conn.nil?
- @blob_client = Azure::Storage::Blob::BlobService.create_from_connection_string(conn)
- else
- # unless use_development_storage?
- @blob_client = Azure::Storage::Blob::BlobService.create(
- storage_account_name: storageaccount,
- storage_dns_suffix: dns_suffix,
- storage_access_key: access_key.value,
- )
- # else
- # @logger.info("not yet implemented")
- # end
- end
+ connect

  @registry = Hash.new
  if registry_create_policy == "resume"
@@ -175,7 +153,7 @@ def run(queue)
  if registry_create_policy == "start_fresh"
  @registry = list_blobs(true)
  save_registry(@registry)
- @logger.info("starting fresh, writing a clean the registry to contain #{@registry.size} blobs/files")
+ @logger.info("starting fresh, writing a clean registry to contain #{@registry.size} blobs/files")
  end

  @is_json = false
@@ -188,12 +166,14 @@ def run(queue)
  @tail = ''
  # if codec=json sniff one files blocks A and Z to learn file_head and file_tail
  if @is_json
- learn_encapsulation
  if file_head
- @head = file_head
+ @head = file_head
  end
  if file_tail
- @tail = file_tail
+ @tail = file_tail
+ end
+ if file_head and file_tail and !skip_learning
+ learn_encapsulation
  end
  @logger.info("head will be: #{@head} and tail is set to #{@tail}")
  end
@@ -233,7 +213,10 @@ def run(queue)
  end
  # size nilClass when the list doesn't grow?!
  # Worklist is the subset of files where the already read offset is smaller than the file size
- worklist.clear
+ @registry = newreg
+ worklist.clear
+ chunk = nil
+
  worklist = newreg.select {|name,file| file[:offset] < file[:length]}
  if (worklist.size > 4) then @logger.info("worklist contains #{worklist.size} blobs") end

@@ -246,17 +229,28 @@ def run(queue)
  size = 0
  if file[:offset] == 0
  # This is where Sera4000 issue starts
- begin
- chunk = full_read(name)
- size=chunk.size
- rescue Exception => e
- @logger.error("Failed to read #{name} because of: #{e.message} .. will continue and pretend this never happened")
+ # For an append blob, reading full and crashing, retry, last_modified? ... length? ... committed? ...
+ # length and skip reg value
+ if (file[:length] > 0)
+ begin
+ chunk = full_read(name)
+ size=chunk.size
+ rescue Exception => e
+ @logger.error("Failed to read #{name} because of: #{e.message} .. will continue, set file as read and pretend this never happened")
+ @logger.error("#{size} size and #{file[:length]} file length")
+ size = file[:length]
+ end
+ else
+ @logger.info("found a zero size file #{name}")
+ chunk = nil
  end
  else
  chunk = partial_read_json(name, file[:offset], file[:length])
  @logger.debug("partial file #{name} from #{file[:offset]} to #{file[:length]}")
  end
  if logtype == "nsgflowlog" && @is_json
+ # skip empty chunks
+ unless chunk.nil?
  res = resource(name)
  begin
  fingjson = JSON.parse(chunk)
@@ -265,6 +259,7 @@ def run(queue)
  rescue JSON::ParserError
  @logger.error("parse error on #{res[:nsg]} [#{res[:date]}] offset: #{file[:offset]} length: #{file[:length]}")
  end
+ end
  # TODO: Convert this to line based grokking.
  # TODO: ECS Compliance?
  elsif logtype == "wadiis" && !@is_json
@@ -272,7 +267,7 @@ def run(queue)
  else
  counter = 0
  begin
- @codec.decode(chunk) do |event|
+ @codec.decode(chunk) do |event|
  counter += 1
  if @addfilename
  event.set('filename', name)
@@ -282,6 +277,7 @@ def run(queue)
  end
  rescue Exception => e
  @logger.error("codec exception: #{e.message} .. will continue and pretend this never happened")
+ @registry.store(name, { :offset => file[:length], :length => file[:length] })
  @logger.debug("#{chunk}")
  end
  @processed += counter
@@ -321,8 +317,54 @@ end


  private
+ def connect
+ # Try in this order to access the storageaccount
+ # 1. storageaccount / sas_token
+ # 2. connection_string
+ # 3. storageaccount / access_key
+
+ unless connection_string.nil?
+ conn = connection_string.value
+ end
+ unless sas_token.nil?
+ unless sas_token.value.start_with?('?')
+ conn = "BlobEndpoint=https://#{storageaccount}.#{dns_suffix};SharedAccessSignature=#{sas_token.value}"
+ else
+ conn = sas_token.value
+ end
+ end
+ unless conn.nil?
+ @blob_client = Azure::Storage::Blob::BlobService.create_from_connection_string(conn)
+ else
+ # unless use_development_storage?
+ @blob_client = Azure::Storage::Blob::BlobService.create(
+ storage_account_name: storageaccount,
+ storage_dns_suffix: dns_suffix,
+ storage_access_key: access_key.value,
+ )
+ # else
+ # @logger.info("not yet implemented")
+ # end
+ end
+ end
+
  def full_read(filename)
- return @blob_client.get_blob(container, filename)[1]
+ tries ||= 2
+ begin
+ return @blob_client.get_blob(container, filename)[1]
+ rescue Exception => e
+ @logger.error("caught: #{e.message} for full_read")
+ if (tries -= 1) > 0
+ if e.message == "Connection reset by peer"
+ connect
+ end
+ retry
+ end
+ end
+ begin
+ chunk = @blob_client.get_blob(container, filename)[1]
+ end
+ return chunk
  end

  def partial_read_json(filename, offset, length)
@@ -347,6 +389,7 @@ end

  def nsgflowlog(queue, json, name)
  count=0
+ begin
  json["records"].each do |record|
  res = resource(record["resourceId"])
  resource = { :subscription => res[:subscription], :resourcegroup => res[:resourcegroup], :nsg => res[:nsg] }
@@ -376,6 +419,9 @@ def nsgflowlog(queue, json, name)
  end
  end
  end
+ rescue Exception => e
+ @logger.error("NSG Flowlog problem for #{name}, with #{json["records"].size} records and error message #{e.message}")
+ end
  return count
  end

@@ -473,9 +519,10 @@ end


  def learn_encapsulation
+ @logger.info("learn_encapsulation, this can be skipped by setting skip_learning => true. Or set both file_head and file_tail")
  # From one file, read first block and last block to learn head and tail
  begin
- blobs = @blob_client.list_blobs(container, { maxresults: 3, prefix: @prefix})
+ blobs = @blob_client.list_blobs(container, { max_results: 3, prefix: @prefix})
  blobs.each do |blob|
  unless blob.name == registry_path
  begin
logstash-input-azure_blob_storage.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-input-azure_blob_storage'
- s.version = '0.11.5'
+ s.version = '0.12.1'
  s.licenses = ['Apache-2.0']
  s.summary = 'This logstash plugin reads and parses data from Azure Storage Blobs.'
  s.description = <<-EOF
@@ -22,6 +22,6 @@ EOF
  # Gem dependencies
  s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2.1'
  s.add_runtime_dependency 'stud', '~> 0.0.23'
- s.add_runtime_dependency 'azure-storage-blob', '~> 1.1'
+ s.add_runtime_dependency 'azure-storage-blob', '~> 2', '>= 2.0.3'
  #s.add_development_dependency 'logstash-devutils', '~> 2'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-input-azure_blob_storage
  version: !ruby/object:Gem::Version
- version: 0.11.5
+ version: 0.12.1
  platform: ruby
  authors:
  - Jan Geertsma
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-12-19 00:00:00.000000000 Z
+ date: 2021-12-21 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
@@ -17,8 +17,8 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '2.1'
  name: logstash-core-plugin-api
- type: :runtime
  prerelease: false
+ type: :runtime
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
@@ -31,8 +31,8 @@ dependencies:
  - !ruby/object:Gem::Version
  version: 0.0.23
  name: stud
- type: :runtime
  prerelease: false
+ type: :runtime
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
@@ -43,15 +43,21 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.1'
+ version: '2'
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 2.0.3
  name: azure-storage-blob
- type: :runtime
  prerelease: false
+ type: :runtime
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.1'
+ version: '2'
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 2.0.3
  description: " This gem is a Logstash plugin. It reads and parses data from Azure\
  \ Storage Blobs. The azure_blob_storage is a reimplementation to replace azureblob\
  \ from azure-diagnostics-tools/Logstash. It can deal with larger volumes and partial\
@@ -92,7 +98,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.0.6
+ rubygems_version: 3.1.6
  signing_key:
  specification_version: 4
  summary: This logstash plugin reads and parses data from Azure Storage Blobs.