logstash-filter-aggregate 2.8.0 → 2.9.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +16 -9
- data/docs/index.asciidoc +12 -3
- data/lib/logstash/filters/aggregate.rb +74 -14
- data/logstash-filter-aggregate.gemspec +1 -1
- data/spec/filters/aggregate_spec.rb +31 -0
- metadata +2 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 468412047e7db18515ba09a4a112753019e7bcb1
+  data.tar.gz: db82a1a0b681c38c600d72a8bc0a9d8215013437
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4b5ac8ba6307a2e41d582655b34ecd4d2c2da808ac0f49dc332107676631eb32155e19dd3c870783e06babd39f55a7f2e5346d3c1d565d3a4de5342a0969452e
+  data.tar.gz: fd827e4944b5f30dd3b53561a3914668911861584f125302136179d4d27d4cfb66f957438994b6f42c188eb778264da5f5c46452ff216ddba6d66d9566536512
data/CHANGELOG.md
CHANGED
@@ -1,9 +1,16 @@
+## 2.9.0
+- new feature: add ability to dynamically define a custom `timeout` or `inactivity_timeout` in `code` block (fix issues [#91](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/91) and [#92](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/92))
+- new feature: add meta informations available in `code` block through `map_meta` variable
+- new feature: add Logstash metrics, specific to aggregate plugin: aggregate_maps, pushed_events, task_timeouts, code_errors, timeout_code_errors
+- new feature: validate at startup that `map_action` option equals to 'create', 'update' or 'create_or_update'
+
 ## 2.8.0
-- new feature: add 'timeout_timestamp_field' option.
-  When set, this option lets to compute timeout based on event timestamp field (and not system time).
+- new feature: add 'timeout_timestamp_field' option (fix issue [#81](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/81))
+  When set, this option lets to compute timeout based on event timestamp field (and not system time).
+  It's particularly useful when processing old logs.
 
 ## 2.7.2
-- bugfix: fix synchronisation issue at Logstash shutdown (#75)
+- bugfix: fix synchronisation issue at Logstash shutdown (issue [#75](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/75))
 
 ## 2.7.1
 - docs: update gemspec summary
@@ -32,7 +39,7 @@
 Events for a given `task_id` will be aggregated for as long as they keep arriving within the defined `inactivity_timeout` option - the inactivity timeout is reset each time a new event happens. On the contrary, `timeout` is never reset and happens after `timeout` seconds since aggregation map creation.
 
 ## 2.5.2
-- bugfix: fix 'aggregate_maps_path' load (issue #62). Re-start of Logstash died when no data were provided in 'aggregate_maps_path' file for some aggregate task_id patterns
+- bugfix: fix 'aggregate_maps_path' load (issue [#62](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/62)). Re-start of Logstash died when no data were provided in 'aggregate_maps_path' file for some aggregate task_id patterns
 - enhancement: at Logstash startup, check that 'task_id' option contains a field reference expression (else raise error)
 - docs: enhance examples
 - docs: precise that tasks are tied to their task_id pattern, even if they have same task_id value
@@ -50,7 +57,7 @@
 - breaking: need Logstash 2.4 or later
 
 ## 2.4.0
-- new feature: You can now define timeout options per task_id pattern (#42)
+- new feature: You can now define timeout options per task_id pattern (fix issue [#42](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/42))
   timeout options are : `timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags`
 - validation: a configuration error is thrown at startup if you define any timeout option on several aggregate filters for the same task_id pattern
 - breaking: if you use `aggregate_maps_path` option, storage format has changed. So you have to delete `aggregate_maps_path` file before starting Logstash
@@ -84,14 +91,14 @@
 - internal,deps: New dependency requirements for logstash-core for the 5.0 release
 
 ## 2.0.3
-- bugfix: fix issue #10 : numeric task_id is now well processed
+- bugfix: fix issue [#10](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/10) : numeric task_id is now well processed
 
 ## 2.0.2
-- bugfix: fix issue #5 : when code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. It avoids logstash crash.
+- bugfix: fix issue [#5](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/5) : when code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. It avoids logstash crash.
 
 ## 2.0.0
-- internal: Plugins were updated to follow the new shutdown semantic, this mainly allows Logstash to instruct input plugins to terminate gracefully,
-
+- internal: Plugins were updated to follow the new shutdown semantic, this mainly allows Logstash to instruct input plugins to terminate gracefully, instead of using Thread.raise on the plugins' threads.
+  Ref: https://github.com/elastic/logstash/pull/3895
 - internal,deps: Dependency on logstash-core update to 2.0
 
 ## 0.1.3
data/docs/index.asciidoc
CHANGED
@@ -397,11 +397,20 @@ Example:
 * Value type is <<string,string>>
 * There is no default value for this setting.
 
-The code to execute to update map, using current event.
+The code to execute to update aggregated map, using current event.
 
-Or on the contrary, the code to execute to update event, using
+Or on the contrary, the code to execute to update event, using aggregated map.
 
-
+Available variables are :
+
+`event`: current Logstash event
+
+`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby http://ruby-doc.org/core-1.9.1/Hash.html[Hash]
+
+`map_meta`: meta informations associated to aggregate map. It allows to set a custom `timeout` or `inactivity_timeout`.
+It allows also to get `creation_timestamp`, `lastevent_timestamp` and `task_id`.
+
+When option push_map_as_event_on_timeout=true, if you set `map_meta.timeout=0` in `code` block, then aggregated map is immediately pushed as a new event.
 
 Example:
 [source,ruby]
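The `map_meta` variables documented above can be illustrated with a hedged config sketch, in the style of the plugin's own `[source,ruby]` examples. The `state`, `task_id` and `duration` field names are invented for this example; only `map`, `map_meta` and `event.get` come from the documentation. With `push_map_as_event_on_timeout => true`, setting `map_meta.timeout = 0` in the `code` block pushes the aggregated map as a new event immediately:

```ruby
filter {
  aggregate {
    task_id => "%{task_id}"
    code => "
      map['duration'] ||= 0
      map['duration'] += event.get('duration')
      # flush this task's aggregated map immediately once its final event arrives
      map_meta.timeout = 0 if event.get('state') == 'END'
    "
    push_map_as_event_on_timeout => true
    timeout => 120
  }
}
```

Before 2.9.0 the only way to end a task early was the `end_of_task` option, which discards the map; the dynamic timeout lets the `code` block decide per task when to flush.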
data/lib/logstash/filters/aggregate.rb
CHANGED
@@ -20,7 +20,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
   config :code, :validate => :string, :required => true
 
-  config :map_action, :validate =>
+  config :map_action, :validate => ["create", "update", "create_or_update"], :default => "create_or_update"
 
   config :end_of_task, :validate => :boolean, :default => false
 
@@ -51,6 +51,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # pointer to current pipeline context
   attr_accessor :current_pipeline
 
+  # boolean indicating if expired maps should be checked on every flush call (typically because custom timeout has been set on a map)
+  attr_accessor :check_expired_maps_on_every_flush
 
   # ################ #
   # STATIC VARIABLES #
@@ -81,7 +83,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   end
 
   # process lambda expression to call in each filter call
-  eval("@codeblock = lambda { |event, map| #{@code} }", binding, "(aggregate filter code)")
+  eval("@codeblock = lambda { |event, map, map_meta| #{@code} }", binding, "(aggregate filter code)")
 
   # process lambda expression to call in the timeout case or previous event case
   if @timeout_code
@@ -140,6 +142,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
       # init aggregate_maps
       @current_pipeline.aggregate_maps[@task_id] ||= {}
+      update_aggregate_maps_metric()
 
     end
   end
@@ -202,8 +205,9 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
       # create aggregate map
       creation_timestamp = reference_timestamp(event)
-      aggregate_maps_element = LogStash::Filters::Aggregate::Element.new(creation_timestamp)
+      aggregate_maps_element = LogStash::Filters::Aggregate::Element.new(creation_timestamp, task_id)
       @current_pipeline.aggregate_maps[@task_id][task_id] = aggregate_maps_element
+      update_aggregate_maps_metric()
     else
       return if @map_action == "create"
     end
@@ -214,7 +218,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
     # execute the code to read/update map and event
     map = aggregate_maps_element.map
     begin
-      @codeblock.call(event, map)
+      @codeblock.call(event, map, aggregate_maps_element)
       @logger.debug("Aggregate successful filter code execution", :code => @code)
       noError = true
     rescue => exception
@@ -224,10 +228,17 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
         :map => map,
         :event_data => event.to_hash_with_metadata)
       event.tag("_aggregateexception")
+      metric.increment(:code_errors)
     end
 
     # delete the map if task is ended
     @current_pipeline.aggregate_maps[@task_id].delete(task_id) if @end_of_task
+    update_aggregate_maps_metric()
+
+    # process custom timeout set by code block
+    if (aggregate_maps_element.timeout || aggregate_maps_element.inactivity_timeout)
+      event_to_yield = process_map_timeout(aggregate_maps_element)
+    end
 
   end
 
@@ -238,6 +249,25 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     yield event_to_yield if event_to_yield
   end
 
+  # Process a custom timeout defined in aggregate map element
+  # Returns an event to yield if timeout=0 and push_map_as_event_on_timeout=true
+  def process_map_timeout(element)
+    event_to_yield = nil
+    init_pipeline_timeout_management()
+    if (element.timeout == 0 || element.inactivity_timeout == 0)
+      @current_pipeline.aggregate_maps[@task_id].delete(element.task_id)
+      if @current_pipeline.flush_instance_map[@task_id].push_map_as_event_on_timeout
+        event_to_yield = create_timeout_event(element.map, element.task_id)
+      end
+      @logger.debug("Aggregate remove expired map with task_id=#{element.task_id} and custom timeout=0")
+      metric.increment(:task_timeouts)
+      update_aggregate_maps_metric()
+    else
+      @current_pipeline.flush_instance_map[@task_id].check_expired_maps_on_every_flush ||= true
+    end
+    return event_to_yield
+  end
+
   # Create a new event from the aggregation_map and the corresponding task_id
   # This will create the event and
   # if @timeout_task_id_field is set, it will set the task_id on the timeout event
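The control flow of the new `process_map_timeout` method can be sketched in plain Ruby, outside Logstash. `SketchElement`, the hash-based `maps`, and the `state` hash below are simplified stand-ins for the plugin's internals, not its real API; the sketch keeps only the decision: a custom timeout of 0 flushes the map immediately, while any other custom timeout arms the per-flush expiry check.

```ruby
# Simplified model of an aggregate map element carrying a per-map custom timeout.
SketchElement = Struct.new(:task_id, :timeout, :inactivity_timeout, :map)

# Mirrors the decision in process_map_timeout: timeout==0 (or
# inactivity_timeout==0) removes the map and returns it for immediate push;
# any other custom timeout only arms per-flush expiry checking.
def sketch_process_map_timeout(element, maps, state)
  flushed_map = nil
  if element.timeout == 0 || element.inactivity_timeout == 0
    maps.delete(element.task_id)   # remove the expired map right away
    flushed_map = element.map      # pushed as a new event when push_map_as_event_on_timeout=true
  else
    state[:check_expired_maps_on_every_flush] = true
  end
  flushed_map
end

maps  = { "42" => SketchElement.new("42", 0, nil, { "sql_duration" => 2 }) }
state = {}
flushed = sketch_process_map_timeout(maps["42"], maps, state)
```

With `timeout = 0` the map is removed and returned for immediate push; with a positive custom timeout the map stays and only the flag is set, which is why the `flush` gate below also checks `check_expired_maps_on_every_flush`.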
@@ -255,7 +285,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
     LogStash::Util::Decorators.add_tags(@timeout_tags, event_to_yield, "filters/#{self.class.name}")
 
-
+
+    # Call timeout code block if available
     if @timeout_code
       begin
         @timeout_codeblock.call(event_to_yield)
@@ -265,9 +296,12 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
           :timeout_code => @timeout_code,
           :timeout_event_data => event_to_yield.to_hash_with_metadata)
         event_to_yield.tag("_aggregateexception")
+        metric.increment(:timeout_code_errors)
       end
     end
 
+    metric.increment(:pushed_events)
+
     return event_to_yield
   end
 
@@ -276,6 +310,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     previous_entry = @current_pipeline.aggregate_maps[@task_id].shift()
     previous_task_id = previous_entry[0]
     previous_map = previous_entry[1].map
+    update_aggregate_maps_metric()
     return create_timeout_event(previous_map, previous_task_id)
   end
 
@@ -287,13 +322,13 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # This method is invoked by LogStash every 5 seconds.
   def flush(options = {})
 
-    @logger.
+    @logger.trace("Aggregate flush call with #{options}")
 
     # init flush/timeout properties for current pipeline
     init_pipeline_timeout_management()
 
     # launch timeout management only every interval of (@inactivity_timeout / 2) seconds or at Logstash shutdown
-    if @current_pipeline.flush_instance_map[@task_id] == self && @current_pipeline.aggregate_maps[@task_id] && (!@current_pipeline.last_flush_timestamp_map.has_key?(@task_id) || Time.now > @current_pipeline.last_flush_timestamp_map[@task_id] + @inactivity_timeout / 2 || options[:final])
+    if @current_pipeline.flush_instance_map[@task_id] == self && @current_pipeline.aggregate_maps[@task_id] && (!@current_pipeline.last_flush_timestamp_map.has_key?(@task_id) || Time.now > @current_pipeline.last_flush_timestamp_map[@task_id] + @inactivity_timeout / 2 || options[:final] || @check_expired_maps_on_every_flush)
       events_to_flush = remove_expired_maps()
 
       # at Logstash shutdown, if push_previous_map_as_event is enabled, it's important to force flush (particularly for jdbc input plugin)
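The extended gate condition can be isolated as a small predicate (parameter names are illustrative, not the plugin's API): the expiry scan runs on the first flush, then every `inactivity_timeout / 2` seconds, at final flush, or whenever a custom per-map timeout has armed `check_expired_maps_on_every_flush`.

```ruby
# Plain-Ruby sketch of the flush gate: returns true when the (5-second)
# flush call should actually run timeout management.
def run_timeout_management?(last_flush, now, inactivity_timeout, final: false, check_every_flush: false)
  last_flush.nil? ||                            # never flushed yet
    now > last_flush + inactivity_timeout / 2 ||  # regular half-interval schedule
    final ||                                    # Logstash shutdown
    check_every_flush                           # a map carries a custom timeout
end

t0 = Time.now
```

The last clause is what makes a custom `map_meta.inactivity_timeout` of a few seconds effective: without it, a 1-second custom timeout would not be noticed until the next half-`inactivity_timeout` scan.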
@@ -302,6 +337,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
         events_to_flush << extract_previous_map_as_event()
       end
     end
+
+    update_aggregate_maps_metric()
 
     # tag flushed events, indicating "final flush" special event
     if options[:final]
@@ -335,6 +372,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     if @current_pipeline.flush_instance_map[@task_id] == self
       if @timeout.nil?
         @timeout = @current_pipeline.default_timeout
+        @logger.debug("Aggregate timeout for '#{@task_id}' pattern: #{@timeout} seconds")
       end
       if @inactivity_timeout.nil?
         @inactivity_timeout = @timeout
@@ -347,23 +385,32 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # If @push_previous_map_as_event option is set, or @push_map_as_event_on_timeout is set, expired maps are returned as new events to be flushed to Logstash pipeline.
   def remove_expired_maps()
     events_to_flush = []
-
-
+    default_min_timestamp = Time.now - @timeout
+    default_min_inactivity_timestamp = Time.now - @inactivity_timeout
 
     @current_pipeline.mutex.synchronize do
 
       @logger.debug("Aggregate remove_expired_maps call with '#{@task_id}' pattern and #{@current_pipeline.aggregate_maps[@task_id].length} maps")
 
       @current_pipeline.aggregate_maps[@task_id].delete_if do |key, element|
+        min_timestamp = element.timeout ? Time.now - element.timeout : default_min_timestamp
+        min_inactivity_timestamp = element.inactivity_timeout ? Time.now - element.inactivity_timeout : default_min_inactivity_timestamp
         if element.creation_timestamp + element.difference_from_creation_to_now < min_timestamp || element.lastevent_timestamp + element.difference_from_creation_to_now < min_inactivity_timestamp
           if @push_previous_map_as_event || @push_map_as_event_on_timeout
             events_to_flush << create_timeout_event(element.map, key)
           end
+          @logger.debug("Aggregate remove expired map with task_id=#{key}")
+          metric.increment(:task_timeouts)
          next true
        end
        next false
      end
    end
+
+    # disable check_expired_maps_on_every_flush if there are no more maps
+    if @current_pipeline.aggregate_maps[@task_id].length == 0 && @check_expired_maps_on_every_flush
+      @check_expired_maps_on_every_flush = nil
+    end
 
    return events_to_flush
  end
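The per-map fallback added in `remove_expired_maps` can be sketched in isolation. `MapElement` is a simplified stand-in for the plugin's `Element` class, and `difference_from_creation_to_now` is omitted for clarity: the point is only that an element's own `timeout`/`inactivity_timeout` wins over the filter-level defaults when deciding expiry.

```ruby
# Simplified element: timestamps plus optional per-map timeouts (nil = use defaults).
MapElement = Struct.new(:creation_timestamp, :lastevent_timestamp, :timeout, :inactivity_timeout)

# A map expires when it is older than its (custom or default) timeout,
# or idle longer than its (custom or default) inactivity timeout.
def expired?(element, now, default_timeout, default_inactivity_timeout)
  min_timestamp = now - (element.timeout || default_timeout)
  min_inactivity_timestamp = now - (element.inactivity_timeout || default_inactivity_timeout)
  element.creation_timestamp < min_timestamp ||
    element.lastevent_timestamp < min_inactivity_timestamp
end

now    = Time.now
fresh  = MapElement.new(now - 10, now - 10, nil, nil)  # falls back to the filter defaults
custom = MapElement.new(now - 10, now - 10, 5, nil)    # its own 5s timeout already elapsed
```

This is why a `map_meta.timeout` set by the `code` block takes effect on the next expiry scan without touching the defaults shared by all other maps of the same `task_id` pattern.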
@@ -382,14 +429,16 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
     event_to_flush = nil
     event_timestamp = reference_timestamp(event)
-    min_timestamp = event_timestamp - @timeout
-    min_inactivity_timestamp = event_timestamp - @inactivity_timeout
+    min_timestamp = element.timeout ? event_timestamp - element.timeout : event_timestamp - @timeout
+    min_inactivity_timestamp = element.inactivity_timeout ? event_timestamp - element.inactivity_timeout : event_timestamp - @inactivity_timeout
 
     if element.creation_timestamp < min_timestamp || element.lastevent_timestamp < min_inactivity_timestamp
       if @push_previous_map_as_event || @push_map_as_event_on_timeout
         event_to_flush = create_timeout_event(element.map, task_id)
       end
       @current_pipeline.aggregate_maps[@task_id].delete(task_id)
+      @logger.debug("Aggregate remove expired map with task_id=#{task_id}")
+      metric.increment(:task_timeouts)
     end
 
     return event_to_flush
@@ -428,7 +477,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     if @execution_context
       return @execution_context.pipeline_id
     else
-      return
+      return "main"
     end
   end
 
@@ -438,17 +487,28 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     return (@timeout_timestamp_field) ? event.get(@timeout_timestamp_field).time : Time.now
   end
 
+  # update "aggregate_maps" metric, with aggregate maps count associated to configured taskid pattern
+  def update_aggregate_maps_metric()
+    aggregate_maps = @current_pipeline.aggregate_maps[@task_id]
+    if aggregate_maps
+      metric.gauge(:aggregate_maps, aggregate_maps.length)
+    end
+  end
+
 end # class LogStash::Filters::Aggregate
 
 # Element of "aggregate_maps"
 class LogStash::Filters::Aggregate::Element
 
-  attr_accessor :creation_timestamp, :lastevent_timestamp, :difference_from_creation_to_now, :map
+  attr_accessor :creation_timestamp, :lastevent_timestamp, :difference_from_creation_to_now, :timeout, :inactivity_timeout, :task_id, :map
 
-  def initialize(creation_timestamp)
+  def initialize(creation_timestamp, task_id)
     @creation_timestamp = creation_timestamp
     @lastevent_timestamp = creation_timestamp
     @difference_from_creation_to_now = (Time.now - creation_timestamp).to_i
+    @timeout = nil
+    @inactivity_timeout = nil
+    @task_id = task_id
     @map = {}
   end
 end
data/logstash-filter-aggregate.gemspec
CHANGED
@@ -1,6 +1,6 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-filter-aggregate'
-  s.version = '2.
+  s.version = '2.9.0'
   s.licenses = ['Apache License (2.0)']
   s.summary = "Aggregates information from several events originating with a single task"
   s.description = 'This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program'
data/spec/filters/aggregate_spec.rb
CHANGED
@@ -389,4 +389,35 @@ describe LogStash::Filters::Aggregate do
     end
   end
 
+  context "custom timeout on map_meta, " do
+    describe "when map_meta.timeout=0, " do
+      it "should push a new aggregated event immediately" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map['sql_duration'] = 2; map_meta.timeout = 0", "push_map_as_event_on_timeout" => true, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) do |yield_event|
+          expect(yield_event).not_to be_nil
+          expect(yield_event.get("sql_duration")).to eq(2)
+        end
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+    describe "when map_meta.timeout=0 and push_map_as_event_on_timeout=false, " do
+      it "should just remove expired map and not push an aggregated event" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map_meta.timeout = 0", "push_map_as_event_on_timeout" => false, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) { |yield_event| fail "it shouldn't have yield event" }
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+    describe "when map_meta.inactivity_timeout=1, " do
+      it "should push a new aggregated event at next flush call" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map['sql_duration'] = 2; map_meta.inactivity_timeout = 1", "push_map_as_event_on_timeout" => true, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) { |yield_event| fail "it shouldn't have yield event" }
+        expect(aggregate_maps["%{ppm_id}"].size).to eq(1)
+        sleep(2)
+        events_to_flush = agg_filter.flush()
+        expect(events_to_flush.size).to eq(1)
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+  end
+
 end
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: logstash-filter-aggregate
 version: !ruby/object:Gem::Version
-  version: 2.
+  version: 2.9.0
 platform: ruby
 authors:
 - Elastic
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2018-03
+date: 2018-11-03 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement