logstash-filter-aggregate 2.8.0 → 2.10.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +29 -9
- data/LICENSE +199 -10
- data/README.md +1 -1
- data/docs/index.asciidoc +43 -10
- data/lib/logstash/filters/aggregate.rb +83 -35
- data/logstash-filter-aggregate.gemspec +3 -3
- data/spec/filters/aggregate_spec.rb +45 -1
- data/spec/filters/aggregate_spec_helper.rb +0 -1
- metadata +7 -5
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e30c90c81bac3cd99cf2a01d8e66e010a89acf87
+  data.tar.gz: 5733c4b4a64b9a6b7032fd093f289ad0d2e84f2e
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 809dbbdba440d501fb460ab5d60cbaaabb397654a054cd668a2d38ed1de31b99876839896cbd0c16634572d702ba8e944f094616d95e916935009a3290de8d36
+  data.tar.gz: 6cb68ae7e22433d0bc0a49803a95e1836de98f8bb36c937812b2bbb6981d5624590df1ce27694ee34cb7fef53774ab18e9b823b0253b62c3687ea8f83895b82e
data/CHANGELOG.md
CHANGED
@@ -1,9 +1,29 @@
+## 2.10.0
+ - new feature: add ability to generate new event during code execution (#116)
+
+## 2.9.2
+ - bugfix: remove 'default_timeout' at pipeline level (fix #112)
+ - ci: update travis ci configuration
+
+## 2.9.1
+ - bugfix: fix inactivity timeout feature when processing old logs (PR [#103](https://github.com/logstash-plugins/logstash-filter-aggregate/pull/103), thanks @jdratlif for his contribution!)
+ - docs: fix several typos in documentation
+ - docs: enhance example 4 documentation
+ - ci: enhance plugin continuous integration
+
+## 2.9.0
+ - new feature: add ability to dynamically define a custom `timeout` or `inactivity_timeout` in `code` block (fix issues [#91](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/91) and [#92](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/92))
+ - new feature: add meta informations available in `code` block through `map_meta` variable
+ - new feature: add Logstash metrics, specific to aggregate plugin: aggregate_maps, pushed_events, task_timeouts, code_errors, timeout_code_errors
+ - new feature: validate at startup that `map_action` option equals to 'create', 'update' or 'create_or_update'
+
 ## 2.8.0
- - new feature: add 'timeout_timestamp_field' option.
-   When set, this option lets to compute timeout based on event timestamp field (and not system time).
+ - new feature: add 'timeout_timestamp_field' option (fix issue [#81](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/81))
+   When set, this option lets to compute timeout based on event timestamp field (and not system time).
+   It's particularly useful when processing old logs.

 ## 2.7.2
- - bugfix: fix synchronisation issue at Logstash shutdown (#75)
+ - bugfix: fix synchronisation issue at Logstash shutdown (issue [#75](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/75))

 ## 2.7.1
 - docs: update gemspec summary
@@ -32,7 +52,7 @@
 Events for a given `task_id` will be aggregated for as long as they keep arriving within the defined `inactivity_timeout` option - the inactivity timeout is reset each time a new event happens. On the contrary, `timeout` is never reset and happens after `timeout` seconds since aggregation map creation.

 ## 2.5.2
- - bugfix: fix 'aggregate_maps_path' load (issue #62). Re-start of Logstash died when no data were provided in 'aggregate_maps_path' file for some aggregate task_id patterns
+ - bugfix: fix 'aggregate_maps_path' load (issue [#62](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/62)). Re-start of Logstash died when no data were provided in 'aggregate_maps_path' file for some aggregate task_id patterns
 - enhancement: at Logstash startup, check that 'task_id' option contains a field reference expression (else raise error)
 - docs: enhance examples
 - docs: precise that tasks are tied to their task_id pattern, even if they have same task_id value
@@ -50,7 +70,7 @@
 - breaking: need Logstash 2.4 or later

 ## 2.4.0
- - new feature: You can now define timeout options per task_id pattern (#42)
+ - new feature: You can now define timeout options per task_id pattern (fix issue [#42](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/42))
   timeout options are : `timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags`
 - validation: a configuration error is thrown at startup if you define any timeout option on several aggregate filters for the same task_id pattern
 - breaking: if you use `aggregate_maps_path` option, storage format has changed. So you have to delete `aggregate_maps_path` file before starting Logstash
@@ -84,14 +104,14 @@
 - internal,deps: New dependency requirements for logstash-core for the 5.0 release

 ## 2.0.3
- - bugfix: fix issue #10 : numeric task_id is now well processed
+ - bugfix: fix issue [#10](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/10) : numeric task_id is now well processed

 ## 2.0.2
- - bugfix: fix issue #5 : when code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. It avoids logstash crash.
+ - bugfix: fix issue [#5](https://github.com/logstash-plugins/logstash-filter-aggregate/issues/5) : when code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. It avoids logstash crash.

 ## 2.0.0
- - internal: Plugins were updated to follow the new shutdown semantic, this mainly allows Logstash to instruct input plugins to terminate gracefully,
-
+ - internal: Plugins were updated to follow the new shutdown semantic, this mainly allows Logstash to instruct input plugins to terminate gracefully, instead of using Thread.raise on the plugins' threads.
+   Ref: https://github.com/elastic/logstash/pull/3895
 - internal,deps: Dependency on logstash-core update to 2.0

 ## 0.1.3
data/LICENSE
CHANGED
@@ -1,13 +1,202 @@
-Copyright (c) 2012-2018 Elasticsearch <http://www.elasticsearch.org>
 
-
-
-
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
 
-
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
 
-
-
-
-
-
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright 2020 Elastic and contributors
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
data/README.md
CHANGED
@@ -1,6 +1,6 @@
 # Aggregate Logstash Plugin
 
-[![Travis Build Status](https://travis-ci.
+[![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-filter-aggregate.svg)](https://travis-ci.com/logstash-plugins/logstash-filter-aggregate)
 
 This is a plugin for [Logstash](https://github.com/elastic/logstash).
 
data/docs/index.asciidoc
CHANGED
@@ -228,7 +228,7 @@ In that case, you don't want to wait task timeout to flush aggregation map.
      aggregate {
        task_id => "%{country_name}"
        code => "
-         map['country_name']
+         map['country_name'] ||= event.get('country_name')
          map['towns'] ||= []
          map['towns'] << {'town_name' => event.get('town_name')}
          event.cancel()
@@ -240,8 +240,9 @@ In that case, you don't want to wait task timeout to flush aggregation map.
 ----------------------------------
 
 * The key point is that each time aggregate plugin detects a new `country_name`, it pushes previous aggregate map as a new Logstash event, and then creates a new empty map for the next country
-* When
-*
+* When the 3s timeout comes, the last aggregate map is pushed as a new event
+* Initial events (which are not aggregated) are dropped because they are useless (thanks to `event.cancel()`)
+* Last point: if a field is not present in every event (say the "town_postcode" field), the `||=` operator lets you push into the aggregate map the first non-null value. Example: `map['town_postcode'] ||= event.get('town_postcode')`
 
 
 [id="plugins-{type}s-{plugin}-example4"]
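The `||=` idiom from the bullets above can be checked in plain Ruby outside Logstash. This is a hedged sketch: the event hashes and field values below are made up for illustration, and a plain Hash stands in for the plugin's aggregate map.

```ruby
# Sketch of the `||=` aggregation idiom: the first non-nil value per field
# is kept, later values are ignored, and towns accumulate in a list.
events = [
  { 'country_name' => 'France', 'town_name' => 'Paris',     'town_postcode' => nil },
  { 'country_name' => 'France', 'town_name' => 'Marseille', 'town_postcode' => '13000' },
  { 'country_name' => 'France', 'town_name' => 'Lyon',      'town_postcode' => '69000' },
]

map = {}
events.each do |event|
  map['country_name']  ||= event['country_name']   # set once
  map['town_postcode'] ||= event['town_postcode']  # first non-nil value wins
  map['towns']         ||= []
  map['towns'] << { 'town_name' => event['town_name'] }
end
```

Because `||=` only assigns when the current value is nil or false, the nil postcode from the first event does not block the '13000' from the second one.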
|
@@ -249,7 +250,7 @@ In that case, you don't want to wait task timeout to flush aggregation map.
 
 Fifth use case: like example #3, there is no end event.
 
-Events keep
+Events keep coming for an indefinite time and you want to push the aggregation map as soon as possible after the last user interaction, without waiting for the `timeout`.
 
 This allows to have the aggregated events pushed closer to real time.
 
@@ -260,7 +261,7 @@ We can track a user by its ID through the events, however once the user stops in
 
 There is no specific event indicating the end of the user's interaction.
 
-The user
+The user interaction will be considered as ended when no events for the specified user (task_id) arrive after the specified `inactivity_timeout`.
 
 If the user continues interacting for longer than `timeout` seconds (since first event), the aggregation map will still be deleted and pushed as a new event when timeout occurs.
 
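The two expiry rules described above — `timeout` counted from map creation, `inactivity_timeout` counted from the last event and reset on every new event — can be sketched as a small Ruby predicate. This illustrates only the semantics, not the plugin's actual code; the helper name and the timestamps are invented, and the 3600 s / 300 s values match example #5.

```ruby
# A task's map expires when EITHER the map is older than `timeout`
# OR no event has arrived for `inactivity_timeout` seconds.
def expired?(now, creation_ts, lastevent_ts, timeout, inactivity_timeout)
  (now - creation_ts) >= timeout || (now - lastevent_ts) >= inactivity_timeout
end

expired?(200,  0, 100,  3600, 300)  # active 100 s ago, map 200 s old
expired?(500,  0, 100,  3600, 300)  # 400 s of inactivity
expired?(3700, 0, 3650, 3600, 300)  # map older than 1 h, despite recent activity
```

Note the asymmetry: activity keeps pushing `lastevent_ts` forward, but nothing ever moves `creation_ts`, which is why `timeout` fires even for a continuously active user.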
@@ -295,7 +296,7 @@ filter {
     code => "map['clicks'] ||= 0; map['clicks'] += 1;"
     push_map_as_event_on_timeout => true
     timeout_task_id_field => "user_id"
-    timeout => 3600 # 1 hour timeout, user activity will be considered finished one hour after the first event, even if events keep
+    timeout => 3600 # 1 hour timeout, user activity will be considered finished one hour after the first event, even if events keep coming
     inactivity_timeout => 300 # 5 minutes timeout, user activity will be considered finished if no new events arrive 5 minutes after the last event
     timeout_tags => ['_aggregatetimeout']
     timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
@@ -326,7 +327,7 @@ filter {
 * in the final event, you can execute a last code (for instance, add map data to final event)
 * after the final event, the map attached to task is deleted (thanks to `end_of_task => true`)
 * an aggregate map is tied to one task_id value which is tied to one task_id pattern. So if you have 2 filters with different task_id patterns, even if you have same task_id value, they won't share the same aggregate map.
-* in one filter configuration, it is
+* in one filter configuration, it is recommended to define a timeout option to protect the feature against unterminated tasks. It tells the filter to delete expired maps
 * if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
 * all timeout options have to be defined in only one aggregate filter per task_id pattern (per pipeline). Timeout options are : timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_timestamp_field, timeout_task_id_field, timeout_tags
 * if `code` execution raises an exception, the error is logged and event is tagged '_aggregateexception'
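The bullet about task_id patterns can be pictured as two-level storage: maps are keyed first by the task_id *pattern*, then by the resolved task_id *value*. The sketch below is an illustration of that rule, not the plugin's internals; the patterns and click counts are invented.

```ruby
# Two filters with different task_id patterns never share a map,
# even when the resolved task_id value is identical ('42' here).
aggregate_maps = Hash.new { |h, pattern| h[pattern] = {} }

aggregate_maps['%{taskid}']['42']  = { 'clicks' => 3 }
aggregate_maps['%{user_id}']['42'] = { 'clicks' => 7 }  # same value, other pattern
```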
@@ -397,11 +398,23 @@ Example:
 * Value type is <<string,string>>
 * There is no default value for this setting.
 
-The code to execute to update map, using current event.
+The code to execute to update aggregated map, using current event.
 
-Or on the contrary, the code to execute to update event, using
+Or on the contrary, the code to execute to update event, using aggregated map.
+
+Available variables are:
+
+`event`: current Logstash event
+
+`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby http://ruby-doc.org/core-1.9.1/Hash.html[Hash]
+
+`map_meta`: meta information associated to aggregate map. It lets you set a custom `timeout` or `inactivity_timeout`.
+It also lets you get `creation_timestamp`, `lastevent_timestamp` and `task_id`.
+
+`new_event_block`: block used to emit new Logstash events. See the second example on how to use it.
+
+When option push_map_as_event_on_timeout=true, if you set `map_meta.timeout=0` in `code` block, then aggregated map is immediately pushed as a new event.
 
-You will have a 'map' variable and an 'event' variable available (that is the event itself).
 
 Example:
 [source,ruby]
@@ -411,6 +424,26 @@ Example:
     }
 }
 
+
+To create additional events during the code execution, to be emitted immediately, you can use the `new_event_block.call(event)` function, like in the following example:
+
+[source,ruby]
+filter {
+  aggregate {
+    code => "
+      data = {:my_sql_duration => map['sql_duration']}
+      generated_event = LogStash::Event.new(data)
+      generated_event.set('my_other_field', 34)
+      new_event_block.call(generated_event)
+    "
+  }
+}
+
+The parameter of the function `new_event_block.call` must be of type `LogStash::Event`.
+To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`.
+`LogStash::Event.new()` can receive a parameter of type ruby http://ruby-doc.org/core-1.9.1/Hash.html[Hash] to initialize the new event fields.
+
+
 [id="plugins-{type}s-{plugin}-end_of_task"]
 ===== `end_of_task`
 
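Outside Logstash, `new_event_block` is just a Ruby block handed down to the user code; calling it emits an extra event. The sketch below stands in for the documented example with a lambda and plain Hashes (a real `code` block receives `LogStash::Event` objects, which are not available here); the field names mirror the example above.

```ruby
# Stub for the plugin's event-emission plumbing: events passed to the
# block are collected instead of being injected into the pipeline.
emitted = []
new_event_block = ->(event) { emitted << event }

# What the documented `code` block does, with a Hash standing in
# for LogStash::Event:
map = { 'sql_duration' => 120 }
generated_event = { :my_sql_duration => map['sql_duration'] }
generated_event['my_other_field'] = 34
new_event_block.call(generated_event)
```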
data/lib/logstash/filters/aggregate.rb
CHANGED
@@ -20,7 +20,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
   config :code, :validate => :string, :required => true
 
-  config :map_action, :validate =>
+  config :map_action, :validate => ["create", "update", "create_or_update"], :default => "create_or_update"
 
   config :end_of_task, :validate => :boolean, :default => false
 
@@ -51,6 +51,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # pointer to current pipeline context
   attr_accessor :current_pipeline
 
+  # boolean indicating if expired maps should be checked on every flush call (typically because custom timeout has been set on a map)
+  attr_accessor :check_expired_maps_on_every_flush
 
   # ################ #
   # STATIC VARIABLES #
@@ -81,7 +83,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     end
 
     # process lambda expression to call in each filter call
-    eval("@codeblock = lambda { |event, map| #{@code} }", binding, "(aggregate filter code)")
+    eval("@codeblock = lambda { |event, map, map_meta, &new_event_block| #{@code} }", binding, "(aggregate filter code)")
 
     # process lambda expression to call in the timeout case or previous event case
     if @timeout_code
@@ -104,15 +106,9 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
       @logger.debug("Aggregate timeout for '#{@task_id}' pattern: #{@timeout} seconds")
     end
 
-    # timeout management : define default_timeout
-    if @timeout && (@current_pipeline.default_timeout.nil? || @timeout < @current_pipeline.default_timeout)
-      @current_pipeline.default_timeout = @timeout
-      @logger.debug("Aggregate default timeout: #{@timeout} seconds")
-    end
-
     # inactivity timeout management: make sure it is lower than timeout
-    if @inactivity_timeout && ((@timeout && @inactivity_timeout > @timeout) || (@
-      raise LogStash::ConfigurationError, "Aggregate plugin: For task_id pattern #{@task_id}, inactivity_timeout must be lower than timeout"
+    if @inactivity_timeout && ((@timeout && @inactivity_timeout > @timeout) || (@timeout.nil? && @inactivity_timeout > DEFAULT_TIMEOUT))
+      raise LogStash::ConfigurationError, "Aggregate plugin: For task_id pattern #{@task_id}, inactivity_timeout (#{@inactivity_timeout}) must be lower than timeout (#{@timeout})"
     end
 
     # reinit pipeline_close_instance (if necessary)
@@ -140,6 +136,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
       # init aggregate_maps
       @current_pipeline.aggregate_maps[@task_id] ||= {}
+      update_aggregate_maps_metric()
 
     end
   end
@@ -171,7 +168,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
   # This method is invoked each time an event matches the filter
   public
-  def filter(event)
+  def filter(event, &new_event_block)
 
     # define task id
     task_id = event.sprintf(@task_id)
@@ -202,19 +199,21 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
       # create aggregate map
       creation_timestamp = reference_timestamp(event)
-      aggregate_maps_element = LogStash::Filters::Aggregate::Element.new(creation_timestamp)
+      aggregate_maps_element = LogStash::Filters::Aggregate::Element.new(creation_timestamp, task_id)
       @current_pipeline.aggregate_maps[@task_id][task_id] = aggregate_maps_element
+      update_aggregate_maps_metric()
     else
       return if @map_action == "create"
     end
 
     # update last event timestamp
     aggregate_maps_element.lastevent_timestamp = reference_timestamp(event)
+    aggregate_maps_element.difference_from_lastevent_to_now = (Time.now - aggregate_maps_element.lastevent_timestamp).to_i
 
     # execute the code to read/update map and event
     map = aggregate_maps_element.map
     begin
-      @codeblock.call(event, map)
+      @codeblock.call(event, map, aggregate_maps_element, &new_event_block)
       @logger.debug("Aggregate successful filter code execution", :code => @code)
       noError = true
     rescue => exception
@@ -224,10 +223,17 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
                     :map => map,
                     :event_data => event.to_hash_with_metadata)
       event.tag("_aggregateexception")
+      metric.increment(:code_errors)
     end
 
     # delete the map if task is ended
     @current_pipeline.aggregate_maps[@task_id].delete(task_id) if @end_of_task
+    update_aggregate_maps_metric()
+
+    # process custom timeout set by code block
+    if (aggregate_maps_element.timeout || aggregate_maps_element.inactivity_timeout)
+      event_to_yield = process_map_timeout(aggregate_maps_element)
+    end
 
   end
 
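The eval'd `@codeblock` change in the hunk above — wrapping the user code in a lambda that takes `|event, map, map_meta, &new_event_block|` — is what lets the block passed to `filter()` flow into user code. The toy version below shows the mechanism with plain Hashes; the user-code string and the surrounding names are invented for illustration.

```ruby
# User-supplied code is interpolated into a lambda whose &new_event_block
# parameter captures whatever block is given at call time.
user_code = "map['count'] = (map['count'] || 0) + 1; new_event_block.call(event)"
codeblock = eval("lambda { |event, map, map_meta, &new_event_block| #{user_code} }")

map  = {}
seen = []
# The block given here plays the role of the filter's own &new_event_block.
codeblock.call({ 'id' => 1 }, map, nil) { |e| seen << e }
codeblock.call({ 'id' => 2 }, map, nil) { |e| seen << e }
```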
@@ -238,6 +244,25 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     yield event_to_yield if event_to_yield
   end
 
+  # Process a custom timeout defined in aggregate map element
+  # Returns an event to yield if timeout=0 and push_map_as_event_on_timeout=true
+  def process_map_timeout(element)
+    event_to_yield = nil
+    init_pipeline_timeout_management()
+    if (element.timeout == 0 || element.inactivity_timeout == 0)
+      @current_pipeline.aggregate_maps[@task_id].delete(element.task_id)
+      if @current_pipeline.flush_instance_map[@task_id].push_map_as_event_on_timeout
+        event_to_yield = create_timeout_event(element.map, element.task_id)
+      end
+      @logger.debug("Aggregate remove expired map with task_id=#{element.task_id} and custom timeout=0")
+      metric.increment(:task_timeouts)
+      update_aggregate_maps_metric()
+    else
+      @current_pipeline.flush_instance_map[@task_id].check_expired_maps_on_every_flush ||= true
+    end
+    return event_to_yield
+  end
+
   # Create a new event from the aggregation_map and the corresponding task_id
   # This will create the event and
   # if @timeout_task_id_field is set, it will set the task_id on the timeout event
@@ -255,7 +280,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
     LogStash::Util::Decorators.add_tags(@timeout_tags, event_to_yield, "filters/#{self.class.name}")
 
-
+
+    # Call timeout code block if available
     if @timeout_code
       begin
         @timeout_codeblock.call(event_to_yield)
@@ -265,9 +291,12 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
           :timeout_code => @timeout_code,
           :timeout_event_data => event_to_yield.to_hash_with_metadata)
         event_to_yield.tag("_aggregateexception")
+        metric.increment(:timeout_code_errors)
       end
     end
 
+    metric.increment(:pushed_events)
+
     return event_to_yield
   end
 
@@ -276,6 +305,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     previous_entry = @current_pipeline.aggregate_maps[@task_id].shift()
     previous_task_id = previous_entry[0]
     previous_map = previous_entry[1].map
+    update_aggregate_maps_metric()
     return create_timeout_event(previous_map, previous_task_id)
   end
 
@@ -287,13 +317,13 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # This method is invoked by LogStash every 5 seconds.
   def flush(options = {})
 
-    @logger.
+    @logger.trace("Aggregate flush call with #{options}")
 
     # init flush/timeout properties for current pipeline
     init_pipeline_timeout_management()
 
     # launch timeout management only every interval of (@inactivity_timeout / 2) seconds or at Logstash shutdown
-    if @current_pipeline.flush_instance_map[@task_id] == self && @current_pipeline.aggregate_maps[@task_id] && (!@current_pipeline.last_flush_timestamp_map.has_key?(@task_id) || Time.now > @current_pipeline.last_flush_timestamp_map[@task_id] + @inactivity_timeout / 2 || options[:final])
+    if @current_pipeline.flush_instance_map[@task_id] == self && @current_pipeline.aggregate_maps[@task_id] && (!@current_pipeline.last_flush_timestamp_map.has_key?(@task_id) || Time.now > @current_pipeline.last_flush_timestamp_map[@task_id] + @inactivity_timeout / 2 || options[:final] || @check_expired_maps_on_every_flush)
       events_to_flush = remove_expired_maps()
 
       # at Logstash shutdown, if push_previous_map_as_event is enabled, it's important to force flush (particularly for jdbc input plugin)
@@ -302,6 +332,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
         events_to_flush << extract_previous_map_as_event()
       end
     end
+
+    update_aggregate_maps_metric()
 
     # tag flushed events, indicating "final flush" special event
     if options[:final]
@@ -321,11 +353,6 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # init flush/timeout properties for current pipeline
   def init_pipeline_timeout_management()
 
-    # Define default timeout (if not defined by user)
-    if @current_pipeline.default_timeout.nil?
-      @current_pipeline.default_timeout = DEFAULT_TIMEOUT
-    end
-
     # Define default flush instance that manages timeout (if not defined by user)
     if !@current_pipeline.flush_instance_map.has_key?(@task_id)
       @current_pipeline.flush_instance_map[@task_id] = self
@@ -334,7 +361,8 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     # Define timeout and inactivity_timeout (if not defined by user)
     if @current_pipeline.flush_instance_map[@task_id] == self
       if @timeout.nil?
-        @timeout =
+        @timeout = DEFAULT_TIMEOUT
+        @logger.debug("Aggregate timeout for '#{@task_id}' pattern: #{@timeout} seconds (default value)")
       end
       if @inactivity_timeout.nil?
         @inactivity_timeout = @timeout
@@ -347,23 +375,32 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
   # If @push_previous_map_as_event option is set, or @push_map_as_event_on_timeout is set, expired maps are returned as new events to be flushed to Logstash pipeline.
   def remove_expired_maps()
     events_to_flush = []
-
-
+    default_min_timestamp = Time.now - @timeout
+    default_min_inactivity_timestamp = Time.now - @inactivity_timeout
 
     @current_pipeline.mutex.synchronize do
 
      @logger.debug("Aggregate remove_expired_maps call with '#{@task_id}' pattern and #{@current_pipeline.aggregate_maps[@task_id].length} maps")
 
      @current_pipeline.aggregate_maps[@task_id].delete_if do |key, element|
-
+        min_timestamp = element.timeout ? Time.now - element.timeout : default_min_timestamp
+        min_inactivity_timestamp = element.inactivity_timeout ? Time.now - element.inactivity_timeout : default_min_inactivity_timestamp
+        if element.creation_timestamp + element.difference_from_creation_to_now < min_timestamp || element.lastevent_timestamp + element.difference_from_lastevent_to_now < min_inactivity_timestamp
          if @push_previous_map_as_event || @push_map_as_event_on_timeout
            events_to_flush << create_timeout_event(element.map, key)
          end
+          @logger.debug("Aggregate remove expired map with task_id=#{key}")
+          metric.increment(:task_timeouts)
          next true
        end
        next false
      end
    end
+
+    # disable check_expired_maps_on_every_flush if there is not anymore maps
+    if @current_pipeline.aggregate_maps[@task_id].length == 0 && @check_expired_maps_on_every_flush
+      @check_expired_maps_on_every_flush = nil
+    end
 
     return events_to_flush
   end
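The expiry check in `remove_expired_maps` above can be sketched in plain Ruby, outside Logstash. `Element` here is a simplified hypothetical stand-in for the plugin's map element (the real check also adds the `difference_from_*_to_now` offsets to compensate for processing old logs):

```ruby
# Simplified stand-in for LogStash::Filters::Aggregate::Element
Element = Struct.new(:creation_timestamp, :lastevent_timestamp, :timeout, :inactivity_timeout)

# A per-element timeout, when set via map_meta in the code block,
# takes precedence over the filter-level defaults.
def expired?(element, default_timeout, default_inactivity_timeout, now = Time.now)
  min_timestamp = now - (element.timeout || default_timeout)
  min_inactivity_timestamp = now - (element.inactivity_timeout || default_inactivity_timeout)
  element.creation_timestamp < min_timestamp ||
    element.lastevent_timestamp < min_inactivity_timestamp
end

now = Time.now
fresh  = Element.new(now - 10, now - 10, nil, nil) # within the 120s defaults
custom = Element.new(now - 10, now - 10, 5, nil)   # custom 5s timeout already exceeded
puts expired?(fresh, 120, 120)   # => false
puts expired?(custom, 120, 120)  # => true
```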
@@ -382,14 +419,16 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
     event_to_flush = nil
     event_timestamp = reference_timestamp(event)
-    min_timestamp = event_timestamp - @timeout
-    min_inactivity_timestamp = event_timestamp - @inactivity_timeout
+    min_timestamp = element.timeout ? event_timestamp - element.timeout : event_timestamp - @timeout
+    min_inactivity_timestamp = element.inactivity_timeout ? event_timestamp - element.inactivity_timeout : event_timestamp - @inactivity_timeout
 
     if element.creation_timestamp < min_timestamp || element.lastevent_timestamp < min_inactivity_timestamp
       if @push_previous_map_as_event || @push_map_as_event_on_timeout
         event_to_flush = create_timeout_event(element.map, task_id)
       end
       @current_pipeline.aggregate_maps[@task_id].delete(task_id)
+      @logger.debug("Aggregate remove expired map with task_id=#{task_id}")
+      metric.increment(:task_timeouts)
     end
 
     return event_to_flush
@@ -428,7 +467,7 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     if @execution_context
       return @execution_context.pipeline_id
     else
-      return
+      return "main"
     end
   end
 
@@ -438,17 +477,29 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
     return (@timeout_timestamp_field) ? event.get(@timeout_timestamp_field).time : Time.now
   end
 
+  # update "aggregate_maps" metric, with aggregate maps count associated to configured taskid pattern
+  def update_aggregate_maps_metric()
+    aggregate_maps = @current_pipeline.aggregate_maps[@task_id]
+    if aggregate_maps
+      metric.gauge(:aggregate_maps, aggregate_maps.length)
+    end
+  end
+
 end # class LogStash::Filters::Aggregate
 
 # Element of "aggregate_maps"
 class LogStash::Filters::Aggregate::Element
 
-  attr_accessor :creation_timestamp, :lastevent_timestamp, :difference_from_creation_to_now, :map
+  attr_accessor :creation_timestamp, :lastevent_timestamp, :difference_from_creation_to_now, :difference_from_lastevent_to_now, :timeout, :inactivity_timeout, :task_id, :map
 
-  def initialize(creation_timestamp)
+  def initialize(creation_timestamp, task_id)
     @creation_timestamp = creation_timestamp
-    @lastevent_timestamp = creation_timestamp
+    @lastevent_timestamp = creation_timestamp
     @difference_from_creation_to_now = (Time.now - creation_timestamp).to_i
+    @difference_from_lastevent_to_now = @difference_from_creation_to_now
+    @timeout = nil
+    @inactivity_timeout = nil
+    @task_id = task_id
     @map = {}
   end
 end
@@ -456,7 +507,7 @@ end
 # shared aggregate attributes for each pipeline
 class LogStash::Filters::Aggregate::Pipeline
 
-  attr_accessor :aggregate_maps, :mutex, :
+  attr_accessor :aggregate_maps, :mutex, :flush_instance_map, :last_flush_timestamp_map, :aggregate_maps_path_set, :pipeline_close_instance
 
   def initialize()
     # Stores all aggregate maps, per task_id pattern, then per task_id value
@@ -465,9 +516,6 @@ class LogStash::Filters::Aggregate::Pipeline
     # Mutex used to synchronize access to 'aggregate_maps'
     @mutex = Mutex.new
 
-    # Default timeout for task_id patterns where timeout is not defined in Logstash filter configuration
-    @default_timeout = nil
-
     # For each "task_id" pattern, defines which Aggregate instance will process flush() call, processing expired Aggregate elements (older than timeout)
     # For each entry, key is "task_id pattern" and value is "aggregate instance"
     @flush_instance_map = {}
logstash-filter-aggregate.gemspec
CHANGED
@@ -1,8 +1,8 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-filter-aggregate'
-  s.version
-  s.licenses = ['Apache
-  s.summary =
+  s.version = '2.10.0'
+  s.licenses = ['Apache-2.0']
+  s.summary = 'Aggregates information from several events originating with a single task'
   s.description = 'This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program'
   s.authors = ['Elastic', 'Fabien Baligand']
   s.email = 'info@elastic.co'
spec/filters/aggregate_spec.rb
CHANGED
@@ -389,4 +389,48 @@ describe LogStash::Filters::Aggregate do
     end
   end
 
-
+  context "custom timeout on map_meta, " do
+    describe "when map_meta.timeout=0, " do
+      it "should push a new aggregated event immediately" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map['sql_duration'] = 2; map_meta.timeout = 0", "push_map_as_event_on_timeout" => true, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) do |yield_event|
+          expect(yield_event).not_to be_nil
+          expect(yield_event.get("sql_duration")).to eq(2)
+        end
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+    describe "when map_meta.timeout=0 and push_map_as_event_on_timeout=false, " do
+      it "should just remove expired map and not push an aggregated event" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map_meta.timeout = 0", "push_map_as_event_on_timeout" => false, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) { |yield_event| fail "it shouldn't have yield event" }
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+    describe "when map_meta.inactivity_timeout=1, " do
+      it "should push a new aggregated event at next flush call" do
+        agg_filter = setup_filter({ "task_id" => "%{ppm_id}", "code" => "map['sql_duration'] = 2; map_meta.inactivity_timeout = 1", "push_map_as_event_on_timeout" => true, "timeout" => 120 })
+        agg_filter.filter(event({"ppm_id" => "1"})) { |yield_event| fail "it shouldn't have yield event" }
+        expect(aggregate_maps["%{ppm_id}"].size).to eq(1)
+        sleep(2)
+        events_to_flush = agg_filter.flush()
+        expect(events_to_flush.size).to eq(1)
+        expect(aggregate_maps["%{ppm_id}"]).to be_empty
+      end
+    end
+  end
+
+  context "Custom event generation code is used" do
+    describe "when a new event is manually generated" do
+      it "should push a new event immediately" do
+        agg_filter = setup_filter({ "task_id" => "%{task_id}", "code" => "map['sql_duration'] = 2; new_event_block.call(LogStash::Event.new({:my_sql_duration => map['sql_duration']}))", "timeout" => 120 })
+        agg_filter.filter(event({"task_id" => "1"})) do |yield_event|
+          expect(yield_event).not_to be_nil
+          expect(yield_event.get("my_sql_duration")).to eq(2)
+        end
+      end
+    end
+
+  end
+
+end
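The `&new_event_block` parameter threaded into `@codeblock.call` is the 2.10.0 feature these specs exercise: the `code` block can emit extra events mid-aggregation. A hypothetical configuration sketch (the `task_id` and `duration` field names are illustrative):

```
filter {
  aggregate {
    task_id => "%{task_id}"
    code => "
      map['sql_duration'] ||= 0
      map['sql_duration'] += event.get('duration')
      # emit an intermediate event without ending the aggregation
      new_event_block.call(LogStash::Event.new({ 'partial_sql_duration' => map['sql_duration'] }))
    "
    timeout => 120
  }
}
```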
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: logstash-filter-aggregate
 version: !ruby/object:Gem::Version
-  version: 2.
+  version: 2.10.0
 platform: ruby
 authors:
 - Elastic
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2021-10-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
@@ -45,7 +45,9 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-description: This gem is a Logstash plugin required to be installed on top of the
+description: This gem is a Logstash plugin required to be installed on top of the
+  Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This
+  gem is not a stand-alone program
 email: info@elastic.co
 executables: []
 extensions: []
@@ -65,7 +67,7 @@ files:
 - spec/filters/aggregate_spec_helper.rb
 homepage: https://github.com/logstash-plugins/logstash-filter-aggregate
 licenses:
-- Apache
+- Apache-2.0
 metadata:
   logstash_plugin: 'true'
   logstash_group: filter
@@ -85,7 +87,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.
+rubygems_version: 2.6.14.1
 signing_key:
 specification_version: 4
 summary: Aggregates information from several events originating with a single task