logstash-filter-aggregate 2.5.1 → 2.5.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: a9d474bc096fd4b164adb655607abdc0d52b7f15
- data.tar.gz: 9e7b57528e3201bea76ed198e080be776b0ed3af
+ metadata.gz: 1b057d7aa6713960e001b6aefd58e80790bdf5a0
+ data.tar.gz: 907229127b32da2754a62578af6adaa4468ab40a
  SHA512:
- metadata.gz: 6d08a02cdf7a32904e74f235f2f3888170cd535dce21aeb5767c6ddfa302920beb2a6c4216aac1b485ccb502a0f0180d687b5276e7e18ec691e4c8ae2c5895e2
- data.tar.gz: 5d509ab8cf7d26fbce5cf4f53d8255ff7c203492c9fe96aab363afa73dedcda278e2497a7be91e7f8f7ce58f08bb42cfd93f08afd94628c4d54b6cd671dfeee9
+ metadata.gz: 8e77c72b1f8c14fe69224c151841dd175a67615e921b0653412ee10c8b0378e3a0f06d69360527e586fb0374e3f27b408982dd31db6e051f972b4a6db7fcddeb
+ data.tar.gz: c7d65a60dbc9c07765e27b7fa08b103fdbdd210eacf4c12ec9936188bcee3952ec38207608d88b4269ab8ff13984dd179fa0c15d7a220783230e92e2155437be
data/BUILD.md CHANGED
@@ -1,82 +1,82 @@
+ # Logstash Plugin
+
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want, in whatever way.
+
+ ## Documentation
+
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elasticsearch.org/guide/en/logstash/current/).
+
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+ - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
+
+ ## Developing
+
+ ### 1. Plugin Development and Testing
+
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
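+ For example, with JRuby active (e.g. via a Ruby version manager), you can install Bundler like this (a sketch; adapt to your setup):
+ ```sh
+ jruby -S gem install bundler
+ ```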
+
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+ - Install dependencies
+ ```sh
+ bundle install
+ ```
+
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
+ ```
+
+ - Run tests
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ ### 2. Running your unpublished Plugin in Logstash
+
+ #### 2.1 Run in a local Logstash clone
+
+ - Edit the Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+ ```
+ - Install the plugin
+ ```sh
+ bin/plugin install --no-verify
+ ```
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
+ ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+ #### 2.2 Run in an installed Logstash
+
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ bin/plugin install /your/local/plugin/logstash-filter-awesome.gem
+ ```
+ - Start Logstash and proceed to test the plugin
+
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+ It is more important to the community that you are able to contribute.
+
  For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/CHANGELOG.md CHANGED
@@ -1,65 +1,71 @@
+ ## 2.5.2
+ - bugfix: fix 'aggregate_maps_path' load (issue #62). Restart of Logstash died when no data was provided in the 'aggregate_maps_path' file for some aggregate task_id patterns
+ - enhancement: at Logstash startup, check that the 'task_id' option contains a field reference expression (else raise an error)
+ - docs: enhance examples
+ - docs: clarify that tasks are tied to their task_id pattern, even if they have the same task_id value
+
+ ## 2.5.1
+ - enhancement: when final flush occurs (just before Logstash shutdown), add `_aggregatefinalflush` tag on generated timeout events
+ - bugfix: when final flush occurs (just before Logstash shutdown), push last aggregate map as event (if push_previous_map_as_event=true)
+ - bugfix: fix 'timeout_task_id_field' feature when push_previous_map_as_event=true
+ - bugfix: fix aggregate_maps_path feature (bug since v2.4.0)
+ - internal: add debug logging
+ - internal: refactor flush management static variables
+
+ ## 2.5.0
+ - new feature: add compatibility with Logstash 5
+ - breaking: needs Logstash 2.4 or later
+
+ ## 2.4.0
+ - new feature: you can now define timeout options per task_id pattern (#42).
+   Timeout options are: `timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags`
+ - validation: a configuration error is thrown at startup if you define any timeout option on several aggregate filters for the same task_id pattern
+ - breaking: if you use the `aggregate_maps_path` option, the storage format has changed, so you have to delete the `aggregate_maps_path` file before starting Logstash
+
+ ## 2.3.1
+ - new feature: add new option "timeout_tags" so that you can add tags to generated timeout events
+
+ ## 2.3.0
+ - new feature: add new option "push_map_as_event_on_timeout" so that when a task timeout happens, the aggregation map can be yielded as a new event
+ - new feature: add new option "timeout_code" which takes the timeout event populated with the aggregation map and executes code on it. This works for "push_map_as_event_on_timeout" as well as "push_previous_map_as_event"
+ - new feature: add new option "timeout_task_id_field" which is used to map the task_id on timeout events
+
+ ## 2.2.0
+ - new feature: add new option "push_previous_map_as_event" so that each time the aggregate plugin detects a new task id, it pushes the previous aggregate map as a new Logstash event
+
+ ## 2.1.2
+ - bugfix: clarify default timeout behaviour: by default, timeout is 1800s
+
+ ## 2.1.1
+ - bugfix: when the "aggregate_maps_path" option is defined in more than one aggregate filter, raise a Logstash::ConfigurationError
+ - bugfix: add support for the Logstash hot reload feature
+
+ ## 2.1.0
+ - new feature: add new option "aggregate_maps_path" so that aggregate maps can be stored at Logstash shutdown and reloaded at Logstash startup
+
+ ## 2.0.5
+ - internal,deps: depend on logstash-core-plugin-api instead of logstash-core, removing the need to mass update plugins on major releases of Logstash
+ - breaking: needs Logstash 2.3 or later
+
+ ## 2.0.4
+ - internal,deps: new dependency requirements for logstash-core for the 5.0 release
+
+ ## 2.0.3
+ - bugfix: fix issue #10: numeric task_id is now properly processed
+
+ ## 2.0.2
+ - bugfix: fix issue #5: when the code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. This avoids a Logstash crash
+
+ ## 2.0.0
+ - internal: plugins were updated to follow the new shutdown semantic; this mainly allows Logstash to instruct input plugins to terminate gracefully,
+   instead of using Thread.raise on the plugins' threads. Ref: https://github.com/elastic/logstash/pull/3895
+ - internal,deps: dependency on logstash-core updated to 2.0
+
+ ## 0.1.3
+ - breaking: remove "milestone" method call which is deprecated in Logstash 1.5; breaks compatibility with Logstash 1.4
+ - internal,test: enhanced tests using the 'expect' command
+ - docs: add a second example in documentation
+
+ ## 0.1.2
+ - compatible with Logstash 1.4
+ - first version available on GitHub
data/CONTRIBUTORS CHANGED
@@ -1,14 +1,14 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Maintainers:
+ * Fabien Baligand (fbaligand)
+
+ Contributors:
+ * Fabien Baligand (fbaligand)
+ * Artur Kronenberg (pandaadb)
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/Gemfile CHANGED
@@ -1,2 +1,2 @@
+ source 'https://rubygems.org'
+ gemspec
data/LICENSE CHANGED
@@ -1,13 +1,13 @@
+ Copyright (c) 2012-2015 Elasticsearch <http://www.elasticsearch.org>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/NOTICE.txt CHANGED
@@ -1,5 +1,5 @@
+ Elasticsearch
+ Copyright 2012-2015 Elasticsearch
+
+ This product includes software developed by The Apache Software
  Foundation (http://www.apache.org/).
data/README.md CHANGED
@@ -1,296 +1,327 @@
+ # Logstash Filter Aggregate Documentation
+
+ [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-filter-aggregate.svg)](https://travis-ci.org/logstash-plugins/logstash-filter-aggregate)
+
+ The aim of this filter is to aggregate information available among several events (typically log lines) belonging to the same task, and finally push the aggregated information into the final task event.
+
+ You should be very careful to set the number of Logstash filter workers to 1 (`-w 1` flag) for this filter to work correctly;
+ otherwise events may be processed out of sequence and unexpected results will occur.
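+ For example, assuming your pipeline is defined in a file named `logstash.conf` (a hypothetical path), you would start Logstash like this:
+ ```sh
+ bin/logstash -f logstash.conf -w 1
+ ```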
+
+ ## Example #1
+
+ * given these logs:
+ ```
+ INFO - 12345 - TASK_START - start
+ INFO - 12345 - SQL - sqlQuery1 - 12
+ INFO - 12345 - SQL - sqlQuery2 - 34
+ INFO - 12345 - TASK_END - end
+ ```
+
+ * you can aggregate "sql duration" for the whole task with this configuration:
+ ``` ruby
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
+   }
+
+   if [logger] == "TASK_START" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] = 0"
+       map_action => "create"
+     }
+   }
+
+   if [logger] == "SQL" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] += event.get('duration')"
+       map_action => "update"
+     }
+   }
+
+   if [logger] == "TASK_END" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "event.set('sql_duration', map['sql_duration'])"
+       map_action => "update"
+       end_of_task => true
+       timeout => 120
+     }
+   }
+ }
+ ```
+
+ * the final event then looks like:
+ ``` ruby
+ {
+   "message" => "INFO - 12345 - TASK_END - end",
+   "sql_duration" => 46
+ }
+ ```
+
+ the field `sql_duration` is added and contains the sum of all SQL query durations (12 + 34 = 46).
+
+ ## Example #2: no start event
+
+ * If you have the same logs as in example #1, but without a start log:
+ ```
+ INFO - 12345 - SQL - sqlQuery1 - 12
+ INFO - 12345 - SQL - sqlQuery2 - 34
+ INFO - 12345 - TASK_END - end
+ ```
+
+ * you can also aggregate "sql duration" with a slightly different configuration:
+ ``` ruby
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
+   }
+
+   if [logger] == "SQL" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] ||= 0 ; map['sql_duration'] += event.get('duration')"
+     }
+   }
+
+   if [logger] == "TASK_END" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "event.set('sql_duration', map['sql_duration'])"
+       end_of_task => true
+       timeout => 120
+     }
+   }
+ }
+ ```
+
+ * the final event is exactly the same as in example #1
+ * the key point is the `||=` Ruby operator:
+ it initializes the 'sql_duration' map entry to 0 only if this map entry is not already initialized
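+ In plain Ruby, `x ||= y` assigns `y` to `x` only when `x` is nil or false; inside the `code` block above, the two lines below are therefore equivalent:
+ ``` ruby
+ map['sql_duration'] ||= 0
+ # equivalent here, since the stored value is a number and never false:
+ map['sql_duration'] = 0 if map['sql_duration'].nil?
+ ```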
+
+ ## Example #3: no end event
+
+ Third use case: you have no specific end event.
+
+ A typical case is aggregating or tracking user behaviour. We can track a user by their ID through the events; however, once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.
+
+ In this case, we can enable the option 'push_map_as_event_on_timeout' to push the aggregation map as a new event when a timeout occurs.
+ In addition, we can use 'timeout_code' to execute code on the populated timeout event.
+ We can also add 'timeout_task_id_field' so we can correlate the task_id, which in this case would be the user's ID.
+
+ * Given these logs:
+
+ ```
+ INFO - 12345 - Clicked One
+ INFO - 12345 - Clicked Two
+ INFO - 12345 - Clicked Three
+ ```
+
+ * You can aggregate the number of clicks the user made like this:
+
+ ``` ruby
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
+   }
+
+   aggregate {
+     task_id => "%{user_id}"
+     code => "map['clicks'] ||= 0; map['clicks'] += 1;"
+     push_map_as_event_on_timeout => true
+     timeout_task_id_field => "user_id"
+     timeout => 600 # 10 minutes timeout
+     timeout_tags => ['_aggregatetimeout']
+     timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
+   }
+ }
+ ```
+
+ * After ten minutes, this will yield an event like:
+
+ ``` json
+ {
+   "user_id": "12345",
+   "clicks": 3,
+   "several_clicks": true,
+   "tags": [
+     "_aggregatetimeout"
+   ]
+ }
+ ```
+
+ ## Example #4: no end event and tasks come one after the other
+
+ Fourth use case: like example #3, you have no specific end event, but tasks also come one after the other.
+ That is to say: tasks are not interlaced. All task1 events come, then all task2 events come, and so on.
+ In that case, you don't want to wait for the task timeout to flush the aggregation map.
+ * A typical case is aggregating results from the jdbc input plugin.
+ * Given that you have this SQL query: `SELECT country_name, town_name FROM town ORDER BY country_name`
+ * Using the jdbc input plugin, you get these 3 events:
+ ``` json
+ { "country_name": "France", "town_name": "Paris" }
+ { "country_name": "France", "town_name": "Marseille" }
+ { "country_name": "USA", "town_name": "New-York" }
+ ```
+ * And you would like to push these 2 result events into elasticsearch:
+ ``` json
+ { "country_name": "France", "towns": [ {"town_name": "Paris"}, {"town_name": "Marseille"} ] }
+ { "country_name": "USA", "towns": [ {"town_name": "New-York"} ] }
+ ```
+ * You can do that using the `push_previous_map_as_event` aggregate plugin option:
+ ``` ruby
+ filter {
+   aggregate {
+     task_id => "%{country_name}"
+     code => "
+       map['country_name'] = event.get('country_name')
+       map['towns'] ||= []
+       map['towns'] << {'town_name' => event.get('town_name')}
+       event.cancel()
+     "
+     push_previous_map_as_event => true
+     timeout => 3
+   }
+ }
+ ```
+ * The key point is that each time the aggregate plugin detects a new `country_name`, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next country
+ * When the 3s timeout occurs, the last aggregate map is pushed as a new event
+ * Finally, the initial events (which are not aggregated) are dropped because they are useless (thanks to `event.cancel()`)
+
+ ## How it works
+ - the filter needs a "task_id" to correlate events (log lines) of the same task
+ - at the task beginning, the filter creates a map attached to the task_id
+ - for each event, you can execute code using 'event' and 'map' (for instance, copy an event field to the map)
+ - in the final event, you can execute some last code (for instance, add map data to the final event)
+ - after the final event, the map attached to the task is deleted (thanks to `end_of_task => true`)
+ - an aggregate map is tied to one task_id value, which is tied to one task_id pattern.
+   So if you have 2 filters with different task_id patterns, they won't share the same aggregate map, even for the same task_id value (see the sketch after this list)
+ - in one filter configuration, it is recommended to define a timeout option to protect the filter against unterminated tasks. It tells the filter to delete expired maps
+ - if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
+ - all timeout options have to be defined in only one aggregate filter per task_id pattern.
+   Timeout options are: `timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags`
+ - if `code` execution raises an exception, the error is logged and the event is tagged '_aggregateexception'
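+ For instance, here is a minimal sketch (with hypothetical fields `ticket_id` and `session_id`) of two aggregate filters whose task_id patterns differ: even when both patterns resolve to the same value for a given event, each filter keeps its own map:
+ ``` ruby
+ filter {
+   # first task_id pattern: maps are keyed under "%{ticket_id}"
+   aggregate {
+     task_id => "%{ticket_id}"
+     code => "map['ticket_events'] ||= 0; map['ticket_events'] += 1"
+   }
+   # second task_id pattern: a separate map, even if the resolved
+   # task_id value equals the one above
+   aggregate {
+     task_id => "%{session_id}"
+     code => "map['session_events'] ||= 0; map['session_events'] += 1"
+   }
+ }
+ ```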
+
+ ## Use Cases
+ - extract some cool metrics from task logs and push them into the final task log event (like in examples #1 and #2)
+ - extract error information from any task log line, and push it into the final task event (to get a final event with all error information, if any)
+ - extract all back-end calls as a list, and push this list into the final task event (to get a task profile)
+ - extract all http headers logged across several lines, and push this list into the final task event (complete http request info)
+ - for every back-end call, collect call details available on several lines, analyse them and finally tag the final back-end call log line (error, timeout, business-warning, ...)
+ - finally, the task id can be any correlation id matching your need: it can be a session id, a file path, ...
+
+ ## Aggregate Plugin Options
+ - **task_id:**
+   The expression defining the task ID used to correlate logs.
+   This value must uniquely identify the task.
+   This option is required.
+   Example:
+ ``` ruby
+ filter {
+   aggregate {
+     task_id => "%{type}%{my_task_id}"
+   }
+ }
+ ```
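+ Note that, as of 2.5.2, Logstash checks at startup that `task_id` contains a field reference expression and raises an error otherwise (see CHANGELOG). So a literal value such as the one below would be rejected, since it cannot vary per event:
+ ``` ruby
+ filter {
+   aggregate {
+     task_id => "12345"  # invalid: no %{field} reference
+   }
+ }
+ ```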
+
+ - **code:**
+   The code to execute to update the map, using the current event;
+   or, conversely, the code to execute to update the event, using the current map.
+   You will have a 'map' variable and an 'event' variable available (that is, the event itself).
+   This option is required.
+   Example:
+ ``` ruby
+ filter {
+   aggregate {
+     code => "map['sql_duration'] += event.get('duration')"
+   }
+ }
+ ```
+
+ - **map_action:**
+   Tell the filter what to do with the aggregate map.
+   `create`: create the map, and execute the code only if the map wasn't created before
+   `update`: doesn't create the map, and execute the code only if the map was created before
+   `create_or_update`: create the map if it wasn't created before, execute the code in all cases
+   Default value: `create_or_update`
+
+ - **end_of_task:**
+   Tell the filter that the task is ended, and therefore, to delete the aggregate map after code execution.
+   Default value: `false`
+
+ - **aggregate_maps_path:**
+   The path to the file where aggregate maps are stored when Logstash stops, and loaded from when Logstash starts.
+   If not defined, aggregate maps will not be stored when Logstash stops and will be lost.
+   Must be defined in only one aggregate filter (as aggregate maps are global).
+   Example:
+ ``` ruby
+ filter {
+   aggregate {
+     aggregate_maps_path => "/path/to/.aggregate_maps"
+   }
+ }
+ ```
+
+ - **timeout:**
+   The amount of seconds after which a task "end event" can be considered lost.
+   When a timeout occurs for a task, its task "map" is evicted.
+   The timeout can be defined for each "task_id" pattern.
+   If no timeout is defined, the default timeout is applied: 1800 seconds.
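+ For example, to evict task maps after one hour without an end event (field names are illustrative):
+ ``` ruby
+ filter {
+   aggregate {
+     task_id => "%{taskid}"
+     code => "map['events'] ||= 0; map['events'] += 1"
+     timeout => 3600 # seconds
+   }
+ }
+ ```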
+
+ - **timeout_code:**
+   The code to execute to complete the timeout-generated event, when `push_map_as_event_on_timeout` or `push_previous_map_as_event` is set to true.
+   The code block will have access to the newly generated timeout event, which is pre-populated with the aggregation map.
+   If `timeout_task_id_field` is set, the event is also populated with the task_id value.
+   Example:
+ ``` ruby
+ filter {
+   aggregate {
+     timeout_code => "event.set('state', 'timeout')"
+   }
+ }
+ ```
+
+ - **push_map_as_event_on_timeout:**
+   When this option is enabled, each time a task timeout is detected, the task aggregation map is pushed as a new Logstash event.
+   This makes it possible to detect and process task timeouts in Logstash, but also to manage tasks that have no explicit end event.
+   Default value: `false`
+
+ - **push_previous_map_as_event:**
+   When this option is enabled, each time the aggregate plugin detects a new task id, it pushes the previous aggregate map as a new Logstash event,
+   and then creates a new empty map for the next task.
+   _WARNING:_ this option works fine only if tasks come one after the other. That means: all task1 events, then all task2 events, etc...
+   Default value: `false`
+
+ - **timeout_task_id_field:**
+   This option indicates the field of the timeout-generated event that receives the "task_id" value.
+   The task id will then be set in the timeout event. This can help correlate which tasks have timed out.
+   For example, with option `timeout_task_id_field => "my_id"`, when the timeout task id is `"12345"`, the generated timeout event will contain `'my_id' => '12345'`.
+   By default, if this option is not set, the task id value won't be set in the timeout-generated event.
+
+ - **timeout_tags:**
+   Defines tags to add when a timeout event is generated and yielded.
+   Default value: `[]`
+   Example:
+ ``` ruby
+ filter {
+   aggregate {
+     timeout_tags => ["aggregate_timeout"]
+   }
+ }
+ ```
+
+ ## Changelog
+
+ Read [CHANGELOG.md](CHANGELOG.md).
+
+
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+
+ ## Want to contribute?
+
+ Read [BUILD.md](BUILD.md).