logstash-filter-aggregate 2.5.2 → 2.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 1b057d7aa6713960e001b6aefd58e80790bdf5a0
- data.tar.gz: 907229127b32da2754a62578af6adaa4468ab40a
+ metadata.gz: a472e0a2745d183d8fac17465d5323f6c90e403a
+ data.tar.gz: 0ab61d24cfc2ddbe5abc757e49f7de6bf993f7cf
  SHA512:
- metadata.gz: 8e77c72b1f8c14fe69224c151841dd175a67615e921b0653412ee10c8b0378e3a0f06d69360527e586fb0374e3f27b408982dd31db6e051f972b4a6db7fcddeb
- data.tar.gz: c7d65a60dbc9c07765e27b7fa08b103fdbdd210eacf4c12ec9936188bcee3952ec38207608d88b4269ab8ff13984dd179fa0c15d7a220783230e92e2155437be
+ metadata.gz: 3c44c668f3f6e896fe0cd2f453bb40c8ea478e77e606f8d553282a70d65f0aca14863a14907f73ee1638e84383c939ce276c74d892757eb20cea74d96a20484c
+ data.tar.gz: 276033d1b5227a625b3092fed02097d64c6cd3c0e7eaa2ce176565d3cc6264eea4a7a2ce32fc1c037e69a5e538f362ba62f94f09681cbfe4174d0726670261c7
data/CHANGELOG.md CHANGED
@@ -1,3 +1,6 @@
+ ## 2.6.0
+ - new feature: 'inactivity_timeout'. Events for a given `task_id` are aggregated for as long as they keep arriving within the defined `inactivity_timeout` - the inactivity timeout is reset each time a new event arrives. In contrast, `timeout` is never reset and fires `timeout` seconds after the aggregation map is created.
+
  ## 2.5.2
  - bugfix: fix 'aggregate_maps_path' load (issue #62). Restarting Logstash failed when no data was provided in the 'aggregate_maps_path' file for some aggregate task_id patterns
  - enhancement: at Logstash startup, check that 'task_id' option contains a field reference expression (else raise error)
data/CONTRIBUTORS CHANGED
@@ -7,6 +7,7 @@ Maintainers:
  Contributors:
  * Fabien Baligand (fbaligand)
  * Artur Kronenberg (pandaadb)
+ * Fernando Galandrini (fjgal)
 
  Note: If you've sent us patches, bug reports, or otherwise contributed to
  Logstash, and you aren't on the list above and want to be, please let us know
data/Gemfile CHANGED
@@ -1,2 +1,11 @@
  source 'https://rubygems.org'
+
  gemspec
+
+ logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
+ use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
+
+ if Dir.exist?(logstash_path) && use_logstash_source
+   gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
+   gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
+ end
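
The new `Gemfile` logic above lets the plugin resolve `logstash-core` from a local Logstash source tree instead of released gems. A minimal usage sketch, assuming a local clone of https://github.com/elastic/logstash (the checkout path is illustrative):

```sh
export LOGSTASH_SOURCE=1
export LOGSTASH_PATH=/path/to/logstash   # optional; the Gemfile defaults to ../../logstash
bundle install                           # resolves logstash-core from the local checkout
```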
data/README.md CHANGED
@@ -1,327 +1,104 @@
- # Logstash Filter Aggregate Documentation
+ # Aggregate Logstash Plugin
 
  [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-filter-aggregate.svg)](https://travis-ci.org/logstash-plugins/logstash-filter-aggregate)
 
- The aim of this filter is to aggregate information available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event.
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
 
- You should be very careful to set Logstash filter workers to 1 (`-w 1` flag) for this filter to work correctly
- otherwise events may be processed out of sequence and unexpected results will occur.
-
- ## Example #1
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
 
- * with these given logs :
- ```
- INFO - 12345 - TASK_START - start
- INFO - 12345 - SQL - sqlQuery1 - 12
- INFO - 12345 - SQL - sqlQuery2 - 34
- INFO - 12345 - TASK_END - end
- ```
+ ## Documentation
 
- * you can aggregate "sql duration" for the whole task with this configuration :
- ``` ruby
- filter {
-   grok {
-     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
-   }
-
-   if [logger] == "TASK_START" {
-     aggregate {
-       task_id => "%{taskid}"
-       code => "map['sql_duration'] = 0"
-       map_action => "create"
-     }
-   }
-
-   if [logger] == "SQL" {
-     aggregate {
-       task_id => "%{taskid}"
-       code => "map['sql_duration'] += event.get('duration')"
-       map_action => "update"
-     }
-   }
-
-   if [logger] == "TASK_END" {
-     aggregate {
-       task_id => "%{taskid}"
-       code => "event.set('sql_duration', map['sql_duration'])"
-       map_action => "update"
-       end_of_task => true
-       timeout => 120
-     }
-   }
- }
- ```
+ Latest aggregate plugin documentation is available [here](docs/index.asciidoc).
 
- * the final event then looks like :
- ``` ruby
- {
-   "message" => "INFO - 12345 - TASK_END - end",
-   "sql_duration" => 46
- }
- ```
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
 
- the field `sql_duration` is added and contains the sum of all sql queries durations.
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+ - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
 
- ## Example #2 : no start event
-
- * If you have the same logs than example #1, but without a start log :
- ```
- INFO - 12345 - SQL - sqlQuery1 - 12
- INFO - 12345 - SQL - sqlQuery2 - 34
- INFO - 12345 - TASK_END - end
- ```
+ ## Changelog
 
- * you can also aggregate "sql duration" with a slightly different configuration :
- ``` ruby
- filter {
-   grok {
-     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
-   }
-
-   if [logger] == "SQL" {
-     aggregate {
-       task_id => "%{taskid}"
-       code => "map['sql_duration'] ||= 0 ; map['sql_duration'] += event.get('duration')"
-     }
-   }
-
-   if [logger] == "TASK_END" {
-     aggregate {
-       task_id => "%{taskid}"
-       code => "event.set('sql_duration', map['sql_duration'])"
-       end_of_task => true
-       timeout => 120
-     }
-   }
- }
- ```
+ Read [CHANGELOG.md](CHANGELOG.md).
 
- * the final event is exactly the same than example #1
- * the key point is the "||=" ruby operator.
- it allows to initialize 'sql_duration' map entry to 0 only if this map entry is not already initialized
+ ## Need Help?
 
- ## Example #3 : no end event
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
 
- Third use case: You have no specific end event.
+ ## Developing
 
- A typical case is aggregating or tracking user behaviour. We can track a user by its ID through the events, however once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.
+ ### 1. Plugin Development and Testing
 
- In this case, we can enable the option 'push_map_as_event_on_timeout' to enable pushing the aggregation map as a new event when a timeout occurs.
- In addition, we can enable 'timeout_code' to execute code on the populated timeout event.
- We can also add 'timeout_task_id_field' so we can correlate the task_id, which in this case would be the user's ID.
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
 
- * Given these logs:
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
 
+ - Install dependencies
+ ```sh
+ bundle install
  ```
- INFO - 12345 - Clicked One
- INFO - 12345 - Clicked Two
- INFO - 12345 - Clicked Three
- ```
 
- * You can aggregate the amount of clicks the user did like this:
-
- ``` ruby
- filter {
-   grok {
-     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
-   }
-
-   aggregate {
-     task_id => "%{user_id}"
-     code => "map['clicks'] ||= 0; map['clicks'] += 1;"
-     push_map_as_event_on_timeout => true
-     timeout_task_id_field => "user_id"
-     timeout => 600 # 10 minutes timeout
-     timeout_tags => ['_aggregatetimeout']
-     timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
-   }
- }
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
  ```
 
- * After ten minutes, this will yield an event like:
-
- ``` json
- {
-   "user_id": "12345",
-   "clicks": 3,
-   "several_clicks": true,
-   "tags": [
-     "_aggregatetimeout"
-   ]
- }
+ - Run tests
+
+ ```sh
+ bundle exec rspec
  ```
 
+ ### 2. Running your unpublished Plugin in Logstash
 
- ## Example #4 : no end event and tasks come one after the other
+ #### 2.1 Run in a local Logstash clone
 
- Fourth use case : like example #3, you have no specific end event, but also, tasks come one after the other.
- That is to say : tasks are not interlaced. All task1 events come, then all task2 events come, ...
- In that case, you don't want to wait task timeout to flush aggregation map.
- * A typical case is aggregating results from jdbc input plugin.
- * Given that you have this SQL query : `SELECT country_name, town_name FROM town ORDER BY country_name`
- * Using jdbc input plugin, you get these 3 events from :
- ``` json
- { "country_name": "France", "town_name": "Paris" }
- { "country_name": "France", "town_name": "Marseille" }
- { "country_name": "USA", "town_name": "New-York" }
- ```
- * And you would like these 2 result events to push them into elasticsearch :
- ``` json
- { "country_name": "France", "towns": [ {"town_name": "Paris"}, {"town_name": "Marseille"} ] }
- { "country_name": "USA", "towns": [ {"town_name": "New-York"} ] }
- ```
- * You can do that using `push_previous_map_as_event` aggregate plugin option :
- ``` ruby
- filter {
-   aggregate {
-     task_id => "%{country_name}"
-     code => "
-       map['country_name'] = event.get('country_name')
-       map['towns'] ||= []
-       map['towns'] << {'town_name' => event.get('town_name')}
-       event.cancel()
-     "
-     push_previous_map_as_event => true
-     timeout => 3
-   }
- }
- ```
- * The key point is that each time aggregate plugin detects a new `country_name`, it pushes previous aggregate map as a new Logstash event, and then creates a new empty map for the next country
- * When 5s timeout comes, the last aggregate map is pushed as a new event
- * Finally, initial events (which are not aggregated) are dropped because useless (thanks to `event.cancel()`)
-
- ## How it works
- - the filter needs a "task_id" to correlate events (log lines) of a same task
- - at the task beggining, filter creates a map, attached to task_id
- - for each event, you can execute code using 'event' and 'map' (for instance, copy an event field to map)
- - in the final event, you can execute a last code (for instance, add map data to final event)
- - after the final event, the map attached to task is deleted (thanks to `end_of_task => true`)
- - an aggregate map is tied to one task_id value which is tied to one task_id pattern.
- So if you have 2 filters with different task_id patterns, even if you have same task_id value, they won't share the same aggregate map.
- - in one filter configuration, it is recommanded to define a timeout option to protect the filter against unterminated tasks. It tells the filter to delete expired maps
- - if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
- - all timeout options have to be defined in only one aggregate filter per task_id pattern.
- Timeout options are : `timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags`
- - if `code` execution raises an exception, the error is logged and event is tagged '_aggregateexception'
-
- ## Use Cases
- - extract some cool metrics from task logs and push them into task final log event (like in example #1 and #2)
- - extract error information in any task log line, and push it in final task event (to get a final event with all error information if any)
- - extract all back-end calls as a list, and push this list in final task event (to get a task profile)
- - extract all http headers logged in several lines to push this list in final task event (complete http request info)
- - for every back-end call, collect call details available on several lines, analyse it and finally tag final back-end call log line (error, timeout, business-warning, ...)
- - Finally, task id can be any correlation id matching your need : it can be a session id, a file path, ...
-
- ## Aggregate Plugin Options
- - **task_id :**
- The expression defining task ID to correlate logs.
- This value must uniquely identify the task.
- This option is required.
- Example:
- ``` ruby
- filter {
-   aggregate {
-     task_id => "%{type}%{my_task_id}"
-   }
- }
+ - Edit Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
  ```
+ - Install plugin
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install --no-verify
 
- - **code:**
- The code to execute to update map, using current event.
- Or on the contrary, the code to execute to update event, using current map.
- You will have a 'map' variable and an 'event' variable available (that is the event itself).
- This option is required.
- Example:
- ``` ruby
- filter {
-   aggregate {
-     code => "map['sql_duration'] += event.get('duration')"
-   }
- }
- ```
+ # Prior to Logstash 2.3
+ bin/plugin install --no-verify
 
- - **map_action:**
- Tell the filter what to do with aggregate map (default : "create_or_update").
- `create`: create the map, and execute the code only if map wasn't created before
- `update`: doesn't create the map, and execute the code only if map was created before
- `create_or_update`: create the map if it wasn't created before, execute the code in all cases
- Default value: `create_or_update`
-
- - **end_of_task:**
- Tell the filter that task is ended, and therefore, to delete aggregate map after code execution.
- Default value: `false`
-
- - **aggregate_maps_path:**
- The path to file where aggregate maps are stored when Logstash stops and are loaded from when Logstash starts.
- If not defined, aggregate maps will not be stored at Logstash stop and will be lost.
- Must be defined in only one aggregate filter (as aggregate maps are global).
- Example:
- ``` ruby
- filter {
-   aggregate {
-     aggregate_maps_path => "/path/to/.aggregate_maps"
-   }
- }
  ```
-
- - **timeout:**
- The amount of seconds after a task "end event" can be considered lost.
- When timeout occurs for a task, The task "map" is evicted.
- Timeout can be defined for each "task_id" pattern.
- If no timeout is defined, default timeout will be applied : 1800 seconds.
-
- - **timeout_code**
- The code to execute to complete timeout generated event, when `'push_map_as_event_on_timeout'` or `'push_previous_map_as_event'` is set to true.
- The code block will have access to the newly generated timeout event that is pre-populated with the aggregation map.
- If `'timeout_task_id_field'` is set, the event is also populated with the task_id value
- Example:
- ``` ruby
- filter {
-   aggregate {
-     timeout_code => "event.set('state', 'timeout')"
-   }
- }
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
  ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
 
- - **push_map_as_event_on_timeout**
- When this option is enabled, each time a task timeout is detected, it pushes task aggregation map as a new Logstash event.
- This enables to detect and process task timeouts in Logstash, but also to manage tasks that have no explicit end event.
- Default value: `false`
-
- - **push_previous_map_as_event:**
- When this option is enabled, each time aggregate plugin detects a new task id, it pushes previous aggregate map as a new Logstash event,
- and then creates a new empty map for the next task.
- _WARNING:_ this option works fine only if tasks come one after the other. It means : all task1 events, then all task2 events, etc...
- Default value: `false`
-
- - **timeout_task_id_field**
- This option indicates the timeout generated event's field for the "task_id" value.
- The task id will then be set into the timeout event. This can help correlate which tasks have been timed out.
- For example, with option `timeout_task_id_field => "my_id"` ,when timeout task id is `"12345"`, the generated timeout event will contain `'my_id' => '12345'`.
- By default, if this option is not set, task id value won't be set into timeout generated event.
-
- - **timeout_tags**
- Defines tags to add when a timeout event is generated and yield.
- Default value: `[]`
- Example:
- ``` ruby
- filter {
-   aggregate {
-     timeout_tags => ["aggregate_timeout']
-   }
- }
- ```
+ #### 2.2 Run in an installed Logstash
 
- ## Changelog
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
 
- Read [CHANGELOG.md](CHANGELOG.md).
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install --no-verify
 
+ # Prior to Logstash 2.3
+ bin/plugin install --no-verify
 
- ## Need Help?
+ ```
+ - Start Logstash and proceed to test the plugin
+
+ ## Contributing
 
- Need help? Try #Logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
 
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
 
- ## Want to contribute?
+ It is more important to the community that you are able to contribute.
 
- Read [BUILD.md](BUILD.md).
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/docs/index.asciidoc ADDED
@@ -0,0 +1,552 @@
+ :plugin: aggregate
+ :type: filter
+
+ ///////////////////////////////////////////
+ START - GENERATED VARIABLES, DO NOT EDIT!
+ ///////////////////////////////////////////
+ :version: %VERSION%
+ :release_date: %RELEASE_DATE%
+ :changelog_url: %CHANGELOG_URL%
+ :include_path: ../../../logstash/docs/include
+ ///////////////////////////////////////////
+ END - GENERATED VARIABLES, DO NOT EDIT!
+ ///////////////////////////////////////////
+
+ [id="plugins-{type}-{plugin}"]
+
+ === Aggregate
+
+ include::{include_path}/plugin_header.asciidoc[]
+
+
+ <<plugins-{type}s-{plugin}-description>> +
+ <<plugins-{type}s-{plugin}-example1>> +
+ <<plugins-{type}s-{plugin}-example2>> +
+ <<plugins-{type}s-{plugin}-example3>> +
+ <<plugins-{type}s-{plugin}-example4>> +
+ <<plugins-{type}s-{plugin}-example5>> +
+ <<plugins-{type}s-{plugin}-howitworks>> +
+ <<plugins-{type}s-{plugin}-usecases>> +
+ <<plugins-{type}s-{plugin}-options>> +
+
+
+ [id="plugins-{type}s-{plugin}-description"]
+ ==== Description
+
+
+ The aim of this filter is to aggregate information available among several events (typically log lines) belonging to the same task,
+ and finally push the aggregated information into the final task event.
+
+ You should be very careful to set Logstash filter workers to 1 (`-w 1` flag) for this filter to work correctly,
+ otherwise events may be processed out of sequence and unexpected results will occur.
+
+
+ [id="plugins-{type}s-{plugin}-example1"]
+ ==== Example #1
+
+ * Given these logs:
+
+ [source,ruby]
+ ----------------------------------
+ INFO - 12345 - TASK_START - start
+ INFO - 12345 - SQL - sqlQuery1 - 12
+ INFO - 12345 - SQL - sqlQuery2 - 34
+ INFO - 12345 - TASK_END - end
+ ----------------------------------
+
+ * you can aggregate the "sql duration" for the whole task with this configuration:
+
+ [source,ruby]
+ ----------------------------------
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
+   }
+
+   if [logger] == "TASK_START" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] = 0"
+       map_action => "create"
+     }
+   }
+
+   if [logger] == "SQL" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] += event.get('duration')"
+       map_action => "update"
+     }
+   }
+
+   if [logger] == "TASK_END" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "event.set('sql_duration', map['sql_duration'])"
+       map_action => "update"
+       end_of_task => true
+       timeout => 120
+     }
+   }
+ }
+ ----------------------------------
+
+ * the final event then looks like:
+
+ [source,ruby]
+ ----------------------------------
+ {
+   "message" => "INFO - 12345 - TASK_END - end",
+   "sql_duration" => 46
+ }
+ ----------------------------------
+
+ the field `sql_duration` is added and contains the sum of all SQL query durations.
+
+
+ [id="plugins-{type}s-{plugin}-example2"]
+ ==== Example #2 : no start event
+
+ * If you have the same logs as in example #1, but without a start log:
+
+ [source,ruby]
+ ----------------------------------
+ INFO - 12345 - SQL - sqlQuery1 - 12
+ INFO - 12345 - SQL - sqlQuery2 - 34
+ INFO - 12345 - TASK_END - end
+ ----------------------------------
+
+ * you can also aggregate the "sql duration" with a slightly different configuration:
+
+ [source,ruby]
+ ----------------------------------
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
+   }
+
+   if [logger] == "SQL" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "map['sql_duration'] ||= 0 ; map['sql_duration'] += event.get('duration')"
+     }
+   }
+
+   if [logger] == "TASK_END" {
+     aggregate {
+       task_id => "%{taskid}"
+       code => "event.set('sql_duration', map['sql_duration'])"
+       end_of_task => true
+       timeout => 120
+     }
+   }
+ }
+ ----------------------------------
+
+ * the final event is exactly the same as in example #1
+ * the key point is the "||=" ruby operator. It initializes the 'sql_duration' map entry to 0 only if this map entry is not already initialized
+
+
+ [id="plugins-{type}s-{plugin}-example3"]
+ ==== Example #3 : no end event
+
+ Third use case: you have no specific end event.
+
+ A typical case is aggregating or tracking user behaviour. We can track a user by their ID through the events, however once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.
+
+ In this case, we can enable the option 'push_map_as_event_on_timeout' to push the aggregation map as a new event when a timeout occurs.
+ In addition, we can use 'timeout_code' to execute code on the populated timeout event.
+ We can also add 'timeout_task_id_field' so we can correlate the task_id, which in this case would be the user's ID.
+
+ * Given these logs:
+
+ [source,ruby]
+ ----------------------------------
+ INFO - 12345 - Clicked One
+ INFO - 12345 - Clicked Two
+ INFO - 12345 - Clicked Three
+ ----------------------------------
+
+ * You can aggregate the amount of clicks the user did like this:
+
+ [source,ruby]
+ ----------------------------------
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
+   }
+
+   aggregate {
+     task_id => "%{user_id}"
+     code => "map['clicks'] ||= 0; map['clicks'] += 1;"
+     push_map_as_event_on_timeout => true
+     timeout_task_id_field => "user_id"
+     timeout => 600 # 10 minutes timeout
+     timeout_tags => ['_aggregatetimeout']
+     timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
+   }
+ }
+ ----------------------------------
+
+ * After ten minutes, this will yield an event like:
+
+ [source,json]
+ ----------------------------------
+ {
+   "user_id": "12345",
+   "clicks": 3,
+   "several_clicks": true,
+   "tags": [
+     "_aggregatetimeout"
+   ]
+ }
+ ----------------------------------
+
+
+ [id="plugins-{type}s-{plugin}-example4"]
+ ==== Example #4 : no end event and tasks come one after the other
+
+ Fourth use case: like example #3, you have no specific end event, but also, tasks come one after the other. +
+ That is to say: tasks are not interlaced. All task1 events come, then all task2 events come, ... +
+ In that case, you don't want to wait for the task timeout to flush the aggregation map. +
+
+ * A typical case is aggregating results from the jdbc input plugin.
+ * Given that you have this SQL query: `SELECT country_name, town_name FROM town ORDER BY country_name`
+ * Using the jdbc input plugin, you get these 3 events:
+
+ [source,json]
+ ----------------------------------
+ { "country_name": "France", "town_name": "Paris" }
+ { "country_name": "France", "town_name": "Marseille" }
+ { "country_name": "USA", "town_name": "New-York" }
+ ----------------------------------
+
+ * And you would like to push these 2 result events into elasticsearch:
+
+ [source,json]
+ ----------------------------------
+ { "country_name": "France", "towns": [ {"town_name": "Paris"}, {"town_name": "Marseille"} ] }
+ { "country_name": "USA", "towns": [ {"town_name": "New-York"} ] }
+ ----------------------------------
+
+ * You can do that using the `push_previous_map_as_event` aggregate plugin option:
+
+ [source,ruby]
+ ----------------------------------
+ filter {
+   aggregate {
+     task_id => "%{country_name}"
+     code => "
+       map['country_name'] = event.get('country_name')
+       map['towns'] ||= []
+       map['towns'] << {'town_name' => event.get('town_name')}
+       event.cancel()
+     "
+     push_previous_map_as_event => true
+     timeout => 3
+   }
+ }
+ ----------------------------------
+
+ * The key point is that each time the aggregate plugin detects a new `country_name`, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next country
+ * When the 3s timeout expires, the last aggregate map is pushed as a new event
+ * Finally, the initial events (which are not aggregated) are dropped because they are useless (thanks to `event.cancel()`)
+
+
+ [id="plugins-{type}s-{plugin}-example5"]
+ ==== Example #5 : no end event and push events as soon as possible
+
+ Fifth use case: like example #3, there is no end event. +
+ Events keep coming for an indefinite time and you want to push the aggregation map as soon as possible after the last user interaction, without waiting for the `timeout`. +
+ This allows the aggregated events to be pushed closer to real time. +
+
+ A typical case is aggregating or tracking user behaviour. +
+ We can track a user by their ID through the events, however once the user stops interacting, the events stop coming in. +
+ There is no specific event indicating the end of the user's interaction. +
+ The user interaction will be considered ended when no events for the specified user (task_id) arrive within the specified `inactivity_timeout`. +
+ If the user continues interacting for longer than `timeout` seconds (since the first event), the aggregation map will still be deleted and pushed as a new event when the timeout occurs. +
+ The difference with example #3 is that the events will be pushed as soon as the user stops interacting for `inactivity_timeout` seconds, instead of waiting for the end of `timeout` seconds since the first event.
+
+ In this case, we can enable the option 'push_map_as_event_on_timeout' to push the aggregation map as a new event when the inactivity timeout occurs. +
+ In addition, we can use 'timeout_code' to execute code on the populated timeout event. +
+ We can also add 'timeout_task_id_field' so we can correlate the task_id, which in this case would be the user's ID. +
+
+ * Given these logs:
+
+ [source,ruby]
+ ----------------------------------
+ INFO - 12345 - Clicked One
+ INFO - 12345 - Clicked Two
+ INFO - 12345 - Clicked Three
+ ----------------------------------
+
+ * You can aggregate the amount of clicks the user did like this:
+
+ [source,ruby]
+ ----------------------------------
+ filter {
+   grok {
+     match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
+   }
+   aggregate {
+     task_id => "%{user_id}"
+     code => "map['clicks'] ||= 0; map['clicks'] += 1;"
+     push_map_as_event_on_timeout => true
+     timeout_task_id_field => "user_id"
+     timeout => 3600 # 1 hour timeout: user activity is considered finished one hour after the first event, even if events keep coming
+     inactivity_timeout => 300 # 5 minutes timeout: user activity is considered finished if no new events arrive for 5 minutes after the last event
+     timeout_tags => ['_aggregatetimeout']
+     timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
+   }
+ }
+ ----------------------------------
+
+ * After five minutes of inactivity, or one hour after the first event, this will yield an event like:
+
+ [source,json]
+ ----------------------------------
+ {
+   "user_id": "12345",
+   "clicks": 3,
+   "several_clicks": true,
+   "tags": [
+     "_aggregatetimeout"
+   ]
+ }
+ ----------------------------------
+
+
+ [id="plugins-{type}s-{plugin}-howitworks"]
+ ==== How it works
+ * the filter needs a "task_id" to correlate events (log lines) of the same task
+ * at the task beginning, the filter creates a map attached to the task_id
+ * for each event, you can execute code using 'event' and 'map' (for instance, copy an event field to the map)
+ * in the final event, you can execute one last piece of code (for instance, add map data to the final event)
+ * after the final event, the map attached to the task is deleted (thanks to `end_of_task => true`)
+ * an aggregate map is tied to one task_id value, which is tied to one task_id pattern. So if you have 2 filters with different task_id patterns, even if you have the same task_id value, they won't share the same aggregate map.
+ * in one filter configuration, it is recommended to define a timeout option to protect the filter against unterminated tasks. It tells the filter to delete expired maps
+ * if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
+ * all timeout options have to be defined in only one aggregate filter per task_id pattern. Timeout options are: timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags
+ * if `code` execution raises an exception, the error is logged and the event is tagged '_aggregateexception'
+
+
+ [id="plugins-{type}s-{plugin}-usecases"]
+ ==== Use Cases
+ * extract some cool metrics from task logs and push them into the final task log event (like in examples #1 and #2)
+ * extract error information from any task log line, and push it into the final task event (to get a final event with all error information if any)
+ * extract all back-end calls as a list, and push this list into the final task event (to get a task profile)
+ * extract all http headers logged in several lines and push this list into the final task event (complete http request info)
+ * for every back-end call, collect call details available on several lines, analyse them and finally tag the final back-end call log line (error, timeout, business-warning, ...)
+ * Finally, the task id can be any correlation id matching your need: it can be a session id, a file path, ...
+
+
+ [id="plugins-{type}s-{plugin}-options"]
+ ==== Aggregate Filter Configuration Options
+
+ This plugin supports the following configuration options plus the <<plugins-{type}s-common-options>> described later.
+
+ [cols="<,<,<",options="header",]
+ |=======================================================================
+ |Setting |Input type|Required
+ | <<plugins-{type}s-{plugin}-aggregate_maps_path>> |<<string,string>>, a valid filesystem path|No
+ | <<plugins-{type}s-{plugin}-code>> |<<string,string>>|Yes
+ | <<plugins-{type}s-{plugin}-end_of_task>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-inactivity_timeout>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-map_action>> |<<string,string>>, one of `["create", "update", "create_or_update"]`|No
+ | <<plugins-{type}s-{plugin}-push_map_as_event_on_timeout>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-push_previous_map_as_event>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-task_id>> |<<string,string>>|Yes
+ | <<plugins-{type}s-{plugin}-timeout>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-timeout_code>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-timeout_tags>> |<<array,array>>|No
+ | <<plugins-{type}s-{plugin}-timeout_task_id_field>> |<<string,string>>|No
+ |=======================================================================
+
+ Also see <<plugins-{type}s-common-options>> for a list of options supported by all
+ filter plugins.
+
+ &nbsp;
+
+ [id="plugins-{type}s-{plugin}-aggregate_maps_path"]
+ ===== `aggregate_maps_path`
+
+ * Value type is <<string,string>>
+ * There is no default value for this setting.
+
+ The path to the file where aggregate maps are stored when Logstash stops
+ and are loaded from when Logstash starts.
+
+ If not defined, aggregate maps will not be stored when Logstash stops and will be lost.
+ Must be defined in only one aggregate filter (as aggregate maps are global).
+
+ Example:
+ [source,ruby]
+     filter {
+       aggregate {
+         aggregate_maps_path => "/path/to/.aggregate_maps"
+       }
+     }
+
+ [id="plugins-{type}s-{plugin}-code"]
+ ===== `code`
+
+ * This is a required setting.
+ * Value type is <<string,string>>
+ * There is no default value for this setting.
+
+ The code to execute to update the map, using the current event.
+
+ Or conversely, the code to execute to update the event, using the current map.
+
+ You will have a 'map' variable and an 'event' variable available (that is the event itself).
+
+ Example:
+ [source,ruby]
+     filter {
+       aggregate {
+         code => "map['sql_duration'] += event.get('duration')"
+       }
+     }
+
+ [id="plugins-{type}s-{plugin}-end_of_task"]
+ ===== `end_of_task`
+
+ * Value type is <<boolean,boolean>>
+ * Default value is `false`
+
+ Tell the filter that the task is ended, and therefore, to delete the aggregate map after code execution.
+
+ [id="plugins-{type}s-{plugin}-inactivity_timeout"]
+ ===== `inactivity_timeout`
+
+ * Value type is <<number,number>>
+ * There is no default value for this setting.
+
+ The amount of seconds (since the last event) after which a task is considered expired.
+
+ When a timeout occurs for a task, its aggregate map is evicted.
+
+ If 'push_map_as_event_on_timeout' or 'push_previous_map_as_event' is set to true, the task aggregation map is pushed as a new Logstash event.
+
+ `inactivity_timeout` can be defined for each "task_id" pattern.
+
+ `inactivity_timeout` must be lower than `timeout`.
+
+ [id="plugins-{type}s-{plugin}-map_action"]
436
+ ===== `map_action`
437
+
438
+ * Value type is <<string,string>>
439
+ * Default value is `"create_or_update"`
440
+
441
+ Tell the filter what to do with aggregate map.
442
+
443
+ `"create"`: create the map, and execute the code only if map wasn't created before
444
+
445
+ `"update"`: doesn't create the map, and execute the code only if map was created before
446
+
447
+ `"create_or_update"`: create the map if it wasn't created before, execute the code in all cases
448
+
449
+ [id="plugins-{type}s-{plugin}-push_map_as_event_on_timeout"]
450
+ ===== `push_map_as_event_on_timeout`
451
+
452
+ * Value type is <<boolean,boolean>>
453
+ * Default value is `false`
454
+
455
+ When this option is enabled, each time a task timeout is detected, it pushes task aggregation map as a new Logstash event.
456
+ This enables to detect and process task timeouts in Logstash, but also to manage tasks that have no explicit end event.
457
+
458
+ [id="plugins-{type}s-{plugin}-push_previous_map_as_event"]
459
+ ===== `push_previous_map_as_event`
460
+
461
+ * Value type is <<boolean,boolean>>
462
+ * Default value is `false`
463
+
464
+ When this option is enabled, each time aggregate plugin detects a new task id, it pushes previous aggregate map as a new Logstash event,
465
+ and then creates a new empty map for the next task.
466
+
467
+ WARNING: this option works fine only if tasks come one after the other. It means : all task1 events, then all task2 events, etc...
468
+
469
+ [id="plugins-{type}s-{plugin}-task_id"]
470
+ ===== `task_id`
471
+
472
+ * This is a required setting.
473
+ * Value type is <<string,string>>
474
+ * There is no default value for this setting.
475
+
476
+ The expression defining task ID to correlate logs.
477
+
478
+ This value must uniquely identify the task.
479
+
480
+ Example:
481
+ [source,ruby]
482
+ filter {
483
+ aggregate {
484
+ task_id => "%{type}%{my_task_id}"
485
+ }
486
+ }
487
+
488
+ [id="plugins-{type}s-{plugin}-timeout"]
489
+ ===== `timeout`
490
+
491
+ * Value type is <<number,number>>
492
+ * Default value is `1800`
493
+
494
+ The amount of seconds (since the first event) after which a task is considered as expired.
495
+
496
+ When timeout occurs for a task, its aggregate map is evicted.
497
+
498
+ If 'push_map_as_event_on_timeout' or 'push_previous_map_as_event' is set to true, the task aggregation map is pushed as a new Logstash event.
499
+
500
+ Timeout can be defined for each "task_id" pattern.
501
+
502
+ [id="plugins-{type}s-{plugin}-timeout_code"]
503
+ ===== `timeout_code`
504
+
505
+ * Value type is <<string,string>>
506
+ * There is no default value for this setting.
507
+
508
+ The code to execute to complete timeout generated event, when `'push_map_as_event_on_timeout'` or `'push_previous_map_as_event'` is set to true.
509
+ The code block will have access to the newly generated timeout event that is pre-populated with the aggregation map.
510
+
511
+ If `'timeout_task_id_field'` is set, the event is also populated with the task_id value
512
+
513
+ Example:
514
+ [source,ruby]
515
+ filter {
516
+ aggregate {
517
+ timeout_code => "event.set('state', 'timeout')"
518
+ }
519
+ }
520
+
521
+ [id="plugins-{type}s-{plugin}-timeout_tags"]
522
+ ===== `timeout_tags`
523
+
524
+ * Value type is <<array,array>>
525
+ * Default value is `[]`
526
+
527
+ Defines tags to add when a timeout event is generated and yield
528
+
529
+ Example:
530
+ [source,ruby]
531
+ filter {
532
+ aggregate {
533
+ timeout_tags => ["aggregate_timeout']
534
+ }
535
+ }
536
+
537
+ [id="plugins-{type}s-{plugin}-timeout_task_id_field"]
538
+ ===== `timeout_task_id_field`
539
+
540
+ * Value type is <<string,string>>
541
+ * There is no default value for this setting.
542
+
543
+ This option indicates the timeout generated event's field for the "task_id" value.
544
+ The task id will then be set into the timeout event. This can help correlate which tasks have been timed out.
545
+
546
+ For example, with option `timeout_task_id_field => "my_id"` ,when timeout task id is `"12345"`, the generated timeout event will contain `'my_id' => '12345'`.
547
+
548
+ By default, if this option is not set, task id value won't be set into timeout generated event.
549
+
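+ Following the pattern of the other option examples above, a minimal sketch (the field name `my_id` is illustrative):
+
+ [source,ruby]
+     filter {
+       aggregate {
+         timeout_task_id_field => "my_id"
+       }
+     }
+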
+
+ include::{include_path}/{type}.asciidoc[]