logstash-filter-aggregate 0.1.3 → 0.1.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 60e752551408fc57869c2ab7866e76f8612aa0c6
- data.tar.gz: 3757e961e87a984a827b07a748811638c2df4cb7
+ metadata.gz: fd8b53ee4301fd348e4c688e79b3f1aad2586c2f
+ data.tar.gz: 2fd5b3532db3244d976fc837d50f2ee8faf4631c
  SHA512:
- metadata.gz: b9cb0e54a99fd0933499f98101d72b11943a24bd039c84a76fa9e9578a4580ef4b4ff514c9141415fc6740f93e2c9b34fed4fa1df9b4d980df70eb520e399154
- data.tar.gz: caec2ae83b4d7d0985e2a96b1cf40d5be58de8915b2b8a17f3c8043085db270d895e8c6f60fe250d0333390d507458306a3b39ec6a192ae18d7a4ab0071c89cf
+ metadata.gz: ede2f98ac9891b48583578798c746173a04bbbfe5444394b93c97fe8e11ef3934db8c43076b6c3f2c197f8aa247e1716f026faf6eecef6db12b362eaa49bc5eb
+ data.tar.gz: 23ce067a14832716f854752e4ed05ccf5b4f237b47f00021a30ee6c751a2e710a607aa702906df587546bc15fe5e9b7b7464b5024d421725af82c3d6e27f5a9c
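For readers who want to re-verify these digests, a minimal plain-Ruby sketch (assuming `metadata.gz` and `data.tar.gz` have been extracted from the downloaded `.gem` archive into the current directory):

```ruby
require 'digest'

# Recompute the digests recorded in checksums.yaml for the two
# archive members and print them for comparison against the values above.
%w[metadata.gz data.tar.gz].each do |member|
  puts "#{member} SHA1:   #{Digest::SHA1.file(member).hexdigest}"
  puts "#{member} SHA512: #{Digest::SHA512.file(member).hexdigest}"
end
```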
data/BUILD.md CHANGED
@@ -11,10 +11,6 @@ Logstash provides infrastructure to automatically generate documentation for thi
  - For formatting code or config example, you can use the asciidoc `[source,ruby]` directive
  - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
 
- ## Need Help?
-
- Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
-
  ## Developing
 
  ### 1. Plugin Developement and Testing
@@ -22,7 +18,7 @@ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/log
  #### Code
  - To get started, you'll need JRuby with the Bundler gem installed.
 
- - Create a new plugin or clone and existing from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+ - Create a new plugin or clone an existing from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
 
  - Install dependencies
  ```sh
data/CHANGELOG.md ADDED
@@ -0,0 +1,12 @@
+ # v 0.1.4
+ - fix issue #5 : when code call raises an exception, the error is logged and the event is tagged '_aggregateexception'. It avoids logstash crash.
+
+ # v 0.1.3
+ - break compatibility with logstash 1.4
+ - remove "milestone" method call which is deprecated in logstash 1.5
+ - enhanced tests using 'expect' command
+ - add a second example in documentation
+
+ # v 0.1.2
+ - compatible with logstash 1.4
+ - first version available on github
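To illustrate the v 0.1.4 fix: with a config like the hypothetical one below (the `code` value is borrowed from the documentation examples), a nil or non-numeric `duration` field would previously have crashed logstash; as of this release the error is logged and the event continues through the pipeline with the `_aggregateexception` tag.

```ruby
filter {
  aggregate {
    task_id => "%{taskid}"
    # raises if event['duration'] is nil or not a number;
    # since v 0.1.4 the event is tagged '_aggregateexception'
    # instead of crashing the pipeline
    code => "map['sql_duration'] += event['duration']"
  }
}
```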
data/README.md CHANGED
@@ -1,6 +1,6 @@
  # Logstash Filter Aggregate Documentation
 
- The aim of this filter is to aggregate informations available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event.
+ The aim of this filter is to aggregate information available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event.
 
  ## Example #1
 
@@ -104,6 +104,7 @@ it allows to initialize 'sql_duration' map entry to 0 only if this map entry is
  - after the final event, the map attached to task is deleted
  - in one filter configuration, it is recommanded to define a timeout option to protect the filter against unterminated tasks. It tells the filter to delete expired maps
  - if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
+ - finally, if `code` execution raises an exception, the error is logged and event is tagged '_aggregateexception'
 
  ## Aggregate Plugin Options
  - **task_id :**
@@ -134,3 +135,13 @@ Default value: `false`
  The amount of seconds after a task "end event" can be considered lost.
  The task "map" is then evicted.
  The default value is 0, which means no timeout so no auto eviction.
+
+
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+
+ ## Want to contribute?
+
+ Read [BUILD.md](BUILD.md).
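One point the README makes (visible in the hunk context above) is the `||=` idiom used to make map initialization safe to repeat; a standalone plain-Ruby sketch, not plugin code, of its effect:

```ruby
map = {}
map['sql_duration'] ||= 0   # key absent: initialized to 0
map['sql_duration'] += 12
map['sql_duration'] ||= 0   # key present: left untouched
map['sql_duration'] += 34
map['sql_duration']         # => 46
```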
data/lib/logstash/filters/aggregate.rb CHANGED
@@ -5,13 +5,13 @@ require "logstash/namespace"
  require "thread"
 
  #
- # The aim of this filter is to aggregate informations available among several events (typically log lines) belonging to a same task,
+ # The aim of this filter is to aggregate information available among several events (typically log lines) belonging to a same task,
  # and finally push aggregated information into final task event.
  #
- # An example of use can be:
+ # ==== Example #1
  #
  # * with these given logs :
- # [source,log]
+ # [source,ruby]
  # ----------------------------------
  # INFO - 12345 - TASK_START - start
  # INFO - 12345 - SQL - sqlQuery1 - 12
@@ -19,7 +19,7 @@ require "thread"
  # INFO - 12345 - TASK_END - end
  # ----------------------------------
  #
- # * you can aggregate "dao duration" with this configuration :
+ # * you can aggregate "sql duration" for the whole task with this configuration :
  # [source,ruby]
  # ----------------------------------
  # filter {
@@ -56,7 +56,7 @@ require "thread"
  # ----------------------------------
  #
  # * the final event then looks like :
- # [source,json]
+ # [source,ruby]
  # ----------------------------------
  # {
  # "message" => "INFO - 12345 - TASK_END - end message",
@@ -66,9 +66,10 @@ require "thread"
  #
  # the field `sql_duration` is added and contains the sum of all sql queries durations.
  #
- #
- # * Another example : imagine you have the same logs than example #1, but without a start log :
- # [source,log]
+ # ==== Example #2
+ #
+ # * If you have the same logs than example #1, but without a start log :
+ # [source,ruby]
  # ----------------------------------
  # INFO - 12345 - SQL - sqlQuery1 - 12
  # INFO - 12345 - SQL - sqlQuery2 - 34
@@ -102,47 +103,57 @@ require "thread"
  # ----------------------------------
  #
  # * the final event is exactly the same than example #1
- # * the key point is the "||=" ruby operator. +
- # it allows to initialize 'sql_duration' map entry to 0 only if this map entry is not already initialized
+ # * the key point is the "||=" ruby operator. It allows to initialize 'sql_duration' map entry to 0 only if this map entry is not already initialized
  #
  #
- # How it works :
- # - the filter needs a "task_id" to correlate events (log lines) of a same task
- # - at the task beggining, filter creates a map, attached to task_id
- # - for each event, you can execute code using 'event' and 'map' (for instance, copy an event field to map)
- # - in the final event, you can execute a last code (for instance, add map data to final event)
- # - after the final event, the map attached to task is deleted
- # - in one filter configuration, it is recommanded to define a timeout option to protect the feature against unterminated tasks. It tells the filter to delete expired maps
- # - if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
+ # ==== How it works
+ # * the filter needs a "task_id" to correlate events (log lines) of a same task
+ # * at the task beggining, filter creates a map, attached to task_id
+ # * for each event, you can execute code using 'event' and 'map' (for instance, copy an event field to map)
+ # * in the final event, you can execute a last code (for instance, add map data to final event)
+ # * after the final event, the map attached to task is deleted
+ # * in one filter configuration, it is recommanded to define a timeout option to protect the feature against unterminated tasks. It tells the filter to delete expired maps
+ # * if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
+ # * finally, if `code` execution raises an exception, the error is logged and event is tagged '_aggregateexception'
  #
  #
  class LogStash::Filters::Aggregate < LogStash::Filters::Base
 
  config_name "aggregate"
 
- # The expression defining task ID to correlate logs. +
- # This value must uniquely identify the task in the system +
- # Example value : "%{application}%{my_task_id}" +
+ # The expression defining task ID to correlate logs.
+ #
+ # This value must uniquely identify the task in the system.
+ #
+ # Example value : "%{application}%{my_task_id}"
  config :task_id, :validate => :string, :required => true
 
- # The code to execute to update map, using current event. +
- # Or on the contrary, the code to execute to update event, using current map. +
- # You will have a 'map' variable and an 'event' variable available (that is the event itself). +
- # Example value : "map['sql_duration'] += event['duration']" +
+ # The code to execute to update map, using current event.
+ #
+ # Or on the contrary, the code to execute to update event, using current map.
+ #
+ # You will have a 'map' variable and an 'event' variable available (that is the event itself).
+ #
+ # Example value : "map['sql_duration'] += event['duration']"
  config :code, :validate => :string, :required => true
 
- # Tell the filter what to do with aggregate map (default : "create_or_update"). +
- # create: create the map, and execute the code only if map wasn't created before +
- # update: doesn't create the map, and execute the code only if map was created before +
- # create_or_update: create the map if it wasn't created before, execute the code in all cases +
+ # Tell the filter what to do with aggregate map.
+ #
+ # `create`: create the map, and execute the code only if map wasn't created before
+ #
+ # `update`: doesn't create the map, and execute the code only if map was created before
+ #
+ # `create_or_update`: create the map if it wasn't created before, execute the code in all cases
  config :map_action, :validate => :string, :default => "create_or_update"
 
  # Tell the filter that task is ended, and therefore, to delete map after code execution.
  config :end_of_task, :validate => :boolean, :default => false
 
- # The amount of seconds after a task "end event" can be considered lost. +
- # The task "map" is evicted. +
- # The default value is 0, which means no timeout so no auto eviction. +
+ # The amount of seconds after a task "end event" can be considered lost.
+ #
+ # The task "map" is evicted.
+ #
+ # Default value (`0`) means no timeout so no auto eviction.
  config :timeout, :validate => :number, :required => false, :default => 0
 
 
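A hypothetical three-filter configuration (the `code` values are the ones used in this gem's own documentation and specs; the `timeout` value is illustrative) showing how the `map_action` values documented above typically combine:

```ruby
filter {
  # start event: create the map and initialize it
  aggregate {
    task_id    => "%{taskid}"
    code       => "map['sql_duration'] = 0"
    map_action => "create"
  }
  # intermediate events: only update an already-created map
  aggregate {
    task_id    => "%{taskid}"
    code       => "map['sql_duration'] += event['duration']"
    map_action => "update"
  }
  # end event: push the map into the event, then evict the map
  aggregate {
    task_id     => "%{taskid}"
    code        => "event.to_hash.merge!(map)"
    map_action  => "update"
    end_of_task => true
    timeout     => 120
  }
}
```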
@@ -188,7 +199,11 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
  task_id = event.sprintf(@task_id)
  return if task_id.nil? || task_id.empty? || task_id == @task_id
 
+ noError = false
+
+ # protect aggregate_maps against concurrent access, using a mutex
  @@mutex.synchronize do
+
  # retrieve the current aggregate map
  aggregate_maps_element = @@aggregate_maps[task_id]
  if (aggregate_maps_element.nil?)
@@ -201,13 +216,20 @@ class LogStash::Filters::Aggregate < LogStash::Filters::Base
  map = aggregate_maps_element.map
 
  # execute the code to read/update map and event
- @codeblock.call(event, map)
+ begin
+ @codeblock.call(event, map)
+ noError = true
+ rescue => exception
+ @logger.error("Aggregate exception occurred. Error: #{exception} ; Code: #{@code} ; Map: #{map} ; EventData: #{event.instance_variable_get('@data')}")
+ event.tag("_aggregateexception")
+ end
 
  # delete the map if task is ended
  @@aggregate_maps.delete(task_id) if @end_of_task
  end
 
- filter_matched(event)
+ # match the filter, only if no error occurred
+ filter_matched(event) if noError
  end
 
  # Necessary to indicate logstash to periodically call 'flush' method
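Stripped of diff markers, the new control flow in `filter` reduces to this pattern (a simplified sketch taken from the hunk above; the mutex and map lookup are elided):

```ruby
# Run the user 'code' block; on failure, log the exception and tag the
# event rather than letting it propagate and crash the pipeline.
# filter_matched is only called when the block succeeded.
noError = false
begin
  @codeblock.call(event, map)
  noError = true
rescue => exception
  @logger.error("Aggregate exception occurred. Error: #{exception} ; Code: #{@code}")
  event.tag("_aggregateexception")
end
filter_matched(event) if noError
```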
data/logstash-filter-aggregate.gemspec CHANGED
@@ -1,8 +1,8 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-filter-aggregate'
- s.version = '0.1.3'
+ s.version = '0.1.4'
  s.licenses = ['Apache License (2.0)']
- s.summary = "The aim of this filter is to aggregate informations available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event."
+ s.summary = "The aim of this filter is to aggregate information available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event."
  s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program"
  s.authors = ["Elastic", "Fabien Baligand"]
  s.email = 'info@elastic.co'
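As the unchanged `s.description` notes, the gem installs on top of a logstash distribution rather than running standalone; instantiating the description's `$LS_HOME/bin/plugin install gemname` for this gem gives `bin/plugin install logstash-filter-aggregate`, run from `$LS_HOME`.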
data/spec/filters/aggregate_spec.rb CHANGED
@@ -1,167 +1,177 @@
- # encoding: utf-8
- require "logstash/devutils/rspec/spec_helper"
- require "logstash/filters/aggregate"
- require_relative "aggregate_spec_helper"
-
- describe LogStash::Filters::Aggregate do
-
- before(:each) do
- set_eviction_instance(nil)
- aggregate_maps.clear()
- @start_filter = setup_filter({ "map_action" => "create", "code" => "map['sql_duration'] = 0" })
- @update_filter = setup_filter({ "map_action" => "update", "code" => "map['sql_duration'] += event['duration']" })
- @end_filter = setup_filter({ "map_action" => "update", "code" => "event.to_hash.merge!(map)", "end_of_task" => true, "timeout" => 5 })
- end
-
- context "Start event" do
- describe "and receiving an event without task_id" do
- it "does not record it" do
- @start_filter.filter(event())
- expect(aggregate_maps).to be_empty
- end
- end
- describe "and receiving an event with task_id" do
- it "records it" do
- event = start_event("taskid" => "id123")
- @start_filter.filter(event)
-
- expect(aggregate_maps.size).to eq(1)
- expect(aggregate_maps["id123"]).not_to be_nil
- expect(aggregate_maps["id123"].creation_timestamp).to be >= event["@timestamp"]
- expect(aggregate_maps["id123"].map["sql_duration"]).to eq(0)
- end
- end
-
- describe "and receiving two 'start events' for the same task_id" do
- it "keeps the first one and does nothing with the second one" do
-
- first_start_event = start_event("taskid" => "id124")
- @start_filter.filter(first_start_event)
-
- first_update_event = update_event("taskid" => "id124", "duration" => 2)
- @update_filter.filter(first_update_event)
-
- sleep(1)
- second_start_event = start_event("taskid" => "id124")
- @start_filter.filter(second_start_event)
-
- expect(aggregate_maps.size).to eq(1)
- expect(aggregate_maps["id124"].creation_timestamp).to be < second_start_event["@timestamp"]
- expect(aggregate_maps["id124"].map["sql_duration"]).to eq(first_update_event["duration"])
- end
- end
- end
-
- context "End event" do
- describe "receiving an event without a previous 'start event'" do
- describe "but without a previous 'start event'" do
- it "does nothing with the event" do
- end_event = end_event("taskid" => "id124")
- @end_filter.filter(end_event)
-
- expect(aggregate_maps).to be_empty
- expect(end_event["sql_duration"]).to be_nil
- end
- end
- end
- end
-
- context "Start/end events interaction" do
- describe "receiving a 'start event'" do
- before(:each) do
- @task_id_value = "id_123"
- @start_event = start_event({"taskid" => @task_id_value})
- @start_filter.filter(@start_event)
- expect(aggregate_maps.size).to eq(1)
- end
-
- describe "and receiving an end event" do
- describe "and without an id" do
- it "does nothing" do
- end_event = end_event()
- @end_filter.filter(end_event)
- expect(aggregate_maps.size).to eq(1)
- expect(end_event["sql_duration"]).to be_nil
- end
- end
-
- describe "and an id different from the one of the 'start event'" do
- it "does nothing" do
- different_id_value = @task_id_value + "_different"
- @end_filter.filter(end_event("taskid" => different_id_value))
-
- expect(aggregate_maps.size).to eq(1)
- expect(aggregate_maps[@task_id_value]).not_to be_nil
- end
- end
-
- describe "and the same id of the 'start event'" do
- it "add 'sql_duration' field to the end event and deletes the recorded 'start event'" do
- expect(aggregate_maps.size).to eq(1)
-
- @update_filter.filter(update_event("taskid" => @task_id_value, "duration" => 2))
-
- end_event = end_event("taskid" => @task_id_value)
- @end_filter.filter(end_event)
-
- expect(aggregate_maps).to be_empty
- expect(end_event["sql_duration"]).to eq(2)
- end
-
- end
- end
- end
- end
-
- context "flush call" do
- before(:each) do
- @end_filter.timeout = 1
- expect(@end_filter.timeout).to eq(1)
- @task_id_value = "id_123"
- @start_event = start_event({"taskid" => @task_id_value})
- @start_filter.filter(@start_event)
- expect(aggregate_maps.size).to eq(1)
- end
-
- describe "no timeout defined in none filter" do
- it "defines a default timeout on a default filter" do
- set_eviction_instance(nil)
- expect(eviction_instance).to be_nil
- @end_filter.flush()
- expect(eviction_instance).to eq(@end_filter)
- expect(@end_filter.timeout).to eq(LogStash::Filters::Aggregate::DEFAULT_TIMEOUT)
- end
- end
-
- describe "timeout is defined on another filter" do
- it "eviction_instance is not updated" do
- expect(eviction_instance).not_to be_nil
- @start_filter.flush()
- expect(eviction_instance).not_to eq(@start_filter)
- expect(eviction_instance).to eq(@end_filter)
- end
- end
-
- describe "no timeout defined on the filter" do
- it "event is not removed" do
- sleep(2)
- @start_filter.flush()
- expect(aggregate_maps.size).to eq(1)
- end
- end
-
- describe "timeout defined on the filter" do
- it "event is not removed if not expired" do
- @end_filter.flush()
- expect(aggregate_maps.size).to eq(1)
- end
- it "event is removed if expired" do
- sleep(2)
- @end_filter.flush()
- expect(aggregate_maps).to be_empty
- end
- end
-
- end
-
- end
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/filters/aggregate"
+ require_relative "aggregate_spec_helper"
+
+ describe LogStash::Filters::Aggregate do
+
+ before(:each) do
+ set_eviction_instance(nil)
+ aggregate_maps.clear()
+ @start_filter = setup_filter({ "map_action" => "create", "code" => "map['sql_duration'] = 0" })
+ @update_filter = setup_filter({ "map_action" => "update", "code" => "map['sql_duration'] += event['duration']" })
+ @end_filter = setup_filter({ "map_action" => "update", "code" => "event.to_hash.merge!(map)", "end_of_task" => true, "timeout" => 5 })
+ end
+
+ context "Start event" do
+ describe "and receiving an event without task_id" do
+ it "does not record it" do
+ @start_filter.filter(event())
+ expect(aggregate_maps).to be_empty
+ end
+ end
+ describe "and receiving an event with task_id" do
+ it "records it" do
+ event = start_event("taskid" => "id123")
+ @start_filter.filter(event)
+
+ expect(aggregate_maps.size).to eq(1)
+ expect(aggregate_maps["id123"]).not_to be_nil
+ expect(aggregate_maps["id123"].creation_timestamp).to be >= event["@timestamp"]
+ expect(aggregate_maps["id123"].map["sql_duration"]).to eq(0)
+ end
+ end
+
+ describe "and receiving two 'start events' for the same task_id" do
+ it "keeps the first one and does nothing with the second one" do
+
+ first_start_event = start_event("taskid" => "id124")
+ @start_filter.filter(first_start_event)
+
+ first_update_event = update_event("taskid" => "id124", "duration" => 2)
+ @update_filter.filter(first_update_event)
+
+ sleep(1)
+ second_start_event = start_event("taskid" => "id124")
+ @start_filter.filter(second_start_event)
+
+ expect(aggregate_maps.size).to eq(1)
+ expect(aggregate_maps["id124"].creation_timestamp).to be < second_start_event["@timestamp"]
+ expect(aggregate_maps["id124"].map["sql_duration"]).to eq(first_update_event["duration"])
+ end
+ end
+ end
+
+ context "End event" do
+ describe "receiving an event without a previous 'start event'" do
+ describe "but without a previous 'start event'" do
+ it "does nothing with the event" do
+ end_event = end_event("taskid" => "id124")
+ @end_filter.filter(end_event)
+
+ expect(aggregate_maps).to be_empty
+ expect(end_event["sql_duration"]).to be_nil
+ end
+ end
+ end
+ end
+
+ context "Start/end events interaction" do
+ describe "receiving a 'start event'" do
+ before(:each) do
+ @task_id_value = "id_123"
+ @start_event = start_event({"taskid" => @task_id_value})
+ @start_filter.filter(@start_event)
+ expect(aggregate_maps.size).to eq(1)
+ end
+
+ describe "and receiving an end event" do
+ describe "and without an id" do
+ it "does nothing" do
+ end_event = end_event()
+ @end_filter.filter(end_event)
+ expect(aggregate_maps.size).to eq(1)
+ expect(end_event["sql_duration"]).to be_nil
+ end
+ end
+
+ describe "and an id different from the one of the 'start event'" do
+ it "does nothing" do
+ different_id_value = @task_id_value + "_different"
+ @end_filter.filter(end_event("taskid" => different_id_value))
+
+ expect(aggregate_maps.size).to eq(1)
+ expect(aggregate_maps[@task_id_value]).not_to be_nil
+ end
+ end
+
+ describe "and the same id of the 'start event'" do
+ it "add 'sql_duration' field to the end event and deletes the aggregate map associated to taskid" do
+ expect(aggregate_maps.size).to eq(1)
+
+ @update_filter.filter(update_event("taskid" => @task_id_value, "duration" => 2))
+
+ end_event = end_event("taskid" => @task_id_value)
+ @end_filter.filter(end_event)
+
+ expect(aggregate_maps).to be_empty
+ expect(end_event["sql_duration"]).to eq(2)
+ end
+
+ end
+ end
+ end
+ end
+
+ context "Event which causes an exception when code call" do
+ it "intercepts exception, logs the error and tags the event with '_aggregateexception'" do
+ @start_filter = setup_filter({ "code" => "fail 'Test'" })
+ start_event = start_event("taskid" => "id124")
+ @start_filter.filter(start_event)
+
+ expect(start_event["tags"]).to eq(["_aggregateexception"])
+ end
+ end
+
+ context "flush call" do
+ before(:each) do
+ @end_filter.timeout = 1
+ expect(@end_filter.timeout).to eq(1)
+ @task_id_value = "id_123"
+ @start_event = start_event({"taskid" => @task_id_value})
+ @start_filter.filter(@start_event)
+ expect(aggregate_maps.size).to eq(1)
+ end
+
+ describe "no timeout defined in none filter" do
+ it "defines a default timeout on a default filter" do
+ set_eviction_instance(nil)
+ expect(eviction_instance).to be_nil
+ @end_filter.flush()
+ expect(eviction_instance).to eq(@end_filter)
+ expect(@end_filter.timeout).to eq(LogStash::Filters::Aggregate::DEFAULT_TIMEOUT)
+ end
+ end
+
+ describe "timeout is defined on another filter" do
+ it "eviction_instance is not updated" do
+ expect(eviction_instance).not_to be_nil
+ @start_filter.flush()
+ expect(eviction_instance).not_to eq(@start_filter)
+ expect(eviction_instance).to eq(@end_filter)
+ end
+ end
+
+ describe "no timeout defined on the filter" do
+ it "event is not removed" do
+ sleep(2)
+ @start_filter.flush()
+ expect(aggregate_maps.size).to eq(1)
+ end
+ end
+
+ describe "timeout defined on the filter" do
+ it "event is not removed if not expired" do
+ @end_filter.flush()
+ expect(aggregate_maps.size).to eq(1)
+ end
+ it "event is removed if expired" do
+ sleep(2)
+ @end_filter.flush()
+ expect(aggregate_maps).to be_empty
+ end
+ end
+
+ end
+
+ end
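The new "Event which causes an exception when code call" context is the regression test for issue #5 fixed in this release. Assuming the usual logstash plugin workflow described in BUILD.md, the suite runs with `bundle exec rspec` after a `bundle install`.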
data/spec/filters/aggregate_spec_helper.rb CHANGED
@@ -1,49 +1,49 @@
- # encoding: utf-8
- require "logstash/filters/aggregate"
-
- def event(data = {})
- data["message"] ||= "Log message"
- data["@timestamp"] ||= Time.now
- LogStash::Event.new(data)
- end
-
- def start_event(data = {})
- data["logger"] = "TASK_START"
- event(data)
- end
-
- def update_event(data = {})
- data["logger"] = "SQL"
- event(data)
- end
-
- def end_event(data = {})
- data["logger"] = "TASK_END"
- event(data)
- end
-
- def setup_filter(config = {})
- config["task_id"] ||= "%{taskid}"
- filter = LogStash::Filters::Aggregate.new(config)
- filter.register()
- return filter
- end
-
- def filter(event)
- @start_filter.filter(event)
- @update_filter.filter(event)
- @end_filter.filter(event)
- end
-
- def aggregate_maps()
- LogStash::Filters::Aggregate.class_variable_get(:@@aggregate_maps)
- end
-
- def eviction_instance()
- LogStash::Filters::Aggregate.class_variable_get(:@@eviction_instance)
- end
-
- def set_eviction_instance(new_value)
- LogStash::Filters::Aggregate.class_variable_set(:@@eviction_instance, new_value)
- end
-
+ # encoding: utf-8
+ require "logstash/filters/aggregate"
+
+ def event(data = {})
+ data["message"] ||= "Log message"
+ data["@timestamp"] ||= Time.now
+ LogStash::Event.new(data)
+ end
+
+ def start_event(data = {})
+ data["logger"] = "TASK_START"
+ event(data)
+ end
+
+ def update_event(data = {})
+ data["logger"] = "SQL"
+ event(data)
+ end
+
+ def end_event(data = {})
+ data["logger"] = "TASK_END"
+ event(data)
+ end
+
+ def setup_filter(config = {})
+ config["task_id"] ||= "%{taskid}"
+ filter = LogStash::Filters::Aggregate.new(config)
+ filter.register()
+ return filter
+ end
+
+ def filter(event)
+ @start_filter.filter(event)
+ @update_filter.filter(event)
+ @end_filter.filter(event)
+ end
+
+ def aggregate_maps()
+ LogStash::Filters::Aggregate.class_variable_get(:@@aggregate_maps)
+ end
+
+ def eviction_instance()
+ LogStash::Filters::Aggregate.class_variable_get(:@@eviction_instance)
+ end
+
+ def set_eviction_instance(new_value)
+ LogStash::Filters::Aggregate.class_variable_set(:@@eviction_instance, new_value)
+ end
+
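These helpers are what keep the specs above compact; a hypothetical fragment (not part of the diff) showing how they compose:

```ruby
# Build a 'create' filter, feed it a start event for task 'id42',
# then read the class-level map registry through the helper.
filter = setup_filter({ "map_action" => "create", "code" => "map['sql_duration'] = 0" })
filter.filter(start_event("taskid" => "id42"))
aggregate_maps["id42"].map["sql_duration"]  # => 0
```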
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: logstash-filter-aggregate
  version: !ruby/object:Gem::Version
- version: 0.1.3
+ version: 0.1.4
  platform: ruby
  authors:
  - Elastic
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-07-04 00:00:00.000000000 Z
+ date: 2015-10-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: logstash-core
@@ -86,7 +86,7 @@ rubyforge_project:
  rubygems_version: 2.4.5
  signing_key:
  specification_version: 4
- summary: The aim of this filter is to aggregate informations available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event.
+ summary: The aim of this filter is to aggregate information available among several events (typically log lines) belonging to a same task, and finally push aggregated information into final task event.
  test_files:
  - spec/filters/aggregate_spec.rb
  - spec/filters/aggregate_spec_helper.rb