fluent-plugin-netflow 0.2.0 → 0.2.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: d5a011a605cc39a2aff47556d09b98973bd84eb9
- data.tar.gz: 779a28e68e6cd2bcd480d74ddcaa69605111839b
+ metadata.gz: 3cd8781b6ecc14e1dd982e4046b795a34412e768
+ data.tar.gz: be9f57fbb0fd1323fba31f718146632e6d2d5fdc
  SHA512:
- metadata.gz: 0c9396c4d0b6f9b8f1d6fda6ca0a53bdb4c0772fb12780e5a52930e840c886f73b4b6221cf6225773053038cf532319aab0c3439797b244239a20c3816462ae1
- data.tar.gz: 6e6e44c7685bff5996bd068b068308934245f32928ab0e334623dfadd0359e7dc14658d96a7c85dded55dd58a7c9c155f0f3e033626cf3923295870f4dcc3977
+ metadata.gz: aca8f4e3c4f146ad05e0b0f166e1a7bd65e77352e4ce0927101e5d2d6b3f0d6ad443a39ba976c8c47b917366bf363ff2d1a17ed0bc0b068e4be81879b9d9f27e
+ data.tar.gz: 14d3a412b02acccd57ef7723ee1b2b32e87f3b3c34d5f4e14f6b85bb5fc6f23f78d575c809a24a56e410879af6b6a22f139a62279a400340f3a827ebcc257f2b
data/README.md CHANGED
@@ -1,8 +1,12 @@
  # Netflow plugin for Fluentd

- Accept Netflow logs.
+ [![Build Status](https://travis-ci.org/repeatedly/fluent-plugin-netflow.svg)](https://travis-ci.org/repeatedly/fluent-plugin-netflow)
+
+
+ ## Overview
+
+ [Fluentd](http://fluentd.org/) input plugin that acts as a Netflow v5/v9 collector.

- Netflow parser is based on [Logstash's netflow codes](https://github.com/elasticsearch/logstash/blob/master/lib/logstash/codecs/netflow.rb).

  ## Installation

@@ -10,6 +14,7 @@ Use RubyGems:

  fluent-gem install fluent-plugin-netflow

+
  ## Configuration

  <source>
@@ -17,16 +22,99 @@ Use RubyGems:
  tag netflow.event

  # optional parameters
- bind 127.0.0.1
- port 5140
-
- # optional parser parameters
+ bind 192.168.0.1
+ port 2055
  cache_ttl 6000
  versions [5, 9]
  </source>

+ **bind**
+
+ IP address on which the plugin accepts Netflow packets.
+ (Default: '0.0.0.0')
+
+ **port**
+
+ UDP port number on which the plugin accepts Netflow packets.
+ (Default: 5140)
+
+ **cache_ttl**
+
+ Template cache TTL for Netflow v9, in seconds. Templates not refreshed by the Netflow v9 exporter within the TTL are expired by the plugin.
+ (Default: 4000)
+
+ **versions**
+
+ Netflow versions to accept.
+ (Default: [5, 9])
+
+ **switched_times_from_uptime**
+
+ When set to true, the plugin stores the raw system-uptime values for ```first_switched``` and ```last_switched``` instead of ISO8601-formatted absolute times.
+ (Default: false)
+
+
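To illustrate the difference (the numeric value below is made up for illustration, the timestamp format matches the plugin's tests): with the default setting the two fields are converted to absolute ISO8601 timestamps, while with ```switched_times_from_uptime true``` they keep the millisecond SysUptime values reported by the exporter.

    # default
    "first_switched": "2016-02-12T04:02:09.053Z"

    # switched_times_from_uptime true
    "first_switched": 150212053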
+ ## Performance Evaluation
+
+ Benchmark for the v5 protocol on a MacBook Air (Early 2014, 1.7 GHz Intel Core i7):
+ * 0 packets dropped at 32,000 records/second (for 3,000,000 packets)
+ * 45,000 records/second at maximum (for flooding netflow packets)
+
+ Tested with the packet generator below:
+
+ * https://github.com/mshindo/NetFlow-Generator
+ * `./flowgen -n3000000 -i50 -w1 -p5140 localhost`
+
+ And this configuration:
+
+ <source>
+ @type netflow
+ tag netflow.event
+ bind 0.0.0.0
+ port 5140
+ switched_times_from_uptime yes
+ </source>
+ <match netflow.event>
+ @type flowcounter
+ unit minute
+ count_keys count # missing column for counting events only
+ tag flowcount
+ </match>
+ <match flowcount>
+ @type stdout
+ </match>
+
+
+ ## Tips
+
+ ### Use the Netflow parser in other plugins
+
+ ```ruby
+ require 'fluent/plugin/parser_netflow'
+
+ parser = TextParser::NetflowParser.new
+ parser.configure(conf)
+
+ # Netflow v5
+ parser.call(payload) do |time, record|
+   # do something
+ end
+
+ # Netflow v9
+ parser.call(payload, source_ip_address) do |time, record|
+   # do something
+ end
+ ```
+
+ **NOTE:**
+ If the plugin receives Netflow v9 from multiple sources, provide the ```source_ip_address``` argument so that templates are matched per exporter and records are parsed correctly.
+
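For reference, here is a minimal, hypothetical sketch (not part of the plugin) of feeding datagrams from a plain Ruby `UDPSocket` into the parser; the sender address returned by `recvfrom` is passed as the ```source_ip_address``` so that v9 templates are cached per exporter. The fully qualified class name and `configure` call follow the plugin's own tests; `conf` is assumed to be a valid parser configuration element.

```ruby
require 'socket'
require 'fluent/plugin/parser_netflow'

parser = Fluent::TextParser::NetflowParser.new
parser.configure(conf)  # assumption: `conf` is a Fluent::Config::Element

socket = UDPSocket.new
socket.bind('0.0.0.0', 5140)

loop do
  # recvfrom returns the payload and the sender address info;
  # sender[3] is the exporter's IP address
  payload, sender = socket.recvfrom(4096)
  parser.call(payload, sender[3]) do |time, record|
    # do something with each flow record
  end
end
```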
+ ### More speed?
+
+ :bullettrain_side: Try the ```switched_times_from_uptime true``` option!
+
+
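Assuming the standalone configuration shown above, that means adding one line (sketch):

    <source>
      @type netflow
      tag netflow.event
      bind 0.0.0.0
      port 5140
      switched_times_from_uptime true
    </source>

This skips the per-record conversion of ```first_switched```/```last_switched``` into ISO8601 strings (see the parser changes below), which is where the extra time goes.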
  ## TODO

- - Support TCP protocol? TCP is needed?
- - Use Fluentd feature instead of own handlers
- - Need another maintainer who uses Netflow in production!
+ * Netflow v9 protocol parser optimization
+ * Use Fluentd feature instead of own handlers
data/VERSION CHANGED
@@ -1 +1 @@
- 0.2.0
+ 0.2.1
@@ -69,7 +69,7 @@ module Fluent
  def receive_data(host, data)
  log.on_debug { log.debug "received logs", :host => host, :data => data }

- @parser.call(data) { |time, record|
+ @parser.call(data, host) { |time, record|
  unless time && record
  log.warn "pattern not match: #{data.inspect}"
  return
@@ -0,0 +1,258 @@
1
+ ---
2
+ option:
3
+ 1:
4
+ - 4
5
+ - :in_bytes
6
+ 2:
7
+ - 4
8
+ - :in_pkts
9
+ 3:
10
+ - 4
11
+ - :flows
12
+ 4:
13
+ - :uint8
14
+ - :protocol
15
+ 5:
16
+ - :uint8
17
+ - :src_tos
18
+ 6:
19
+ - :uint8
20
+ - :tcp_flags
21
+ 7:
22
+ - :uint16
23
+ - :l4_src_port
24
+ 8:
25
+ - :ip4_addr
26
+ - :ipv4_src_addr
27
+ 9:
28
+ - :uint8
29
+ - :src_mask
30
+ 10:
31
+ - 2
32
+ - :input_snmp
33
+ 11:
34
+ - :uint16
35
+ - :l4_dst_port
36
+ 12:
37
+ - :ip4_addr
38
+ - :ipv4_dst_addr
39
+ 13:
40
+ - :uint8
41
+ - :dst_mask
42
+ 14:
43
+ - 2
44
+ - :output_snmp
45
+ 15:
46
+ - :ip4_addr
47
+ - :ipv4_next_hop
48
+ 16:
49
+ - 2
50
+ - :src_as
51
+ 17:
52
+ - 2
53
+ - :dst_as
54
+ 18:
55
+ - :ip4_addr
56
+ - :bgp_ipv4_next_hop
57
+ 19:
58
+ - 4
59
+ - :mul_dst_pkts
60
+ 20:
61
+ - 4
62
+ - :mul_dst_bytes
63
+ 21:
64
+ - :uint32
65
+ - :last_switched
66
+ 22:
67
+ - :uint32
68
+ - :first_switched
69
+ 23:
70
+ - 4
71
+ - :out_bytes
72
+ 24:
73
+ - 4
74
+ - :out_pkts
75
+ 25:
76
+ - :uint16
77
+ - :min_pkt_length
78
+ 26:
79
+ - :uint16
80
+ - :max_pkt_length
81
+ 27:
82
+ - :ip6_addr
83
+ - :ipv6_src_addr
84
+ 28:
85
+ - :ip6_addr
86
+ - :ipv6_dst_addr
87
+ 29:
88
+ - :uint8
89
+ - :ipv6_src_mask
90
+ 30:
91
+ - :uint8
92
+ - :ipv6_dst_mask
93
+ 31:
94
+ - 3
95
+ - :ipv6_flow_label
96
+ 32:
97
+ - :uint16
98
+ - :icmp_type
99
+ 33:
100
+ - :uint8
101
+ - :mul_igmp_type
102
+ 34:
103
+ - :uint32
104
+ - :sampling_interval
105
+ 35:
106
+ - :uint8
107
+ - :sampling_algorithm
108
+ 36:
109
+ - :uint16
110
+ - :flow_active_timeout
111
+ 37:
112
+ - :uint16
113
+ - :flow_inactive_timeout
114
+ 38:
115
+ - :uint8
116
+ - :engine_type
117
+ 39:
118
+ - :uint8
119
+ - :engine_id
120
+ 40:
121
+ - 4
122
+ - :total_bytes_exp
123
+ 41:
124
+ - 4
125
+ - :total_pkts_exp
126
+ 42:
127
+ - 4
128
+ - :total_flows_exp
129
+ 43:
130
+ - :skip
131
+ 44:
132
+ - :ip4_addr
133
+ - :ipv4_src_prefix
134
+ 45:
135
+ - :ip4_addr
136
+ - :ipv4_dst_prefix
137
+ 46:
138
+ - :uint8
139
+ - :mpls_top_label_type
140
+ 47:
141
+ - :uint32
142
+ - :mpls_top_label_ip_addr
143
+ 48:
144
+ - 1
145
+ - :flow_sampler_id
146
+ 49:
147
+ - :uint8
148
+ - :flow_sampler_mode
149
+ 50:
150
+ - :uint32
151
+ - :flow_sampler_random_interval
152
+ 51:
153
+ - :skip
154
+ 52:
155
+ - :uint8
156
+ - :min_ttl
157
+ 53:
158
+ - :uint8
159
+ - :max_ttl
160
+ 54:
161
+ - :uint16
162
+ - :ipv4_ident
163
+ 55:
164
+ - :uint8
165
+ - :dst_tos
166
+ 56:
167
+ - :mac_addr
168
+ - :in_src_mac
169
+ 57:
170
+ - :mac_addr
171
+ - :out_dst_mac
172
+ 58:
173
+ - :uint16
174
+ - :src_vlan
175
+ 59:
176
+ - :uint16
177
+ - :dst_vlan
178
+ 60:
179
+ - :uint8
180
+ - :ip_protocol_version
181
+ 61:
182
+ - :uint8
183
+ - :direction
184
+ 62:
185
+ - :ip6_addr
186
+ - :ipv6_next_hop
187
+ 63:
188
+ - :ip6_addr
189
+ - :bgp_ipv6_next_hop
190
+ 64:
191
+ - :uint32
192
+ - :ipv6_option_headers
193
+ 65:
194
+ - :skip
195
+ 66:
196
+ - :skip
197
+ 67:
198
+ - :skip
199
+ 68:
200
+ - :skip
201
+ 69:
202
+ - :skip
203
+ 70:
204
+ - :mpls_label
205
+ - :mpls_label_1
206
+ 71:
207
+ - :mpls_label
208
+ - :mpls_label_2
209
+ 72:
210
+ - :mpls_label
211
+ - :mpls_label_3
212
+ 80:
213
+ - :mac_addr
214
+ - :in_dst_mac
215
+ 81:
216
+ - :mac_addr
217
+ - :out_src_mac
218
+ 82:
219
+ - :string
220
+ - :if_name
221
+ 83:
222
+ - :string
223
+ - :if_desc
224
+ 84:
225
+ - :string
226
+ - :sampler_name
227
+ 89:
228
+ - :uint8
229
+ - :forwarding_status
230
+ 91:
231
+ - :uint8
232
+ - :mpls_prefix_len
233
+ 234:
234
+ - :uint32
235
+ - :ingress_vrf_id
236
+ 235:
237
+ - :uint32
238
+ - :egress_vrf_id
239
+ 236:
240
+ - :string
241
+ - :vrf_name
242
+
243
+ scope:
244
+ 1:
245
+ - :ip4_addr
246
+ - :system
247
+ 2:
248
+ - :skip
249
+ - :interface
250
+ 3:
251
+ - :skip
252
+ - :line_card
253
+ 4:
254
+ - :skip
255
+ - :netflow_cache
256
+ 5:
257
+ - :skip
258
+ - :template
@@ -26,43 +26,36 @@ module Fluent
  super

  @templates = Vash.new()
+ @samplers_v9 = Vash.new()
  # Path to default Netflow v9 field definitions
- filename = File.expand_path('../netflow_option_fields.yaml', __FILE__)
+ filename = File.expand_path('../netflow_fields.yaml', __FILE__)

  begin
  @fields = YAML.load_file(filename)
  rescue => e
- raise "Bad syntax in definitions file #{filename}", error_class: e.class, error: e.message
+ raise ConfigError, "Bad syntax in definitions file #{filename}, error_class = #{e.class.name}, error = #{e.message}"
  end

  # Allow the user to augment/override/rename the supported Netflow fields
  if @definitions
- raise "definitions file #{@definitions} does not exists" unless File.exist?(@definitions)
+ raise ConfigError, "definitions file #{@definitions} does not exists" unless File.exist?(@definitions)
  begin
- @fields.merge!(YAML.load_file(@definitions))
+ @fields['option'].merge!(YAML.load_file(@definitions))
  rescue => e
- raise "Bad syntax in definitions file #{@definitions}", error_class: e.class, error: e.message
+ raise ConfigError, "Bad syntax in definitions file #{@definitions}, error_class = #{e.class.name}, error = #{e.message}"
  end
  end
- # Path to default Netflow v9 scope field definitions
- filename = File.expand_path('../netflow_scope_fields.yaml', __FILE__)
-
- begin
- @scope_fields = YAML.load_file(filename)
- rescue => e
- raise "Bad syntax in scope definitions file #{filename}", error_class: e.class, error: e.message
- end
  end

- def call(payload, &block)
+ def call(payload, host=nil, &block)
  version,_ = payload[0,2].unpack('n')
  case version
  when 5
  forV5(payload, block)
  when 9
  # TODO: implement forV9
- flowset = Netflow9PDU.read(payload)
- handle_v9(flowset, block)
+ pdu = Netflow9PDU.read(payload)
+ handle_v9(host, pdu, block)
  else
  $log.warn "Unsupported Netflow version v#{version}: #{version.class}"
  end
@@ -190,34 +183,33 @@ module Fluent
  end
  end

- def handle_v9(flowset, block)
- flowset.records.each do |record|
- case record.flowset_id
+ def handle_v9(host, pdu, block)
+ pdu.records.each do |flowset|
+ case flowset.flowset_id
  when 0
- handle_v9_flowset_template(flowset, record)
+ handle_v9_flowset_template(host, pdu, flowset)
  when 1
- handle_v9_flowset_options_template(flowset, record)
+ handle_v9_flowset_options_template(host, pdu, flowset)
  when 256..65535
- handle_v9_flowset_data(flowset, record, block)
+ handle_v9_flowset_data(host, pdu, flowset, block)
  else
- $log.warn "Unsupported flowset id #{record.flowset_id}"
+ $log.warn "Unsupported flowset id #{flowset.flowset_id}"
  end
  end
  end

- def handle_v9_flowset_template(flowset, record)
- record.flowset_data.templates.each do |template|
+ def handle_v9_flowset_template(host, pdu, flowset)
+ flowset.flowset_data.templates.each do |template|
  catch (:field) do
  fields = []
  template.fields.each do |field|
- entry = netflow_field_for(field.field_type, field.field_length, @fields)
- if !entry
- throw :field
- end
+ entry = netflow_field_for(field.field_type, field.field_length)
+ throw :field unless entry
+
  fields += entry
  end
  # We get this far, we have a list of fields
- key = "#{flowset.source_id}|#{template.template_id}"
+ key = "#{host}|#{pdu.source_id}|#{template.template_id}"
  @templates[key, @cache_ttl] = BinData::Struct.new(endian: :big, fields: fields)
  # Purge any expired templates
  @templates.cleanup!
@@ -225,26 +217,24 @@ module Fluent
  end
  end

- def handle_v9_flowset_options_template(flowset, record)
- record.flowset_data.templates.each do |template|
+ NETFLOW_V9_FIELD_CATEGORIES = ['scope', 'option']
+
+ def handle_v9_flowset_options_template(host, pdu, flowset)
+ flowset.flowset_data.templates.each do |template|
  catch (:field) do
  fields = []
- template.scope_fields.each do |field|
- entry = netflow_field_for(field.field_type, field.field_length, @scope_fields)
- if ! entry
- throw :field
- end
- fields += entry
- end
- template.option_fields.each do |field|
- entry = netflow_field_for(field.field_type, field.field_length, @fields)
- if ! entry
- throw :field
+
+ NETFLOW_V9_FIELD_CATEGORIES.each do |category|
+ template["#{category}_fields"].each do |field|
+ entry = netflow_field_for(field.field_type, field.field_length, category)
+ throw :field unless entry
+
+ fields += entry
  end
- fields += entry
  end
+
  # We get this far, we have a list of fields
- key = "#{flowset.source_id}|#{template.template_id}"
+ key = "#{host}|#{pdu.source_id}|#{template.template_id}"
  @templates[key, @cache_ttl] = BinData::Struct.new(endian: :big, fields: fields)
  # Purge any expired templates
  @templates.cleanup!
@@ -254,48 +244,55 @@ module Fluent

  FIELDS_FOR_COPY_V9 = ['version', 'flow_seq_num']

- def handle_v9_flowset_data(flowset, record, block)
- key = "#{flowset.source_id}|#{record.flowset_id}"
- template = @templates[key]
+ def handle_v9_flowset_data(host, pdu, flowset, block)
+ template_key = "#{host}|#{pdu.source_id}|#{flowset.flowset_id}"
+ template = @templates[template_key]
  if ! template
- $log.warn("No matching template for flow id #{record.flowset_id}")
+ $log.warn("No matching template for flow id #{flowset.flowset_id}")
  return
  end

- length = record.flowset_length - 4
+ length = flowset.flowset_length - 4

- # Template shouldn't be longer than the record and there should
+ # Template shouldn't be longer than the flowset and there should
  # be at most 3 padding bytes
  if template.num_bytes > length or ! (length % template.num_bytes).between?(0, 3)
  $log.warn "Template length doesn't fit cleanly into flowset",
- template_id: record.flowset_id, template_length: template.num_bytes, record_length: length
+ template_id: flowset.flowset_id, template_length: template.num_bytes, flowset_length: length
  return
  end

  array = BinData::Array.new(type: template, initial_length: length / template.num_bytes)

- records = array.read(record.flowset_data)
- records.each do |r|
- time = flowset.unix_sec
+ fields = array.read(flowset.flowset_data)
+ fields.each do |r|
+ if is_sampler?(r)
+ sampler_key = "#{host}|#{pdu.source_id}|#{r.flow_sampler_id}"
+ register_sampler_v9 sampler_key, r
+ next
+ end
+
+ time = pdu.unix_sec # TODO: Fluent::EventTime (see: forV5)
  event = {}

  # Fewer fields in the v9 header
  FIELDS_FOR_COPY_V9.each do |f|
- event[f] = flowset[f]
+ event[f] = pdu[f]
+ end
+
+ event['flowset_id'] = flowset.flowset_id
+
+ r.each_pair {|k,v| event[k.to_s] = v }
+ unless @switched_times_from_uptime
+ event['first_switched'] = format_for_switched(msec_from_boot_to_time(event['first_switched'], pdu.uptime, time, 0))
+ event['last_switched'] = format_for_switched(msec_from_boot_to_time(event['last_switched'] , pdu.uptime, time, 0))
  end

- event['flowset_id'] = record.flowset_id
-
- r.each_pair do |k,v|
- case k.to_s
- when /_switched$/
- millis = flowset.uptime - v
- seconds = flowset.unix_sec - (millis / 1000)
- # v9 did away with the nanosecs field
- micros = 1000000 - (millis % 1000)
- event[k.to_s] = Time.at(seconds, micros).utc.strftime("%Y-%m-%dT%H:%M:%S.%3NZ")
- else
- event[k.to_s] = v
+ if sampler_id = r['flow_sampler_id']
+ sampler_key = "#{host}|#{pdu.source_id}|#{sampler_id}"
+ if sampler = @samplers_v9[sampler_key]
+ event['sampling_algorithm'] ||= sampler['flow_sampler_mode']
+ event['sampling_interval'] ||= sampler['flow_sampler_random_interval']
  end
  end

@@ -308,9 +305,9 @@ module Fluent
  ("uint" + (((length > 0) ? length : default) * 8).to_s).to_sym
  end

- def netflow_field_for(type, length, field_definitions)
- if field_definitions.include?(type)
- field = field_definitions[type]
+ def netflow_field_for(type, length, category='option')
+ if @fields[category].include?(type)
+ field = @fields[category][type]
  if field.is_a?(Array)

  if field[0].is_a?(Integer)
@@ -336,6 +333,16 @@ module Fluent
  nil
  end
  end
+
+ # covers Netflow v9 and v10 (a.k.a IPFIX)
+ def is_sampler?(record)
+ record['flow_sampler_id'] && record['flow_sampler_mode'] && record['flow_sampler_random_interval']
+ end
+
+ def register_sampler_v9(key, sampler)
+ @samplers_v9[key, @cache_ttl] = sampler
+ @samplers_v9.cleanup!
+ end
  end
  end
  end
File without changes
Binary file
Binary file
Binary file
@@ -20,7 +20,7 @@ class NetflowParserTest < Test::Unit::TestCase
  test 'parse v5 binary data, dumped by netflow-generator' do
  # generated by https://github.com/mshindo/NetFlow-Generator
  parser = create_parser
- raw_data = File.open(File.expand_path('../netflow.v5.dump', __FILE__)){|f| f.read }
+ raw_data = File.open(File.expand_path('../dump/netflow.v5.dump', __FILE__)){|f| f.read }
  bytes_for_1record = 72
  assert_equal bytes_for_1record, raw_data.size
  parsed = []
@@ -0,0 +1,130 @@
+ require 'helper'
+
+ class Netflow9ParserTest < Test::Unit::TestCase
+ def setup
+ Fluent::Test.setup
+ end
+
+ def create_parser(conf={})
+ parser = Fluent::TextParser::NetflowParser.new
+ parser.configure(Fluent::Config::Element.new('ROOT', '', conf, []))
+ parser
+ end
+
+ def raw_template
+ @raw_template ||= File.read(File.expand_path('../dump/netflow.v9.template.dump', __FILE__))
+ end
+
+ def raw_data
+ @raw_data ||= File.read(File.expand_path('../dump/netflow.v9.dump', __FILE__))
+ end
+
+ def raw_sampler_template
+ @raw_sampler_template ||= File.read(File.expand_path('../dump/netflow.v9.sampler_template.dump', __FILE__))
+ end
+
+ def raw_sampler_data
+ @raw_sampler_data ||= File.read(File.expand_path('../dump/netflow.v9.sampler.dump', __FILE__))
+ end
+
+ DEFAULT_HOST = '127.0.0.1'
+
+ test 'parse netflow v9 binary data before loading corresponding template' do
+ parser = create_parser
+
+ assert_equal 92, raw_data.size
+ parser.call(raw_data, DEFAULT_HOST) do |time, record|
+ assert false, 'nothing emitted'
+ end
+ end
+
+ test 'parse netflow v9 binary data' do
+ parser = create_parser
+
+ parsed = []
+ parser.call raw_template, DEFAULT_HOST
+ parser.call(raw_data, DEFAULT_HOST) do |time, record|
+ parsed << [time, record]
+ end
+
+ assert_equal 1, parsed.size
+ assert_equal Time.parse('2016-02-12T04:02:25Z').to_i, parsed.first[0]
+ expected_record = {
+ # header
+ 'version' => 9,
+ 'flow_seq_num' => 4645895,
+ 'flowset_id' => 260,
+
+ # flowset
+ 'in_pkts' => 1,
+ 'in_bytes' => 60,
+ 'ipv4_src_addr' => '192.168.0.1',
+ 'ipv4_dst_addr' => '192.168.0.2',
+ 'input_snmp' => 54,
+ 'output_snmp' => 29,
+ 'last_switched' => '2016-02-12T04:02:09.053Z',
+ 'first_switched' => '2016-02-12T04:02:09.053Z',
+ 'l4_src_port' => 80,
+ 'l4_dst_port' => 32822,
+ 'src_as' => 0,
+ 'dst_as' => 65000,
+ 'bgp_ipv4_next_hop' => '192.168.0.3',
+ 'src_mask' => 24,
+ 'dst_mask' => 24,
+ 'protocol' => 6,
+ 'tcp_flags' => 0x12,
+ 'src_tos' => 0x0,
+ 'direction' => 0,
+ 'forwarding_status' => 0b01000000,
+ 'flow_sampler_id' => 1,
+ 'ingress_vrf_id' => 1610612736,
+ 'egress_vrf_id' => 1610612736
+ }
+ assert_equal expected_record, parsed.first[1]
+ end
+
+ test 'parse netflow v9 binary data after sampler data is cached' do
+ parser = create_parser
+
+ parsed = []
+ [raw_sampler_template, raw_sampler_data, raw_template].each {|raw| parser.call(raw, DEFAULT_HOST){} }
+ parser.call(raw_data, DEFAULT_HOST) do |time, record|
+ parsed << [time, record]
+ end
+
+ assert_equal 2, parsed.first[1]['sampling_algorithm']
+ assert_equal 5000, parsed.first[1]['sampling_interval']
+ end
+
+ test 'parse netflow v9 binary data with host-based template cache' do
+ parser = create_parser
+ another_host = DEFAULT_HOST.next
+
+ parsed = []
+ parser.call raw_template, DEFAULT_HOST
+ parser.call(raw_data, another_host) do |time, record|
+ assert false, 'nothing emitted'
+ end
+ parser.call raw_template, another_host
+ parser.call(raw_data, another_host) do |time, record|
+ parsed << [time, record]
+ end
+
+ assert_equal 1, parsed.size
+ end
+
+ test 'parse netflow v9 binary data with host-based sampler cache' do
+ parser = create_parser
+ another_host = DEFAULT_HOST.next
+
+ parsed = []
+ [raw_sampler_template, raw_sampler_data, raw_template].each {|raw| parser.call(raw, DEFAULT_HOST){} }
+ parser.call(raw_template, another_host){}
+ parser.call(raw_data, another_host) do |time, record|
+ parsed << [time, record]
+ end
+
+ assert_equal nil, parsed.first[1]['sampling_algorithm']
+ assert_equal nil, parsed.first[1]['sampling_interval']
+ end
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-netflow
  version: !ruby/object:Gem::Version
- version: 0.2.0
+ version: 0.2.1
  platform: ruby
  authors:
  - Masahiro Nakagawa
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-03-01 00:00:00.000000000 Z
+ date: 2016-03-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: fluentd
@@ -87,15 +87,19 @@ files:
  - example/fluentd.conf
  - fluent-plugin-netflow.gemspec
  - lib/fluent/plugin/in_netflow.rb
- - lib/fluent/plugin/netflow_option_fields.yaml
+ - lib/fluent/plugin/netflow_fields.yaml
  - lib/fluent/plugin/netflow_records.rb
- - lib/fluent/plugin/netflow_scope_fields.yaml
  - lib/fluent/plugin/parser_netflow.rb
  - lib/fluent/plugin/vash.rb
+ - test/dump/netflow.v5.dump
+ - test/dump/netflow.v9.dump
+ - test/dump/netflow.v9.sampler.dump
+ - test/dump/netflow.v9.sampler_template.dump
+ - test/dump/netflow.v9.template.dump
  - test/helper.rb
- - test/netflow.v5.dump
  - test/test_in_netflow.rb
  - test/test_parser_netflow.rb
+ - test/test_parser_netflow9.rb
  homepage: https://github.com/repeatedly/fluent-plugin-netflow
  licenses:
  - Apache License (2.0)
@@ -121,7 +125,12 @@ signing_key:
  specification_version: 4
  summary: Netflow plugin for Fluentd
  test_files:
+ - test/dump/netflow.v5.dump
+ - test/dump/netflow.v9.dump
+ - test/dump/netflow.v9.sampler.dump
+ - test/dump/netflow.v9.sampler_template.dump
+ - test/dump/netflow.v9.template.dump
  - test/helper.rb
- - test/netflow.v5.dump
  - test/test_in_netflow.rb
  - test/test_parser_netflow.rb
+ - test/test_parser_netflow9.rb
@@ -1,237 +0,0 @@
1
- ---
2
- 1:
3
- - 4
4
- - :in_bytes
5
- 2:
6
- - 4
7
- - :in_pkts
8
- 3:
9
- - 4
10
- - :flows
11
- 4:
12
- - :uint8
13
- - :protocol
14
- 5:
15
- - :uint8
16
- - :src_tos
17
- 6:
18
- - :uint8
19
- - :tcp_flags
20
- 7:
21
- - :uint16
22
- - :l4_src_port
23
- 8:
24
- - :ip4_addr
25
- - :ipv4_src_addr
26
- 9:
27
- - :uint8
28
- - :src_mask
29
- 10:
30
- - 2
31
- - :input_snmp
32
- 11:
33
- - :uint16
34
- - :l4_dst_port
35
- 12:
36
- - :ip4_addr
37
- - :ipv4_dst_addr
38
- 13:
39
- - :uint8
40
- - :dst_mask
41
- 14:
42
- - 2
43
- - :output_snmp
44
- 15:
45
- - :ip4_addr
46
- - :ipv4_next_hop
47
- 16:
48
- - 2
49
- - :src_as
50
- 17:
51
- - 2
52
- - :dst_as
53
- 18:
54
- - :ip4_addr
55
- - :bgp_ipv4_next_hop
56
- 19:
57
- - 4
58
- - :mul_dst_pkts
59
- 20:
60
- - 4
61
- - :mul_dst_bytes
62
- 21:
63
- - :uint32
64
- - :last_switched
65
- 22:
66
- - :uint32
67
- - :first_switched
68
- 23:
69
- - 4
70
- - :out_bytes
71
- 24:
72
- - 4
73
- - :out_pkts
74
- 25:
75
- - :uint16
76
- - :min_pkt_length
77
- 26:
78
- - :uint16
79
- - :max_pkt_length
80
- 27:
81
- - :ip6_addr
82
- - :ipv6_src_addr
83
- 28:
84
- - :ip6_addr
85
- - :ipv6_dst_addr
86
- 29:
87
- - :uint8
88
- - :ipv6_src_mask
89
- 30:
90
- - :uint8
91
- - :ipv6_dst_mask
92
- 31:
93
- - 3
94
- - :ipv6_flow_label
95
- 32:
96
- - :uint16
97
- - :icmp_type
98
- 33:
99
- - :uint8
100
- - :mul_igmp_type
101
- 34:
102
- - :uint32
103
- - :sampling_interval
104
- 35:
105
- - :uint8
106
- - :sampling_algorithm
107
- 36:
108
- - :uint16
109
- - :flow_active_timeout
110
- 37:
111
- - :uint16
112
- - :flow_inactive_timeout
113
- 38:
114
- - :uint8
115
- - :engine_type
116
- 39:
117
- - :uint8
118
- - :engine_id
119
- 40:
120
- - 4
121
- - :total_bytes_exp
122
- 41:
123
- - 4
124
- - :total_pkts_exp
125
- 42:
126
- - 4
127
- - :total_flows_exp
128
- 43:
129
- - :skip
130
- 44:
131
- - :ip4_addr
132
- - :ipv4_src_prefix
133
- 45:
134
- - :ip4_addr
135
- - :ipv4_dst_prefix
136
- 46:
137
- - :uint8
138
- - :mpls_top_label_type
139
- 47:
140
- - :uint32
141
- - :mpls_top_label_ip_addr
142
- 48:
143
- - 1
144
- - :flow_sampler_id
145
- 49:
146
- - :uint8
147
- - :flow_sampler_mode
148
- 50:
149
- - :uint32
150
- - :flow_sampler_random_interval
151
- 51:
152
- - :skip
153
- 52:
154
- - :uint8
155
- - :min_ttl
156
- 53:
157
- - :uint8
158
- - :max_ttl
159
- 54:
160
- - :uint16
161
- - :ipv4_ident
162
- 55:
163
- - :uint8
164
- - :dst_tos
165
- 56:
166
- - :mac_addr
167
- - :in_src_mac
168
- 57:
169
- - :mac_addr
170
- - :out_dst_mac
171
- 58:
172
- - :uint16
173
- - :src_vlan
174
- 59:
175
- - :uint16
176
- - :dst_vlan
177
- 60:
178
- - :uint8
179
- - :ip_protocol_version
180
- 61:
181
- - :uint8
182
- - :direction
183
- 62:
184
- - :ip6_addr
185
- - :ipv6_next_hop
186
- 63:
187
- - :ip6_addr
188
- - :bgp_ipv6_next_hop
189
- 64:
190
- - :uint32
191
- - :ipv6_option_headers
192
- 65:
193
- - :skip
194
- 66:
195
- - :skip
196
- 67:
197
- - :skip
198
- 68:
199
- - :skip
200
- 69:
201
- - :skip
202
- 70:
203
- - :mpls_label
204
- - :mpls_label_1
205
- 71:
206
- - :mpls_label
207
- - :mpls_label_2
208
- 72:
209
- - :mpls_label
210
- - :mpls_label_3
211
- 80:
212
- - :mac_addr
213
- - :in_dst_mac
214
- 81:
215
- - :mac_addr
216
- - :out_src_mac
217
- 82:
218
- - :string
219
- - :if_name
220
- 83:
221
- - :string
222
- - :if_desc
223
- 84:
224
- - :string
225
- - :sampler_name
226
- 89:
227
- - :uint8
228
- - :forwarding_status
229
- 91:
230
- - :uint8
231
- - :mpls_prefix_len
232
- 234:
233
- - :uint32
234
- - :ingress_vrf_id
235
- 235:
236
- - :uint32
237
- - :egress_vrf_id
@@ -1,16 +0,0 @@
1
- ---
2
- 1:
3
- - :ip4_addr
4
- - :system
5
- 2:
6
- - :skip
7
- - :interface
8
- 3:
9
- - :skip
10
- - :line_card
11
- 4:
12
- - :skip
13
- - :netflow_cache
14
- 5:
15
- - :skip
16
- - :template