logstash-patterns-core 4.2.0 → 4.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +98 -0
- data/Gemfile +3 -0
- data/README.md +11 -18
- data/lib/logstash/patterns/core.rb +11 -3
- data/logstash-patterns-core.gemspec +1 -1
- data/patterns/ecs-v1/aws +28 -0
- data/patterns/ecs-v1/bacula +53 -0
- data/patterns/ecs-v1/bind +13 -0
- data/patterns/ecs-v1/bro +30 -0
- data/patterns/ecs-v1/exim +26 -0
- data/patterns/ecs-v1/firewalls +111 -0
- data/patterns/ecs-v1/grok-patterns +95 -0
- data/patterns/ecs-v1/haproxy +40 -0
- data/patterns/ecs-v1/httpd +17 -0
- data/patterns/ecs-v1/java +34 -0
- data/patterns/ecs-v1/junos +13 -0
- data/patterns/ecs-v1/linux-syslog +16 -0
- data/patterns/{maven → ecs-v1/maven} +0 -0
- data/patterns/ecs-v1/mcollective +4 -0
- data/patterns/ecs-v1/mongodb +7 -0
- data/patterns/ecs-v1/nagios +124 -0
- data/patterns/ecs-v1/postgresql +2 -0
- data/patterns/ecs-v1/rails +13 -0
- data/patterns/ecs-v1/redis +3 -0
- data/patterns/ecs-v1/ruby +2 -0
- data/patterns/ecs-v1/squid +6 -0
- data/patterns/ecs-v1/zeek +33 -0
- data/patterns/{aws → legacy/aws} +1 -1
- data/patterns/{bacula → legacy/bacula} +5 -5
- data/patterns/legacy/bind +3 -0
- data/patterns/{bro → legacy/bro} +0 -0
- data/patterns/{exim → legacy/exim} +8 -2
- data/patterns/{firewalls → legacy/firewalls} +2 -2
- data/patterns/{grok-patterns → legacy/grok-patterns} +0 -0
- data/patterns/{haproxy → legacy/haproxy} +0 -0
- data/patterns/{httpd → legacy/httpd} +1 -1
- data/patterns/{java → legacy/java} +0 -0
- data/patterns/{junos → legacy/junos} +0 -0
- data/patterns/{linux-syslog → legacy/linux-syslog} +0 -0
- data/patterns/legacy/maven +1 -0
- data/patterns/{mcollective → legacy/mcollective} +0 -0
- data/patterns/{mcollective-patterns → legacy/mcollective-patterns} +0 -0
- data/patterns/{mongodb → legacy/mongodb} +0 -0
- data/patterns/{nagios → legacy/nagios} +0 -0
- data/patterns/{postgresql → legacy/postgresql} +0 -0
- data/patterns/{rails → legacy/rails} +0 -0
- data/patterns/{redis → legacy/redis} +0 -0
- data/patterns/{ruby → legacy/ruby} +0 -0
- data/patterns/legacy/squid +4 -0
- data/spec/patterns/aws_spec.rb +395 -0
- data/spec/patterns/bacula_spec.rb +367 -0
- data/spec/patterns/bind_spec.rb +78 -0
- data/spec/patterns/bro_spec.rb +613 -0
- data/spec/patterns/core_spec.rb +51 -9
- data/spec/patterns/exim_spec.rb +201 -0
- data/spec/patterns/firewalls_spec.rb +669 -66
- data/spec/patterns/haproxy_spec.rb +246 -38
- data/spec/patterns/httpd_spec.rb +215 -94
- data/spec/patterns/java_spec.rb +357 -27
- data/spec/patterns/junos_spec.rb +101 -0
- data/spec/patterns/mcollective_spec.rb +35 -0
- data/spec/patterns/mongodb_spec.rb +170 -33
- data/spec/patterns/nagios_spec.rb +296 -79
- data/spec/patterns/netscreen_spec.rb +123 -0
- data/spec/patterns/rails3_spec.rb +87 -29
- data/spec/patterns/redis_spec.rb +157 -121
- data/spec/patterns/shorewall_spec.rb +85 -74
- data/spec/patterns/squid_spec.rb +139 -0
- data/spec/patterns/syslog_spec.rb +266 -22
- data/spec/spec_helper.rb +80 -6
- metadata +64 -28
- data/patterns/bind +0 -3
- data/patterns/squid +0 -4
- data/spec/patterns/bro.rb +0 -126
- data/spec/patterns/s3_spec.rb +0 -173
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e1bcd46da3433a07874d058278a22f6addc6df5c334ec8059ba27dcf6ab789aa
+  data.tar.gz: 41da2ae6492e28d1c3a702d1e2b21e10b176ee592901eef8b50f34c0ca5d55d5
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1ce64ad8d5f113ddf6f4be969ed208016d5b86d1398a550d5e260f3d46596f32165c9067c1c6e3d5d77db4068808e4b5c75e026ebe602e17b4f1708111d82a85
+  data.tar.gz: 4c06ff167b397aab038abbce4aed6f7d5d2f60de3bdace4d55a8e468700315a5d44a042a5d731645eb09819ceac54c5dcd48f96b2713b08466ee31de6257ccae
data/CHANGELOG.md CHANGED
@@ -1,3 +1,101 @@
+## 4.3.0
+
+With **4.3.0** we're introducing a new set of pattern definitions compliant with the Elastic Common Schema (ECS); in numerous
+places patterns capture names prescribed by the schema or use custom namespaces that do not conflict with ECS ones.
+
+Changes are backwards compatible as much as possible and also include improvements to some of the existing patterns.
+
+Besides fields having new names, values for numeric (integer or floating point) types are usually converted to their
+numeric representation to ease further event processing (e.g. `http.response.status_code` is now stored as an integer).
+
+NOTE: to leverage the new ECS pattern set in Logstash, a grok filter upgrade to version >= 4.4.0 is required.
+
+- **aws**
+  * in ECS mode we dropped the (incomplete) attempt to capture `rawrequest` from `S3_REQUEST_LINE`
+  * `S3_ACCESS_LOG` will handle up-to-date S3 access-log formats (6 'new' field captures at the end):
+    Host Id -> Signature Version -> Cipher Suite -> Authentication Type -> Host Header -> TLS version
+  * `ELB_ACCESS_LOG` will handle optional (`-`) fields in legacy mode
+  * null values such as `-` or `-1` time values (e.g. `ELB_ACCESS_LOG`'s `request_processing_time`)
+    are not captured in ECS mode
+
+- **bacula**
+  - Fix: improve matching of `BACULA_HOST` as `HOSTNAME`
+  - Fix: legacy `BACULA_` patterns to handle (optional) spaces
+  - Fix: handle `BACULA_LOG` 'Job Id: X' prefix as optional
+  - Fix: legacy matching of BACULA fatal error lines
+
+- **bind**
+  - `BIND9`'s legacy `querytype` was further split into multiple fields:
+    `dns.question.type` and `bind.log.question.flags`
+  - `BIND9` patterns (legacy as well) were adjusted for Bind9 >= 9.11 compatibility
+  - `BIND9_QUERYLOGBASE` was introduced for potential re-use
+
+- **bro**
+  * `BRO_` patterns are stricter in ECS mode - they won't mistakenly match newer BRO/Zeek formats
+  * place-holders such as `(empty)` tags and `-` null values won't be captured
+  * each `BRO_` pattern has a newer `ZEEK_` variant that supports the latest Zeek 3.x versions,
+    e.g. `ZEEK_HTTP` as a replacement for `BRO_HTTP` (in ECS mode only);
+    there's a new file **zeek** where all of the `ZEEK_XXX` pattern variants live
+
+- **exim**
+  * introduced `EXIM` (`EXIM_MESSAGE_ARRIVAL`) to match message arrival log lines - in ECS mode!
+
+- **firewalls**
+  * introduced an `IPTABLES` pattern which is re-used within `SHOREWALL` and `SFW2`
+  * `SHOREWALL` now supports IPv6 addresses (in ECS mode - due to the `IPTABLES` pattern)
+  * `timestamp` fields will be captured for `SHOREWALL` and `SFW2` in legacy mode as well
+  * `SHOREWALL` became less strict about containing the `kernel:` sub-string
+  * `NETSCREENSESSIONLOG` properly handles the optional `session_id=... reason=...` suffix
+  * `interval` and `xlate_type` (legacy) CISCO fields are not captured in ECS mode
+
+- **core** (grok-patterns)
+  * `SYSLOGFACILITY` type-casts facility code and priority in ECS mode
+  * `SYSLOGTIMESTAMP` will be captured (from `SYSLOGBASE`) as `timestamp`
+  * Fix: e-mail address's local part to match according to RFC (#273)
+
+- **haproxy**
+  * several ECS-ified fields will be type-casted to integer in ECS mode e.g. *haproxy.bytes_read*
+  * fields containing a null value (`-`) are no longer captured
+    (e.g. in legacy mode `captured_request_cookie` gets captured even if `"-"`)
+
+- **httpd**
+  * optional fields (e.g. `http.request.referrer` or `user_agent`) are only captured when not null (`-`)
+  * `source.port` (`clientport` in legacy mode) is considered optional
+  * dropped raw data (`rawrequest` legacy field) in ECS mode
+  * Fix: HTTPD_ERRORLOG should match when module is missing (#299)
+
+- **java**
+  * `JAVASTACKTRACEPART`'s matched line number will be converted to an integer
+  * `CATALINALOG` matching was updated to handle the Tomcat 7/8/9 logging format
+  * `TOMCATLOG` handles the default Tomcat 7/8/9 logging format
+  * the old (custom) legacy TOMCAT format is handled by the added `TOMCATLEGACY_LOG`
+  * `TOMCATLOG` and `TOMCAT_DATESTAMP` still match the legacy format,
+    however this might change at a later point - if you rely on the old format use `TOMCATLEGACY_` patterns
+
+- **junos**
+  * integer fields (e.g. `juniper.srx.elapsed_time`) are captured as integer values
+
+- **linux-syslog**
+  * `SYSLOG5424LINE` captures (overwrites) the `message` field instead of using a custom field name
+  * regardless of the format used, in ECS mode, timestamps are always captured as `timestamp`
+  * fields such as `log.syslog.facility.code` and `process.pid` are converted to integers
+
+- **mcollective**
+  * the *mcollective-patterns* file was removed, it's all one *mcollective* in ECS mode
+  * `MCOLLECTIVE`'s `process.pid` (`pid` previously) is now type-casted to an integer
+
+- **nagios**
+  * numeric fields such as `nagios.log.attempt` are converted to integer values in ECS mode
+
+- **rails**
+  * request duration times from `RAILS3` logs will be converted to floating point values
+
+- **squid**
+  * `SQUID3`'s `duration`, http.response `status_code` and `bytes` are type-casted to int
+  * `SQUID3` pattern won't capture null (`-`) `user.name` or `squid.response.content_type`
+  * Fix: allow to parse SQUID log with status 0 (#298)
+  * Fix: handle optional server address (#298)
+
 ## 4.2.0
 - Fix: Java stack trace's JAVAFILE to better match generated names
 - Fix: match Information/INFORMATION in LOGLEVEL [#274](https://github.com/logstash-plugins/logstash-patterns-core/pull/274)
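The numeric type-casting mentioned throughout the 4.3.0 notes comes from grok's `%{PATTERN:field:int}` / `:float` capture suffixes, visible in the ECS pattern files further down. A minimal Ruby sketch of the idea (not the grok filter's actual implementation; `cast_capture` is a hypothetical helper):

```ruby
# Grok captures are strings by default; a ':int' or ':float' suffix on a
# capture (e.g. %{INT:[http][response][status_code]:int}) tells the filter
# to convert the matched text before it is set on the event.
def cast_capture(value, type = nil)
  case type
  when :int   then Integer(value, 10)
  when :float then Float(value)
  else value # no suffix: keep the raw string
  end
end

cast_capture('404', :int)      # => 404
cast_capture('0.086', :float)  # => 0.086
cast_capture('GET')            # => "GET"
```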
data/Gemfile CHANGED
@@ -9,3 +9,6 @@ if Dir.exist?(logstash_path) && use_logstash_source
   gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
   gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
 end
+
+# TODO till filter grok with ECS support is released :
+gem 'logstash-filter-grok', git: 'https://github.com/kares/logstash-filter-grok.git', ref: 'ecs-1-support'
data/README.md CHANGED
@@ -2,29 +2,28 @@
 
 [](https://travis-ci.com/logstash-plugins/logstash-patterns-core)
 
-This
+This plugin provides [pattern definitions][1] used by the [grok filter][2].
 
 It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
 
 ## Documentation
 
-Logstash provides infrastructure to automatically generate documentation for this plugin.
+Logstash provides infrastructure to automatically generate documentation for this plugin.
+We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc
+and then into html. All plugin documentation are placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
 
 - For formatting code or config example, you can use the asciidoc `[source,ruby]` directive
 - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
 
 ## Need Help?
 
-Need help? Try
+Need help? Try https://discuss.elastic.co/c/logstash discussion forum.
 
 ## Developing
 
 ### 1. Plugin Developement and Testing
 
 #### Code
-- To get started, you'll need JRuby with the Bundler gem installed.
-
-- Create a new plugin or clone and existing from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
 
 - Install dependencies
 ```sh
@@ -51,20 +50,16 @@ bundle exec rspec
 
 - Edit Logstash `Gemfile` and add the local plugin path, for example:
 ```ruby
-gem "logstash-
+gem "logstash-patterns-core", :path => "/your/local/logstash-patterns-core"
 ```
 - Install plugin
 ```sh
 # Logstash 2.3 and higher
 bin/logstash-plugin install --no-verify
-
-# Prior to Logstash 2.3
-bin/plugin install --no-verify
-
 ```
 - Run Logstash with your plugin
 ```sh
-bin/logstash -e 'filter {
+bin/logstash -e 'filter { grok { } }'
 ```
 At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
 
@@ -74,16 +69,11 @@ You can use the same **2.1** method to run your plugin in an installed Logstash
 
 - Build your plugin gem
 ```sh
-gem build logstash-
+gem build logstash-patterns-core.gemspec
 ```
 - Install the plugin from the Logstash home
 ```sh
-# Logstash 2.3 and higher
 bin/logstash-plugin install --no-verify
-
-# Prior to Logstash 2.3
-bin/plugin install --no-verify
-
 ```
 - Start Logstash and proceed to test the plugin
 
@@ -96,3 +86,6 @@ Programming is not a required skill. Whatever you've seen about open source and
 It is more important to the community that you are able to contribute.
 
 For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
+
+[1]: /tree/master/patterns
+[2]: https://github.com/logstash-plugins/logstash-filter-grok
data/lib/logstash/patterns/core.rb CHANGED
@@ -1,10 +1,18 @@
 module LogStash
   module Patterns
     module Core
-
+      extend self
 
-
-
+      BASE_PATH = ::File.expand_path('../../../patterns', ::File.dirname(__FILE__))
+      private_constant :BASE_PATH
+
+      def path(type = 'legacy')
+        case type = type.to_s
+        when 'legacy', 'ecs-v1'
+          ::File.join(BASE_PATH, type)
+        else
+          raise ArgumentError, "#{type.inspect} path not supported"
+        end
       end
 
     end
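The `path` helper added in this file is how consumers select one of the two shipped pattern directories. A self-contained Ruby sketch of the same selection logic (`patterns_path` and `PATTERNS_BASE` are hypothetical stand-ins for `LogStash::Patterns::Core.path` and its private `BASE_PATH`):

```ruby
# Mirrors the diffed logic: the gem now ships 'legacy' and 'ecs-v1'
# pattern directories side by side, and callers pick one by name.
PATTERNS_BASE = File.expand_path('patterns', __dir__)

def patterns_path(type = 'legacy')
  case type = type.to_s
  when 'legacy', 'ecs-v1'
    File.join(PATTERNS_BASE, type)
  else
    raise ArgumentError, "#{type.inspect} path not supported"
  end
end

puts patterns_path             # .../patterns/legacy
puts patterns_path(:'ecs-v1')  # .../patterns/ecs-v1
begin
  patterns_path('unknown')
rescue ArgumentError => e
  puts e.message               # "unknown" path not supported
end
```

Symbols and strings are both accepted because the argument is normalized with `to_s` before the `case` dispatch, matching the diffed code.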
data/logstash-patterns-core.gemspec CHANGED
@@ -1,7 +1,7 @@
 Gem::Specification.new do |s|
 
   s.name = 'logstash-patterns-core'
-  s.version = '4.
+  s.version = '4.3.0'
   s.licenses = ['Apache License (2.0)']
   s.summary = "Patterns to be used in logstash"
   s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
data/patterns/ecs-v1/aws ADDED
@@ -0,0 +1,28 @@
+S3_REQUEST_LINE (?:%{WORD:[http][request][method]} %{NOTSPACE:[url][original]}(?: HTTP/%{NUMBER:[http][version]})?)
+
+S3_ACCESS_LOG %{WORD:[aws][s3access][bucket_owner]} %{NOTSPACE:[aws][s3access][bucket]} \[%{HTTPDATE:timestamp}\] (?:-|%{IP:[client][ip]}) (?:-|%{NOTSPACE:[client][user][id]}) %{NOTSPACE:[aws][s3access][request_id]} %{NOTSPACE:[aws][s3access][operation]} (?:-|%{NOTSPACE:[aws][s3access][key]}) (?:-|"%{S3_REQUEST_LINE:[aws][s3access][request_uri]}") (?:-|%{INT:[http][response][status_code]:int}) (?:-|%{NOTSPACE:[aws][s3access][error_code]}) (?:-|%{INT:[aws][s3access][bytes_sent]:int}) (?:-|%{INT:[aws][s3access][object_size]:int}) (?:-|%{INT:[aws][s3access][total_time]:int}) (?:-|%{INT:[aws][s3access][turn_around_time]:int}) "(?:-|%{DATA:[http][request][referrer]})" "(?:-|%{DATA:[user_agent][original]})" (?:-|%{NOTSPACE:[aws][s3access][version_id]})(?: (?:-|%{NOTSPACE:[aws][s3access][host_id]}) (?:-|%{NOTSPACE:[aws][s3access][signature_version]}) (?:-|%{NOTSPACE:[tls][cipher]}) (?:-|%{NOTSPACE:[aws][s3access][authentication_type]}) (?:-|%{NOTSPACE:[aws][s3access][host_header]}) (?:-|%{NOTSPACE:[aws][s3access][tls_version]}))?
+# :long - %{INT:[aws][s3access][bytes_sent]:int}
+# :long - %{INT:[aws][s3access][object_size]:int}
+
+ELB_URIHOST %{IPORHOST:[url][domain]}(?::%{POSINT:[url][port]:int})?
+ELB_URIPATHQUERY %{URIPATH:[url][path]}(?:\?%{URIQUERY:[url][query]})?
+# deprecated - old name:
+ELB_URIPATHPARAM %{ELB_URIPATHQUERY}
+ELB_URI %{URIPROTO:[url][scheme]}://(?:%{USER:[url][username]}(?::[^@]*)?@)?(?:%{ELB_URIHOST})?(?:%{ELB_URIPATHQUERY})?
+
+ELB_REQUEST_LINE (?:%{WORD:[http][request][method]} %{ELB_URI:[url][original]}(?: HTTP/%{NUMBER:[http][version]})?)
+
+# pattern supports 'regular' HTTP ELB format
+ELB_V1_HTTP_LOG %{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:[aws][elb][name]} %{IP:[source][ip]}:%{INT:[source][port]:int} (?:-|(?:%{IP:[aws][elb][backend][ip]}:%{INT:[aws][elb][backend][port]:int})) (?:-1|%{NUMBER:[aws][elb][request_processing_time][sec]:float}) (?:-1|%{NUMBER:[aws][elb][backend_processing_time][sec]:float}) (?:-1|%{NUMBER:[aws][elb][response_processing_time][sec]:float}) %{INT:[http][response][status_code]:int} (?:-|%{INT:[aws][elb][backend][http][response][status_code]:int}) %{INT:[http][request][body][bytes]:int} %{INT:[http][response][body][bytes]:int} "%{ELB_REQUEST_LINE}"(?: "(?:-|%{DATA:[user_agent][original]})" (?:-|%{NOTSPACE:[tls][cipher]}) (?:-|%{NOTSPACE:[aws][elb][ssl_protocol]}))?
+# :long - %{INT:[http][request][body][bytes]:int}
+# :long - %{INT:[http][response][body][bytes]:int}
+
+ELB_ACCESS_LOG %{ELB_V1_HTTP_LOG}
+
+# pattern used to match a shortened format, that's why we have the optional part (starting with *http.version*) at the end
+CLOUDFRONT_ACCESS_LOG (?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}\t%{TIME})\t%{WORD:[aws][cloudfront][x_edge_location]}\t(?:-|%{INT:[destination][bytes]:int})\t%{IPORHOST:[source][ip]}\t%{WORD:[http][request][method]}\t%{HOSTNAME:[url][domain]}\t%{NOTSPACE:[url][path]}\t(?:(?:000)|%{INT:[http][response][status_code]:int})\t(?:-|%{DATA:[http][request][referrer]})\t%{DATA:[user_agent][original]}\t(?:-|%{DATA:[url][query]})\t(?:-|%{DATA:[aws][cloudfront][http][request][cookie]})\t%{WORD:[aws][cloudfront][x_edge_result_type]}\t%{NOTSPACE:[aws][cloudfront][x_edge_request_id]}\t%{HOSTNAME:[aws][cloudfront][http][request][host]}\t%{URIPROTO:[network][protocol]}\t(?:-|%{INT:[source][bytes]:int})\t%{NUMBER:[aws][cloudfront][time_taken]:float}\t(?:-|%{IP:[network][forwarded_ip]})\t(?:-|%{DATA:[aws][cloudfront][ssl_protocol]})\t(?:-|%{NOTSPACE:[tls][cipher]})\t%{WORD:[aws][cloudfront][x_edge_response_result_type]}(?:\t(?:-|HTTP/%{NUMBER:[http][version]})\t(?:-|%{DATA:[aws][cloudfront][fle_status]})\t(?:-|%{DATA:[aws][cloudfront][fle_encrypted_fields]})\t%{INT:[source][port]:int}\t%{NUMBER:[aws][cloudfront][time_to_first_byte]:float}\t(?:-|%{DATA:[aws][cloudfront][x_edge_detailed_result_type]})\t(?:-|%{NOTSPACE:[http][request][mime_type]})\t(?:-|%{INT:[aws][cloudfront][http][request][size]:int})\t(?:-|%{INT:[aws][cloudfront][http][request][range][start]:int})\t(?:-|%{INT:[aws][cloudfront][http][request][range][end]:int}))?
+# :long - %{INT:[destination][bytes]:int}
+# :long - %{INT:[source][bytes]:int}
+# :long - %{INT:[aws][cloudfront][http][request][size]:int}
+# :long - %{INT:[aws][cloudfront][http][request][range][start]:int}
+# :long - %{INT:[aws][cloudfront][http][request][range][end]:int}
data/patterns/ecs-v1/bacula ADDED
@@ -0,0 +1,53 @@
+BACULA_TIMESTAMP %{MONTHDAY}-%{MONTH}(?:-%{YEAR})? %{HOUR}:%{MINUTE}
+BACULA_HOST %{HOSTNAME}
+BACULA_VOLUME %{USER}
+BACULA_DEVICE %{USER}
+BACULA_DEVICEPATH %{UNIXPATH}
+BACULA_CAPACITY %{INT}{1,3}(,%{INT}{3})*
+BACULA_VERSION %{USER}
+BACULA_JOB %{USER}
+
+BACULA_LOG_MAX_CAPACITY User defined maximum volume capacity %{BACULA_CAPACITY:[bacula][volume][max_capacity]} exceeded on device \"%{BACULA_DEVICE:[bacula][volume][device]}\" \(%{BACULA_DEVICEPATH:[bacula][volume][path]}\).?
+BACULA_LOG_END_VOLUME End of medium on Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" Bytes=%{BACULA_CAPACITY:[bacula][volume][bytes]} Blocks=%{BACULA_CAPACITY:[bacula][volume][blocks]} at %{BACULA_TIMESTAMP:[bacula][timestamp]}.
+BACULA_LOG_NEW_VOLUME Created new Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" in catalog.
+BACULA_LOG_NEW_LABEL Labeled new Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" on (?:file )?device \"%{BACULA_DEVICE:[bacula][volume][device]}\" \(%{BACULA_DEVICEPATH:[bacula][volume][path]}\).
+BACULA_LOG_WROTE_LABEL Wrote label to prelabeled Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" on device \"%{BACULA_DEVICE:[bacula][volume][device]}\" \(%{BACULA_DEVICEPATH:[bacula][volume][path]}\)
+BACULA_LOG_NEW_MOUNT New volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" mounted on device \"%{BACULA_DEVICE:[bacula][volume][device]}\" \(%{BACULA_DEVICEPATH:[bacula][volume][path]}\) at %{BACULA_TIMESTAMP:[bacula][timestamp]}.
+BACULA_LOG_NOOPEN \s*Cannot open %{DATA}: ERR=%{GREEDYDATA:[error][message]}
+BACULA_LOG_NOOPENDIR \s*Could not open directory \"?%{DATA:[file][path]}\"?: ERR=%{GREEDYDATA:[error][message]}
+BACULA_LOG_NOSTAT \s*Could not stat %{DATA:[file][path]}: ERR=%{GREEDYDATA:[error][message]}
+BACULA_LOG_NOJOBS There are no more Jobs associated with Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\". Marking it purged.
+BACULA_LOG_ALL_RECORDS_PRUNED .*?All records pruned from Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\"; marking it \"Purged\"
+BACULA_LOG_BEGIN_PRUNE_JOBS Begin pruning Jobs older than %{INT} month %{INT} days .
+BACULA_LOG_BEGIN_PRUNE_FILES Begin pruning Files.
+BACULA_LOG_PRUNED_JOBS Pruned %{INT} Jobs* for client %{BACULA_HOST:[bacula][client][name]} from catalog.
+BACULA_LOG_PRUNED_FILES Pruned Files from %{INT} Jobs* for client %{BACULA_HOST:[bacula][client][name]} from catalog.
+BACULA_LOG_ENDPRUNE End auto prune.
+BACULA_LOG_STARTJOB Start Backup JobId %{INT}, Job=%{BACULA_JOB:[bacula][job][name]}
+BACULA_LOG_STARTRESTORE Start Restore Job %{BACULA_JOB:[bacula][job][name]}
+BACULA_LOG_USEDEVICE Using Device \"%{BACULA_DEVICE:[bacula][volume][device]}\"
+BACULA_LOG_DIFF_FS \s*%{UNIXPATH} is a different filesystem. Will not descend from %{UNIXPATH} into it.
+BACULA_LOG_JOBEND Job write elapsed time = %{DATA:[bacula][job][elapsed_time]}, Transfer rate = %{NUMBER} (K|M|G)? Bytes/second
+BACULA_LOG_NOPRUNE_JOBS No Jobs found to prune.
+BACULA_LOG_NOPRUNE_FILES No Files found to prune.
+BACULA_LOG_VOLUME_PREVWRITTEN Volume \"?%{BACULA_VOLUME:[bacula][volume][name]}\"? previously written, moving to end of data.
+BACULA_LOG_READYAPPEND Ready to append to end of Volume \"%{BACULA_VOLUME:[bacula][volume][name]}\" size=%{INT:[bacula][volume][size]:int}
+# :long - %{INT:[bacula][volume][size]:int}
+BACULA_LOG_CANCELLING Cancelling duplicate JobId=%{INT:[bacula][job][other_id]}.
+BACULA_LOG_MARKCANCEL JobId %{INT:[bacula][job][id]}, Job %{BACULA_JOB:[bacula][job][name]} marked to be canceled.
+BACULA_LOG_CLIENT_RBJ shell command: run ClientRunBeforeJob \"%{GREEDYDATA:[bacula][job][client_run_before_command]}\"
+BACULA_LOG_VSS (Generate )?VSS (Writer)?
+BACULA_LOG_MAXSTART Fatal [eE]rror: Job canceled because max start delay time exceeded.
+BACULA_LOG_DUPLICATE Fatal [eE]rror: JobId %{INT:[bacula][job][other_id]} already running. Duplicate job not allowed.
+BACULA_LOG_NOJOBSTAT Fatal [eE]rror: No Job status returned from FD.
+BACULA_LOG_FATAL_CONN Fatal [eE]rror: bsock.c:133 Unable to connect to (Client: %{BACULA_HOST:[bacula][client][name]}|Storage daemon) on %{IPORHOST:[client][address]}:%{POSINT:[client][port]:int}. ERR=%{GREEDYDATA:[error][message]}
+BACULA_LOG_NO_CONNECT Warning: bsock.c:127 Could not connect to (Client: %{BACULA_HOST:[bacula][client][name]}|Storage daemon) on %{IPORHOST:[client][address]}:%{POSINT:[client][port]:int}. ERR=%{GREEDYDATA:[error][message]}
+BACULA_LOG_NO_AUTH Fatal error: Unable to authenticate with File daemon at \"?%{IPORHOST:[client][address]}(?::%{POSINT:[client][port]:int})?\"?. Possible causes:
+BACULA_LOG_NOSUIT No prior or suitable Full backup found in catalog. Doing FULL backup.
+BACULA_LOG_NOPRIOR No prior Full backup Job record found.
+
+BACULA_LOG_JOB (Error: )?Bacula %{BACULA_HOST} %{BACULA_VERSION} \(%{BACULA_VERSION}\):
+
+BACULA_LOG %{BACULA_TIMESTAMP:timestamp} %{BACULA_HOST:[host][hostname]}(?: JobId %{INT:[bacula][job][id]})?:? (%{BACULA_LOG_MAX_CAPACITY}|%{BACULA_LOG_END_VOLUME}|%{BACULA_LOG_NEW_VOLUME}|%{BACULA_LOG_NEW_LABEL}|%{BACULA_LOG_WROTE_LABEL}|%{BACULA_LOG_NEW_MOUNT}|%{BACULA_LOG_NOOPEN}|%{BACULA_LOG_NOOPENDIR}|%{BACULA_LOG_NOSTAT}|%{BACULA_LOG_NOJOBS}|%{BACULA_LOG_ALL_RECORDS_PRUNED}|%{BACULA_LOG_BEGIN_PRUNE_JOBS}|%{BACULA_LOG_BEGIN_PRUNE_FILES}|%{BACULA_LOG_PRUNED_JOBS}|%{BACULA_LOG_PRUNED_FILES}|%{BACULA_LOG_ENDPRUNE}|%{BACULA_LOG_STARTJOB}|%{BACULA_LOG_STARTRESTORE}|%{BACULA_LOG_USEDEVICE}|%{BACULA_LOG_DIFF_FS}|%{BACULA_LOG_JOBEND}|%{BACULA_LOG_NOPRUNE_JOBS}|%{BACULA_LOG_NOPRUNE_FILES}|%{BACULA_LOG_VOLUME_PREVWRITTEN}|%{BACULA_LOG_READYAPPEND}|%{BACULA_LOG_CANCELLING}|%{BACULA_LOG_MARKCANCEL}|%{BACULA_LOG_CLIENT_RBJ}|%{BACULA_LOG_VSS}|%{BACULA_LOG_MAXSTART}|%{BACULA_LOG_DUPLICATE}|%{BACULA_LOG_NOJOBSTAT}|%{BACULA_LOG_FATAL_CONN}|%{BACULA_LOG_NO_CONNECT}|%{BACULA_LOG_NO_AUTH}|%{BACULA_LOG_NOSUIT}|%{BACULA_LOG_JOB}|%{BACULA_LOG_NOPRIOR})
+# old (deprecated) name :
+BACULA_LOGLINE %{BACULA_LOG}
data/patterns/ecs-v1/bind ADDED
@@ -0,0 +1,13 @@
+BIND9_TIMESTAMP %{MONTHDAY}[-]%{MONTH}[-]%{YEAR} %{TIME}
+
+BIND9_DNSTYPE (?:A|AAAA|CAA|CDNSKEY|CDS|CERT|CNAME|CSYNC|DLV|DNAME|DNSKEY|DS|HINFO|LOC|MX|NAPTR|NS|NSEC|NSEC3|OPENPGPKEY|PTR|RRSIG|RP|SIG|SMIMEA|SOA|SRV|TSIG|TXT|URI)
+BIND9_CATEGORY (?:queries)
+
+# dns.question.class is static - only 'IN' is supported by Bind9
+# bind.log.question.name is expected to be a 'duplicate' (same as the dns.question.name capture)
+BIND9_QUERYLOGBASE client(:? @0x(?:[0-9A-Fa-f]+))? %{IP:[client][ip]}#%{POSINT:[client][port]:int} \(%{GREEDYDATA:[bind][log][question][name]}\): query: %{GREEDYDATA:[dns][question][name]} (?<[dns][question][class]>IN) %{BIND9_DNSTYPE:[dns][question][type]}(:? %{DATA:[bind][log][question][flags]})? \(%{IP:[server][ip]}\)
+
+# for query-logging category and severity are always fixed as "queries: info: "
+BIND9_QUERYLOG %{BIND9_TIMESTAMP:timestamp} %{BIND9_CATEGORY:[bind][log][category]}: %{LOGLEVEL:[log][level]}: %{BIND9_QUERYLOGBASE}
+
+BIND9 %{BIND9_QUERYLOG}
data/patterns/ecs-v1/bro
ADDED
@@ -0,0 +1,30 @@
|
|
1
|
+
# supports the 'old' BRO log files, for updated Zeek log format see the patters/ecs-v1/zeek
|
2
|
+
# https://www.bro.org/sphinx/script-reference/log-files.html
|
3
|
+
|
4
|
+
BRO_BOOL [TF]
|
5
|
+
BRO_DATA [^\t]+
|
6
|
+
|
7
|
+
# http.log - old format (before the Zeek rename) :
|
8
|
+
BRO_HTTP %{NUMBER:timestamp}\t%{NOTSPACE:[zeek][session_id]}\t%{IP:[source][ip]}\t%{INT:[source][port]:int}\t%{IP:[destination][ip]}\t%{INT:[destination][port]:int}\t%{INT:[zeek][http][trans_depth]:int}\t(?:-|%{WORD:[http][request][method]})\t(?:-|%{BRO_DATA:[url][domain]})\t(?:-|%{BRO_DATA:[url][original]})\t(?:-|%{BRO_DATA:[http][request][referrer]})\t(?:-|%{BRO_DATA:[user_agent][original]})\t(?:-|%{NUMBER:[http][request][body][bytes]:int})\t(?:-|%{NUMBER:[http][response][body][bytes]:int})\t(?:-|%{POSINT:[http][response][status_code]:int})\t(?:-|%{DATA:[zeek][http][status_msg]})\t(?:-|%{POSINT:[zeek][http][info_code]:int})\t(?:-|%{DATA:[zeek][http][info_msg]})\t(?:-|%{BRO_DATA:[zeek][http][filename]})\t(?:\(empty\)|%{BRO_DATA:[zeek][http][tags]})\t(?:-|%{BRO_DATA:[url][username]})\t(?:-|%{BRO_DATA:[url][password]})\t(?:-|%{BRO_DATA:[zeek][http][proxied]})\t(?:-|%{BRO_DATA:[zeek][http][orig_fuids]})\t(?:-|%{BRO_DATA:[http][request][mime_type]})\t(?:-|%{BRO_DATA:[zeek][http][resp_fuids]})\t(?:-|%{BRO_DATA:[http][response][mime_type]})
|
9
|
+
# :long - %{NUMBER:[http][request][body][bytes]:int}
|
10
|
+
# :long - %{NUMBER:[http][response][body][bytes]:int}
|
11
|
+
|
12
|
+
# dns.log - old format
|
13
|
+
BRO_DNS %{NUMBER:timestamp}\t%{NOTSPACE:[zeek][session_id]}\t%{IP:[source][ip]}\t%{INT:[source][port]:int}\t%{IP:[destination][ip]}\t%{INT:[destination][port]:int}\t%{WORD:[network][transport]}\t(?:-|%{INT:[dns][id]:int})\t(?:-|%{BRO_DATA:[dns][question][name]})\t(?:-|%{INT:[zeek][dns][qclass]:int})\t(?:-|%{BRO_DATA:[zeek][dns][qclass_name]})\t(?:-|%{INT:[zeek][dns][qtype]:int})\t(?:-|%{BRO_DATA:[dns][question][type]})\t(?:-|%{INT:[zeek][dns][rcode]:int})\t(?:-|%{BRO_DATA:[dns][response_code]})\t(?:-|%{BRO_BOOL:[zeek][dns][AA]})\t(?:-|%{BRO_BOOL:[zeek][dns][TC]})\t(?:-|%{BRO_BOOL:[zeek][dns][RD]})\t(?:-|%{BRO_BOOL:[zeek][dns][RA]})\t(?:-|%{NONNEGINT:[zeek][dns][Z]:int})\t(?:-|%{BRO_DATA:[zeek][dns][answers]})\t(?:-|%{DATA:[zeek][dns][TTLs]})\t(?:-|%{BRO_BOOL:[zeek][dns][rejected]})
|
14
|
+
|
15
|
+
# conn.log - old bro, also supports 'newer' format (optional *zeek.connection.local_resp* flag) compared to non-ecs mode
|
16
|
+
BRO_CONN %{NUMBER:timestamp}\t%{NOTSPACE:[zeek][session_id]}\t%{IP:[source][ip]}\t%{INT:[source][port]:int}\t%{IP:[destination][ip]}\t%{INT:[destination][port]:int}\t%{WORD:[network][transport]}\t(?:-|%{BRO_DATA:[network][protocol]})\t(?:-|%{NUMBER:[zeek][connection][duration]:float})\t(?:-|%{INT:[zeek][connection][orig_bytes]:int})\t(?:-|%{INT:[zeek][connection][resp_bytes]:int})\t(?:-|%{BRO_DATA:[zeek][connection][state]})\t(?:-|%{BRO_BOOL:[zeek][connection][local_orig]})\t(?:(?:-|%{BRO_BOOL:[zeek][connection][local_resp]})\t)?(?:-|%{INT:[zeek][connection][missed_bytes]:int})\t(?:-|%{BRO_DATA:[zeek][connection][history]})\t(?:-|%{INT:[source][packets]:int})\t(?:-|%{INT:[source][bytes]:int})\t(?:-|%{INT:[destination][packets]:int})\t(?:-|%{INT:[destination][bytes]:int})\t(?:\(empty\)|%{BRO_DATA:[zeek][connection][tunnel_parents]})
|
17
|
+
# :long - %{INT:[zeek][connection][orig_bytes]:int}
|
18
|
+
# :long - %{INT:[zeek][connection][resp_bytes]:int}
|
19
|
+
# :long - %{INT:[zeek][connection][missed_bytes]:int}
|
20
|
+
# :long - %{INT:[source][packets]:int}
|
21
|
+
# :long - %{INT:[source][bytes]:int}
|
22
|
+
# :long - %{INT:[destination][packets]:int}
|
23
|
+
# :long - %{INT:[destination][bytes]:int}
|
24
|
+
|
25
|
+
# files.log - old format
|
26
|
+
BRO_FILES %{NUMBER:timestamp}\t%{NOTSPACE:[zeek][files][fuid]}\t(?:-|%{IP:[server][ip]})\t(?:-|%{IP:[client][ip]})\t(?:-|%{BRO_DATA:[zeek][files][session_ids]})\t(?:-|%{BRO_DATA:[zeek][files][source]})\t(?:-|%{INT:[zeek][files][depth]:int})\t(?:-|%{BRO_DATA:[zeek][files][analyzers]})\t(?:-|%{BRO_DATA:[file][mime_type]})\t(?:-|%{BRO_DATA:[file][name]})\t(?:-|%{NUMBER:[zeek][files][duration]:float})\t(?:-|%{BRO_DATA:[zeek][files][local_orig]})\t(?:-|%{BRO_BOOL:[zeek][files][is_orig]})\t(?:-|%{INT:[zeek][files][seen_bytes]:int})\t(?:-|%{INT:[file][size]:int})\t(?:-|%{INT:[zeek][files][missing_bytes]:int})\t(?:-|%{INT:[zeek][files][overflow_bytes]:int})\t(?:-|%{BRO_BOOL:[zeek][files][timedout]})\t(?:-|%{BRO_DATA:[zeek][files][parent_fuid]})\t(?:-|%{BRO_DATA:[file][hash][md5]})\t(?:-|%{BRO_DATA:[file][hash][sha1]})\t(?:-|%{BRO_DATA:[file][hash][sha256]})\t(?:-|%{BRO_DATA:[zeek][files][extracted]})
# :long - %{INT:[zeek][files][seen_bytes]:int}
# :long - %{INT:[file][size]:int}
# :long - %{INT:[zeek][files][missing_bytes]:int}
# :long - %{INT:[zeek][files][overflow_bytes]:int}
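The Zeek/Bro patterns above parse tab-separated log lines in which an unset field is written as a literal `-` (hence the recurring `(?:-|%{...})` alternation), and the `# :long` comments flag byte/packet counters that should be mapped to a long rather than an int. A minimal Python sketch of that convention; the sample line and helper name are illustrative, not the plugin's own code:

```python
def parse_zeek_field(raw, cast=None):
    """Return None for Zeek's '-' placeholder, else an optionally cast value."""
    if raw == "-":
        return None
    return cast(raw) if cast else raw

# Hypothetical (truncated) conn.log entry: ts, session id, source ip/port,
# then an unset duration field.
line = "1300475168.853899\tCHhAvVGS1DHFjwGM9\t192.168.1.2\t53\t-"
ts, session_id, src_ip, src_port, duration = line.split("\t")

assert parse_zeek_field(ts, float) == 1300475168.853899
assert parse_zeek_field(src_port, int) == 53
assert parse_zeek_field(duration, float) is None
```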
@@ -0,0 +1,26 @@
EXIM_MSGID [0-9A-Za-z]{6}-[0-9A-Za-z]{6}-[0-9A-Za-z]{2}
# <= message arrival
# => normal message delivery
# -> additional address in same delivery
# *> delivery suppressed by -N
# ** delivery failed; address bounced
# == delivery deferred; temporary problem
EXIM_FLAGS (?:<=|=>|->|\*>|\*\*|==|<>|>>)
EXIM_DATE (:?%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})
EXIM_PID \[%{POSINT:[process][pid]:int}\]
EXIM_QT ((\d+y)?(\d+w)?(\d+d)?(\d+h)?(\d+m)?(\d+s)?)
EXIM_EXCLUDE_TERMS (Message is frozen|(Start|End) queue run| Warning: | retry time not reached | no (IP address|host name) found for (IP address|host) | unexpected disconnection while reading SMTP command | no immediate delivery: |another process is handling this message)
EXIM_REMOTE_HOST (H=(%{NOTSPACE:[source][address]} )?(\(%{NOTSPACE:[exim][log][remote_address]}\) )?\[%{IP:[source][ip]}\](?::%{POSINT:[source][port]:int})?)
EXIM_INTERFACE (I=\[%{IP:[destination][ip]}\](?::%{NUMBER:[destination][port]:int}))
EXIM_PROTOCOL (P=%{NOTSPACE:[network][protocol]})
EXIM_MSG_SIZE (S=%{NUMBER:[exim][log][message][size]:int})
EXIM_HEADER_ID (id=%{NOTSPACE:[exim][log][header_id]})
EXIM_QUOTED_CONTENT (?:\\.|[^\\"])*
EXIM_SUBJECT (T="%{EXIM_QUOTED_CONTENT:[exim][log][message][subject]}")
EXIM_UNKNOWN_FIELD (?:[A-Za-z0-9]{1,4}=(?:%{QUOTEDSTRING}|%{NOTSPACE}))
EXIM_NAMED_FIELDS (?: (?:%{EXIM_REMOTE_HOST}|%{EXIM_INTERFACE}|%{EXIM_PROTOCOL}|%{EXIM_MSG_SIZE}|%{EXIM_HEADER_ID}|%{EXIM_SUBJECT}|%{EXIM_UNKNOWN_FIELD}))*
EXIM_MESSAGE_ARRIVAL %{EXIM_DATE:timestamp} (?:%{EXIM_PID} )?%{EXIM_MSGID:[exim][log][message][id]} (?<[exim][log][flags]><=) (?<[exim][log][status]>[a-z:] )?%{EMAILADDRESS:[exim][log][sender][email]}%{EXIM_NAMED_FIELDS}(?:(?: from <?%{DATA:[exim][log][sender][original]}>?)? for %{EMAILADDRESS:[exim][log][recipient][email]})?
EXIM %{EXIM_MESSAGE_ARRIVAL}
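As a rough cross-check, the message-arrival grammar above can be approximated with plain Python `re`. The sample log line and capture-group names here are illustrative assumptions, and the sketch covers only the timestamp, message id, `<=` flag, and sender:

```python
import re

EXIM_MSGID = r"[0-9A-Za-z]{6}-[0-9A-Za-z]{6}-[0-9A-Za-z]{2}"
EXIM_DATE = r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"

arrival = re.compile(
    rf"(?P<timestamp>{EXIM_DATE}) "
    rf"(?P<id>{EXIM_MSGID}) "
    r"(?P<flag><=) "
    r"(?P<sender>\S+@\S+)"
)

# Hypothetical arrival line in Exim's main-log format.
line = "2021-01-05 12:34:56 1kwxYz-0001Ab-3C <= sender@example.com H=mail.example.com [203.0.113.5]"
m = arrival.match(line)
assert m.group("id") == "1kwxYz-0001Ab-3C"
assert m.group("sender") == "sender@example.com"
```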
@@ -0,0 +1,111 @@
# NetScreen firewall logs
NETSCREENSESSIONLOG %{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:[observer][hostname]} %{NOTSPACE:[observer][name]}: (?<[observer][product]>NetScreen) device_id=%{WORD:[netscreen][device_id]} .*?(system-\w+-%{NONNEGINT:[event][code]}\(%{WORD:[netscreen][session][type]}\))?: start_time="%{DATA:[netscreen][session][start_time]}" duration=%{INT:[netscreen][session][duration]:int} policy_id=%{INT:[netscreen][policy_id]} service=%{DATA:[netscreen][service]} proto=%{INT:[netscreen][protocol_number]:int} src zone=%{WORD:[observer][ingress][zone]} dst zone=%{WORD:[observer][egress][zone]} action=%{WORD:[event][action]} sent=%{INT:[source][bytes]:int} rcvd=%{INT:[destination][bytes]:int} src=%{IPORHOST:[source][address]} dst=%{IPORHOST:[destination][address]}(?: src_port=%{INT:[source][port]:int} dst_port=%{INT:[destination][port]:int})?(?: src-xlated ip=%{IP:[source][nat][ip]} port=%{INT:[source][nat][port]:int} dst-xlated ip=%{IP:[destination][nat][ip]} port=%{INT:[destination][nat][port]:int})?(?: session_id=%{INT:[netscreen][session][id]} reason=%{GREEDYDATA:[netscreen][session][reason]})?
# :long - %{INT:[source][bytes]:int}
# :long - %{INT:[destination][bytes]:int}

#== Cisco ASA ==
CISCO_TAGGED_SYSLOG ^<%{POSINT:[log][syslog][facility][code]:int}>%{CISCOTIMESTAMP:timestamp}( %{SYSLOGHOST:[host][hostname]})? ?: %%{CISCOTAG:ciscotag}:
CISCOTIMESTAMP %{MONTH} +%{MONTHDAY}(?: %{YEAR})? %{TIME}
CISCOTAG [A-Z0-9]+-%{INT}-(?:[A-Z0-9_]+)
# Common Particles
CISCO_ACTION Built|Teardown|Deny|Denied|denied|requested|permitted|denied by ACL|discarded|est-allowed|Dropping|created|deleted
CISCO_REASON Duplicate TCP SYN|Failed to locate egress interface|Invalid transport field|No matching connection|DNS Response|DNS Query|(?:%{WORD}\s*)*
CISCO_DIRECTION Inbound|inbound|Outbound|outbound
CISCO_INTERVAL first hit|%{INT}-second interval
CISCO_XLATE_TYPE static|dynamic
# helpers
CISCO_HITCOUNT_INTERVAL hit-cnt %{INT:[cisco][asa][hit_count]:int} (?:first hit|%{INT:[cisco][asa][interval]:int}-second interval)
CISCO_SRC_IP_USER %{NOTSPACE:[observer][ingress][interface][name]}:%{IP:[source][ip]}(?:\(%{DATA:[source][user][name]}\))?
CISCO_DST_IP_USER %{NOTSPACE:[observer][egress][interface][name]}:%{IP:[destination][ip]}(?:\(%{DATA:[destination][user][name]}\))?
CISCO_SRC_HOST_PORT_USER %{NOTSPACE:[observer][ingress][interface][name]}:(?:(?:%{IP:[source][ip]})|(?:%{HOSTNAME:[source][address]}))(?:/%{INT:[source][port]:int})?(?:\(%{DATA:[source][user][name]}\))?
CISCO_DST_HOST_PORT_USER %{NOTSPACE:[observer][egress][interface][name]}:(?:(?:%{IP:[destination][ip]})|(?:%{HOSTNAME:[destination][address]}))(?:/%{INT:[destination][port]:int})?(?:\(%{DATA:[destination][user][name]}\))?
# ASA-1-104001
CISCOFW104001 \((?:Primary|Secondary)\) Switching to ACTIVE - %{GREEDYDATA:[event][reason]}
# ASA-1-104002
CISCOFW104002 \((?:Primary|Secondary)\) Switching to STANDBY - %{GREEDYDATA:[event][reason]}
# ASA-1-104003
CISCOFW104003 \((?:Primary|Secondary)\) Switching to FAILED\.
# ASA-1-104004
CISCOFW104004 \((?:Primary|Secondary)\) Switching to OK\.
# ASA-1-105003
CISCOFW105003 \((?:Primary|Secondary)\) Monitoring on [Ii]nterface %{NOTSPACE:[network][interface][name]} waiting
# ASA-1-105004
CISCOFW105004 \((?:Primary|Secondary)\) Monitoring on [Ii]nterface %{NOTSPACE:[network][interface][name]} normal
# ASA-1-105005
CISCOFW105005 \((?:Primary|Secondary)\) Lost Failover communications with mate on [Ii]nterface %{NOTSPACE:[network][interface][name]}
# ASA-1-105008
CISCOFW105008 \((?:Primary|Secondary)\) Testing [Ii]nterface %{NOTSPACE:[network][interface][name]}
# ASA-1-105009
CISCOFW105009 \((?:Primary|Secondary)\) Testing on [Ii]nterface %{NOTSPACE:[network][interface][name]} (?:Passed|Failed)
# ASA-2-106001
CISCOFW106001 %{CISCO_DIRECTION:[cisco][asa][network][direction]} %{WORD:[cisco][asa][network][transport]} connection %{CISCO_ACTION:[cisco][asa][outcome]} from %{IP:[source][ip]}/%{INT:[source][port]:int} to %{IP:[destination][ip]}/%{INT:[destination][port]:int} flags %{DATA:[cisco][asa][tcp_flags]} on interface %{NOTSPACE:[observer][egress][interface][name]}
# ASA-2-106006, ASA-2-106007, ASA-2-106010
CISCOFW106006_106007_106010 %{CISCO_ACTION:[cisco][asa][outcome]} %{CISCO_DIRECTION:[cisco][asa][network][direction]} %{WORD:[cisco][asa][network][transport]} (?:from|src) %{IP:[source][ip]}/%{INT:[source][port]:int}(?:\(%{DATA:[source][user][name]}\))? (?:to|dst) %{IP:[destination][ip]}/%{INT:[destination][port]:int}(?:\(%{DATA:[destination][user][name]}\))? (?:(?:on interface %{NOTSPACE:[observer][egress][interface][name]})|(?:due to %{CISCO_REASON:[event][reason]}))
# ASA-3-106014
CISCOFW106014 %{CISCO_ACTION:[cisco][asa][outcome]} %{CISCO_DIRECTION:[cisco][asa][network][direction]} %{WORD:[cisco][asa][network][transport]} src %{CISCO_SRC_IP_USER} dst %{CISCO_DST_IP_USER}\s?\(type %{INT:[cisco][asa][icmp_type]:int}, code %{INT:[cisco][asa][icmp_code]:int}\)
# ASA-6-106015
CISCOFW106015 %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} \(%{DATA:[cisco][asa][rule_name]}\) from %{IP:[source][ip]}/%{INT:[source][port]:int} to %{IP:[destination][ip]}/%{INT:[destination][port]:int} flags %{DATA:[cisco][asa][tcp_flags]} on interface %{NOTSPACE:[observer][egress][interface][name]}
# ASA-1-106021
CISCOFW106021 %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} reverse path check from %{IP:[source][ip]} to %{IP:[destination][ip]} on interface %{NOTSPACE:[observer][egress][interface][name]}
# ASA-4-106023
CISCOFW106023 %{CISCO_ACTION:[cisco][asa][outcome]}(?: protocol)? %{WORD:[cisco][asa][network][transport]} src %{CISCO_SRC_HOST_PORT_USER} dst %{CISCO_DST_HOST_PORT_USER}( \(type %{INT:[cisco][asa][icmp_type]:int}, code %{INT:[cisco][asa][icmp_code]:int}\))? by access-group "?%{DATA:[cisco][asa][rule_name]}"? \[%{DATA:[@metadata][cisco][asa][hashcode1]}, %{DATA:[@metadata][cisco][asa][hashcode2]}\]
# ASA-4-106100, ASA-4-106102, ASA-4-106103
CISCOFW106100_2_3 access-list %{NOTSPACE:[cisco][asa][rule_name]} %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} for user '%{DATA:[user][name]}' %{DATA:[observer][ingress][interface][name]}/%{IP:[source][ip]}\(%{INT:[source][port]:int}\) -> %{DATA:[observer][egress][interface][name]}/%{IP:[destination][ip]}\(%{INT:[destination][port]:int}\) %{CISCO_HITCOUNT_INTERVAL} \[%{DATA:[@metadata][cisco][asa][hashcode1]}, %{DATA:[@metadata][cisco][asa][hashcode2]}\]
# ASA-5-106100
CISCOFW106100 access-list %{NOTSPACE:[cisco][asa][rule_name]} %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} %{DATA:[observer][ingress][interface][name]}/%{IP:[source][ip]}\(%{INT:[source][port]:int}\)(?:\(%{DATA:[source][user][name]}\))? -> %{DATA:[observer][egress][interface][name]}/%{IP:[destination][ip]}\(%{INT:[destination][port]:int}\)(?:\(%{DATA:[source][user][name]}\))? hit-cnt %{INT:[cisco][asa][hit_count]:int} %{CISCO_INTERVAL} \[%{DATA:[@metadata][cisco][asa][hashcode1]}, %{DATA:[@metadata][cisco][asa][hashcode2]}\]
# ASA-5-304001
CISCOFW304001 %{IP:[source][ip]}(?:\(%{DATA:[source][user][name]}\))? Accessed URL %{IP:[destination][ip]}:%{GREEDYDATA:[url][original]}
# ASA-6-110002
CISCOFW110002 %{CISCO_REASON:[event][reason]} for %{WORD:[cisco][asa][network][transport]} from %{DATA:[observer][ingress][interface][name]}:%{IP:[source][ip]}/%{INT:[source][port]:int} to %{IP:[destination][ip]}/%{INT:[destination][port]:int}
# ASA-6-302010
CISCOFW302010 %{INT:[cisco][asa][connections][in_use]:int} in use, %{INT:[cisco][asa][connections][most_used]:int} most used
# ASA-6-302013, ASA-6-302014, ASA-6-302015, ASA-6-302016
CISCOFW302013_302014_302015_302016 %{CISCO_ACTION:[cisco][asa][outcome]}(?: %{CISCO_DIRECTION:[cisco][asa][network][direction]})? %{WORD:[cisco][asa][network][transport]} connection %{INT:[cisco][asa][connection_id]} for %{NOTSPACE:[observer][ingress][interface][name]}:%{IP:[source][ip]}/%{INT:[source][port]:int}(?: \(%{IP:[source][nat][ip]}/%{INT:[source][nat][port]:int}\))?(?:\(%{DATA:[source][user][name]}\))? to %{NOTSPACE:[observer][egress][interface][name]}:%{IP:[destination][ip]}/%{INT:[destination][port]:int}( \(%{IP:[destination][nat][ip]}/%{INT:[destination][nat][port]:int}\))?(?:\(%{DATA:[destination][user][name]}\))?( duration %{TIME:[cisco][asa][duration]} bytes %{INT:[network][bytes]:int})?(?: %{CISCO_REASON:[event][reason]})?(?: \(%{DATA:[user][name]}\))?
# :long - %{INT:[network][bytes]:int}
# ASA-6-302020, ASA-6-302021
CISCOFW302020_302021 %{CISCO_ACTION:[cisco][asa][outcome]}(?: %{CISCO_DIRECTION:[cisco][asa][network][direction]})? %{WORD:[cisco][asa][network][transport]} connection for faddr %{IP:[destination][ip]}/%{INT:[cisco][asa][icmp_seq]:int}(?:\(%{DATA:[destination][user][name]}\))? gaddr %{IP:[source][nat][ip]}/%{INT:[cisco][asa][icmp_type]:int} laddr %{IP:[source][ip]}/%{INT}(?: \(%{DATA:[source][user][name]}\))?
# ASA-6-305011
CISCOFW305011 %{CISCO_ACTION:[cisco][asa][outcome]} %{CISCO_XLATE_TYPE} %{WORD:[cisco][asa][network][transport]} translation from %{DATA:[observer][ingress][interface][name]}:%{IP:[source][ip]}(/%{INT:[source][port]:int})?(?:\(%{DATA:[source][user][name]}\))? to %{DATA:[observer][egress][interface][name]}:%{IP:[destination][ip]}/%{INT:[destination][port]:int}
# ASA-3-313001, ASA-3-313004, ASA-3-313008
CISCOFW313001_313004_313008 %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} type=%{INT:[cisco][asa][icmp_type]:int}, code=%{INT:[cisco][asa][icmp_code]:int} from %{IP:[source][ip]} on interface %{NOTSPACE:[observer][egress][interface][name]}(?: to %{IP:[destination][ip]})?
# ASA-4-313005
CISCOFW313005 %{CISCO_REASON:[event][reason]} for %{WORD:[cisco][asa][network][transport]} error message: %{WORD} src %{CISCO_SRC_IP_USER} dst %{CISCO_DST_IP_USER} \(type %{INT:[cisco][asa][icmp_type]:int}, code %{INT:[cisco][asa][icmp_code]:int}\) on %{NOTSPACE} interface\.\s+Original IP payload: %{WORD:[cisco][asa][original_ip_payload][network][transport]} src %{IP:[cisco][asa][original_ip_payload][source][ip]}/%{INT:[cisco][asa][original_ip_payload][source][port]:int}(?:\(%{DATA:[cisco][asa][original_ip_payload][source][user][name]}\))? dst %{IP:[cisco][asa][original_ip_payload][destination][ip]}/%{INT:[cisco][asa][original_ip_payload][destination][port]:int}(?:\(%{DATA:[cisco][asa][original_ip_payload][destination][user][name]}\))?
# ASA-5-321001
CISCOFW321001 Resource '%{DATA:[cisco][asa][resource][name]}' limit of %{POSINT:[cisco][asa][resource][limit]:int} reached for system
# ASA-4-402117
CISCOFW402117 %{WORD:[cisco][asa][network][type]}: Received a non-IPSec packet \(protocol=\s?%{WORD:[cisco][asa][network][transport]}\) from %{IP:[source][ip]} to %{IP:[destination][ip]}\.?
# ASA-4-402119
CISCOFW402119 %{WORD:[cisco][asa][network][type]}: Received an %{WORD:[cisco][asa][ipsec][protocol]} packet \(SPI=\s?%{DATA:[cisco][asa][ipsec][spi]}, sequence number=\s?%{DATA:[cisco][asa][ipsec][seq_num]}\) from %{IP:[source][ip]} \(user=\s?%{DATA:[source][user][name]}\) to %{IP:[destination][ip]} that failed anti-replay checking\.?
# ASA-4-419001
CISCOFW419001 %{CISCO_ACTION:[cisco][asa][outcome]} %{WORD:[cisco][asa][network][transport]} packet from %{NOTSPACE:[observer][ingress][interface][name]}:%{IP:[source][ip]}/%{INT:[source][port]:int} to %{NOTSPACE:[observer][egress][interface][name]}:%{IP:[destination][ip]}/%{INT:[destination][port]:int}, reason: %{GREEDYDATA:[event][reason]}
# ASA-4-419002
CISCOFW419002 %{CISCO_REASON:[event][reason]} from %{DATA:[observer][ingress][interface][name]}:%{IP:[source][ip]}/%{INT:[source][port]:int} to %{DATA:[observer][egress][interface][name]}:%{IP:[destination][ip]}/%{INT:[destination][port]:int} with different initial sequence number
# ASA-4-500004
CISCOFW500004 %{CISCO_REASON:[event][reason]} for protocol=%{WORD:[cisco][asa][network][transport]}, from %{IP:[source][ip]}/%{INT:[source][port]:int} to %{IP:[destination][ip]}/%{INT:[destination][port]:int}
# ASA-6-602303, ASA-6-602304
CISCOFW602303_602304 %{WORD:[cisco][asa][network][type]}: An %{CISCO_DIRECTION:[cisco][asa][network][direction]} %{DATA:[cisco][asa][ipsec][tunnel_type]} SA \(SPI=\s?%{DATA:[cisco][asa][ipsec][spi]}\) between %{IP:[source][ip]} and %{IP:[destination][ip]} \(user=\s?%{DATA:[source][user][name]}\) has been %{CISCO_ACTION:[cisco][asa][outcome]}
# ASA-7-710001, ASA-7-710002, ASA-7-710003, ASA-7-710005, ASA-7-710006
CISCOFW710001_710002_710003_710005_710006 %{WORD:[cisco][asa][network][transport]} (?:request|access) %{CISCO_ACTION:[cisco][asa][outcome]} from %{IP:[source][ip]}/%{INT:[source][port]:int} to %{DATA:[observer][egress][interface][name]}:%{IP:[destination][ip]}/%{INT:[destination][port]:int}
# ASA-6-713172
CISCOFW713172 Group = %{DATA:[cisco][asa][source][group]}, IP = %{IP:[source][ip]}, Automatic NAT Detection Status:\s+Remote end\s*%{DATA:[@metadata][cisco][asa][remote_nat]}\s*behind a NAT device\s+This\s+end\s*%{DATA:[@metadata][cisco][asa][local_nat]}\s*behind a NAT device
# ASA-4-733100
CISCOFW733100 \[\s*%{DATA:[cisco][asa][burst][object]}\s*\] drop %{DATA:[cisco][asa][burst][id]} exceeded. Current burst rate is %{INT:[cisco][asa][burst][current_rate]:int} per second, max configured rate is %{INT:[cisco][asa][burst][configured_rate]:int}; Current average rate is %{INT:[cisco][asa][burst][avg_rate]:int} per second, max configured rate is %{INT:[cisco][asa][burst][configured_avg_rate]:int}; Cumulative total count is %{INT:[cisco][asa][burst][cumulative_count]:int}
#== End Cisco ASA ==
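The `%{CISCOTAG}` token captured by `CISCO_TAGGED_SYSLOG` has a fixed facility-severity-mnemonic shape. A small illustrative Python sketch of splitting that tag (the sample line is invented, and this is not the plugin's own parsing code):

```python
import re

# Mirrors CISCOTAG ([A-Z0-9]+-%{INT}-[A-Z0-9_]+), with named groups added.
tag = re.compile(r"%(?P<facility>[A-Z0-9]+)-(?P<severity>\d+)-(?P<mnemonic>[A-Z0-9_]+)")

line = "%ASA-6-302013: Built inbound TCP connection 9 for outside:10.0.0.1/4242"
m = tag.match(line)
assert m.group("facility") == "ASA"
assert int(m.group("severity")) == 6   # 6 = informational
assert m.group("mnemonic") == "302013"
```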
IPTABLES_TCP_FLAGS (CWR |ECE |URG |ACK |PSH |RST |SYN |FIN )*
IPTABLES_TCP_PART (?:SEQ=%{INT:[iptables][tcp][seq]:int}\s+)?(?:ACK=%{INT:[iptables][tcp][ack]:int}\s+)?WINDOW=%{INT:[iptables][tcp][window]:int}\s+RES=0x%{BASE16NUM:[iptables][tcp_reserved_bits]}\s+%{IPTABLES_TCP_FLAGS:[iptables][tcp][flags]}
IPTABLES4_FRAG (?:(?<= )(?:CE|DF|MF))*
IPTABLES4_PART SRC=%{IPV4:[source][ip]}\s+DST=%{IPV4:[destination][ip]}\s+LEN=(?:%{INT:[iptables][length]:int})?\s+TOS=(?:0|0x%{BASE16NUM:[iptables][tos]})?\s+PREC=(?:0x%{BASE16NUM:[iptables][precedence_bits]})?\s+TTL=(?:%{INT:[iptables][ttl]:int})?\s+ID=(?:%{INT:[iptables][id]})?\s+(?:%{IPTABLES4_FRAG:[iptables][fragment_flags]})?(?:\s+FRAG: %{INT:[iptables][fragment_offset]:int})?
IPTABLES6_PART SRC=%{IPV6:[source][ip]}\s+DST=%{IPV6:[destination][ip]}\s+LEN=(?:%{INT:[iptables][length]:int})?\s+TC=(?:0|0x%{BASE16NUM:[iptables][tos]})?\s+HOPLIMIT=(?:%{INT:[iptables][ttl]:int})?\s+FLOWLBL=(?:%{INT:[iptables][flow_label]})?
IPTABLES IN=(?:%{NOTSPACE:[observer][ingress][interface][name]})?\s+OUT=(?:%{NOTSPACE:[observer][egress][interface][name]})?\s+(?:MAC=(?:%{COMMONMAC:[destination][mac]})?(?::%{COMMONMAC:[source][mac]})?(?::[A-Fa-f0-9]{2}:[A-Fa-f0-9]{2})?\s+)?(:?%{IPTABLES4_PART}|%{IPTABLES6_PART}).*?PROTO=(?:%{WORD:[network][transport]})?\s+SPT=(?:%{INT:[source][port]:int})?\s+DPT=(?:%{INT:[destination][port]:int})?\s+(?:%{IPTABLES_TCP_PART})?
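The IPTABLES composites above boil down to space-separated `KEY=VALUE` tokens (with empty values allowed, e.g. for `IN=`/`OUT=`). A rough, generic Python sketch of that token structure, using an invented sample line rather than the exact fields the pattern captures:

```python
import re

line = ("IN=eth0 OUT= MAC=00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd "
        "SRC=192.0.2.10 DST=198.51.100.7 LEN=60 TTL=64 PROTO=TCP SPT=51512 DPT=22")

# KEY=VALUE tokens; \S* (not \S+) keeps empty values such as OUT=.
fields = dict(re.findall(r"(\w+)=(\S*)", line))

assert fields["SRC"] == "192.0.2.10"
assert fields["OUT"] == ""          # egress interface unset
assert int(fields["DPT"]) == 22
```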
# Shorewall firewall logs
SHOREWALL (?:%{SYSLOGTIMESTAMP:timestamp}) (?:%{WORD:[observer][hostname]}) .*Shorewall:(?:%{WORD:[shorewall][firewall][type]})?:(?:%{WORD:[shorewall][firewall][action]})?.*%{IPTABLES}
#== End Shorewall
#== SuSE Firewall 2 ==
SFW2_LOG_PREFIX SFW2\-INext\-%{NOTSPACE:[suse][firewall][action]}
SFW2 ((?:%{SYSLOGTIMESTAMP:timestamp})|(?:%{TIMESTAMP_ISO8601:timestamp}))\s*%{HOSTNAME:[observer][hostname]}.*?%{SFW2_LOG_PREFIX:[suse][firewall][log_prefix]}\s*%{IPTABLES}
#== End SuSE ==