logstash-output-sumologic 1.0.4 → 1.1.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +4 -22
- data/DEVELOPER.md +25 -0
- data/README.md +50 -39
- data/lib/logstash/outputs/sumologic.rb +266 -41
- data/lib/logstash/plugin_mixins/http_client.rb +187 -0
- data/logstash-output-sumologic.gemspec +5 -5
- data/spec/outputs/sumologic_spec.rb +340 -9
- data/spec/spec_helper.rb +53 -0
- metadata +31 -30
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e52d2f2af928de4e3f0318396239ad13552c8381
+  data.tar.gz: adbeece1b46c5a0409eff70626e81ac717e4dfb1
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b34303f871a5efbbd9a9a47f0f0eee6edd59672eb321daa7c25883fb0d4972d249eea597e64e4cf50a26e2f9ed9271ce86b010214e60f3357a16b093e6cd1cfb
+  data.tar.gz: 6b0e002735a97349cf933f35d16bb7a5cf69c77833c295535bf25c7ecafe4e7c28efbb73d6548ba9573f6d2edb368051bf169e652ada4c792e93ad258ca3ca21
data/CHANGELOG.md
CHANGED

@@ -1,23 +1,5 @@
-## 1.
-
-
-### 1.0.1
-- Update gem description
+## 1.1.0
+- Support metrics sending
 
-
-
-- Support pararmeter `json_mapping` to filter json fields. For example:
-```
-json_mapping => {
-  "foo" => "%{@timestamp}"
-  "bar" => "%{message}"
-}
-```
-will create message as:
-```
-{"foo":"2016-07-27T18:37:59.460Z","bar":"hello world"}
-{"foo":"2016-07-27T18:38:01.222Z","bar":"bye!"}
-```
-
-### 1.0.3
-- Remove version limitation so it works with Log Stash 5.0.0 core
+## 1.0.0
+- Initial release
data/DEVELOPER.md
CHANGED

@@ -1,2 +1,27 @@
 # logstash-output-sumologic
 Logstash output plugin for delivering log to Sumo Logic cloud service through HTTP source.
+
+# How to build .gem file from repository
+Open logstash-output-sumologic.gemspec and make any necessary configuration changes.
+In your local Git clone, run:
+```sh
+gem build logstash-output-sumologic.gemspec
+```
+You will get a .gem file in the same directory as `logstash-output-sumologic-x.y.z.gem`
+Remove old version of plugin (optional):
+```sh
+bin/logstash-plugin remove logstash-output-sumologic
+```
+And then install the plugin locally:
+```sh
+bin/logstash-plugin install <full path of .gem>
+```
+
+# How to run test with rspec
+The test requires JRuby to run. So you need to install [JRuby](http://jruby.org/) and [RVM](https://rvm.io/) (for switching between JRuby and Ruby) first.
+And then run:
+```bash
+rvm use jruby
+rspec spec/
+```
+
data/README.md
CHANGED

@@ -1,9 +1,10 @@
 # Logstash Sumo Logic Output Plugin
 
-This is
+This is an output plugin for [Logstash](https://github.com/elastic/logstash).
 It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
 
 ## Getting Started
+This guide is for the users just want download the binary and make the plugin work. For the developer, please refer to the [Developer Guide](DEVELOPER.md)
 
 ### 1. Create a Sumo Logic HTTP source
 Create a [Sumo Logic](https://www.sumologic.com/) free account if you currently don't have one.
@@ -16,57 +17,67 @@ https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
 ### 2. Install LogStash on your machine
 Following this [instruction](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html) to download and install LogStash. This plugin requires Logstash 2.3 or higher to work.
 
-### 3.
-In your local Git clone, running:
+### 3. Install latest Logstash Sumo Logic Output plugin from [RubyGems](https://rubygems.org/gems/logstash-output-sumologic)
 ```sh
-
+bin/logstash-plugin install logstash-output-sumologic
 ```
-
-
-### 4. Install plugin into LogStash
-In the Logstash home, running:
-```sh
-bin/logstash-plugin install <path of .gem>
-```
-
-### 5. Start Logstash and send log
+### 4. Start Logstash and send log
 In the Logstash home, running:
 ```sh
-bin/logstash -e 'input{stdin{}}output{sumologic{url=>"<
+bin/logstash -e 'input{stdin{}}output{sumologic{url=>"<URL from step 1>"}}'
 ```
 This will send any input from console to Sumo Logic cloud service.
 
-###
-
-
-### Furthermore
-- Try it with different input/filter/codec plugins
-- Start LogStash as a service/daemon in your production environment
-- Report any issue or idea through [Git Hub](https://github.com/SumoLogic/logstash-output-sumologic)
-
-## Parameters
-This plugin is based on [logstash-mixin-http_client](https://github.com/logstash-plugins/logstash-mixin-http_client) thus it supports all parameters like proxy, authentication, retry, etc.
+### 5. Try out samples
+Open samples/sample-logs.conf, replace #URL# placeholder as real URL got from step 1
 
-
+Launch sample with:
+```sh
+bin/logstash -f samples/log.conf
 ```
-
-# on Sumo Logic web app. See http://help.sumologic.com/Send_Data/Sources/HTTP_Source
-config :url, :validate => :string, :required => true
-
-# Include extra HTTP headers on request if needed
-config :extra_headers, :validate => :hash, :default => []
-
-# The formatter of message, by default is message with timestamp and host as prefix
-config :format, :validate => :string, :default => "%{@timestamp} %{host} %{message}"
-
-# Hold messages for at least (x) seconds as a pile; 0 means sending every events immediately
-config :interval, :validate => :number, :default => 0
+The input from console will be sent to Sumo Logic cloud service as log lines.
 
-
-
+Open samples/sample-metrics.conf, replace #URL# placeholder as real URL got from step 1
+(This sample may require installing the [plugins-filters-metrics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html) plugin first)
 
+Launch sample with:
+```sh
+bin/logstash -f samples/metrics.conf
 ```
+A mocked event will be sent to Sumo Logic cloud service as 1 minute and 15 minutes rate metrics.
 
+### 6. Get result from Sumo Logic web app
+Logon to Sumo Logic [web app](https://prod-www.sumologic.net/ui/) and run
+- [Log Search](http://help.sumologic.com/Search)
+- [Live Tail](http://help.sumologic.com/Search/Live_Tail)
+- [Metrics Search](https://help.sumologic.com/Metrics)
 
+## What's Next
+- Try it with different input/filter/codec plugins
+- Start LogStash as a service/daemon in your production environment
+- Report any issue or idea through [Git Hub](https://github.com/SumoLogic/logstash-output-sumologic)
 
+## Parameters of Plugin
+| Parameter           | Type    | Required? | Default value | Decription |
+| ------------------- | ------- | --------- | ------------- | --------------------- |
+| `url`               | string  | Yes       |               | HTTP Source URL
+| `source_category`   | string  | No        |               | Source category to appear when searching in Sumo Logic by `_sourceCategory`. If not specified, the source category of the HTTP source will be used.
+| `source_name`       | string  | No        |               | Source name to appear when searching in Sumo Logic by `_sourceName`.
+| `source_host`       | string  | No        |               | Source host to appear when searching in Sumo Logic by `_sourceHost`. If not specified, it will be the machine host name.
+| `extra_headers`     | hash    | No        |               | Extra fields need to be send in HTTP header.
+| `compress`          | boolean | No        | `false`       | Enable or disable compression.
+| `compress_encoding` | string  | No        | `'deflate'`   | Encoding method of comressing, can only be `'deflate'` or `'gzip'`.
+| `interval`          | number  | No        | `0`           | The maximum time for waiting before send in batch, in ms.
+| `format`            | string  | No        | `"%{@timestamp} %{host} %{message}"` | For log only, the formatter of log lines. Use `%{@json}` as the placeholder for whole event json.
+| `json_mapping`      | hash    | No        |               | Override the structure of `{@json}` tag with the given key value pairs.
+| `metrics`           | hash    | No        |               | If defined, the event will be sent as metrics. Keys will be the metrics name and values will be the metrics value.
+| `metrics_format`    | string  | No        | `'cabon2'`    | Metrics format, can only be `'graphite'` or `'carbon2'`.
+| `metrics_name`      | string  | No        | `*`           | Define the metric name looking, the placeholder '*' will be replaced with the actual metric name.
+| `intrinsic_tags`    | hash    | No        |               | For carbon2 format only, send extra intrinsic key-value pairs other than `metric` (which is the metric name).
+| `meta_tags`         | hash    | No        |               | For carbon2 format only, send metadata key-value pairs.
+| `fields_as_metrics` | boolean | No        | `false`       | If `true`, all fields in logstash event with number value will be sent as a metrics (with filtering by `fields_include` and `fields_exclude` ; the `metics` parameter is ignored.
+| `fields_include`    | array   | No        | all fields    | Working with `fields_as_metrics` parameter, only the fields which full name matching these RegEx pattern(s) will be included in metrics.
+| `fields_exclude`    | array   | No        | none          | Working with `fields_as_metrics` parameter, the fields which full name matching these RegEx pattern(s) will be ignored in metrics.
+
+This plugin is based on [logstash-mixin-http_client](https://github.com/logstash-plugins/logstash-mixin-http_client) thus we also support all HTTP layer parameters like proxy, authentication, retry, etc.
 
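The `format` and `json_mapping` parameters in the table above work together: `%{@json}` expands to the JSON dump of the whole event, and `json_mapping` replaces that dump with a remapped hash. A minimal standalone Ruby sketch of the remapping, using a plain hash in place of a real Logstash event (the `event`, `mapping`, and `render` names are illustrative, not plugin API):

```ruby
require "json"

# Stand-in for a Logstash event (a real event responds to sprintf/to_hash).
event = {
  "@timestamp" => "2016-07-27T18:37:59.460Z",
  "host"       => "web-01",
  "message"    => "hello world"
}

# json_mapping => { "foo" => "%{@timestamp}", "bar" => "%{message}" }
mapping = { "foo" => "%{@timestamp}", "bar" => "%{message}" }

# Expand %{field} references against the event, roughly as event.sprintf would.
render = lambda do |template|
  template.gsub(/%\{([^}]+)\}/) { event.fetch(Regexp.last_match(1), "") }
end

mapped = mapping.each_with_object({}) { |(k, v), acc| acc[k] = render.call(v) }
puts JSON.generate(mapped)
# {"foo":"2016-07-27T18:37:59.460Z","bar":"hello world"}
```

This matches the example output shown in the old CHANGELOG for the `json_mapping` feature.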
data/lib/logstash/outputs/sumologic.rb
CHANGED

@@ -6,6 +6,7 @@ require "logstash/plugin_mixins/http_client"
 require 'thread'
 require "uri"
 require "zlib"
+require "stringio"
 
 # Now you can use logstash to deliver logs to Sumo Logic
 #
@@ -17,35 +18,98 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
   include LogStash::PluginMixins::HttpClient
 
   config_name "sumologic"
-
+
+  CONTENT_TYPE = "Content-Type"
+  CONTENT_TYPE_LOG = "text/plain"
+  CONTENT_TYPE_GRAPHITE = "application/vnd.sumologic.graphite"
+  CONTENT_TYPE_CARBON2 = "application/vnd.sumologic.carbon2"
+  CATEGORY_HEADER = "X-Sumo-Category"
+  HOST_HEADER = "X-Sumo-Host"
+  NAME_HEADER = "X-Sumo-Name"
+  CLIENT_HEADER = "X-Sumo-Client"
+  TIMESTAMP_FIELD = "@timestamp"
+  METRICS_NAME_PLACEHOLDER = "*"
+  GRAPHITE = "graphite"
+  CARBON2 = "carbon2"
+  CONTENT_ENCODING = "Content-Encoding"
+  DEFLATE = "deflate"
+  GZIP = "gzip"
+  ALWAYS_EXCLUDED = [ "@timestamp", "@version" ]
+
   # The URL to send logs to. This should be given when creating a HTTP Source
   # on Sumo Logic web app. See http://help.sumologic.com/Send_Data/Sources/HTTP_Source
   config :url, :validate => :string, :required => true
 
+  # Define the source category metadata
+  config :source_category, :validate => :string
+
+  # Define the source host metadata
+  config :source_host, :validate => :string
+
+  # Define the source name metadata
+  config :source_name, :validate => :string
+
   # Include extra HTTP headers on request if needed
-  config :extra_headers, :validate => :hash
+  config :extra_headers, :validate => :hash
 
-  #
-
-
+  # Compress the payload
+  config :compress, :validate => :boolean, :default => false
+
+  # The encoding method of compress
+  config :compress_encoding, :validate =>:string, :default => DEFLATE
 
   # Hold messages for at least (x) seconds as a pile; 0 means sending every events immediately
   config :interval, :validate => :number, :default => 0
 
-  #
-
+  # The formatter of log message, by default is message with timestamp and host as prefix
+  # Use %{@json} tag to send whole event
+  config :format, :validate => :string, :default => "%{@timestamp} %{host} %{message}"
 
-  #
+  # Override the structure of @json tag with the given key value pairs
   config :json_mapping, :validate => :hash
+
+  # Send metric(s) if configured. This is a hash with k as metric name and v as metric value
+  # Both metric names and values support dynamic strings like %{host}
+  # For example:
+  #     metrics => { "%{host}/uptime" => "%{uptime_1m}" }
+  config :metrics, :validate => :hash
+
+  # Create metric(s) automatically from @json fields if configured.
+  config :fields_as_metrics, :validate => :boolean, :default => false
+
+  config :fields_include, :validate => :array, :default => [ ]
+
+  config :fields_exclude, :validate => :array, :default => [ ]
+
+  # Defines the format of the metric, support "graphite" or "carbon2"
+  config :metrics_format, :validate => :string, :default => CARBON2
 
+  # Define the metric name looking, the placeholder '*' will be replaced with the actual metric name
+  # For example:
+  #     metrics => { "uptime.1m" => "%{uptime_1m}" }
+  #     metrics_name => "mynamespace.*"
+  # will produce metrics as:
+  #     "mynamespace.uptime.1m xxx 1234567"
+  config :metrics_name, :validate => :string, :default => METRICS_NAME_PLACEHOLDER
+
+  # For carbon2 metrics format only, define the intrinsic tags (which will be used to identify the metrics)
+  # There is always an intrinsic tag as "metric" which value is from metrics_name
+  config :intrinsic_tags, :validate => :hash, :default => {}
+
+  # For carbon2 metrics format only, define the meta tags (which will NOT be used to identify the metrics)
+  config :meta_tags, :validate => :hash, :default => {}
+
   public
   def register
+    @source_host = `hostname`.strip unless @source_host
+
     # initialize request pool
     @request_tokens = SizedQueue.new(@pool_max)
     @pool_max.times { |t| @request_tokens << true }
     @timer = Time.now
     @pile = Array.new
     @semaphore = Mutex.new
+    connect
   end # def register
 
   public
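The `metrics_name` comment above describes how the `*` placeholder is substituted into the metric name before `event.sprintf` expansion. A tiny standalone sketch of just that substitution (the `metrics_full_name` helper is hypothetical, shown only to illustrate the `gsub` against the placeholder):

```ruby
METRICS_NAME_PLACEHOLDER = "*"

# Replace the '*' placeholder in metrics_name with the actual metric key,
# as the plugin's get_metrics_name does.
def metrics_full_name(metrics_name, key)
  metrics_name.gsub(METRICS_NAME_PLACEHOLDER, key)
end

puts metrics_full_name("mynamespace.*", "uptime.1m")
# mynamespace.uptime.1m
```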
@@ -61,23 +125,9 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
       return
     end
 
-    content =
+    content = event2content(event)
+    queue_and_send(content)
 
-    if @interval <= 0 # means send immediately
-      send_request(content)
-      return
-    end
-
-    @semaphore.synchronize {
-      now = Time.now
-      @pile << content
-
-      if now - @timer > @interval # ready to send
-        send_request(@pile.join($/))
-        @timer = now
-        @pile.clear
-      end
-    }
   end # def receive
 
   public
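The rewritten `receive` above delegates batching to `queue_and_send`: with `interval <= 0` every event is sent immediately, otherwise lines pile up under a mutex and are flushed once the interval has elapsed. A self-contained sketch of that pile/interval logic, with a callable standing in for `send_request` (the `Pile` class name and block argument are illustrative, not plugin API):

```ruby
require "thread"

# Minimal sketch of the pile/interval batching used by queue_and_send.
class Pile
  def initialize(interval, &sender)
    @interval  = interval
    @sender    = sender
    @pile      = []
    @timer     = Time.now
    @semaphore = Mutex.new
  end

  def push(content)
    if @interval <= 0 # send every event immediately
      @sender.call(content)
    else
      @semaphore.synchronize do
        @pile << content
        now = Time.now
        if now - @timer > @interval # pile is old enough, flush it
          @sender.call(@pile.join($/)) # $/ is the line separator, "\n"
          @timer = now
          @pile.clear
        end
      end
    end
  end
end

sent = []
pile = Pile.new(0) { |payload| sent << payload }
pile.push("line 1")
pile.push("line 2")
# With interval 0 each line is sent on its own: sent == ["line 1", "line 2"]
```

Note that flushing happens only when a new event arrives; there is no background timer thread in this version of the plugin.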
@@ -88,15 +138,35 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
     }
     client.close
   end # def close
+
+
+  private
+  def connect
+    # TODO: ping endpoint make sure config correct
+  end # def connect
 
   private
-  def
-
-
-    Zlib::Deflate.deflate(content)
+  def queue_and_send(content)
+    if @interval <= 0 # means send immediately
+      send_request(content)
     else
-
+      @semaphore.synchronize {
+        now = Time.now
+        @pile << event
+
+        if now - @timer > @interval # ready to send
+          send_request(@pile.join($/))
+          @timer = now
+          @pile.clear
+        end
+      }
     end
+  end
+
+  private
+  def send_request(content)
+    token = @request_tokens.pop
+    body = compress(content)
     headers = get_headers()
 
     request = client.send(:parallel).send(:post, @url, :body => body, :headers => headers)
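`send_request` above compresses the payload before posting it; the `compress` and `gzip` helpers themselves appear in the next hunk. A standalone sketch of the two supported encodings and their roundtrips, using only stdlib `Zlib` (the variable names are illustrative):

```ruby
require "zlib"
require "stringio"

content = "2016-07-27T18:37:59.460Z web-01 hello world"

# deflate encoding (the plugin default)
deflated = Zlib::Deflate.deflate(content)
raise "deflate roundtrip failed" unless Zlib::Inflate.inflate(deflated) == content

# gzip encoding, written through an in-memory StringIO as the plugin's gzip helper does
stream = StringIO.new
gz = Zlib::GzipWriter.new(stream)
gz.write(content)
gz.close # finalizes the gzip trailer; the data lives in stream.string
gzipped = stream.string
raise "gzip roundtrip failed" unless Zlib::GzipReader.new(StringIO.new(gzipped)).read == content
```

The matching `Content-Encoding` header (`deflate` or `gzip`) is set in `get_headers` below so the Sumo Logic endpoint can decode the body.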
@@ -128,29 +198,146 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
     request.call
   end # def send_request
 
+  private
+  def compress(content)
+    if @compress
+      if @compress_encoding == GZIP
+        result = gzip(content)
+        result.bytes.to_a.pack('c*')
+      else
+        Zlib::Deflate.deflate(content)
+      end
+    else
+      content
+    end
+  end # def compress
+
+  private
+  def gzip(content)
+    stream = StringIO.new("w")
+    stream.set_encoding("ASCII")
+    gz = Zlib::GzipWriter.new(stream)
+    gz.write(content)
+    gz.close
+    stream.string.bytes.to_a.pack('c*')
+  end # def gzip
+
   private
   def get_headers()
-
-    base
-    base.
+
+    base = {}
+    base = @extra_headers if @extra_headers.is_a?(Hash)
+
+    base[CATEGORY_HEADER] = @source_category if @source_category
+    base[HOST_HEADER] = @source_host if @source_host
+    base[NAME_HEADER] = @source_name if @source_name
+    base[CLIENT_HEADER] = 'logstash-output-sumologic'
+
+    if @compress
+      if @compress_encoding == GZIP
+        base[CONTENT_ENCODING] = GZIP
+      elsif
+        base[CONTENT_ENCODING] = DEFLATE
+      else
+        log_failure(
+          "Unrecogonized compress encoding",
+          :encoding => @compress_encoding
+        )
+      end
+    end
+
+    if @metrics || @fields_as_metrics
+      if @metrics_format == CARBON2
+        base[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
+      elsif @metrics_format == GRAPHITE
+        base[CONTENT_TYPE] = CONTENT_TYPE_GRAPHITE
+      else
+        log_failure(
+          "Unrecogonized metrics format",
+          :format => @metrics_format
+        )
+      end
+    else
+      base[CONTENT_TYPE] = CONTENT_TYPE_LOG
+    end
+
+    base
+
   end # def get_headers
 
   private
-  def
-    if @
-
+  def event2content(event)
+    if @metrics || @fields_as_metrics
+      event2metrics(event)
     else
-
-
+      event2log(event)
+    end
+  end # def event2content
+
+  private
+  def event2log(event)
+    @format = "%{@json}" if @format.nil? || @format.empty?
+    expand(@format, event)
+  end # def event2log
+
+  private
+  def event2metrics(event)
+    timestamp = get_timestamp(event)
+    source = expand_hash(@metrics, event) unless @fields_as_metrics
+    source = event_as_metrics(event) if @fields_as_metrics
+    source.flat_map { |key, value|
+      get_single_line(event, key, value, timestamp)
+    }.reject(&:nil?).join("\n")
+  end # def event2metrics
+
+  def event_as_metrics(event)
+    hash = event2hash(event)
+    acc = {}
+    hash.keys.each do |field|
+      value = hash[field]
+      dotify(acc, field, value, nil)
+    end
+    acc
+  end # def event_as_metrics
+
+  def get_single_line(event, key, value, timestamp)
+    full = get_metrics_name(event, key)
+    if !ALWAYS_EXCLUDED.include?(full) && \
+      (fields_include.empty? || fields_include.any? { |regexp| full.match(regexp) }) && \
+      !(fields_exclude.any? {|regexp| full.match(regexp)}) && \
+      is_number?(value)
+      if @metrics_format == CARBON2
+        @intrinsic_tags["metric"] = full
+        "#{hash2line(@intrinsic_tags, event)} #{hash2line(@meta_tags, event)}#{value} #{timestamp}"
       else
-
+        "#{full} #{value} #{timestamp}"
       end
-    event.sprintf(f)
     end
-
+  end # def get_single_line
+
+  def dotify(acc, key, value, prefix)
+    pk = prefix ? "#{prefix}.#{key}" : key.to_s
+    if value.is_a?(Hash)
+      value.each do |k, v|
+        dotify(acc, k, v, pk)
+      end
+    elsif value.is_a?(Array)
+      value.each_with_index.map { |v, i|
+        dotify(acc, i.to_s, v, pk)
+      }
+    else
+      acc[pk] = value
+    end
+  end # def dotify
+
+  private
+  def expand(template, event)
+    template = template.gsub("%{@json}", LogStash::Json.dump(event2hash(event))) if template.include? "%{@json}"
+    event.sprintf(template)
+  end # def expand
 
   private
-  def
+  def event2hash(event)
     if @json_mapping
       @json_mapping.reduce({}) do |acc, kv|
         k, v = kv
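`dotify` above flattens nested hashes and arrays from the event into dotted metric names when `fields_as_metrics` is enabled. A self-contained copy of the same recursion, runnable outside the plugin (the sample input hash is illustrative):

```ruby
# Flatten nested hashes/arrays into "a.b.c" style keys, as the plugin's dotify does.
def dotify(acc, key, value, prefix)
  pk = prefix ? "#{prefix}.#{key}" : key.to_s
  if value.is_a?(Hash)
    value.each { |k, v| dotify(acc, k, v, pk) }
  elsif value.is_a?(Array)
    value.each_with_index.map { |v, i| dotify(acc, i.to_s, v, pk) }
  else
    acc[pk] = value
  end
end

acc = {}
dotify(acc, "system", { "cpu" => { "user" => 0.4 }, "load" => [1.2, 0.9] }, nil)
# acc now maps "system.cpu.user" => 0.4, "system.load.0" => 1.2, "system.load.1" => 0.9
```

Array elements become their index in the dotted name, so every leaf gets a unique metric name.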
@@ -161,7 +348,45 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
       event.to_hash
     end
   end # def map_event
+
+  private
+  def is_number?(me)
+    me.to_f.to_s == me.to_s || me.to_i.to_s == me.to_s
+  end
+
+  private
+  def expand_hash(hash, event)
+    hash.reduce({}) do |acc, kv|
+      k, v = kv
+      exp_k = expand(k, event)
+      exp_v = expand(v, event)
+      acc[exp_k] = exp_v
+      acc
+    end # def expand_hash
+  end
 
+  private
+  def get_timestamp(event)
+    event.get(TIMESTAMP_FIELD).to_i
+  end # def get_timestamp
+
+  private
+  def get_metrics_name(event, name)
+    name = @metrics_name.gsub(METRICS_NAME_PLACEHOLDER, name) if @metrics_name
+    event.sprintf(name)
+  end # def get_metrics_name
+
+  private
+  def hash2line(hash, event)
+    if (hash.is_a?(Hash) && !hash.empty?)
+      expand_hash(hash, event).flat_map { |k, v|
+        "#{k}=#{v} "
+      }.join()
+    else
+      ""
+    end
+  end # hash2line
+
   private
   def log_failure(message, opts)
     @logger.error(message, opts)
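Putting the helpers above together: a carbon2 line is built in `get_single_line` as `<intrinsic tags> <meta tags><value> <timestamp>`, where each tag hash is rendered as `k=v ` pairs by `hash2line` and the value must pass `is_number?`. A standalone sketch with event template expansion omitted (the tag values are illustrative):

```ruby
# is_number? and hash2line as defined above, minus event template expansion.
def is_number?(me)
  me.to_f.to_s == me.to_s || me.to_i.to_s == me.to_s
end

def hash2line(hash)
  return "" unless hash.is_a?(Hash) && !hash.empty?
  hash.flat_map { |k, v| "#{k}=#{v} " }.join
end

intrinsic_tags = { "metric" => "mynamespace.uptime.1m" }
meta_tags = { "host" => "web-01" }
value = "3.5"
timestamp = 1469644679

line = nil
if is_number?(value) # "3.5".to_f.to_s == "3.5", so this passes
  line = "#{hash2line(intrinsic_tags)} #{hash2line(meta_tags)}#{value} #{timestamp}"
  puts line
end
```

Note each `k=v` pair already carries a trailing space, so intrinsic and meta tag groups end up separated by two spaces, exactly as the interpolation in `get_single_line` produces.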