logstash-output-sumologic 1.1.4 → 1.1.9

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: 2e93fc2c3b965317a3ce2e7d4dac9400bb9ef83f
-   data.tar.gz: 63e2f08e478e46ee1e9413f983dfd3c4973aa183
+ SHA256:
+   metadata.gz: 38154784dc8755290fa30a990402a34ba11074c1a0718dbca8c77431f7f1d567
+   data.tar.gz: 66ebfe180d5d6836da7a33d5b2310779d6c60a4aed44357fa72f7b9d13eb048a
  SHA512:
-   metadata.gz: d65bf23542a63ce80fe6d129bdf33d96d03b046461b344773575d9a672a8880f0f3f490a79aee8fe1d2b527ff143c39808bf0c8b05aacbcfb6e4705183cc2a2e
-   data.tar.gz: cc696890c823265d3f0b21a4cf6daf2d24dadf841f5401a80c36e0ebad6fac263582f9bafd2f4d8e628f57ec9273508bc52b2c56e4d64b44700ad33fbf69d3a5
+   metadata.gz: 87778952988d6f3cf26bda8e8d889f8b06a378a063c96caa5ddcc9968c6bfd8d0315d0e7c0d65273b11acc5fed8111be2695bed0d168f8340df7e9ceccffc7ee
+   data.tar.gz: badfe59cf7d84ce33602a35e16960a31e22c0e9af386b609c7d7e1d0d4aba3674181366545a92bf5f7065766f60b0045c374dcc2c66d97a66c2fe46c3e9ce25b
data/CHANGELOG.md CHANGED
@@ -1,11 +1,16 @@
- ## 1.1.4
- - bug fix
+ # Change Log
 
- ## 1.1.3
- - bug fix
+ ## 1.2.0
+
+ - Support message piling with both `interval` and `pile_max`
+ - Support an in-memory message queue to enhance overall throughput
+ - Retry sending when throttled or on temporary server problems
+ - Support monitoring throughput statistics in metrics
 
  ## 1.1.0
- - Support metrics sending
+
+ - Support metrics sending
 
  ## 1.0.0
- - Initial release
+
+ - Initial release
data/DEVELOPER.md CHANGED
@@ -1,27 +1,39 @@
- # logstash-output-sumologic
+ # Development Guide
+
  Logstash output plugin for delivering logs to the Sumo Logic cloud service through an HTTP source.
 
- # How to build .gem file from repository
+ ## How to build .gem file from repository
+
  Open logstash-output-sumologic.gemspec and make any necessary configuration changes.
  In your local Git clone, run:
- ```sh
+
+ ```bash
  gem build logstash-output-sumologic.gemspec
  ```
+
  You will get a .gem file in the same directory, named `logstash-output-sumologic-x.y.z.gem`.
  Remove the old version of the plugin (optional):
- ```sh
+
+ ```bash
  bin/logstash-plugin remove logstash-output-sumologic
  ```
+
  And then install the plugin locally:
- ```sh
+
+ ```bash
  bin/logstash-plugin install <full path of .gem>
  ```
 
- # How to run test with rspec
- The test requires JRuby to run. So you need to install [JRuby](http://jruby.org/) and [RVM](https://rvm.io/) (for switching between JRuby and Ruby) first.
+ ## How to run tests with rspec
+
+ The tests require JRuby to run, so you need to install [JRuby](http://jruby.org/), [bundle](https://bundler.io/bundle_install.html), and [RVM](https://rvm.io/) (for switching between JRuby and Ruby) first.
  And then run:
+
  ```bash
  rvm use jruby
+ bundle install
+ export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
  rspec spec/
  ```
 
+ The project is integrated with Travis CI now. Make sure [all tests pass](https://travis-ci.org/SumoLogic/logstash-output-sumologic) before creating a PR.
data/Gemfile CHANGED
@@ -1,3 +1,4 @@
  source 'https://rubygems.org'
  gemspec
- gem 'rspec'
+ gem 'rspec'
+ gem 'rspec-eventually'
data/README.md CHANGED
@@ -1,83 +1,155 @@
  # Logstash Sumo Logic Output Plugin
 
+ [![Build Status](https://travis-ci.org/SumoLogic/logstash-output-sumologic.svg?branch=master)](https://travis-ci.org/SumoLogic/logstash-output-sumologic)
+
  This is an output plugin for [Logstash](https://github.com/elastic/logstash).
  It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
 
+ | TLS Deprecation Notice |
+ | --- |
+ | In keeping with industry-standard security best practices, as of May 31, 2018, the Sumo Logic service will only support TLS version 1.2 going forward. Verify that all connections to Sumo Logic endpoints are made from software that supports TLS 1.2. |
+
  ## Getting Started
+
  This guide is for users who just want to download the binary and make the plugin work. Developers should refer to the [Developer Guide](DEVELOPER.md).
 
  ### 1. Create a Sumo Logic HTTP source
+
  Create a [Sumo Logic](https://www.sumologic.com/) free account if you currently don't have one.
 
  Create an [HTTP source](http://help.sumologic.com/Send_Data/Sources/HTTP_Source) in your account and get the URL for this source. It should be something like:
- ```
- https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
- ```
+ `https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX`
 
  ### 2. Install Logstash on your machine
+
  Follow this [instruction](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html) to download and install Logstash. This plugin requires Logstash 2.3 or higher to work.
 
  ### 3. Install the latest Logstash Sumo Logic Output plugin from [RubyGems](https://rubygems.org/gems/logstash-output-sumologic)
- ```sh
+
+ ```bash
  bin/logstash-plugin install logstash-output-sumologic
  ```
+
  ### 4. Start Logstash and send log
+
  In the Logstash home, run:
- ```sh
+
+ ```bash
  bin/logstash -e "input{stdin{}}output{sumologic{url=>'<URL from step 1>'}}"
  ```
+
  This will send any input from the console to the Sumo Logic cloud service.
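For reference, the same pipeline written as a standalone config file (a minimal sketch; `<URL from step 1>` is still the HTTP source URL from step 1):

```
# pipeline.conf -- sketch equivalent to the -e one-liner above
input {
  stdin {}
}
output {
  sumologic {
    url => "<URL from step 1>"
  }
}
```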
 
  ### 5. Try out samples
- Open samples/sample-logs.conf, replace #URL# placeholder as real URL got from step 1
+
+ #### Send Log lines
+
+ Set the URL you got from step 1 as an environment variable:
+
+ ```bash
+ export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
+ ```
 
  Launch the sample with:
- ```sh
+
+ ```bash
  bin/logstash -f samples/log.conf
  ```
+
  The input from the console will be sent to the Sumo Logic cloud service as log lines.
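The shipped samples presumably read this variable through Logstash's environment-variable substitution; in your own pipeline config the same reference would look like this (a sketch, assuming `sumo_url` is exported as above):

```
output {
  sumologic {
    # ${sumo_url} is resolved from the environment by Logstash
    url => "${sumo_url}"
  }
}
```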
 
- Open samples/sample-metrics.conf, replace #URL# placeholder as real URL got from step 1
- (This sample may require installing the [plugins-filters-metrics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html) plugin first)
+ #### Send Metrics
+
+ Set the URL you got from step 1 as an environment variable:
+
+ ```bash
+ export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
+ ```
+
+ Install the [plugins-filters-metrics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html) plugin.
 
  Launch the sample with:
- ```sh
+
+ ```bash
  bin/logstash -f samples/metrics.conf
  ```
- A mocked event will be sent to Sumo Logic cloud service as 1 minute and 15 minutes rate metrics.
+
+ Mocked events will be sent to the Sumo Logic server as 1-minute and 15-minute rate metrics.
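Metrics can also be sent without the sample file by setting the `metrics` parameters directly. A minimal sketch, reusing the example from the plugin's inline docs (`uptime_1m` is a hypothetical event field):

```
output {
  sumologic {
    url          => "${sumo_url}"
    # each key/value pair becomes one metric data point
    metrics      => { "uptime.1m" => "%{uptime_1m}" }
    # "*" is replaced by the metric name, giving "mynamespace.uptime.1m"
    metrics_name => "mynamespace.*"
  }
}
```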
 
  ### 6. Get result from Sumo Logic web app
- Logon to Sumo Logic [web app](https://service.sumologic.com/) and run
- - [Log Search](http://help.sumologic.com/Search)
- - [Live Tail](http://help.sumologic.com/Search/Live_Tail)
- - [Metrics Search](https://help.sumologic.com/Metrics)
+
+ Log on to the Sumo Logic [web app](https://service.sumologic.com/) and run
+
+ - [Log Search](http://help.sumologic.com/Search)
+ - [Live Tail](http://help.sumologic.com/Search/Live_Tail)
+ - [Metrics Search](https://help.sumologic.com/Metrics)
 
  ## What's Next
+
  - Try it with different input/filter/codec plugins
- - Start LogStash as a service/daemon in your production environment
+ - Start Logstash as a service/daemon in your production environment
  - Report any issue or idea through [GitHub](https://github.com/SumoLogic/logstash-output-sumologic)
 
  ## Parameters of Plugin
- | Parameter | Type | Required? | Default value | Decription |
- | ------------------- | ------- | --------- | :---------------: | --------------------- |
- | `url` | string | Yes | | HTTP Source URL
- | `source_category` | string | No | | Source category to appear when searching in Sumo Logic by `_sourceCategory`. If not specified, the source category of the HTTP source will be used.
- | `source_name` | string | No | | Source name to appear when searching in Sumo Logic by `_sourceName`.
- | `source_host` | string | No | | Source host to appear when searching in Sumo Logic by `_sourceHost`. If not specified, it will be the machine host name.
- | `extra_headers` | hash | No | | Extra fields need to be send in HTTP header.
- | `compress` | boolean | No | `false` | Enable or disable compression.
- | `compress_encoding` | string | No | `"deflate"` | Encoding method of comressing, can only be `"deflate"` or `"gzip"`.
- | `interval` | number | No | `0` | The maximum time for waiting before send in batch, in seconds.
- | `format` | string | No | `"%{@timestamp} %{host} %{message}"` | For log only, the formatter of log lines. Use `%{@json}` as the placeholder for whole event json.
- | `json_mapping` | hash | No | | Override the structure of `{@json}` tag with the given key value pairs. <br />For example:<br />`json_mapping => { "foo" => "%{@timestamp}" "bar" => "%{message}" }`<br />will create messages as:<br />`{"foo":"2016-07-27T18:37:59.460Z","bar":"hello world"}`<br />`{"foo":"2016-07-27T18:38:01.222Z","bar":"bye!"}`
- | `metrics` | hash | No | | If defined, the event will be sent as metrics. Keys will be the metrics name and values will be the metrics value.
- | `metrics_format` | string | No | `"cabon2"` | Metrics format, can only be `"graphite"` or `"carbon2"`.
- | `metrics_name` | string | No | `"*"` | Define the metric name looking, the placeholder "*" will be replaced with the actual metric name.
- | `intrinsic_tags` | hash | No | | For carbon2 format only, send extra intrinsic key-value pairs other than `metric` (which is the metric name).
- | `meta_tags` | hash | No | | For carbon2 format only, send metadata key-value pairs.
- | `fields_as_metrics` | boolean | No | `false` | If `true`, all fields in logstash event with number value will be sent as a metrics (with filtering by `fields_include` and `fields_exclude` ; the `metics` parameter is ignored.
- | `fields_include` | array | No | all fields | Working with `fields_as_metrics` parameter, only the fields which full name matching these RegEx pattern(s) will be included in metrics.
- | `fields_exclude` | array | No | none | Working with `fields_as_metrics` parameter, the fields which full name matching these RegEx pattern(s) will be ignored in metrics.
-
- This plugin is based on [logstash-mixin-http_client](https://github.com/logstash-plugins/logstash-mixin-http_client) thus we also support all HTTP layer parameters like proxy, authentication, retry, etc.
 
+ | Parameter | Type | Required? | Default value | Description |
+ | ---------------------- | ------- | --------- | :-----------: | --------------------- |
+ | `url` | string | Yes | | HTTP Source URL
+ | `source_category` | string | No | `Logstash` | Source category to appear when searching in Sumo Logic by `_sourceCategory`. Use an empty string to keep the source category of the HTTP source.
+ | `source_name` | string | No | `logstash-output-sumologic` | Source name to appear when searching in Sumo Logic by `_sourceName`. Use an empty string to keep the source name of the HTTP source.
+ | `source_host` | string | No | machine name | Source host to appear when searching in Sumo Logic by `_sourceHost`. Use an empty string to keep the source host of the HTTP source.
+ | `extra_headers` | hash | No | | Extra fields to be sent in HTTP headers.
+ | `compress` | boolean | No | `false` | Enable or disable compression.
+ | `compress_encoding` | string | No | `"deflate"` | Encoding method of compressing, can only be `"deflate"` or `"gzip"`.
+ | `interval` | number | No | `0` | The maximum time to wait before sending the message pile, in seconds.
+ | `pile_max` | number | No | `102400` | The maximum size of the message pile, in bytes.
+ | `queue_max` | number | No | `4096` | The maximum number of message piles that can be held in memory.
+ | `sender_max` | number | No | `100` | The maximum number of HTTP senders working in parallel.
+ | `format` | string | No | `"%{@timestamp} %{host} %{message}"` | For log only, the formatter of log lines. Use `%{@json}` as the placeholder for the whole event JSON.
+ | `json_mapping` | hash | No | | Override the structure of the `%{@json}` tag with the given key-value pairs.<br />For example:<br />`json_mapping => { "foo" => "%{@timestamp}" "bar" => "%{message}" }`<br />will create messages as:<br />`{"foo":"2016-07-27T18:37:59.460Z","bar":"hello world"}`<br />`{"foo":"2016-07-27T18:38:01.222Z","bar":"bye!"}`
+ | `metrics` | hash | No | | If defined, the event will be sent as metrics. Keys will be the metric names and values will be the metric values.
+ | `metrics_format` | string | No | `"carbon2"` | Metrics format, can only be `"graphite"` or `"carbon2"`.
+ | `metrics_name` | string | No | `"*"` | Defines how the metric name looks; the placeholder "*" will be replaced with the actual metric name.
+ | `intrinsic_tags` | hash | No | | For carbon2 format only, send extra intrinsic key-value pairs other than `metric` (which is the metric name).
+ | `meta_tags` | hash | No | | For carbon2 format only, send metadata key-value pairs.
+ | `fields_as_metrics` | boolean | No | `false` | If `true`, all fields in the Logstash event with a number value will be sent as metrics (with filtering by `fields_include` and `fields_exclude`); the `metrics` parameter is ignored.
+ | `fields_include` | array | No | all fields | Works with the `fields_as_metrics` parameter; only the fields whose full name matches these RegEx pattern(s) will be included in metrics.
+ | `fields_exclude` | array | No | none | Works with the `fields_as_metrics` parameter; the fields whose full name matches these RegEx pattern(s) will be ignored in metrics.
+ | `sleep_before_requeue` | number | No | `30` | A message that fails to send to the server will be retried after (x) seconds. It is not retried if a negative number is set.
+ | `stats_enabled` | boolean | No | `false` | If `true`, stats of this plugin will be sent as metrics.
+ | `stats_interval` | number | No | `60` | The stats will be sent every (x) seconds.
+
+ This plugin is based on [logstash-mixin-http_client](https://github.com/logstash-plugins/logstash-mixin-http_client), thus it also supports all HTTP-layer parameters like proxy, authentication, timeout, etc.
+
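Putting a few of these parameters together, a log-shipping output might look like the following sketch (category and field names are illustrative, not recommendations):

```
output {
  sumologic {
    url             => "${sumo_url}"
    source_category => "prod/frontend"   # hypothetical category
    # send the whole event as JSON, shaped by json_mapping
    format          => "%{@json}"
    json_mapping    => {
      "foo" => "%{@timestamp}"
      "bar" => "%{message}"
    }
  }
}
```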
+ ## Troubleshooting
+
+ ### Enable plugin logging
+
+ Logstash uses the log4j2 framework for [logging](https://www.elastic.co/guide/en/logstash/current/logging.html). Starting with 5.0, each individual plugin can configure its logging strategy. [Here](https://github.com/SumoLogic/logstash-output-sumologic/blob/master/samples/log4j2.properties) is a sample log4j2.properties that prints the plugin log to the console and a rotating file.
+
+ ### Optimize throughput
+
+ The throughput can be tuned with the following parameters (see the configuration sketch after this list):
+
+ - Messages will be piled before sending if both `interval` and `pile_max` are larger than `0` (i.e., multiple messages will be sent in a single HTTP request). The maximum size of a pile is defined by `pile_max`, and if no more messages come in, the piled messages will be sent out every `interval` seconds. Higher values for these parameters normally mean more messages get piled together, reducing transmission overhead and benefiting compression efficiency, but they may also increase latency because messages can be held in the plugin longer before sending.
+ - Message piles will be cached in a memory queue before sending. The maximum number of piles that can stay in the queue is defined by `queue_max`. A larger setting may be helpful if input is blocked by the plugin's consuming speed, but it may also consume more RAM (which can be set in the [JVM options](https://www.elastic.co/guide/en/logstash/current/config-setting-files.html)).
+ - The plugin will use up to `sender_max` HTTP senders in parallel for talking to the Sumo Logic server. This number is also limited by the max TCP connections.
+ - Depending on the content pattern, adjust `compress`/`compress_encoding` to balance CPU consumption against payload size.
+
+ On the other side, this version is marked as thread safe, so if necessary multiple plugins can work [in parallel as workers](https://www.elastic.co/guide/en/logstash/current/tuning-logstash.html).
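A tuning sketch combining the parameters above (the numbers are illustrative starting points, not recommendations):

```
output {
  sumologic {
    url               => "${sumo_url}"
    interval          => 10       # flush a pile at least every 10 seconds...
    pile_max          => 102400   # ...or once it reaches ~100 KB
    queue_max         => 4096     # up to 4096 piles buffered in memory
    sender_max        => 100      # up to 100 parallel HTTP senders
    compress          => true
    compress_encoding => "deflate"
  }
}
```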
+
+ ### Monitor throughput in metrics
+
+ If your Sumo Logic account supports the metrics feature, you can enable the plugin's stats monitor by setting `stats_enabled` to `true`. Every `stats_interval` seconds, a batch of metrics data points will be sent to Sumo Logic with the source category `XXX.stats` (`XXX` is the source category of the main output). They include:
+
+ | Metric | Description |
+ | ------------------------------- | ----------------------------------------------------------- |
+ | `total_input_events` | Total number of events handled since the plugin started
+ | `total_input_bytes` | Total bytes of input after encoding to payload
+ | `total_metrics_datapoints` | Total metrics data points generated from input
+ | `total_log_lines` | Total log lines generated from input
+ | `total_output_requests` | Total number of HTTP requests sent to the Sumo Logic server
+ | `total_output_bytes` | Total bytes of payloads sent to the Sumo Logic server
+ | `total_output_bytes_compressed` | Total bytes of payloads sent to the Sumo Logic server (after compressing)
+ | `total_response_times` | Total number of responses acknowledged by the Sumo Logic server
+ | `total_response_success` | Total number of accepted (200) responses acknowledged by the Sumo Logic server
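Enabling the monitor takes two parameters (a sketch; afterwards, search Sumo Logic for the `XXX.stats` source category to see the data points):

```
output {
  sumologic {
    url            => "${sumo_url}"
    stats_enabled  => true
    stats_interval => 60   # send one batch of stats data points per minute
  }
}
```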
data/lib/logstash/outputs/sumologic.rb CHANGED
@@ -1,13 +1,9 @@
  # encoding: utf-8
+ require "logstash/event"
  require "logstash/json"
  require "logstash/namespace"
  require "logstash/outputs/base"
  require "logstash/plugin_mixins/http_client"
- require 'thread'
- require "uri"
- require "zlib"
- require "stringio"
- require "socket"
 
  # Now you can use logstash to deliver logs to Sumo Logic
  #
@@ -16,26 +12,22 @@ require "socket"
  # send your logs to your account at Sumo Logic.
  #
  class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
-   include LogStash::PluginMixins::HttpClient
+   declare_threadsafe!
 
-   config_name "sumologic"
+   require "logstash/outputs/sumologic/common"
+   require "logstash/outputs/sumologic/compressor"
+   require "logstash/outputs/sumologic/header_builder"
+   require "logstash/outputs/sumologic/message_queue"
+   require "logstash/outputs/sumologic/monitor"
+   require "logstash/outputs/sumologic/payload_builder"
+   require "logstash/outputs/sumologic/piler"
+   require "logstash/outputs/sumologic/sender"
+   require "logstash/outputs/sumologic/statistics"
+
+   include LogStash::PluginMixins::HttpClient
+   include LogStash::Outputs::SumoLogic::Common
 
-   CONTENT_TYPE = "Content-Type"
-   CONTENT_TYPE_LOG = "text/plain"
-   CONTENT_TYPE_GRAPHITE = "application/vnd.sumologic.graphite"
-   CONTENT_TYPE_CARBON2 = "application/vnd.sumologic.carbon2"
-   CATEGORY_HEADER = "X-Sumo-Category"
-   HOST_HEADER = "X-Sumo-Host"
-   NAME_HEADER = "X-Sumo-Name"
-   CLIENT_HEADER = "X-Sumo-Client"
-   TIMESTAMP_FIELD = "@timestamp"
-   METRICS_NAME_PLACEHOLDER = "*"
-   GRAPHITE = "graphite"
-   CARBON2 = "carbon2"
-   CONTENT_ENCODING = "Content-Encoding"
-   DEFLATE = "deflate"
-   GZIP = "gzip"
-   ALWAYS_EXCLUDED = [ "@timestamp", "@version" ]
+   config_name "sumologic"
 
    # The URL to send logs to. This should be given when creating a HTTP Source
    # on Sumo Logic web app. See http://help.sumologic.com/Send_Data/Sources/HTTP_Source
@@ -59,12 +51,21 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
    # The encoding method of compress
    config :compress_encoding, :validate =>:string, :default => DEFLATE
 
-   # Hold messages for at least (x) seconds as a pile; 0 means sending every events immediately
+   # Accumulate events for (x) seconds as a pile/request; 0 means sending each event in an isolated request
    config :interval, :validate => :number, :default => 0
 
+   # Accumulate events for up to (x) bytes as a pile/request; messages larger than this size will be sent in isolated requests
+   config :pile_max, :validate => :number, :default => 1024000
+
+   # Max # of events that can be held in memory before sending
+   config :queue_max, :validate => :number, :default => 4096
+
+   # Max # of HTTP senders working in parallel
+   config :sender_max, :validate => :number, :default => 100
+
    # The formatter of log message, by default is message with timestamp and host as prefix
    # Use %{@json} tag to send whole event
-   config :format, :validate => :string, :default => "%{@timestamp} %{host} %{message}"
+   config :format, :validate => :string, :default => DEFAULT_LOG_FORMAT
 
    # Override the structure of @json tag with the given key value pairs
    config :json_mapping, :validate => :hash
@@ -85,7 +86,7 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
    # Defines the format of the metric, support "graphite" or "carbon2"
    config :metrics_format, :validate => :string, :default => CARBON2
 
-   # Define the metric name looking, the placeholder '*' will be replaced with the actual metric name
+   # Define the metric name looking, the placeholder "*" will be replaced with the actual metric name
    # For example:
    # metrics => { "uptime.1m" => "%{uptime_1m}" }
    # metrics_name => "mynamespace.*"
@@ -100,307 +101,71 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
    # For carbon2 metrics format only, define the meta tags (which will NOT be used to identify the metrics)
    config :meta_tags, :validate => :hash, :default => {}
 
-   public
-   def register
-     @source_host = Socket.gethostname unless @source_host
-
-     # initialize request pool
-     @request_tokens = SizedQueue.new(@pool_max)
-     @pool_max.times { |t| @request_tokens << true }
-     @timer = Time.now
-     @pile = Array.new
-     @semaphore = Mutex.new
-     connect
-   end # def register
-
-   public
-   def multi_receive(events)
-     events.each { |event| receive(event) }
-     client.execute!
-   end # def multi_receive
-
-   public
-   def receive(event)
-     begin
-
-       if event == LogStash::SHUTDOWN
-         finished
-         return
-       end
+   # For messages that fail to send or get a 429/503/504 response, try to resend after (x) seconds; don't resend if (x) < 0
+   config :sleep_before_requeue, :validate => :number, :default => 30
 
-       content = event2content(event)
-       queue_and_send(content)
+   # Sending throughput data as metrics
+   config :stats_enabled, :validate => :boolean, :default => false
 
-     rescue
-       log_failure(
-         "Error when processing event",
-         :event => event
-       )
-     end
-   end # def receive
+   # Sending throughput data points every (x) seconds
+   config :stats_interval, :validate => :number, :default => 60
 
-   public
-   def close
-     @semaphore.synchronize {
-       send_request(@pile.join($/))
-       @pile.clear
-     }
-     client.close
-   end # def close
-
-
-   private
-   def connect
-     # TODO: ping endpoint make sure config correct
-   end # def connect
+   attr_reader :stats
 
-   private
-   def queue_and_send(content)
-     if @interval <= 0 # means send immediately
-       send_request(content)
+   def register
+     set_logger(@logger)
+     @stats = Statistics.new()
+     @queue = MessageQueue.new(@stats, config)
+     @builder = PayloadBuilder.new(@stats, config)
+     @piler = Piler.new(@queue, @stats, config)
+     @monitor = Monitor.new(@queue, @stats, config)
+     @sender = Sender.new(client, @queue, @stats, config)
+     if @sender.connect()
+       @sender.start()
+       @piler.start()
+       @monitor.start()
      else
-       @semaphore.synchronize {
-         now = Time.now
-         @pile << content
-
-         if now - @timer > @interval # ready to send
-           send_request(@pile.join($/))
-           @timer = now
-           @pile.clear
-         end
-       }
-     end
-   end
-
-   private
-   def send_request(content)
-     token = @request_tokens.pop
-     body = compress(content)
-     headers = get_headers()
-
-     request = client.send(:parallel).send(:post, @url, :body => body, :headers => headers)
-     request.on_complete do
-       @request_tokens << token
-     end
-
-     request.on_success do |response|
-       if response.code < 200 || response.code > 299
-         log_failure(
-           "HTTP response #{response.code}",
-           :body => body,
-           :headers => headers
-         )
-       end
+       throw "connection failed, please check the url and retry"
      end
+   end # def register
 
-     request.on_failure do |exception|
-       log_failure(
-         "Could not fetch URL",
-         :body => body,
-         :headers => headers,
+   def multi_receive(events)
+     # events.map { |e| receive(e) }
+     begin
+       content = Array(events).map { |event| @builder.build(event) }.join($/)
+       @queue.enq(content)
+       @stats.record_multi_input(events.size, content.bytesize)
+     rescue Exception => exception
+       log_err(
+         "Error when processing events",
+         :events => events,
          :message => exception.message,
          :class => exception.class.name,
         :backtrace => exception.backtrace
-       )
+       )
      end
-
-     request.call
-   end # def send_request
-
-   private
-   def compress(content)
-     if @compress
-       if @compress_encoding == GZIP
-         result = gzip(content)
-         result.bytes.to_a.pack('c*')
-       else
-         Zlib::Deflate.deflate(content)
-       end
-     else
-       content
-     end
-   end # def compress
-
-   private
-   def gzip(content)
-     stream = StringIO.new("w")
-     stream.set_encoding("ASCII")
-     gz = Zlib::GzipWriter.new(stream)
-     gz.write(content)
-     gz.close
-     stream.string.bytes.to_a.pack('c*')
-   end # def gzip
-
-   private
-   def get_headers()
-
-     base = {}
-     base = @extra_headers if @extra_headers.is_a?(Hash)
-
-     base[CATEGORY_HEADER] = @source_category if @source_category
-     base[HOST_HEADER] = @source_host if @source_host
-     base[NAME_HEADER] = @source_name if @source_name
-     base[CLIENT_HEADER] = 'logstash-output-sumologic'
-
-     if @compress
-       if @compress_encoding == GZIP
-         base[CONTENT_ENCODING] = GZIP
-       elsif
-         base[CONTENT_ENCODING] = DEFLATE
-       else
-         log_failure(
-           "Unrecogonized compress encoding",
-           :encoding => @compress_encoding
-         )
-       end
-     end
-
-     if @metrics || @fields_as_metrics
-       if @metrics_format == CARBON2
-         base[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
-       elsif @metrics_format == GRAPHITE
-         base[CONTENT_TYPE] = CONTENT_TYPE_GRAPHITE
-       else
-         log_failure(
-           "Unrecogonized metrics format",
-           :format => @metrics_format
-         )
-       end
-     else
-       base[CONTENT_TYPE] = CONTENT_TYPE_LOG
-     end
-
-     base
-
-   end # def get_headers
-
-   private
-   def event2content(event)
-     if @metrics || @fields_as_metrics
-       event2metrics(event)
-     else
-       event2log(event)
-     end
-   end # def event2content
-
-   private
-   def event2log(event)
-     @format = "%{@json}" if @format.nil? || @format.empty?
-     expand(@format, event)
-   end # def event2log
-
-   private
-   def event2metrics(event)
-     timestamp = get_timestamp(event)
-     source = expand_hash(@metrics, event) unless @fields_as_metrics
-     source = event_as_metrics(event) if @fields_as_metrics
-     source.flat_map { |key, value|
-       get_single_line(event, key, value, timestamp)
-     }.reject(&:nil?).join("\n")
-   end # def event2metrics
-
-   def event_as_metrics(event)
-     hash = event2hash(event)
-     acc = {}
-     hash.keys.each do |field|
-       value = hash[field]
-       dotify(acc, field, value, nil)
-     end
-     acc
-   end # def event_as_metrics
-
-   def get_single_line(event, key, value, timestamp)
-     full = get_metrics_name(event, key)
-     if !ALWAYS_EXCLUDED.include?(full) && \
-       (fields_include.empty? || fields_include.any? { |regexp| full.match(regexp) }) && \
-       !(fields_exclude.any? {|regexp| full.match(regexp)}) && \
-       is_number?(value)
-       if @metrics_format == CARBON2
-         @intrinsic_tags["metric"] = full
-         "#{hash2line(@intrinsic_tags, event)} #{hash2line(@meta_tags, event)}#{value} #{timestamp}"
-       else
-         "#{full} #{value} #{timestamp}"
-       end
-     end
-   end # def get_single_line
-
-   def dotify(acc, key, value, prefix)
-     pk = prefix ? "#{prefix}.#{key}" : key.to_s
-     if value.is_a?(Hash)
-       value.each do |k, v|
-         dotify(acc, k, v, pk)
-       end
-     elsif value.is_a?(Array)
-       value.each_with_index.map { |v, i|
-         dotify(acc, i.to_s, v, pk)
-       }
-     else
-       acc[pk] = value
-     end
-   end # def dotify
-
-   private
-   def expand(template, event)
-     hash = event2hash(event)
-     dump = LogStash::Json.dump(hash)
-     template = template.gsub("%{@json}") { dump }
-     event.sprintf(template)
-   end # def expand
-
-   private
-   def event2hash(event)
-     if @json_mapping
-       @json_mapping.reduce({}) do |acc, kv|
-         k, v = kv
-         acc[k] = event.sprintf(v)
-         acc
-       end
-     else
-       event.to_hash
-     end
-   end # def map_event
-
-   private
-   def is_number?(me)
-     me.to_f.to_s == me.to_s || me.to_i.to_s == me.to_s
-   end
-
-   private
-   def expand_hash(hash, event)
-     hash.reduce({}) do |acc, kv|
-       k, v = kv
-       exp_k = expand(k, event)
-       exp_v = expand(v, event)
-       acc[exp_k] = exp_v
-       acc
-     end # def expand_hash
-   end
+   end # def multi_receive
 
-   private
-   def get_timestamp(event)
-     event.get(TIMESTAMP_FIELD).to_i
-   end # def get_timestamp
-
-   private
-   def get_metrics_name(event, name)
-     name = @metrics_name.gsub(METRICS_NAME_PLACEHOLDER) { name } if @metrics_name
-     event.sprintf(name)
-   end # def get_metrics_name
-
-   private
-   def hash2line(hash, event)
-     if (hash.is_a?(Hash) && !hash.empty?)
-       expand_hash(hash, event).flat_map { |k, v|
-         "#{k}=#{v} "
-       }.join()
-     else
-       ""
+   def receive(event)
+     begin
+       content = @builder.build(event)
+       @piler.input(content)
+     rescue Exception => exception
+       log_err(
+         "Error when processing event",
+         :event => event,
+         :message => exception.message,
+         :class => exception.class.name,
+         :backtrace => exception.backtrace
+       )
      end
-   end # hash2line
+   end # def receive
 
-   private
-   def log_failure(message, opts)
-     @logger.error(message, opts)
-   end # def log_failure
+   def close
+     @monitor.stop()
+     @piler.stop()
+     @sender.stop()
+     client.close()
+   end # def close
 
  end # class LogStash::Outputs::SumoLogic