logstash-output-sumologic 1.2.2 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: b6f148933e50f3fad54125d1e006def5f8515add3a030d4e163aca083e6d0c6d
- data.tar.gz: f1bcdaa22d833c71ac6b6779993427afd500d798417f7f33a498f150367b2f47
+ metadata.gz: ab9e1ae438a4d6030629b00a67c0536aba6995f6a2bae1f952b9737cf2c28189
+ data.tar.gz: fdfec75b2d7c4bcd5c242c1d142d041648d450e3dfcb7a576de9531986f5438d
  SHA512:
- metadata.gz: 580d1c927da3234976e7c71beea83dcffe5e8dabb2e495f50197f91b8ac156e57740d33433c13a985d332542b2ae38613a3e3167adbe23c3ee3ad4ca2a91452f
- data.tar.gz: be8464e519361f94ee835c93440ebfc9cb1b169b81cc62a477118753ec048a3742896a112fd82cc9123a8b864be756e7ec1114fcc725b5fa03bee370ba13248f
+ metadata.gz: 66620931f50b2c241835824e26e16e0c05ef8c4ec9ac5ad3770d579ae345ef4486c68de23de8c7c284e3005598231738f75bef59704977a65e3911b992907951
+ data.tar.gz: 294a5910f5ec91ace98405439411d26ddf1e3c858e6d17cefb67d8b2b3c0b534c101cdd93faeab08c34dd42f77f5fb078e03b853e6937d63b1ed9f56ba69b840
data/CHANGELOG.md CHANGED
@@ -1,5 +1,27 @@
  # Change Log
 
+ ## 1.4.0 (2021-09-27)
+
+ - [#68](https://github.com/SumoLogic/logstash-output-sumologic/pull/68) feat: retry on 502 error code
+
+ ## 1.3.2 (2021-02-03)
+
+ - [#60](https://github.com/SumoLogic/logstash-output-sumologic/pull/60) Fix plugin metrics not being sent
+
+ ## 1.3.1 (2020-12-18)
+
+ - [#53](https://github.com/SumoLogic/logstash-output-sumologic/pull/53) Fix "undefined method `blank?'"
+ - [#52](https://github.com/SumoLogic/logstash-output-sumologic/pull/52) Fix logstash-plugin-http_client version conflict in Logstash 7
+
+ ## 1.3.0
+
+ - [#41](https://github.com/SumoLogic/logstash-output-sumologic/pull/41) Provide a Docker image with Logstash 6.6 + output plugin on Docker Hub
+ - [#41](https://github.com/SumoLogic/logstash-output-sumologic/pull/41) Kubernetes support with Logstash beats to SumoLogic
+ - [#41](https://github.com/SumoLogic/logstash-output-sumologic/pull/41) CI improvements
+ - [#36](https://github.com/SumoLogic/logstash-output-sumologic/pull/36) Update sender to send in batches
+ - [#36](https://github.com/SumoLogic/logstash-output-sumologic/pull/36) Support %{} field evaluation in `source_category`, `source_name`, `source_host` parameters
+ - [#39](https://github.com/SumoLogic/logstash-output-sumologic/pull/39) Disable cookies by default
+
  ## 1.2.2
 
  - Bug fix: memory leak when using `%{@json}` in format
@@ -21,4 +43,4 @@
 
  ## 1.0.0
 
- - Initial release
+ - Initial release
data/CONTRIBUTING.md ADDED
@@ -0,0 +1,75 @@
+ # Contributing
+
+ ## Set up development environment
+
+ You can use the preconfigured Vagrant virtual machine for development.
+ It includes everything that is needed to start coding.
+ See [./vagrant/README.md](./vagrant/README.md).
+
+ If you don't want to or cannot use Vagrant, you need to install the following dependencies:
+
+ - [Java SE](http://www.oracle.com/technetwork/java/javase/downloads/index.html) as a prerequisite to JRuby,
+ - [JRuby](https://www.jruby.org/),
+ - [Bundler](https://bundler.io/),
+ - optionally [Docker](https://docs.docker.com/get-docker/), if you want to build and run container images.
+
+ When your machine is ready (with or without Vagrant), run this in the root directory of this repository:
+
+ ```sh
+ bundle install
+ ```
+
+ ## Running tests
+
+ Some of the tests send actual data to a real Sumo Logic account.
+ To run those tests, you need to make the `sumo_url` environment variable available.
+ If the `sumo_url` environment variable is not present, the tests that reach Sumo Logic will be skipped.
+
+ ```sh
+ export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
+ ```
+
+ To run the tests, execute:
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ To run the tests in Docker, execute:
+
+ ```sh
+ docker build -t logstash-output-plugin .
+ docker run --rm -it -e 'sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX' logstash-output-plugin
+ ```
+
+ ## How to build the .gem file from the repository
+
+ Open logstash-output-sumologic.gemspec and make any necessary configuration changes.
+ In your local Git clone, run:
+
+ ```bash
+ gem build logstash-output-sumologic.gemspec
+ ```
+
+ You will get a .gem file named `logstash-output-sumologic-x.y.z.gem` in the same directory.
+ Remove the old version of the plugin (optional):
+
+ ```bash
+ bin/logstash-plugin remove logstash-output-sumologic
+ ```
+
+ And then install the plugin locally:
+
+ ```bash
+ bin/logstash-plugin install <full path of .gem>
+ ```
+
+ ## Continuous Integration
+
+ The project uses GitHub Actions for:
+
+ - Testing pull requests and main branch commits: [.github/workflows/ci.yml](.github/workflows/ci.yml)
+ - Publishing a new version of the gem to RubyGems.org after tagging: [.github/workflows/publish.yml](.github/workflows/publish.yml)
+
+ Before publishing a new version, make sure the RubyGems account has MFA disabled for API access.
+ Go to [Settings](https://rubygems.org/settings/edit) and set `MFA Level` to `UI only`.
data/README.md CHANGED
@@ -1,6 +1,6 @@
  # Logstash Sumo Logic Output Plugin
 
- [![Build Status](https://travis-ci.org/SumoLogic/logstash-output-sumologic.svg?branch=master)](https://travis-ci.org/SumoLogic/logstash-output-sumologic) [![Gem Version](https://badge.fury.io/rb/logstash-output-sumologic.svg)](https://badge.fury.io/rb/logstash-output-sumologic)
+ [![Build Status](https://travis-ci.org/SumoLogic/logstash-output-sumologic.svg?branch=master)](https://travis-ci.org/SumoLogic/logstash-output-sumologic) [![Gem Version](https://badge.fury.io/rb/logstash-output-sumologic.svg)](https://badge.fury.io/rb/logstash-output-sumologic) [![Docker Pulls](https://img.shields.io/docker/pulls/sumologic/logstash-output-sumologic.svg)](https://hub.docker.com/r/sumologic/logstash-output-sumologic)
 
  This is an output plugin for [Logstash](https://github.com/elastic/logstash).
  It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
@@ -112,6 +112,7 @@ Logon to Sumo Logic [web app](https://service.sumologic.com/) and run
  | `fields_include` | array | No | all fields | Working with `fields_as_metrics` parameter, only the fields which full name matching these RegEx pattern(s) will be included in metrics.
  | `fields_exclude` | array | No | none | Working with `fields_as_metrics` parameter, the fields which full name matching these RegEx pattern(s) will be ignored in metrics.
  | `sleep_before_requeue` | number | No | `30` | The message failed to send to server will be retried after (x) seconds. Not retried if negative number set
+ | `stats_category` | string | No | `Logstash.stats` | The source category this plugin's stats will be sent with when `stats_enabled` is `true`
  | `stats_enabled` | boolean | No | `false` | If `true`, stats of this plugin will be sent as metrics
  | `stats_interval` | number | No | `60` | The stats will be sent every (x) seconds
 
@@ -136,7 +137,7 @@ On the other side, this version is marked as thread safe so if necessary, multip
 
  ### Monitor throughput in metrics
 
- If your Sumo Logic account supports metrics feature, you can enable the stats monitor of this plugin with configuring `stats_enabled` to `true`. For every `stats_interval` seconds, a batch of metrics data points will be sent to Sumo Logic with source category `XXX.stats` (`XXX` is the source category of main output):
+ If your Sumo Logic account supports the metrics feature, you can enable this plugin's stats monitor by setting `stats_enabled` to `true`. Every `stats_interval` seconds, a batch of metrics data points will be sent to Sumo Logic with the source category specified by the `stats_category` parameter:
 
  | Metric | Description |
  | ------------------------------- | ----------------------------------------------------------- |
data/lib/logstash/outputs/sumologic/batch.rb ADDED
@@ -0,0 +1,13 @@
+ # encoding: utf-8
+
+ module LogStash; module Outputs; class SumoLogic;
+ class Batch
+
+ attr_accessor :headers, :payload
+
+ def initialize(headers, payload)
+ @headers, @payload = headers, payload
+ end
+
+ end
+ end; end; end;
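The `Batch` object introduced above simply pairs a set of HTTP headers with the payload piled under those headers; the queue, monitor, and piler further down all pass these around instead of bare strings. A minimal illustrative sketch (not part of the gem; `Struct` stands in for the `Batch` class, field values are made up):

```ruby
# Header/payload pairs as carried through the plugin's queue.
Batch = Struct.new(:headers, :payload)

batch = Batch.new(
  { "X-Sumo-Category" => "app/nginx", "Content-Type" => "text/plain" },
  "2021-09-27 web-01 GET /index.html 200"
)

puts batch.payload.bytesize  # the byte count that MessageQueue and Statistics account for per batch
```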
data/lib/logstash/outputs/sumologic/common.rb CHANGED
@@ -1,9 +1,9 @@
  # encoding: utf-8
- require "date"
-
  module LogStash; module Outputs; class SumoLogic;
  module Common
 
+ require "date"
+
  # global constants
  DEFAULT_LOG_FORMAT = "%{@timestamp} %{host} %{message}"
  METRICS_NAME_PLACEHOLDER = "*"
@@ -12,6 +12,23 @@ module LogStash; module Outputs; class SumoLogic;
  DEFLATE = "deflate"
  GZIP = "gzip"
  STATS_TAG = "STATS_TAG"
+ STOP_TAG = "PLUGIN STOPPED"
+
+ CONTENT_TYPE = "Content-Type"
+ CONTENT_TYPE_LOG = "text/plain"
+ CONTENT_TYPE_GRAPHITE = "application/vnd.sumologic.graphite"
+ CONTENT_TYPE_CARBON2 = "application/vnd.sumologic.carbon2"
+ CONTENT_ENCODING = "Content-Encoding"
+
+ CATEGORY_HEADER = "X-Sumo-Category"
+ CATEGORY_HEADER_DEFAULT = "Logstash"
+ CATEGORY_HEADER_DEFAULT_STATS = "Logstash.stats"
+ HOST_HEADER = "X-Sumo-Host"
+ NAME_HEADER = "X-Sumo-Name"
+ NAME_HEADER_DEFAULT = "logstash-output-sumologic"
+
+ CLIENT_HEADER = "X-Sumo-Client"
+ CLIENT_HEADER_VALUE = "logstash-output-sumologic"
 
  # for debugging test
  LOG_TO_CONSOLE = false
@@ -53,5 +70,15 @@ module LogStash; module Outputs; class SumoLogic;
  end
  end # def log_dbg
 
+ def blank?(value)
+ if value.kind_of?(NilClass)
+ true
+ elsif value.kind_of?(String)
+ value !~ /\S/
+ else
+ value.respond_to?(:empty?) ? value.empty? : !value
+ end
+ end
+
  end
- end; end; end
+ end; end; end
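The `blank?` helper added to `Common` replaces the `.blank?` method calls removed from header_builder.rb below, which appears to be what the 1.3.1 fix for "undefined method `blank?'" (#53) refers to. A minimal standalone sketch of its behavior (the method body mirrors the hunk above; the sample calls are illustrative):

```ruby
# blank? as defined in Common: handles nil, strings, and anything with empty?.
def blank?(value)
  if value.kind_of?(NilClass)
    true
  elsif value.kind_of?(String)
    value !~ /\S/           # whitespace-only strings count as blank
  else
    value.respond_to?(:empty?) ? value.empty? : !value
  end
end

blank?(nil)     # => true
blank?("  \t")  # => true  (no non-whitespace character)
blank?("logs")  # => false
blank?([])      # => true  (empty collection)
blank?(0)       # => false (numbers are never blank)
```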
data/lib/logstash/outputs/sumologic/compressor.rb CHANGED
@@ -1,11 +1,11 @@
  # encoding: utf-8
- require "stringio"
- require "zlib"
- require "logstash/outputs/sumologic/common"
 
  module LogStash; module Outputs; class SumoLogic;
  class Compressor
 
+ require "stringio"
+ require "zlib"
+ require "logstash/outputs/sumologic/common"
  include LogStash::Outputs::SumoLogic::Common
 
  def initialize(config)
data/lib/logstash/outputs/sumologic/header_builder.rb CHANGED
@@ -1,31 +1,17 @@
  # encoding: utf-8
- require "socket"
- require "logstash/outputs/sumologic/common"
 
  module LogStash; module Outputs; class SumoLogic;
  class HeaderBuilder
 
+ require "socket"
+ require "logstash/outputs/sumologic/common"
  include LogStash::Outputs::SumoLogic::Common
 
- CONTENT_TYPE = "Content-Type"
- CONTENT_TYPE_LOG = "text/plain"
- CONTENT_TYPE_GRAPHITE = "application/vnd.sumologic.graphite"
- CONTENT_TYPE_CARBON2 = "application/vnd.sumologic.carbon2"
- CONTENT_ENCODING = "Content-Encoding"
-
- CATEGORY_HEADER = "X-Sumo-Category"
- CATEGORY_HEADER_DEFAULT = "Logstash"
- HOST_HEADER = "X-Sumo-Host"
- NAME_HEADER = "X-Sumo-Name"
- NAME_HEADER_DEFAULT = "logstash-output-sumologic"
-
- CLIENT_HEADER = "X-Sumo-Client"
- CLIENT_HEADER_VALUE = "logstash-output-sumologic"
-
  def initialize(config)
 
  @extra_headers = config["extra_headers"] ||= {}
  @source_category = config["source_category"] ||= CATEGORY_HEADER_DEFAULT
+ @stats_category = config["stats_category"] ||= CATEGORY_HEADER_DEFAULT_STATS
  @source_host = config["source_host"] ||= Socket.gethostname
  @source_name = config["source_name"] ||= NAME_HEADER_DEFAULT
  @metrics = config["metrics"]
@@ -36,31 +22,31 @@ module LogStash; module Outputs; class SumoLogic;
 
  end # def initialize
 
- def build()
- headers = build_common()
- headers[CATEGORY_HEADER] = @source_category unless @source_category.blank?
+ def build(event)
+ headers = Hash.new
+ headers.merge!(@extra_headers)
+ headers[CLIENT_HEADER] = CLIENT_HEADER_VALUE
+ headers[CATEGORY_HEADER] = event.sprintf(@source_category) unless blank?(@source_category)
+ headers[HOST_HEADER] = event.sprintf(@source_host) unless blank?(@source_host)
+ headers[NAME_HEADER] = event.sprintf(@source_name) unless blank?(@source_name)
  append_content_header(headers)
+ append_compress_header(headers)
  headers
  end # def build
 
  def build_stats()
- headers = build_common()
- headers[CATEGORY_HEADER] = "#{@source_category}.stats"
- headers[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
- headers
- end # def build_stats
-
- private
- def build_common()
- headers = Hash.new()
+ headers = Hash.new
  headers.merge!(@extra_headers)
  headers[CLIENT_HEADER] = CLIENT_HEADER_VALUE
- headers[HOST_HEADER] = @source_host unless @source_host.blank?
- headers[NAME_HEADER] = @source_name unless @source_name.blank?
+ headers[CATEGORY_HEADER] = @stats_category
+ headers[HOST_HEADER] = Socket.gethostname
+ headers[NAME_HEADER] = NAME_HEADER_DEFAULT
+ headers[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
  append_compress_header(headers)
  headers
- end # build_common
+ end # def build_stats
 
+ private
  def append_content_header(headers)
  contentType = CONTENT_TYPE_LOG
  if @metrics || @fields_as_metrics
@@ -76,4 +62,4 @@ module LogStash; module Outputs; class SumoLogic;
  end # append_compress_header
 
  end
- end; end; end
+ end; end; end
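`build` now takes the event and runs `event.sprintf` over the category, host, and name values, which is what enables the `%{field}` evaluation in `source_category`/`source_name`/`source_host` noted in the 1.3.0 changelog (#36). A rough standalone sketch of that substitution step; `FakeEvent` is a hypothetical stand-in for `LogStash::Event`:

```ruby
# Hypothetical stand-in for LogStash::Event, supporting only simple %{field} lookups.
class FakeEvent
  def initialize(fields)
    @fields = fields
  end

  def sprintf(template)
    template.gsub(/%\{([^}]+)\}/) { @fields[Regexp.last_match(1)].to_s }
  end
end

event = FakeEvent.new("type" => "nginx", "host" => "web-01")
puts event.sprintf("app/%{type}")  # => "app/nginx"  (value that would land in X-Sumo-Category)
puts event.sprintf("%{host}")      # => "web-01"     (value that would land in X-Sumo-Host)
```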
data/lib/logstash/outputs/sumologic/message_queue.rb CHANGED
@@ -1,38 +1,42 @@
  # encoding: utf-8
- require "logstash/outputs/sumologic/common"
- require "logstash/outputs/sumologic/statistics"
-
  module LogStash; module Outputs; class SumoLogic;
  class MessageQueue
 
+ require "logstash/outputs/sumologic/common"
+ require "logstash/outputs/sumologic/statistics"
  include LogStash::Outputs::SumoLogic::Common
 
  def initialize(stats, config)
  @queue_max = (config["queue_max"] ||= 1) < 1 ? 1 : config["queue_max"]
  @queue = SizedQueue::new(@queue_max)
  log_info("initialize memory queue", :max => @queue_max)
+ @queue_bytesize = Concurrent::AtomicFixnum.new
  @stats = stats
  end # def initialize
 
- def enq(obj)
- if (obj.bytesize > 0)
- @queue.enq(obj)
- @stats.record_enque(obj)
+ def enq(batch)
+ batch_size = batch.payload.bytesize
+ if (batch_size > 0)
+ @queue.enq(batch)
+ @stats.record_enque(batch_size)
+ @queue_bytesize.update { |v| v + batch_size }
  log_dbg("enqueue",
  :objects_in_queue => size,
- :bytes_in_queue => @stats.current_queue_bytes,
- :size => obj.bytesize)
- end
+ :bytes_in_queue => @queue_bytesize,
+ :size => batch_size)
+ end
  end # def enq
 
  def deq()
- obj = @queue.deq()
- @stats.record_deque(obj)
+ batch = @queue.deq()
+ batch_size = batch.payload.bytesize
+ @stats.record_deque(batch_size)
+ @queue_bytesize.update { |v| v - batch_size }
  log_dbg("dequeue",
  :objects_in_queue => size,
- :bytes_in_queue => @stats.current_queue_bytes,
- :size => obj.bytesize)
- obj
+ :bytes_in_queue => @queue_bytesize,
+ :size => batch_size)
+ batch
  end # def deq
 
  def drain()
@@ -44,6 +48,10 @@ module LogStash; module Outputs; class SumoLogic;
  def size()
  @queue.size()
  end # size
+
+ def bytesize()
+ @queue_bytesize.value
+ end # bytesize
 
  end
  end; end; end
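The queue now stores `Batch` objects and tracks queued bytes itself with a `Concurrent::AtomicFixnum` instead of reading `@stats.current_queue_bytes`. A minimal sketch of that accounting outside the plugin (only the `concurrent-ruby` gem is assumed):

```ruby
require "concurrent"

# Queued-bytes counter as used by MessageQueue: bump on enq, subtract on deq.
queue_bytesize = Concurrent::AtomicFixnum.new   # defaults to 0

payload = "2021-09-27T00:00:00Z host1 hello"    # a piled batch payload
queue_bytesize.update { |v| v + payload.bytesize }  # enq
puts queue_bytesize.value                           # => 32

queue_bytesize.update { |v| v - payload.bytesize }  # deq
puts queue_bytesize.value                           # => 0
```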
data/lib/logstash/outputs/sumologic/monitor.rb CHANGED
@@ -1,11 +1,11 @@
  # encoding: utf-8
- require "logstash/outputs/sumologic/common"
- require "logstash/outputs/sumologic/statistics"
- require "logstash/outputs/sumologic/message_queue"
-
  module LogStash; module Outputs; class SumoLogic;
  class Monitor
 
+ require "logstash/outputs/sumologic/common"
+ require "logstash/outputs/sumologic/statistics"
+ require "logstash/outputs/sumologic/message_queue"
  include LogStash::Outputs::SumoLogic::Common
 
  attr_reader :is_pile
@@ -14,6 +14,7 @@ module LogStash; module Outputs; class SumoLogic;
  @queue = queue
  @stats = stats
  @stopping = Concurrent::AtomicBoolean.new(false)
+ @header_builder = HeaderBuilder.new(config)
 
  @enabled = config["stats_enabled"] ||= false
  @interval = config["stats_interval"] ||= 60
@@ -27,8 +28,8 @@ module LogStash; module Outputs; class SumoLogic;
  @monitor_t = Thread.new {
  while @stopping.false?
  Stud.stoppable_sleep(@interval) { @stopping.true? }
- if @stats.total_input_events.value > 0
- @queue.enq(build_stats_payload())
+ if @stats.total_log_lines.value > 0 || @stats.total_metrics_datapoints.value > 0
+ @queue.enq(Batch.new(headers(), build_stats_payload()))
  end
  end # while
  }
@@ -72,5 +73,9 @@ module LogStash; module Outputs; class SumoLogic;
  "metric=#{key} interval=#{@interval} category=monitor #{value} #{timestamp}"
  end # def build_metric_line
 
+ def headers()
+ @headers ||= @header_builder.build_stats()
+ end
+
  end
- end; end; end
+ end; end; end
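For reference, the stats batches the monitor enqueues are Carbon2-style lines in the format returned by `build_metric_line` above, sent under the source category that `build_stats` takes from `stats_category`. A toy rendering of one such line; the key, value, and timestamp below are made up:

```ruby
# Renders one stat line in the same format as Monitor#build_metric_line.
key, interval, value, timestamp = "total_log_lines", 60, 1234, 1632700000
puts "metric=#{key} interval=#{interval} category=monitor #{value} #{timestamp}"
# => metric=total_log_lines interval=60 category=monitor 1234 1632700000
```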
data/lib/logstash/outputs/sumologic/payload_builder.rb CHANGED
@@ -1,12 +1,11 @@
  # encoding: utf-8
- require "logstash/json"
- require "logstash/event"
-
- require "logstash/outputs/sumologic/common"
 
  module LogStash; module Outputs; class SumoLogic;
  class PayloadBuilder
 
+ require "logstash/json"
+ require "logstash/event"
+ require "logstash/outputs/sumologic/common"
  include LogStash::Outputs::SumoLogic::Common
 
  TIMESTAMP_FIELD = "@timestamp"
data/lib/logstash/outputs/sumologic/piler.rb CHANGED
@@ -1,11 +1,11 @@
  # encoding: utf-8
- require "logstash/outputs/sumologic/common"
- require "logstash/outputs/sumologic/statistics"
- require "logstash/outputs/sumologic/message_queue"
 
  module LogStash; module Outputs; class SumoLogic;
  class Piler
 
+ require "logstash/outputs/sumologic/common"
+ require "logstash/outputs/sumologic/statistics"
+ require "logstash/outputs/sumologic/message_queue"
  include LogStash::Outputs::SumoLogic::Common
 
  attr_reader :is_pile
@@ -17,14 +17,14 @@ module LogStash; module Outputs; class SumoLogic;
  @queue = queue
  @stats = stats
  @stopping = Concurrent::AtomicBoolean.new(false)
+ @payload_builder = PayloadBuilder.new(@stats, config)
+ @header_builder = HeaderBuilder.new(config)
  @is_pile = (@interval > 0 && @pile_max > 0)
-
  if (@is_pile)
- @pile = Array.new
- @pile_size = 0
+ @pile = Hash.new("")
  @semaphore = Mutex.new
  end
-
+
  end # def initialize
 
  def start()
@@ -46,46 +46,44 @@ module LogStash; module Outputs; class SumoLogic;
  def stop()
  @stopping.make_true()
  if (@is_pile)
- log_info("shutting down piler...")
- @piler_t.join
+ log_info("shutting down piler in #{@interval * 2} secs ...")
+ @piler_t.join(@interval * 2)
  log_info("piler is fully shutted down")
  end
  end # def stop
 
- def input(entry)
+ def input(event)
  if (@stopping.true?)
- log_warn("piler is shutting down, message dropped",
- "message" => entry)
- elsif (@is_pile)
- @semaphore.synchronize {
- if @pile_size + entry.bytesize > @pile_max
- @queue.enq(@pile.join($/))
- @pile.clear
- @pile_size = 0
- @stats.record_clear_pile()
- end
- @pile << entry
- @pile_size += entry.bytesize
- @stats.record_input(entry)
- }
+ log_warn("piler is shutting down, event is dropped",
+ "event" => event)
  else
- @queue.enq(entry)
- end # if
+ headers = @header_builder.build(event)
+ payload = @payload_builder.build(event)
+ if (@is_pile)
+ @semaphore.synchronize {
+ content = @pile[headers]
+ size = content.bytesize
+ if size + payload.bytesize > @pile_max
+ @queue.enq(Batch.new(headers, content))
+ @pile[headers] = ""
+ end
+ @pile[headers] = blank?(@pile[headers]) ? payload : "#{@pile[headers]}\n#{payload}"
+ }
+ else
+ @queue.enq(Batch.new(headers, payload))
+ end # if
+ end
  end # def input
 
  private
  def enq_and_clear()
- if (@pile.size > 0)
- @semaphore.synchronize {
- if (@pile.size > 0)
- @queue.enq(@pile.join($/))
- @pile.clear
- @pile_size = 0
- @stats.record_clear_pile()
- end
- }
- end
+ @semaphore.synchronize {
+ @pile.each do |headers, content|
+ @queue.enq(Batch.new(headers, content))
+ end
+ @pile.clear()
+ }
  end # def enq_and_clear
 
  end
- end; end; end
+ end; end; end
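With the pile now keyed by header hash, events that resolve to different `X-Sumo-Category`/`X-Sumo-Host`/`X-Sumo-Name` values accumulate in separate batches. A simplified standalone sketch of that grouping (no mutex and no `pile_max` flushing; the header hashes and payload lines are made up):

```ruby
# Group payload lines per distinct header set, the way Piler#input now piles them.
pile = Hash.new("")

def append(pile, headers, payload)
  pile[headers] = pile[headers].empty? ? payload : "#{pile[headers]}\n#{payload}"
end

append(pile, { "X-Sumo-Category" => "app/nginx" },  "GET /index.html 200")
append(pile, { "X-Sumo-Category" => "app/nginx" },  "GET /health 200")
append(pile, { "X-Sumo-Category" => "app/worker" }, "job 42 done")

pile.each do |headers, content|
  # each entry would become one Batch.new(headers, content) on the queue
  puts "#{headers["X-Sumo-Category"]} -> #{content.inspect}"
end
# app/nginx -> "GET /index.html 200\nGET /health 200"
# app/worker -> "job 42 done"
```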