logstash-output-sumologic 1.3.1 → 1.4.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 596877e694163207291385444d3e10ced75cba12422d57b21503126395002f03
- data.tar.gz: 157f6fc6262ef8990d44d597066bbbcde226e27dc3d48d332a29970d38bd962c
+ metadata.gz: a93e4aba1af87a7e0c3ea3570e5ff177e4307736dacf2fde1da78dbe4150a2da
+ data.tar.gz: 9ceb227f390a49467f7195daa96152335922275b714dda87215f11c16273d060
  SHA512:
- metadata.gz: 2da6958efa3878652965d1620206a3af61172e1f5ff3f369ed738a0346c0c14949cad1c5e36c4d889e8e3dd75373cf8a6f6f0d6be0c8800dfda14b6be855f74b
- data.tar.gz: b6021badd961cbe617108676906a55e16beeaa169e9f1a3e8a0b1244508c8271e661f7df8c951b38b7132d61588631a00f9595573abf50de43d69ce4dbcb6693
+ metadata.gz: 88b55069b26026545a9cbb755555fa53e0570b120ea37ded4ae487da989f0054a6fb1259998a8e9eb038c0314a4450afa70fe658d372030f4eae0f25e7247b5b
+ data.tar.gz: 6ca3a09f3f297c87c06707dd6a3b74d8210cb46ac60dde3f2663b2509cd87274d92d67d49203194a4def5e70dd314190a4887ac9401199889ecbaf79b22bb77a
data/CHANGELOG.md CHANGED
@@ -1,5 +1,17 @@
  # Change Log
 
+ ## 1.4.1 (2022-03-09)
+
+ - [#73](https://github.com/SumoLogic/logstash-output-sumologic/pull/73) fix: remove deadlock possibility by adding a resend_queue
+
+ ## 1.4.0 (2021-09-27)
+
+ - [#68](https://github.com/SumoLogic/logstash-output-sumologic/pull/68) feat: retry on 502 error code
+
+ ## 1.3.2 (2021-02-03)
+
+ - [#60](https://github.com/SumoLogic/logstash-output-sumologic/pull/60) Fix plugin metrics not being sent
+
  ## 1.3.1 (2020-12-18)
 
  - [#53](https://github.com/SumoLogic/logstash-output-sumologic/pull/53) Fix "undefined method `blank?'"
data/CONTRIBUTING.md ADDED
@@ -0,0 +1,75 @@
+ # Contributing
+
+ ## Set up development environment
+
+ You can use the preconfigured Vagrant virtual machine for development.
+ It includes everything that is needed to start coding.
+ See [./vagrant/README.md](./vagrant/README.md).
+
+ If you don't want to or cannot use Vagrant, you need to install the following dependencies:
+
+ - [Java SE](http://www.oracle.com/technetwork/java/javase/downloads/index.html) as a prerequisite to JRuby,
+ - [JRuby](https://www.jruby.org/),
+ - [Bundler](https://bundler.io/),
+ - optionally [Docker](https://docs.docker.com/get-docker/), if you want to build and run container images.
+
+ When your machine is ready (either in Vagrant or not), run this in the root directory of this repository:
+
+ ```sh
+ bundle install
+ ```
+
+ ## Running tests
+
+ Some of the tests try to send actual data to an actual Sumo Logic account.
+ To run those tests, you need to make the `sumo_url` environment variable available.
+ If the `sumo_url` environment variable is not present, the tests reaching Sumo Logic will be skipped.
+
+ ```sh
+ export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
+ ```
+
+ To run the tests, execute:
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ To run the tests in Docker, execute:
+
+ ```sh
+ docker build -t logstash-output-plugin .
+ docker run --rm -it -e 'sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX' logstash-output-plugin
+ ```
+
+ ## How to build the .gem file from the repository
+
+ Open logstash-output-sumologic.gemspec and make any necessary configuration changes.
+ In your local Git clone, run:
+
+ ```bash
+ gem build logstash-output-sumologic.gemspec
+ ```
+
+ You will get a .gem file named `logstash-output-sumologic-x.y.z.gem` in the same directory.
+ Remove the old version of the plugin (optional):
+
+ ```bash
+ bin/logstash-plugin remove logstash-output-sumologic
+ ```
+
+ And then install the plugin locally:
+
+ ```bash
+ bin/logstash-plugin install <full path of .gem>
+ ```
+
+ ## Continuous Integration
+
+ The project uses GitHub Actions for:
+
+ - Testing pull requests and main branch commits: [.github/workflows/ci.yml](.github/workflows/ci.yml)
+ - Publishing a new version of the gem to RubyGems.org after tagging: [.github/workflows/publish.yml](.github/workflows/publish.yml)
+
+ Before publishing a new version, make sure the RubyGems account has MFA disabled for API access.
+ Go to [Settings](https://rubygems.org/settings/edit) and set `MFA Level` to `UI only`.
data/README.md CHANGED
@@ -112,6 +112,7 @@ Logon to Sumo Logic [web app](https://service.sumologic.com/) and run
  | `fields_include` | array | No | all fields | Working with `fields_as_metrics` parameter, only the fields which full name matching these RegEx pattern(s) will be included in metrics.
  | `fields_exclude` | array | No | none | Working with `fields_as_metrics` parameter, the fields which full name matching these RegEx pattern(s) will be ignored in metrics.
  | `sleep_before_requeue` | number | No | `30` | The message failed to send to server will be retried after (x) seconds. Not retried if negative number set
+ | `stats_category` | string | No | `Logstash.stats` | The source category this plugin's stats will be sent with when `stats_enabled` is `true`
  | `stats_enabled` | boolean | No | `false` | If `true`, stats of this plugin will be sent as metrics
  | `stats_interval` | number | No | `60` | The stats will be sent every (x) seconds
 
@@ -136,7 +137,7 @@ On the other side, this version is marked as thread safe so if necessary, multip
 
  ### Monitor throughput in metrics
 
- If your Sumo Logic account supports metrics feature, you can enable the stats monitor of this plugin with configuring `stats_enabled` to `true`. For every `stats_interval` seconds, a batch of metrics data points will be sent to Sumo Logic with source category `XXX.stats` (`XXX` is the source category of main output):
+ If your Sumo Logic account supports the metrics feature, you can enable the stats monitor of this plugin by setting `stats_enabled` to `true`. Every `stats_interval` seconds, a batch of metrics data points will be sent to Sumo Logic with the source category specified by the `stats_category` parameter:
 
  | Metric | Description |
  | ------------------------------- | ----------------------------------------------------------- |
@@ -22,6 +22,7 @@ module LogStash; module Outputs; class SumoLogic;
 
  CATEGORY_HEADER = "X-Sumo-Category"
  CATEGORY_HEADER_DEFAULT = "Logstash"
+ CATEGORY_HEADER_DEFAULT_STATS = "Logstash.stats"
  HOST_HEADER = "X-Sumo-Host"
  NAME_HEADER = "X-Sumo-Name"
  NAME_HEADER_DEFAULT = "logstash-output-sumologic"
@@ -11,6 +11,7 @@ module LogStash; module Outputs; class SumoLogic;
 
  @extra_headers = config["extra_headers"] ||= {}
  @source_category = config["source_category"] ||= CATEGORY_HEADER_DEFAULT
+ @stats_category = config["stats_category"] ||= CATEGORY_HEADER_DEFAULT_STATS
  @source_host = config["source_host"] ||= Socket.gethostname
  @source_name = config["source_name"] ||= NAME_HEADER_DEFAULT
  @metrics = config["metrics"]
@@ -33,6 +34,18 @@ module LogStash; module Outputs; class SumoLogic;
  headers
  end # def build
 
+ def build_stats()
+ headers = Hash.new
+ headers.merge!(@extra_headers)
+ headers[CLIENT_HEADER] = CLIENT_HEADER_VALUE
+ headers[CATEGORY_HEADER] = @stats_category
+ headers[HOST_HEADER] = Socket.gethostname
+ headers[NAME_HEADER] = NAME_HEADER_DEFAULT
+ headers[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
+ append_compress_header(headers)
+ headers
+ end # def build_stats
+
  private
  def append_content_header(headers)
  contentType = CONTENT_TYPE_LOG
@@ -27,8 +27,8 @@ module LogStash; module Outputs; class SumoLogic;
  end
  end # def enq
 
- def deq()
- batch = @queue.deq()
+ def deq(non_block: false)
+ batch = @queue.deq(non_block: non_block)
  batch_size = batch.payload.bytesize
  @stats.record_deque(batch_size)
  @queue_bytesize.update { |v| v - batch_size }
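The new `non_block` path relies on stdlib queue semantics: when the non-blocking flag is truthy, `Thread::Queue#deq` returns immediately if an item is queued and raises `ThreadError` instead of blocking when the queue is empty. A minimal sketch of that behavior, using the positional flag form of the stdlib API (the plugin wraps this in its own `MessageQueue#deq`):

```ruby
q = Queue.new
q << "batch-1"

# With the flag set, deq returns an item immediately when one is available...
first = q.deq(true)

# ...and raises ThreadError instead of blocking when the queue is empty.
begin
  q.deq(true)
  outcome = :returned
rescue ThreadError
  outcome = :raised
end
```

The caller is expected to rescue `ThreadError`, which is exactly what the rewritten sender loop does before sleeping and re-checking its stop flag.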
@@ -14,6 +14,7 @@ module LogStash; module Outputs; class SumoLogic;
  @queue = queue
  @stats = stats
  @stopping = Concurrent::AtomicBoolean.new(false)
+ @header_builder = HeaderBuilder.new(config)
 
  @enabled = config["stats_enabled"] ||= false
  @interval = config["stats_interval"] ||= 60
@@ -27,8 +28,8 @@ module LogStash; module Outputs; class SumoLogic;
  @monitor_t = Thread.new {
  while @stopping.false?
  Stud.stoppable_sleep(@interval) { @stopping.true? }
- if @stats.total_input_events.value > 0
- @queue.enq(build_stats_payload())
+ if @stats.total_log_lines.value > 0 || @stats.total_metrics_datapoints.value > 0
+ @queue.enq(Batch.new(headers(), build_stats_payload()))
  end
  end # while
  }
@@ -72,5 +73,9 @@ module LogStash; module Outputs; class SumoLogic;
  "metric=#{key} interval=#{@interval} category=monitor #{value} #{timestamp}"
  end # def build_metric_line
 
+ def headers()
+ @headers ||= @header_builder.build_stats()
+ end
+
  end
- end; end; end
+ end; end; end
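The new `headers()` accessor uses `||=` memoization so the header builder runs once per monitor instance rather than on every stats tick. The idiom in isolation (the lambdas here are illustrative stand-ins, not plugin code):

```ruby
calls = 0
# Stand-in for HeaderBuilder#build_stats: counts how often it is invoked.
builder = lambda { calls += 1; { "X-Sumo-Category" => "Logstash.stats" } }

headers = nil
# ||= assigns only when the cached value is still nil, so later calls reuse it.
get_headers = lambda { headers ||= builder.call }

3.times { get_headers.call }
```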
@@ -24,10 +24,14 @@ module LogStash; module Outputs; class SumoLogic;
  @sender_max = (config["sender_max"] ||= 1) < 1 ? 1 : config["sender_max"]
  @sleep_before_requeue = config["sleep_before_requeue"] ||= 30
  @stats_enabled = config["stats_enabled"] ||= false
+ @iteration_sleep = 0.3
 
  @tokens = SizedQueue.new(@sender_max)
  @sender_max.times { |t| @tokens << t }
 
+ # Make resend_queue twice as big as sender_max,
+ # because if one batch is processed, the next one is already waiting in the thread
+ @resend_queue = SizedQueue.new(2*@sender_max)
  @compressor = LogStash::Outputs::SumoLogic::Compressor.new(config)
 
  end # def initialize
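`@tokens` is a `SizedQueue` used as a counting semaphore: it is preloaded with `sender_max` tokens, each request must pop one before being sent, and pushing the token back releases the slot. A standalone sketch of the pattern (variable names here are illustrative):

```ruby
sender_max = 2
tokens = SizedQueue.new(sender_max)
sender_max.times { |t| tokens << t }

# Two senders can acquire tokens right away.
held = [tokens.pop, tokens.pop]

# A third attempt finds the pool empty; a non-blocking pop raises ThreadError
# (a plain pop would block until a token is returned).
begin
  tokens.pop(true)
  third = :acquired
rescue ThreadError
  third = :exhausted
end

# Releasing a token makes a slot available again.
tokens << held.shift
reacquired = tokens.pop(true)
```

This is why the 1.4.1 changes push the token back on every completion path: a token that is never returned permanently shrinks the pool.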
@@ -39,9 +43,24 @@ module LogStash; module Outputs; class SumoLogic;
  @stopping.make_false()
  @sender_t = Thread.new {
  while @stopping.false?
- batch = @queue.deq()
+ begin
+ # Resend batch if any in the queue
+ batch = @resend_queue.deq(non_block: true)
+ rescue ThreadError
+ # send new batch otherwise
+ begin
+ batch = @queue.deq(non_block: true)
+ rescue ThreadError
+ Stud.stoppable_sleep(@iteration_sleep) { @stopping.true? }
+ next
+ end
+ end
  send_request(batch)
  end # while
+ @resend_queue.size.times.map {
+ batch = @resend_queue.deq()
+ send_request(batch)
+ }
  @queue.drain().map { |batch|
  send_request(batch)
  }
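The rewritten loop polls two queues with a fixed priority: batches awaiting retry first, new batches second, and a short stoppable sleep when both are empty, so the sender thread never blocks indefinitely on a single queue (the blocking that caused the deadlock fixed in #73). The same logic reduced to a self-contained sketch (`next_batch` is a hypothetical helper, not part of the plugin):

```ruby
# Prefer retry batches; fall back to new batches; return nil when both are
# empty so the caller can sleep briefly and re-check its stop flag.
def next_batch(resend_queue, main_queue)
  resend_queue.deq(true)
rescue ThreadError
  begin
    main_queue.deq(true)
  rescue ThreadError
    nil
  end
end

main = Queue.new
resend = Queue.new
main << "new-1"
resend << "retry-1"

drained = []
while (batch = next_batch(resend, main))
  drained << batch
end
```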
@@ -98,12 +117,11 @@ module LogStash; module Outputs; class SumoLogic;
  return
  end
 
+ # wait for a token so we do not exceed the number of requests in background
  token = @tokens.pop()
 
  if @stats_enabled && content.start_with?(STATS_TAG)
  body = @compressor.compress(content[STATS_TAG.length..-1])
- headers[CATEGORY_HEADER] = "#{headers[CATEGORY_HEADER]}.stats"
- headers[CONTENT_TYPE] = CONTENT_TYPE_CARBON2
  else
  body = @compressor.compress(content)
  end
@@ -113,11 +131,9 @@ module LogStash; module Outputs; class SumoLogic;
  :content_size => content.size,
  :content => content[0..20],
  :payload_size => body.size)
+
+ # send request in background
  request = @client.send(:background).send(:post, @url, :body => body, :headers => headers)
-
- request.on_complete do
- @tokens << token
- end
 
  request.on_success do |response|
  @stats.record_response_success(response.code)
@@ -127,13 +143,17 @@ module LogStash; module Outputs; class SumoLogic;
  :code => response.code,
  :headers => headers,
  :contet => content[0..20])
- if response.code == 429 || response.code == 503 || response.code == 504
+ if response.code == 429 || response.code == 502 || response.code == 503 || response.code == 504
+ # requeue and release token
  requeue_message(batch)
+ @tokens << token
  end
  else
  log_dbg("request accepted",
  :token => token,
  :code => response.code)
+ # release token
+ @tokens << token
  end
  end
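With the `on_complete` hook that previously returned the token removed, each callback branch must now push the token back itself; any path that forgot would shrink the `@tokens` pool until all senders blocked. In synchronous code the same invariant is usually enforced with `ensure`, sketched below (the plugin cannot use this form directly because its HTTP callbacks run asynchronously):

```ruby
tokens = SizedQueue.new(1)
tokens << 0

sent = []
2.times do |i|
  token = tokens.pop            # acquire the single slot
  begin
    sent << "request-#{i}"      # stand-in for sending an HTTP request
  ensure
    tokens << token             # always release, success or failure alike
  end
end
```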
 
@@ -145,6 +165,8 @@ module LogStash; module Outputs; class SumoLogic;
  :class => exception.class.name,
  :backtrace => exception.backtrace)
  requeue_message(batch)
+ # requeue and release token
+ @tokens << token
  end
 
  @stats.record_request(content.bytesize, body.bytesize)
@@ -164,9 +186,9 @@ module LogStash; module Outputs; class SumoLogic;
  :content => content[0..20],
  :headers => batch.headers)
  Stud.stoppable_sleep(@sleep_before_requeue) { @stopping.true? }
- @queue.enq(batch)
+ @resend_queue.enq(batch)
  end
  end # def reque_message
 
  end
- end; end; end
+ end; end; end
@@ -102,9 +102,11 @@ class LogStash::Outputs::SumoLogic < LogStash::Outputs::Base
  # For carbon2 metrics format only, define the meta tags (which will NOT be used to identify the metrics)
  config :meta_tags, :validate => :hash, :default => {}
 
- # For messages fail to send or get 429/503/504 response, try to resend after (x) seconds; don't resend if (x) < 0
+ # For messages that fail to send or get a 429/502/503/504 response, try to resend after (x) seconds; don't resend if (x) < 0
  config :sleep_before_requeue, :validate => :number, :default => 30
 
+ config :stats_category, :validate => :string, :default => CATEGORY_HEADER_DEFAULT_STATS
+
  # Sending throughput data as metrics
  config :stats_enabled, :validate => :boolean, :default => false
 
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-output-sumologic'
- s.version = '1.3.1'
+ s.version = '1.4.1'
  s.licenses = ['Apache-2.0']
  s.summary = 'Deliever the log to Sumo Logic cloud service.'
  s.description = 'This gem is a Logstash output plugin to deliver the log or metrics to Sumo Logic cloud service. Go to https://github.com/SumoLogic/logstash-output-sumologic for getting help, reporting issues, etc.'
@@ -241,4 +241,90 @@ describe LogStash::Outputs::SumoLogic::HeaderBuilder do
  end # context
 
+ context "should build headers for stats" do
+ let(:builder) {
+ LogStash::Outputs::SumoLogic::HeaderBuilder.new("url" => "http://localhost/1234")
+ }
+
+ specify {
+ stats_result = builder.build_stats()
+ expected = {
+ "X-Sumo-Client" => "logstash-output-sumologic",
+ "X-Sumo-Name" => "logstash-output-sumologic",
+ "X-Sumo-Host" => Socket.gethostname,
+ "X-Sumo-Category" => "Logstash.stats",
+ "Content-Type" => "application/vnd.sumologic.carbon2"
+ }
+ expect(stats_result).to eq(expected)
+ }
+ end
+
+ context "should build headers for stats with overridden source category" do
+ let(:builder) {
+ LogStash::Outputs::SumoLogic::HeaderBuilder.new("url" => "http://localhost/1234", "stats_category" => "custom")
+ }
+
+ specify {
+ stats_result = builder.build_stats()
+ expect(stats_result["X-Sumo-Category"]).to eq("custom")
+ }
+ end
+
+ context "should build headers for stats with compression" do
+ let(:builder) {
+ LogStash::Outputs::SumoLogic::HeaderBuilder.new("url" => "http://localhost/1234", "compress" => true, "compress_encoding" => "gzip")
+ }
+
+ specify {
+ stats_result = builder.build_stats()
+ expect(stats_result["Content-Encoding"]).to eq("gzip")
+ }
+ end
+
+ context "should build headers for stats with extra_headers" do
+ let(:builder) {
+ LogStash::Outputs::SumoLogic::HeaderBuilder.new(
+ "url" => "http://localhost/1234",
+ "extra_headers" => {
+ "foo" => "bar"
+ })
+ }
+
+ specify {
+ stats_result = builder.build_stats()
+ expect(stats_result.count).to eq(6)
+ expect(stats_result["foo"]).to eq("bar")
+ }
+ end
+
+ context "should build headers for stats with extra_headers but never overwrite pre-defined headers" do
+
+ let(:builder) {
+ LogStash::Outputs::SumoLogic::HeaderBuilder.new(
+ "url" => "http://localhost/1234",
+ "extra_headers" => {
+ "foo" => "bar",
+ "X-Sumo-Client" => "a",
+ "X-Sumo-Name" => "b",
+ "X-Sumo-Host" => "c",
+ "X-Sumo-Category" => "d",
+ "Content-Type" => "e"
+ })
+ }
+
+ specify {
+ stats_result = builder.build_stats()
+ expected = {
+ "foo" => "bar",
+ "X-Sumo-Client" => "logstash-output-sumologic",
+ "X-Sumo-Name" => "logstash-output-sumologic",
+ "X-Sumo-Host" => Socket.gethostname,
+ "X-Sumo-Category" => "Logstash.stats",
+ "Content-Type" => "application/vnd.sumologic.carbon2"
+ }
+ expect(stats_result).to eq(expected)
+ }
+
+ end # context
+
  end # describe
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-output-sumologic
  version: !ruby/object:Gem::Version
- version: 1.3.1
+ version: 1.4.1
  platform: ruby
  authors:
  - Sumo Logic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-12-18 00:00:00.000000000 Z
+ date: 2022-03-09 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: manticore
@@ -107,7 +107,7 @@ extensions: []
  extra_rdoc_files: []
  files:
  - CHANGELOG.md
- - DEVELOPER.md
+ - CONTRIBUTING.md
  - Gemfile
  - LICENSE
  - README.md
data/DEVELOPER.md DELETED
@@ -1,49 +0,0 @@
- # Development Guide
-
- Logstash output plugin for delivering log to Sumo Logic cloud service through HTTP source.
-
- ## How to build .gem file from repository
-
- Open logstash-output-sumologic.gemspec and make any necessary configuration changes.
- In your local Git clone, run:
-
- ```bash
- gem build logstash-output-sumologic.gemspec
- ```
-
- You will get a .gem file in the same directory as `logstash-output-sumologic-x.y.z.gem`
- Remove old version of plugin (optional):
-
- ```bash
- bin/logstash-plugin remove logstash-output-sumologic
- ```
-
- And then install the plugin locally:
-
- ```bash
- bin/logstash-plugin install <full path of .gem>
- ```
-
- ## How to run test with rspec
-
- ### Running in Docker
-
- ```bash
- docker build -t logstash-output-plugin .
- docker run --rm -it -e 'sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX' logstash-output-plugin
- ```
-
- ### Running on bare metal
-
- The test requires JRuby to run. So you need to install [JRuby](http://jruby.org/), [bundle](https://bundler.io/bundle_install.html) and [RVM](https://rvm.io/) (for switching between JRuby and Ruby) first.
- And then run:
-
- ```bash
- rvm use jruby
- bundle install
- export sumo_url=https://events.sumologic.net/receiver/v1/http/XXXXXXXXXX
- rspec spec/
- ```
-
- The project is integrated to Travis CI now.
- Make sure [all test passed](https://travis-ci.com/SumoLogic/logstash-output-sumologic) before creating PR