logstash-output-datadog_logs 0.3.1 → 0.5.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 474cfba6967ab1b3109dc47be8ed80a292957f09286015fd9506d44d9d72548c
- data.tar.gz: 17f42ddff7a84cacc24795468ba105a0622f7f5554324d717bf4d5f61618dd6b
+ metadata.gz: 635ef3d038ba35b57528f006e81455e20b7cd27217a8fe6188f65d5229e13fe3
+ data.tar.gz: 6b4325a4cf44791c40aa18360ba7fd9c92a451fb94b182c5a0803de711e3d462
  SHA512:
- metadata.gz: f1285fe2fc85df206d20315fd0f549190d856035410bc86267eac3e6da169bbd33b8fa2bc08dd9c7f6612189239ca44b0896ae6390b858d0ecaebfd8ae5a724b
- data.tar.gz: e387e7e3a92cd452fa4fa57cfe8b1c7fb26093436aada5d8a7e6a368bb302bea789f8e617e376d6ee1a7a8b900c3ccf334705656d2f9540a00151bc7ae00d7ba
+ metadata.gz: e555d642d57cd65971b9e05c2cd6d76c8919ec7413d1dab24eb9fe6eb8d9be3ce51dcfd0dc4751d77f052d87f49ea032fb6ce488c6b99350ae31b41f93908d2e
+ data.tar.gz: 2b89082813230155ebc747da6e5073aaa7f74d0ad2b1c922e8c577f670af6a2306447a0384229a0382f2c499de1741b20d11b03bc89f32f314939fee269e17c3
data/CHANGELOG.md CHANGED
@@ -1,3 +1,13 @@
+ ## 0.5.0
+ - Support Datadog v2 endpoints #28
+
+ ## 0.4.1
+ - Fix HTTP bug when remote server is timing out
+
+ ## 0.4.0
+ - Enable HTTP forwarding for logs
+ - Provide an option to disable SSL hostname verification for HTTPS
+
  ## 0.3.1
  - Make sure that we can disable retries
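
The v2 endpoints are used by default as of 0.5.0; the plugin code below also adds a `force_v1_routes` option for the deprecated v1 intake. A minimal sketch of opting back into v1 routes (illustrative only, not an upstream example):

```
output {
  datadog_logs {
    api_key => "<DATADOG_API_KEY>"
    force_v1_routes => true
  }
}
```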
 
data/README.md CHANGED
@@ -3,6 +3,10 @@
 
  DatadogLogs lets you send logs to Datadog based on LogStash events.
 
+ ## Requirements
+
+ The plugin relies upon the `zlib` library for compressing data.
+
  ## How to install it?
 
  ```bash
@@ -12,13 +16,77 @@ logstash-plugin install logstash-output-datadog_logs
 
  ## How to use it?
 
- Configure `datadog_logs` plugin with your Datadog API key:
+ By default, the `datadog_logs` plugin sends logs to a US endpoint over an SSL-encrypted HTTP connection.
+ Logs are batched and compressed by default.
+
+ Configure the plugin with your Datadog API key:
+
+ ```
+ output {
+ datadog_logs {
+ api_key => "<DATADOG_API_KEY>"
+ }
+ }
+ ```
+
+ To enable TCP forwarding, configure your forwarder with:
+
+ ```
+ output {
+ datadog_logs {
+ api_key => "<DATADOG_API_KEY>"
+ host => "tcp-intake.logs.datadoghq.com"
+ port => 10516
+ use_http => false
+ }
+ }
+ ```
+
+ To send logs to Datadog's EU HTTP endpoint, override the default `host`:
 
  ```
  output {
  datadog_logs {
  api_key => "<DATADOG_API_KEY>"
+ host => "http-intake.logs.datadoghq.eu"
+ }
+ }
+ ```
+
+ ### Configuration properties
+
+ | Property | Description | Default value |
+ |-------------|--------------------------------------------------------------------------|----------------|
+ | **api_key** | The API key of your Datadog platform | nil |
+ | **host** | Proxy endpoint when logs are not directly forwarded to Datadog | http-intake.logs.datadoghq.com |
+ | **port** | Proxy port when logs are not directly forwarded to Datadog | 443 |
+ | **use_ssl** | If true, the plugin initializes a secure connection to Datadog. Make sure to update the port if you disable it. | true |
+ | **max_retries** | The number of retries before the output plugin stops | 5 |
+ | **max_backoff** | The maximum time to wait between retries, in seconds | 30 |
+ | **use_http** | Enable HTTP forwarding. If you disable it, make sure to update the port to 10516 if use_ssl is enabled or 10514 otherwise. | true |
+ | **use_compression** | Enable log compression for HTTP | true |
+ | **compression_level** | Set the log compression level for HTTP (1 to 9, 9 being the best ratio) | 6 |
+ | **no_ssl_validation** | Disable SSL validation (useful for proxy forwarding) | false |
+
+ For additional options, see the [Datadog endpoint documentation](https://docs.datadoghq.com/logs/?tab=eusite#datadog-logs-endpoints)
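
For illustration, a sketch that combines several of the properties documented above into one proxy-forwarding output; `my-proxy.example.com` is a placeholder host, the retry and compression values restate the defaults, and `no_ssl_validation` is enabled as the table suggests for proxy forwarding:

```
output {
  datadog_logs {
    api_key => "<DATADOG_API_KEY>"
    host => "my-proxy.example.com"
    port => 443
    use_http => true
    use_compression => true
    compression_level => 6
    max_retries => 5
    max_backoff => 30
    no_ssl_validation => true
  }
}
```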
+
+ ## Add metadata to your logs
+
+ To get the best use out of your logs in Datadog, it is important to associate the proper metadata with them (including hostname, service and source).
+ Add these fields to your logs with a mutate filter:
+
+ ```
+ filter {
+ mutate {
+ add_field => {
+ "host" => "<HOST>"
+ "service" => "<SERVICE>"
+ "ddsource" => "<MY_SOURCE_VALUE>"
+ "ddtags" => "<KEY1:VALUE1>,<KEY2:VALUE2>"
  }
+ }
  }
  ```
 
data/lib/logstash/outputs/datadog_logs.rb CHANGED
@@ -6,67 +6,290 @@
  # encoding: utf-8
  require "logstash/outputs/base"
  require "logstash/namespace"
+ require "zlib"
+
+ require_relative "version"
 
  # DatadogLogs lets you send logs to Datadog
  # based on LogStash events.
  class LogStash::Outputs::DatadogLogs < LogStash::Outputs::Base
 
+ # Respect limit documented at https://docs.datadoghq.com/api/?lang=bash#logs
+ DD_MAX_BATCH_LENGTH = 500
+ DD_MAX_BATCH_SIZE = 5000000
+ DD_TRUNCATION_SUFFIX = "...TRUNCATED..."
+
  config_name "datadog_logs"
 
  default :codec, "json"
 
  # Datadog configuration parameters
- config :api_key, :validate => :string, :required => true
- config :host, :validate => :string, :required => true, :default => 'intake.logs.datadoghq.com'
- config :port, :validate => :number, :required => true, :default => 10516
- config :use_ssl, :validate => :boolean, :required => true, :default => true
- config :max_backoff, :validate => :number, :required => true, :default => 30
- config :max_retries, :validate => :number, :required => true, :default => 5
+ config :api_key, :validate => :string, :required => true
+ config :host, :validate => :string, :required => true, :default => "http-intake.logs.datadoghq.com"
+ config :port, :validate => :number, :required => true, :default => 443
+ config :use_ssl, :validate => :boolean, :required => true, :default => true
+ config :max_backoff, :validate => :number, :required => true, :default => 30
+ config :max_retries, :validate => :number, :required => true, :default => 5
+ config :use_http, :validate => :boolean, :required => false, :default => true
+ config :use_compression, :validate => :boolean, :required => false, :default => true
+ config :compression_level, :validate => :number, :required => false, :default => 6
+ config :no_ssl_validation, :validate => :boolean, :required => false, :default => false
+ config :force_v1_routes, :validate => :boolean, :required => false, :default => false # force using deprecated v1 routes
 
+ # Register the plugin to logstash
  public
  def register
- require "socket"
- client = nil
- @codec.on_event do |event, payload|
- message = "#{@api_key} #{payload}\n"
- retries = 0
+ @client = new_client(@logger, @api_key, @use_http, @use_ssl, @no_ssl_validation, @host, @port, @use_compression, @force_v1_routes)
+ end
+
+ # Logstash shutdown hook
+ def close
+ @client.close
+ end
+
+ # Entry point of the plugin, receiving a set of Logstash events
+ public
+ def multi_receive(events)
+ return if events.empty?
+ encoded_events = @codec.multi_encode(events)
+ begin
+ if @use_http
+ batches = batch_http_events(encoded_events, DD_MAX_BATCH_LENGTH, DD_MAX_BATCH_SIZE)
+ batches.each do |batched_event|
+ process_encoded_payload(format_http_event_batch(batched_event))
+ end
+ else
+ encoded_events.each do |encoded_event|
+ process_encoded_payload(format_tcp_event(encoded_event.last, @api_key, DD_MAX_BATCH_SIZE))
+ end
+ end
+ rescue => e
+ @logger.error("Uncaught processing exception in datadog forwarder #{e.message}")
+ end
+ end
+
+ # Process and send each encoded payload
+ def process_encoded_payload(payload)
+ if @use_compression and @use_http
+ payload = gzip_compress(payload, @compression_level)
+ end
+ @client.send_retries(payload, @max_retries, @max_backoff)
+ end
+
+ # Format TCP event
+ def format_tcp_event(payload, api_key, max_request_size)
+ formatted_payload = "#{api_key} #{payload}"
+ if (formatted_payload.bytesize > max_request_size)
+ return truncate(formatted_payload, max_request_size)
+ end
+ formatted_payload
+ end
+
+ # Format HTTP events
+ def format_http_event_batch(batched_events)
+ "[#{batched_events.join(',')}]"
+ end
+
+ # Group HTTP events in batches
+ def batch_http_events(encoded_events, max_batch_length, max_request_size)
+ batches = []
+ current_batch = []
+ current_batch_size = 0
+ encoded_events.each_with_index do |event, i|
+ encoded_event = event.last
+ current_event_size = encoded_event.bytesize
+ # If this unique log size is bigger than the request size, truncate it
+ if current_event_size > max_request_size
+ encoded_event = truncate(encoded_event, max_request_size)
+ current_event_size = encoded_event.bytesize
+ end
+
+ if (i > 0 and i % max_batch_length == 0) or (current_batch_size + current_event_size > max_request_size)
+ batches << current_batch
+ current_batch = []
+ current_batch_size = 0
+ end
+
+ current_batch_size += encoded_event.bytesize
+ current_batch << encoded_event
+ end
+ batches << current_batch
+ batches
+ end
+
+ # Truncate events over the provided max length, appending a marker when truncated
+ def truncate(event, max_length)
+ if event.length > max_length
+ event = event[0..max_length - 1]
+ event[max(0, max_length - DD_TRUNCATION_SUFFIX.length)..max_length - 1] = DD_TRUNCATION_SUFFIX
+ return event
+ end
+ event
+ end
+
+ def max(a, b)
+ a > b ? a : b
+ end
+
+ # Compress logs with GZIP
+ def gzip_compress(payload, compression_level)
+ gz = StringIO.new
+ gz.set_encoding("BINARY")
+ z = Zlib::GzipWriter.new(gz, compression_level)
+ begin
+ z.write(payload)
+ ensure
+ z.close
+ end
+ gz.string
+ end
+
+ # Build a new transport client
+ def new_client(logger, api_key, use_http, use_ssl, no_ssl_validation, host, port, use_compression, force_v1_routes)
+ if use_http
+ DatadogHTTPClient.new logger, use_ssl, no_ssl_validation, host, port, use_compression, api_key, force_v1_routes
+ else
+ DatadogTCPClient.new logger, use_ssl, no_ssl_validation, host, port
+ end
+ end
+
+ class RetryableError < StandardError;
+ end
+
+ class DatadogClient
+ def send_retries(payload, max_retries, max_backoff)
  backoff = 1
+ retries = 0
  begin
- client ||= new_client
- client.write(message)
- rescue => e
- @logger.warn("Could not send payload", :exception => e, :backtrace => e.backtrace)
- client.close rescue nil
- client = nil
+ send(payload)
+ rescue RetryableError => e
  if retries < max_retries || max_retries < 0
+ @logger.warn("Retrying send due to: #{e.message}")
  sleep backoff
  backoff = 2 * backoff unless backoff > max_backoff
  retries += 1
  retry
  end
- @logger.warn("Max number of retries reached, dropping the payload", :payload => payload, :max_retries => max_retries)
+ rescue => ex
+ @logger.error("Unmanaged exception while sending log to datadog #{ex.message}")
  end
  end
+
+ def send(payload)
+ raise NotImplementedError, "Datadog transport client should implement the send method"
+ end
+
+ def close
+ raise NotImplementedError, "Datadog transport client should implement the close method"
+ end
  end
 
- public
- def receive(event)
- # handle new event
- @codec.encode(event)
+ class DatadogHTTPClient < DatadogClient
+ require "manticore"
+
+ RETRYABLE_EXCEPTIONS = [
+ ::Manticore::Timeout,
+ ::Manticore::SocketException,
+ ::Manticore::ClientProtocolException,
+ ::Manticore::ResolutionFailure
+ ]
+
+ def initialize(logger, use_ssl, no_ssl_validation, host, port, use_compression, api_key, force_v1_routes)
+ @logger = logger
+ protocol = use_ssl ? "https" : "http"
+
+ @headers = {"Content-Type" => "application/json"}
+ if use_compression
+ @headers["Content-Encoding"] = "gzip"
+ end
+
+ if force_v1_routes
+ @url = "#{protocol}://#{host}:#{port.to_s}/v1/input/#{api_key}"
+ else
+ @url = "#{protocol}://#{host}:#{port.to_s}/api/v2/logs"
+ @headers["DD-API-KEY"] = api_key
+ @headers["DD-EVP-ORIGIN"] = "logstash"
+ @headers["DD-EVP-ORIGIN-VERSION"] = DatadogLogStashPlugin::VERSION
+ end
+
+ logger.info("Starting HTTP connection to #{protocol}://#{host}:#{port.to_s} with compression " + (use_compression ? "enabled" : "disabled") + (force_v1_routes ? " using v1 routes" : " using v2 routes"))
+
+ config = {}
+ config[:ssl][:verify] = :disable if no_ssl_validation
+ @client = Manticore::Client.new(config)
+ end
+
+ def send(payload)
+ begin
+ response = @client.post(@url, :body => payload, :headers => @headers).call
+ # in case of error or 429, we will retry sending this payload
+ if response.code >= 500 || response.code == 429
+ raise RetryableError.new "Unable to send payload: #{response.code} #{response.body}"
+ end
+ if response.code >= 400
+ @logger.error("Unable to send payload due to client error: #{response.code} #{response.body}")
+ end
+ rescue => client_exception
+ should_retry = retryable_exception?(client_exception)
+ if should_retry
+ raise RetryableError.new "Unable to send payload #{client_exception.message}"
+ else
+ raise client_exception
+ end
+ end
+
+ end
+
+ def retryable_exception?(exception)
+ RETRYABLE_EXCEPTIONS.any? { |e| exception.is_a?(e) }
+ end
+
+ def close
+ @client.close
+ end
  end
 
- private
- def new_client
- # open a secure connection with Datadog
- if @use_ssl
- @logger.info("Starting SSL connection", :host => @host, :port => @port)
- socket = TCPSocket.new @host, @port
- sslSocket = OpenSSL::SSL::SSLSocket.new socket
- sslSocket.connect
- return sslSocket
- else
- @logger.info("Starting plaintext connection", :host => @host, :port => @port)
- return TCPSocket.new @host, @port
+ class DatadogTCPClient < DatadogClient
+ require "socket"
+
+ def initialize(logger, use_ssl, no_ssl_validation, host, port)
+ @logger = logger
+ @use_ssl = use_ssl
+ @no_ssl_validation = no_ssl_validation
+ @host = host
+ @port = port
+ end
+
+ def connect
+ if @use_ssl
+ @logger.info("Starting SSL connection #{@host} #{@port}")
+ socket = TCPSocket.new @host, @port
+ ssl_context = OpenSSL::SSL::SSLContext.new
+ if @no_ssl_validation
+ ssl_context.set_params({:verify_mode => OpenSSL::SSL::VERIFY_NONE})
+ end
+ ssl_context = OpenSSL::SSL::SSLSocket.new socket, ssl_context
+ ssl_context.connect
+ ssl_context
+ else
+ @logger.info("Starting plaintext connection #{@host} #{@port}")
+ TCPSocket.new @host, @port
+ end
+ end
+
+ def send(payload)
+ begin
+ @socket ||= connect
+ @socket.puts(payload)
+ rescue => e
+ @socket.close rescue nil
+ @socket = nil
+ raise RetryableError.new "Unable to send payload: #{e.message}."
+ end
+ end
+
+ def close
+ @socket.close rescue nil
  end
  end
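
For illustration, a minimal sketch of how the helpers above behave, assuming a JRuby/Logstash development environment with this plugin and its dependencies installed; it mirrors the expectations in the spec file below:

```ruby
# Minimal sketch, not part of the plugin source; assumes a Logstash dev environment.
require "logstash/outputs/datadog_logs"

plugin = LogStash::Plugin.lookup("output", "datadog_logs").new({"api_key" => "xxx"})
plugin.register

# TCP framing prepends the API key to the serialized event.
plugin.format_tcp_event('{"message":"dd"}', "xxx", 1000)   # => 'xxx {"message":"dd"}'

# Oversized events are cut down and end with the truncation marker.
plugin.truncate("foobarfoobarfoobarfoobar", 15)             # => 15 chars ending in "...TRUNCATED..."

# HTTP batching splits events by batch length (here 1) and by request size.
plugin.batch_http_events([[nil, "dd1"], [nil, "dd2"]], 1, 1000).length  # => 2
```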
 
data/lib/logstash/outputs/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module DatadogLogStashPlugin
+ VERSION = '0.5.0'
+ end
data/logstash-output-datadog_logs.gemspec CHANGED
@@ -1,6 +1,12 @@
+ # Load version.rb containing the DatadogLogStashPlugin::VERSION
+ # for current Gem version.
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require "logstash/outputs/version.rb"
+
  Gem::Specification.new do |s|
  s.name = 'logstash-output-datadog_logs'
- s.version = '0.3.1'
+ s.version = DatadogLogStashPlugin::VERSION
  s.licenses = ['Apache-2.0']
  s.summary = 'DatadogLogs lets you send logs to Datadog based on LogStash events.'
  s.homepage = 'https://www.datadoghq.com/'
@@ -9,14 +15,18 @@ Gem::Specification.new do |s|
  s.require_paths = ['lib']
 
  # Files
- s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
- # Tests
+ s.files = Dir['lib/**/*', 'spec/**/*', 'vendor/**/*', '*.gemspec', '*.md', 'CONTRIBUTORS', 'Gemfile', 'LICENSE', 'NOTICE.TXT']
+ # Tests
  s.test_files = s.files.grep(%r{^(test|spec|features)/})
 
  # Special flag to let us know this is actually a logstash plugin
- s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
+ s.metadata = {"logstash_plugin" => "true", "logstash_group" => "output"}
 
  # Gem dependencies
  s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
- s.add_development_dependency 'logstash-devutils'
+ s.add_runtime_dependency 'manticore', '>= 0.5.2', '< 1.0.0'
+ s.add_runtime_dependency 'logstash-codec-json'
+
+ s.add_development_dependency 'logstash-devutils', "= 1.3.6"
+ s.add_development_dependency 'webmock'
  end
data/spec/outputs/datadog_logs_spec.rb CHANGED
@@ -4,3 +4,219 @@
  # Copyright 2017 Datadog, Inc.
 
  require "logstash/devutils/rspec/spec_helper"
+ require "logstash/outputs/datadog_logs"
+ require 'webmock/rspec'
+
+ describe LogStash::Outputs::DatadogLogs do
+ context "should register" do
+ it "with an api key" do
+ plugin = LogStash::Plugin.lookup("output", "datadog_logs").new({"api_key" => "xxx"})
+ expect { plugin.register }.to_not raise_error
+ end
+
+ it "without an api key" do
+ expect { LogStash::Plugin.lookup("output", "datadog_logs").new() }.to raise_error(LogStash::ConfigurationError)
+ end
+ end
+
+ subject do
+ plugin = LogStash::Plugin.lookup("output", "datadog_logs").new({"api_key" => "xxx"})
+ plugin.register
+ plugin
+ end
+
+ context "when truncating" do
+ it "should truncate messages of the given length" do
+ input = "foobarfoobarfoobarfoobar"
+ expect(subject.truncate(input, 15).length).to eq(15)
+ end
+
+ it "should replace the end of the message with a marker when truncated" do
+ input = "foobarfoobarfoobarfoobar"
+ expect(subject.truncate(input, 15)).to end_with("...TRUNCATED...")
+ end
+
+ it "should return the marker if the message length is smaller than the marker length" do
+ input = "foobar"
+ expect(subject.truncate(input, 1)).to eq("...TRUNCATED...")
+ end
+
+ it "should do nothing if the input length is smaller than the given length" do
+ input = "foobar"
+ expect(subject.truncate(input, 15)).to eq("foobar")
+ end
+ end
+
+ context "when using HTTP" do
+ it "should respect the batch length and create one batch of one event" do
+ input_events = [[LogStash::Event.new({"message" => "dd"}), "dd"]]
+ expect(subject.batch_http_events(input_events, 1, 1000).length).to eq(1)
+ end
+
+ it "should respect the batch length and create two batches of one event" do
+ input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+ actual_events = subject.batch_http_events(input_events, 1, 1000)
+ expect(actual_events.length).to eq(2)
+ expect(actual_events[0][0]).to eq("dd1")
+ expect(actual_events[1][0]).to eq("dd2")
+ end
+
+ it "should respect the request size and create two batches of one event" do
+ input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+ actual_events = subject.batch_http_events(input_events, 10, 3)
+ expect(actual_events.length).to eq(2)
+ expect(actual_events[0][0]).to eq("dd1")
+ expect(actual_events[1][0]).to eq("dd2")
+ end
+
+ it "should respect the request size and create two batches of two events" do
+ input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"], [LogStash::Event.new({"message" => "dd3"}), "dd3"], [LogStash::Event.new({"message" => "dd4"}), "dd4"]]
+ actual_events = subject.batch_http_events(input_events, 6, 6)
+ expect(actual_events.length).to eq(2)
+ expect(actual_events[0][0]).to eq("dd1")
+ expect(actual_events[0][1]).to eq("dd2")
+ expect(actual_events[1][0]).to eq("dd3")
+ expect(actual_events[1][1]).to eq("dd4")
+ end
+
+ it "should truncate events whose length is bigger than the max request size" do
+ input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "foobarfoobarfoobar"}), "foobarfoobarfoobar"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+ actual_events = subject.batch_http_events(input_events, 10, 3)
+ expect(actual_events.length).to eq(3)
+ expect(actual_events[0][0]).to eq("dd1")
+ expect(actual_events[1][0]).to eq("...TRUNCATED...")
+ expect(actual_events[2][0]).to eq("dd2")
+ end
+ end
+
+ context "when facing HTTP connection issues" do
+ [true, false].each do |force_v1_routes|
+ it "should retry when server is returning 5XX " + (force_v1_routes ? "using v1 routes" : "using v2 routes") do
+ api_key = 'XXX'
+ stub_dd_request_with_return_code(api_key, 500, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should not retry when server is returning 4XX" do
+ api_key = 'XXX'
+ stub_dd_request_with_return_code(api_key, 400, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to_not raise_error
+ end
+
+ it "should retry when server is returning 429" do
+ api_key = 'XXX'
+ stub_dd_request_with_return_code(api_key, 429, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should retry when facing a timeout exception from manticore" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, Manticore::Timeout, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should retry when facing a socket exception from manticore" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, Manticore::SocketException, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should retry when facing a client protocol exception from manticore" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, Manticore::ClientProtocolException, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should retry when facing a dns failure from manticore" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, Manticore::ResolutionFailure, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should retry when facing a socket timeout from manticore" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, Manticore::SocketTimeout, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(LogStash::Outputs::DatadogLogs::RetryableError)
+ end
+
+ it "should not retry when facing any other general error" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, StandardError, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send(payload) }.to raise_error(StandardError)
+ end
+
+ it "should not stop the forwarder when facing any client uncaught exception" do
+ api_key = 'XXX'
+ stub_dd_request_with_error(api_key, StandardError, force_v1_routes)
+ payload = '{}'
+ client = LogStash::Outputs::DatadogLogs::DatadogHTTPClient.new Logger.new(STDOUT), false, false, "datadog.com", 80, false, api_key, force_v1_routes
+ expect { client.send_retries(payload, 2, 2) }.to_not raise_error
+ end
+ end
+ end
+
+ context "when using TCP" do
+ it "should re-encode events" do
+ input_event = "{message=dd}"
+ encoded_event = subject.format_tcp_event(input_event, "xxx", 1000)
+ expect(encoded_event).to eq("xxx " + input_event)
+ end
+
+ it "should truncate too long messages" do
+ input_event = "{message=foobarfoobarfoobar}"
+ encoded_event = subject.format_tcp_event(input_event, "xxx", 20)
+ expect(encoded_event).to eq("xxx {...TRUNCATED...")
+ end
+ end
+
+ def stub_dd_request_with_return_code(api_key, return_code, force_v1_routes)
+ stub_dd_request(api_key, force_v1_routes).
+ to_return(status: return_code, body: "", headers: {})
+ end
+
+ def stub_dd_request_with_error(api_key, error, force_v1_routes)
+ stub_dd_request(api_key, force_v1_routes).
+ to_raise(error)
+ end
+
+ def stub_dd_request(api_key, force_v1_routes)
+ if force_v1_routes
+ stub_request(:post, "http://datadog.com/v1/input/#{api_key}").
+ with(
+ body: "{}",
+ headers: {
+ 'Connection' => 'Keep-Alive',
+ 'Content-Type' => 'application/json'
+ })
+ else
+ stub_request(:post, "http://datadog.com/api/v2/logs").
+ with(
+ body: "{}",
+ headers: {
+ 'Connection' => 'Keep-Alive',
+ 'Content-Type' => 'application/json',
+ 'DD-API-KEY' => "#{api_key}",
+ 'DD-EVP-ORIGIN' => 'logstash',
+ 'DD-EVP-ORIGIN-VERSION' => DatadogLogStashPlugin::VERSION
+ })
+ end
+ end
+ end
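
The specs above rely on `logstash-devutils` and `webmock`, declared in the gemspec. In a typical JRuby plugin-development checkout they can usually be run with Bundler; the exact workflow below is an assumption, not documented in this diff:

```bash
# Assumed local workflow for running the plugin specs; setup may vary.
bundle install
bundle exec rspec spec/outputs/datadog_logs_spec.rb
```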
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: logstash-output-datadog_logs
  version: !ruby/object:Gem::Version
- version: 0.3.1
+ version: 0.5.0
  platform: ruby
  authors:
  - Datadog
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-01-15 00:00:00.000000000 Z
+ date: 2022-04-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
@@ -25,15 +25,63 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
+ - !ruby/object:Gem::Dependency
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.2
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: 1.0.0
+ name: manticore
+ prerelease: false
+ type: :runtime
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.2
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: 1.0.0
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ name: logstash-codec-json
+ prerelease: false
+ type: :runtime
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ - !ruby/object:Gem::Dependency
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - '='
+ - !ruby/object:Gem::Version
+ version: 1.3.6
  name: logstash-devutils
  prerelease: false
  type: :development
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - '='
+ - !ruby/object:Gem::Version
+ version: 1.3.6
+ - !ruby/object:Gem::Dependency
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ name: webmock
+ prerelease: false
+ type: :development
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -52,6 +100,7 @@ files:
  - NOTICE.TXT
  - README.md
  - lib/logstash/outputs/datadog_logs.rb
+ - lib/logstash/outputs/version.rb
  - logstash-output-datadog_logs.gemspec
  - spec/outputs/datadog_logs_spec.rb
  homepage: https://www.datadoghq.com/
@@ -76,7 +125,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.13
+ rubygems_version: 2.7.6
  signing_key:
  specification_version: 4
  summary: DatadogLogs lets you send logs to Datadog based on LogStash events.