logstash-output-datadog_logs 0.3.1 → 0.4.0

This diff shows the changes between two publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 474cfba6967ab1b3109dc47be8ed80a292957f09286015fd9506d44d9d72548c
- data.tar.gz: 17f42ddff7a84cacc24795468ba105a0622f7f5554324d717bf4d5f61618dd6b
+ metadata.gz: 6f81e4d1f9e76634f0ee83cc649ad18a9ee9defc51c54c0cd630e8c9bae69d53
+ data.tar.gz: e15dfeba8935d842bc9db8ed8c1a8f166165376c4b0bc02d6feafca166313432
  SHA512:
- metadata.gz: f1285fe2fc85df206d20315fd0f549190d856035410bc86267eac3e6da169bbd33b8fa2bc08dd9c7f6612189239ca44b0896ae6390b858d0ecaebfd8ae5a724b
- data.tar.gz: e387e7e3a92cd452fa4fa57cfe8b1c7fb26093436aada5d8a7e6a368bb302bea789f8e617e376d6ee1a7a8b900c3ccf334705656d2f9540a00151bc7ae00d7ba
+ metadata.gz: 46a621add6073375b653ce158b67d9aec0c1e6544c7d69ce7740b1b7930cb95a28525765048ebcc66975f9fa99b755f10fee32127a14b842ae0be80f034ebf11
+ data.tar.gz: e9276dc886a503ed450a20ce986861835496b5f8f1d991389afd5ece5a0933b18ae2cad57d211a74d068b51024c560eabb7a859ba17ebd6b6f9ca44fe71fb2a2
data/CHANGELOG.md CHANGED
@@ -1,3 +1,7 @@
+ ## 0.4.0
+ - Enable HTTP forwarding for logs
+ - Provide an option to disable SSL hostname verification for HTTPS
+
  ## 0.3.1
  - Make sure that we can disable retries
 
data/README.md CHANGED
@@ -3,6 +3,10 @@
 
  DatadogLogs lets you send logs to Datadog based on LogStash events.
 
+ ## Requirements
+
+ The plugin relies upon the `zlib` library for compressing data.
+
  ## How to install it?
 
  ```bash
@@ -12,13 +16,77 @@ logstash-plugin install logstash-output-datadog_logs
 
  ## How to use it?
 
- Configure `datadog_logs` plugin with your Datadog API key:
+ The `datadog_logs` plugin is configured by default to send logs to a US endpoint over an SSL-encrypted HTTP connection.
+ Logs are batched and compressed by default.
+
+ Configure the plugin with your Datadog API key:
+
+ ```
+ output {
+   datadog_logs {
+     api_key => "<DATADOG_API_KEY>"
+   }
+ }
+ ```
+
+ To enable TCP forwarding instead, configure your forwarder with:
+
+ ```
+ output {
+   datadog_logs {
+     api_key => "<DATADOG_API_KEY>"
+     host => "tcp-intake.logs.datadoghq.com"
+     port => 10516
+     use_http => false
+   }
+ }
+ ```
+
+ To send logs to Datadog's EU HTTP endpoint, override the default `host`:
 
  ```
  output {
    datadog_logs {
      api_key => "<DATADOG_API_KEY>"
+     host => "http-intake.logs.datadoghq.eu"
+   }
+ }
+ ```
+
+ ### Configuration properties
+
+ | Property | Description | Default value |
+ |----------|-------------|---------------|
+ | **api_key** | The API key of your Datadog platform | nil |
+ | **host** | Proxy endpoint when logs are not directly forwarded to Datadog | http-intake.logs.datadoghq.com |
+ | **port** | Proxy port when logs are not directly forwarded to Datadog | 443 |
+ | **use_ssl** | If true, the plugin opens a secure connection to Datadog. Be sure to update the port if you disable it. | true |
+ | **max_retries** | The number of retries before the output plugin stops | 5 |
+ | **max_backoff** | The maximum time in seconds to wait between retries | 30 |
+ | **use_http** | Enable HTTP forwarding. If you disable it, update the port to 10516 if `use_ssl` is enabled, or 10514 otherwise. | true |
+ | **use_compression** | Enable log compression for HTTP | true |
+ | **compression_level** | The log compression level for HTTP (1 to 9, 9 being the best ratio) | 6 |
+ | **no_ssl_validation** | Disable SSL validation (useful for proxy forwarding) | false |
+
+ For additional options, see the [Datadog endpoint documentation](https://docs.datadoghq.com/logs/?tab=eusite#datadog-logs-endpoints).
+
+ ## Add metadata to your logs
+
+ To get the best use out of your logs in Datadog, it is important to have the proper metadata associated with them (including hostname, service, and source).
+ To add those fields to your logs, use a mutate filter:
+
+ ```
+ filter {
+   mutate {
+     add_field => {
+       "host" => "<HOST>"
+       "service" => "<SERVICE>"
+       "ddsource" => "<MY_SOURCE_VALUE>"
+       "ddtags" => "<KEY1:VALUE1>,<KEY2:VALUE2>"
      }
+   }
  }
  ```
 
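As a side note on the compression settings above: when `use_compression` is enabled, each HTTP batch is gzipped at `compression_level` (6 by default) before upload. A minimal sketch of that round trip in plain Ruby, with a made-up illustrative payload (the helper mirrors what the plugin does internally; it is not part of the plugin's public API):

```ruby
require "zlib"
require "stringio"

# Sketch: gzip a JSON batch the way the plugin does for HTTP forwarding.
# compression_level 6 is the plugin's default; the payload is illustrative.
def gzip_compress(payload, compression_level = 6)
  gz = StringIO.new
  gz.set_encoding("BINARY")
  z = Zlib::GzipWriter.new(gz, compression_level)
  begin
    z.write(payload)
  ensure
    z.close
  end
  gz.string
end

payload = '[{"message":"hello"},{"message":"world"}]'
compressed = gzip_compress(payload)
# A gzip round trip restores the original batch
puts Zlib.gunzip(compressed) == payload # => true
```

Higher `compression_level` values trade CPU for smaller request bodies; level 6 is zlib's usual speed/ratio compromise.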
@@ -6,67 +6,249 @@
  # encoding: utf-8
  require "logstash/outputs/base"
  require "logstash/namespace"
+ require "zlib"
+
 
  # DatadogLogs lets you send logs to Datadog
  # based on LogStash events.
  class LogStash::Outputs::DatadogLogs < LogStash::Outputs::Base
 
+   # Respect limit documented at https://docs.datadoghq.com/api/?lang=bash#logs
+   DD_MAX_BATCH_LENGTH = 500
+   DD_MAX_BATCH_SIZE = 5000000
+   DD_TRUNCATION_SUFFIX = "...TRUNCATED..."
+
    config_name "datadog_logs"
 
    default :codec, "json"
 
    # Datadog configuration parameters
-   config :api_key, :validate => :string, :required => true
-   config :host, :validate => :string, :required => true, :default => 'intake.logs.datadoghq.com'
-   config :port, :validate => :number, :required => true, :default => 10516
-   config :use_ssl, :validate => :boolean, :required => true, :default => true
-   config :max_backoff, :validate => :number, :required => true, :default => 30
-   config :max_retries, :validate => :number, :required => true, :default => 5
+   config :api_key, :validate => :string, :required => true
+   config :host, :validate => :string, :required => true, :default => "http-intake.logs.datadoghq.com"
+   config :port, :validate => :number, :required => true, :default => 443
+   config :use_ssl, :validate => :boolean, :required => true, :default => true
+   config :max_backoff, :validate => :number, :required => true, :default => 30
+   config :max_retries, :validate => :number, :required => true, :default => 5
+   config :use_http, :validate => :boolean, :required => false, :default => true
+   config :use_compression, :validate => :boolean, :required => false, :default => true
+   config :compression_level, :validate => :number, :required => false, :default => 6
+   config :no_ssl_validation, :validate => :boolean, :required => false, :default => false
 
+   # Register the plugin to logstash
    public
    def register
-     require "socket"
-     client = nil
-     @codec.on_event do |event, payload|
-       message = "#{@api_key} #{payload}\n"
-       retries = 0
+     @client = new_client(@logger, @api_key, @use_http, @use_ssl, @no_ssl_validation, @host, @port, @use_compression)
+   end
+
+   # Logstash shutdown hook
+   def close
+     @client.close
+   end
+
+   # Entry point of the plugin, receiving a set of Logstash events
+   public
+   def multi_receive(events)
+     return if events.empty?
+     encoded_events = @codec.multi_encode(events)
+     if @use_http
+       batches = batch_http_events(encoded_events, DD_MAX_BATCH_LENGTH, DD_MAX_BATCH_SIZE)
+       batches.each do |batched_event|
+         process_encoded_payload(format_http_event_batch(batched_event))
+       end
+     else
+       encoded_events.each do |encoded_event|
+         process_encoded_payload(format_tcp_event(encoded_event.last, @api_key, DD_MAX_BATCH_SIZE))
+       end
+     end
+   end
+
+   # Process and send each encoded payload
+   def process_encoded_payload(payload)
+     if @use_compression and @use_http
+       payload = gzip_compress(payload, @compression_level)
+     end
+     @client.send_retries(payload, @max_retries, @max_backoff)
+   end
+
+   # Format a TCP event
+   def format_tcp_event(payload, api_key, max_request_size)
+     formatted_payload = "#{api_key} #{payload}"
+     if formatted_payload.bytesize > max_request_size
+       return truncate(formatted_payload, max_request_size)
+     end
+     formatted_payload
+   end
+
+   # Format a batch of HTTP events
+   def format_http_event_batch(batched_events)
+     "[#{batched_events.join(',')}]"
+   end
+
+   # Group HTTP events in batches
+   def batch_http_events(encoded_events, max_batch_length, max_request_size)
+     batches = []
+     current_batch = []
+     current_batch_size = 0
+     encoded_events.each_with_index do |event, i|
+       encoded_event = event.last
+       current_event_size = encoded_event.bytesize
+       # If this single log is bigger than the request size, truncate it
+       if current_event_size > max_request_size
+         encoded_event = truncate(encoded_event, max_request_size)
+         current_event_size = encoded_event.bytesize
+       end
+
+       if (i > 0 and i % max_batch_length == 0) or (current_batch_size + current_event_size > max_request_size)
+         batches << current_batch
+         current_batch = []
+         current_batch_size = 0
+       end
+
+       current_batch_size += encoded_event.bytesize
+       current_batch << encoded_event
+     end
+     batches << current_batch
+     batches
+   end
+
+   # Truncate events over the provided max length, appending a marker when truncated
+   def truncate(event, max_length)
+     if event.length > max_length
+       event = event[0..max_length - 1]
+       event[max(0, max_length - DD_TRUNCATION_SUFFIX.length)..max_length - 1] = DD_TRUNCATION_SUFFIX
+       return event
+     end
+     event
+   end
+
+   def max(a, b)
+     a > b ? a : b
+   end
+
+   # Compress logs with GZIP
+   def gzip_compress(payload, compression_level)
+     gz = StringIO.new
+     gz.set_encoding("BINARY")
+     z = Zlib::GzipWriter.new(gz, compression_level)
+     begin
+       z.write(payload)
+     ensure
+       z.close
+     end
+     gz.string
+   end
+
+   # Build a new transport client
+   def new_client(logger, api_key, use_http, use_ssl, no_ssl_validation, host, port, use_compression)
+     if use_http
+       DatadogHTTPClient.new logger, use_ssl, no_ssl_validation, host, port, use_compression, api_key
+     else
+       DatadogTCPClient.new logger, use_ssl, no_ssl_validation, host, port
+     end
+   end
+
+   class RetryableError < StandardError
+   end
+
+   class DatadogClient
+     def send_retries(payload, max_retries, max_backoff)
        backoff = 1
+       retries = 0
        begin
-         client ||= new_client
-         client.write(message)
-       rescue => e
-         @logger.warn("Could not send payload", :exception => e, :backtrace => e.backtrace)
-         client.close rescue nil
-         client = nil
+         send(payload)
+       rescue RetryableError => e
          if retries < max_retries || max_retries < 0
+           @logger.warn("Retrying", :exception => e, :backtrace => e.backtrace)
            sleep backoff
            backoff = 2 * backoff unless backoff > max_backoff
            retries += 1
            retry
          end
-       @logger.warn("Max number of retries reached, dropping the payload", :payload => payload, :max_retries => max_retries)
        end
      end
+
+     def send(payload)
+       raise NotImplementedError, "Datadog transport client should implement the send method"
+     end
+
+     def close
+       raise NotImplementedError, "Datadog transport client should implement the close method"
+     end
    end
 
-   public
-   def receive(event)
-     # handle new event
-     @codec.encode(event)
+   class DatadogHTTPClient < DatadogClient
+     require "manticore"
+
+     def initialize(logger, use_ssl, no_ssl_validation, host, port, use_compression, api_key)
+       @logger = logger
+       protocol = use_ssl ? "https" : "http"
+       @url = "#{protocol}://#{host}:#{port.to_s}/v1/input/#{api_key}"
+       @headers = {"Content-Type" => "application/json"}
+       if use_compression
+         @headers["Content-Encoding"] = "gzip"
+       end
+       logger.info("Starting HTTP connection to #{protocol}://#{host}:#{port.to_s} with compression " + (use_compression ? "enabled" : "disabled"))
+       config = {}
+       config[:ssl][:verify] = :disable if no_ssl_validation
+       @client = Manticore::Client.new(config)
+     end
+
+     def send(payload)
+       response = @client.post(@url, :body => payload, :headers => @headers).call
+       if response.code >= 500
+         raise RetryableError.new "Unable to send payload: #{response.code} #{response.body}"
+       end
+       if response.code >= 400
+         @logger.error("Unable to send payload due to client error: #{response.code} #{response.body}")
+       end
+     end
+
+     def close
+       @client.close
+     end
    end
 
-   private
-   def new_client
-     # open a secure connection with Datadog
-     if @use_ssl
-       @logger.info("Starting SSL connection", :host => @host, :port => @port)
-       socket = TCPSocket.new @host, @port
-       sslSocket = OpenSSL::SSL::SSLSocket.new socket
-       sslSocket.connect
-       return sslSocket
-     else
-       @logger.info("Starting plaintext connection", :host => @host, :port => @port)
-       return TCPSocket.new @host, @port
+   class DatadogTCPClient < DatadogClient
+     require "socket"
+
+     def initialize(logger, use_ssl, no_ssl_validation, host, port)
+       @logger = logger
+       @use_ssl = use_ssl
+       @no_ssl_validation = no_ssl_validation
+       @host = host
+       @port = port
+     end
+
+     def connect
+       if @use_ssl
+         @logger.info("Starting SSL connection #{@host} #{@port}")
+         socket = TCPSocket.new @host, @port
+         ssl_context = OpenSSL::SSL::SSLContext.new
+         if @no_ssl_validation
+           ssl_context.set_params({:verify_mode => OpenSSL::SSL::VERIFY_NONE})
+         end
+         ssl_socket = OpenSSL::SSL::SSLSocket.new socket, ssl_context
+         ssl_socket.connect
+         ssl_socket
+       else
+         @logger.info("Starting plaintext connection #{@host} #{@port}")
+         TCPSocket.new @host, @port
+       end
+     end
+
+     def send(payload)
+       begin
+         @socket ||= connect
+         @socket.puts(payload)
+       rescue => e
+         @socket.close rescue nil
+         @socket = nil
+         raise RetryableError.new "Unable to send payload: #{e.message}."
+       end
+     end
+
+     def close
+       @socket.close rescue nil
      end
    end
 
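The retry loop in `send_retries` above doubles its wait after each failed attempt until the delay exceeds `max_backoff`, after which it stops growing. A standalone sketch of that schedule (the `backoff_schedule` helper is my illustration, not part of the plugin):

```ruby
# Sketch of the doubling backoff used by send_retries: the wait starts
# at 1s and doubles between retries, but stops doubling once it has
# exceeded max_backoff.
def backoff_schedule(max_retries, max_backoff)
  backoff = 1
  schedule = []
  max_retries.times do
    schedule << backoff
    backoff = 2 * backoff unless backoff > max_backoff
  end
  schedule
end

puts backoff_schedule(5, 30).inspect # => [1, 2, 4, 8, 16]
```

Note that the guard is `unless backoff > max_backoff`, so with the defaults (`max_retries` 5, `max_backoff` 30) the delay can reach 32s once before plateauing on longer retry runs.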
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
    s.name = 'logstash-output-datadog_logs'
-   s.version = '0.3.1'
+   s.version = '0.4.0'
    s.licenses = ['Apache-2.0']
    s.summary = 'DatadogLogs lets you send logs to Datadog based on LogStash events.'
    s.homepage = 'https://www.datadoghq.com/'
@@ -9,14 +9,17 @@ Gem::Specification.new do |s|
    s.require_paths = ['lib']
 
    # Files
-   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
-   # Tests
+   s.files = Dir['lib/**/*', 'spec/**/*', 'vendor/**/*', '*.gemspec', '*.md', 'CONTRIBUTORS', 'Gemfile', 'LICENSE', 'NOTICE.TXT']
+   # Tests
    s.test_files = s.files.grep(%r{^(test|spec|features)/})
 
    # Special flag to let us know this is actually a logstash plugin
-   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
+   s.metadata = {"logstash_plugin" => "true", "logstash_group" => "output"}
 
    # Gem dependencies
    s.add_runtime_dependency "logstash-core-plugin-api", "~> 2.0"
+   s.add_runtime_dependency 'manticore', '>= 0.5.2', '< 1.0.0'
+   s.add_runtime_dependency 'logstash-codec-json'
+
    s.add_development_dependency 'logstash-devutils'
  end
@@ -4,3 +4,101 @@
  # Copyright 2017 Datadog, Inc.
 
  require "logstash/devutils/rspec/spec_helper"
+ require "logstash/outputs/datadog_logs"
+
+ describe LogStash::Outputs::DatadogLogs do
+   context "should register" do
+     it "with an api key" do
+       plugin = LogStash::Plugin.lookup("output", "datadog_logs").new({"api_key" => "xxx"})
+       expect { plugin.register }.to_not raise_error
+     end
+
+     it "without an api key" do
+       expect { LogStash::Plugin.lookup("output", "datadog_logs").new() }.to raise_error(LogStash::ConfigurationError)
+     end
+   end
+
+   subject do
+     plugin = LogStash::Plugin.lookup("output", "datadog_logs").new({"api_key" => "xxx"})
+     plugin.register
+     plugin
+   end
+
+   context "when truncating" do
+     it "should truncate messages of the given length" do
+       input = "foobarfoobarfoobarfoobar"
+       expect(subject.truncate(input, 15).length).to eq(15)
+     end
+
+     it "should replace the end of the message with a marker when truncated" do
+       input = "foobarfoobarfoobarfoobar"
+       expect(subject.truncate(input, 15)).to end_with("...TRUNCATED...")
+     end
+
+     it "should return the marker if the message length is smaller than the marker length" do
+       input = "foobar"
+       expect(subject.truncate(input, 1)).to eq("...TRUNCATED...")
+     end
+
+     it "should do nothing if the input length is smaller than the given length" do
+       input = "foobar"
+       expect(subject.truncate(input, 15)).to eq("foobar")
+     end
+   end
+
+   context "when using HTTP" do
+     it "should respect the batch length and create one batch of one event" do
+       input_events = [[LogStash::Event.new({"message" => "dd"}), "dd"]]
+       expect(subject.batch_http_events(input_events, 1, 1000).length).to eq(1)
+     end
+
+     it "should respect the batch length and create two batches of one event" do
+       input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+       actual_events = subject.batch_http_events(input_events, 1, 1000)
+       expect(actual_events.length).to eq(2)
+       expect(actual_events[0][0]).to eq("dd1")
+       expect(actual_events[1][0]).to eq("dd2")
+     end
+
+     it "should respect the request size and create two batches of one event" do
+       input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+       actual_events = subject.batch_http_events(input_events, 10, 3)
+       expect(actual_events.length).to eq(2)
+       expect(actual_events[0][0]).to eq("dd1")
+       expect(actual_events[1][0]).to eq("dd2")
+     end
+
+     it "should respect the request size and create two batches of two events" do
+       input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "dd2"}), "dd2"], [LogStash::Event.new({"message" => "dd3"}), "dd3"], [LogStash::Event.new({"message" => "dd4"}), "dd4"]]
+       actual_events = subject.batch_http_events(input_events, 6, 6)
+       expect(actual_events.length).to eq(2)
+       expect(actual_events[0][0]).to eq("dd1")
+       expect(actual_events[0][1]).to eq("dd2")
+       expect(actual_events[1][0]).to eq("dd3")
+       expect(actual_events[1][1]).to eq("dd4")
+     end
+
+     it "should truncate events whose length is bigger than the max request size" do
+       input_events = [[LogStash::Event.new({"message" => "dd1"}), "dd1"], [LogStash::Event.new({"message" => "foobarfoobarfoobar"}), "foobarfoobarfoobar"], [LogStash::Event.new({"message" => "dd2"}), "dd2"]]
+       actual_events = subject.batch_http_events(input_events, 10, 3)
+       expect(actual_events.length).to eq(3)
+       expect(actual_events[0][0]).to eq("dd1")
+       expect(actual_events[1][0]).to eq("...TRUNCATED...")
+       expect(actual_events[2][0]).to eq("dd2")
+     end
+   end
+
+   context "when using TCP" do
+     it "should re-encode events" do
+       input_event = "{message=dd}"
+       encoded_event = subject.format_tcp_event(input_event, "xxx", 1000)
+       expect(encoded_event).to eq("xxx " + input_event)
+     end
+
+     it "should truncate too long messages" do
+       input_event = "{message=foobarfoobarfoobar}"
+       encoded_event = subject.format_tcp_event(input_event, "xxx", 20)
+       expect(encoded_event).to eq("xxx {...TRUNCATED...")
+     end
+   end
+ end
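The batching rules these specs exercise can be reproduced standalone. This sketch is a simplified rewrite of `batch_http_events` (the `batch` helper name is mine; the truncation step is omitted): a new batch starts whenever the event count hits `max_batch_length` or adding the next event would push the batch past `max_request_size` bytes.

```ruby
# Simplified sketch of the plugin's HTTP batching rule: split on the
# event-count limit or on the byte-size limit, whichever is hit first.
def batch(events, max_batch_length, max_request_size)
  batches = []
  current = []
  current_size = 0
  events.each_with_index do |event, i|
    if (i > 0 && i % max_batch_length == 0) || (current_size + event.bytesize > max_request_size)
      batches << current
      current = []
      current_size = 0
    end
    current_size += event.bytesize
    current << event
  end
  batches << current
end

puts batch(["dd1", "dd2"], 10, 3).inspect # => [["dd1"], ["dd2"]]
```

With a 3-byte size limit, the two 3-byte events above land in separate batches, matching the "two batches of one event" spec.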
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: logstash-output-datadog_logs
  version: !ruby/object:Gem::Version
-   version: 0.3.1
+   version: 0.4.0
  platform: ruby
  authors:
  - Datadog
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-01-15 00:00:00.000000000 Z
+ date: 2020-02-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
@@ -25,6 +25,40 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '2.0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.5.2
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 1.0.0
+   name: manticore
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.5.2
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 1.0.0
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-codec-json
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
@@ -76,7 +110,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
          version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.13
+ rubygems_version: 2.7.10
  signing_key:
  specification_version: 4
  summary: DatadogLogs lets you send logs to Datadog based on LogStash events.