logstash-logger 0.15.2 → 0.16.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: df9a6bdcc6c820482f78f45ca8df59d79e710326
- data.tar.gz: 3970d44262a720e4e4a7ebae6d26429d437f8f7b
+ metadata.gz: c7f6fe8282858937026c5c9747835e68bd16390f
+ data.tar.gz: 8de048a5dab4c3a46e1f74fd7b192cc7e82ac58f
  SHA512:
- metadata.gz: 656fab36ad7b8165f2f3266250f1ed5e9c804c2e0de86e5d742b70eff3a4340022968e9cd94c6aa3fb632674e581b0e59464f8f2b0785824e9d32ab2b717b4af
- data.tar.gz: f0831f1e72cd7e1c604fcdf8519cf5cf6f31f60ad57fab6c69adbe5e14fe307eb315a60172c08cdfb7dd2bb7390bcd7d080d7c127b31ac944a2638136ff6b609
+ metadata.gz: c6e11150819e39e4ffb061160e332570a8d461974b88d6a933b6a9370c26bccde71a4f53942ddfc29b12e40f2f952df659c291a682dabd7a1a9f944430e02d37
+ data.tar.gz: 62077f3594cf26f2e92af9576bbd392e6059f7c141f5acc7d61e6ea850491f37eaaf642138d343c7b9d22436f8608ca3b9b3163941e9db0b7ce562dc4bae918a
data/.gitignore CHANGED
@@ -18,3 +18,4 @@ test/version_tmp
  tmp
  .idea/
  log/
+ .ruby-version
@@ -2,16 +2,34 @@ language: ruby
  rvm:
  - 1.9.3
  - 2.0.0
- - 2.1
- - 2.2
+ - 2.1.9
+ - 2.2.5
+ - 2.3.1
  - jruby-19mode
- - rbx-2
+ - jruby-9.0.5.0
+ - rbx-3.9
  gemfile:
  - gemfiles/rails_3.2.gemfile
  - gemfiles/rails_4.0.gemfile
  - gemfiles/rails_4.1.gemfile
  - gemfiles/rails_4.2.gemfile
+ - gemfiles/rails_5.0.gemfile
  matrix:
+ exclude:
+ - rvm: 1.9.3
+ gemfile: gemfiles/rails_5.0.gemfile
+ - rvm: jruby-19mode
+ gemfile: gemfiles/rails_5.0.gemfile
+ - rvm: 2.0.0
+ gemfile: gemfiles/rails_5.0.gemfile
+ - rvm: 2.1.9
+ gemfile: gemfiles/rails_5.0.gemfile
+ - rvm: 2.2.5
+ gemfile: gemfiles/rails_3.2.gemfile
+ - rvm: 2.3.1
+ gemfile: gemfiles/rails_3.2.gemfile
  allow_failures:
- - rbx-2
+ - rvm: rbx-3.9
  sudo: false
+ cache: bundler
+
data/Appraisals CHANGED
@@ -13,3 +13,7 @@ end
  appraise "rails-4.2" do
  gem "rails", "~> 4.2.0"
  end
+
+ appraise "rails-5.0" do
+ gem "rails", "~> 5.0.0"
+ end
@@ -1,3 +1,16 @@
+ ## 0.16.0
+
+ This release focuses on improving the reliability of LogStashLogger.
+ Connectable log devices are now less likely to block or raise an exception,
+ so a logging problem is less likely to impact the operation of the program.
+ A connection leak was also fixed.
+
+ - Allow log messages to be dropped. [#81](https://github.com/dwbutler/logstash-logger/pull/81)
+ - Provide `max_message_size` option. [#80](https://github.com/dwbutler/logstash-logger/pull/80)
+ - Unify error logging. [#82](https://github.com/dwbutler/logstash-logger/pull/82)
+ - Drop message when there is an unrecoverable error. [#83](https://github.com/dwbutler/logstash-logger/pull/83)
+ - Safely close connection when reconnecting. [#84](https://github.com/dwbutler/logstash-logger/pull/84)
+
  ## 0.15.2
  - Fixes Windows support. [#64](https://github.com/dwbutler/logstash-logger/issues/64)
 
data/README.md CHANGED
@@ -162,6 +162,9 @@ input {
  }
  ```
 
+ File and Redis inputs should use the `json` codec instead. For more information,
+ read the [Logstash docs](https://www.elastic.co/guide/en/logstash/current/plugins-codecs-json_lines.html).
+
  See the [samples](https://github.com/dwbutler/logstash-logger/tree/master/samples) directory for more configuration samples.
 
  ## SSL
@@ -237,16 +240,20 @@ This configuration would result in the following output.
 
  ## Buffering / Automatic Retries
 
- Log messages are buffered internally, and automatically re-sent if there is a connection problem.
+ For devices that establish a connection to a remote service, log messages are buffered internally
+ and automatically re-sent if there is a connection problem.
  Outputs that support batch writing (Redis and Kafka) will write log messages in bulk from the
  buffer. This functionality is implemented using
  [Stud::Buffer](https://github.com/jordansissel/ruby-stud/blob/master/lib/stud/buffer.rb).
  You can configure its behavior by passing the following options to LogStashLogger:
 
- :buffer_max_items - Max number of items to buffer before flushing. Defaults to 50.
- :buffer_max_interval - Max number of seconds to wait between flushes. Defaults to 5.
+ * :buffer_max_items - Max number of items to buffer before flushing. Defaults to 50.
+ * :buffer_max_interval - Max number of seconds to wait between flushes. Defaults to 5.
+ * :drop_messages_on_flush_error - Drop messages when there is a flush error. Defaults to false.
+ * :drop_messages_on_full_buffer - Drop messages when the buffer is full.
+ Defaults to true.
 
- You can turn this behavior off by setting `buffer_max_items` to `1` or `sync` to `true`.
+ You can turn buffering off by setting `buffer_max_items` to `1` or `sync` to `true`.
 
  Please be aware of the following caveats to this behavior:
 
@@ -259,19 +266,29 @@ Please be aware of the following caveats to this behavior:
  immediately. In my testing, it took Ruby about 4 seconds to notice the receiving end was down
  and start raising exceptions. Since logstash listeners over TCP/UDP do not acknowledge received
  messages, it's not possible to know which log messages to re-send.
- * If your output source is unavailable long enough, writing to the log will block until it is
- available again. This could make your application unresponsive.
+ * If your output source is unavailable long enough, the buffer will fill up
+ and messages will be dropped.
  * If your application suddenly terminates (for example, by SIGKILL or a power outage), the whole
  buffer will be lost.
 
- You can make message loss and application blockage less likely by increasing `buffer_max_items`
+ By default, messages are discarded when the buffer gets full. This can happen
+ if the output source is down for too long.
+ You can make message loss less likely by increasing `buffer_max_items`
  (so that more events can be held in the buffer), and increasing `buffer_max_interval` (to wait
  longer between flushes). This will increase memory pressure on your application as log messages
  accumulate in the buffer, so make sure you have allocated enough memory to your process.
 
+ ## Error handling
+
+ If an exception occurs while writing a message to the device, the exception is
+ logged using an internal logger. By default, this logs to $stderr. You can
+ change the error logger by setting `LogStashLogger.configuration.default_error_logger`, or by passing
+ your own logger object in the `:error_logger` configuration key when
+ instantiating a LogStashLogger.
+
  ## Rails Integration
 
- Verified to work with both Rails 3 and 4.
+ Verified to work with Rails 3, 4, and 5.
 
  By default, every Rails log message will be written to logstash in `LogStash::Event` JSON format.
 
@@ -304,11 +321,20 @@ config.logstash.uri = ENV['LOGSTASH_URI']
  # they will all share the same formatter.
  config.logstash.formatter = :json_lines
 
+ # Optional, the logger to log writing errors to. Defaults to logging to $stderr
+ config.logstash.error_logger = Logger.new($stderr)
+
  # Optional, max number of items to buffer before flushing. Defaults to 50
  config.logstash.buffer_max_items = 50
 
  # Optional, max number of seconds to wait between flushes. Defaults to 5
  config.logstash.buffer_max_interval = 5
+
+ # Optional, drop message when a connection error occurs. Defaults to false
+ config.logstash.drop_messages_on_flush_error = false
+
+ # Optional, drop messages when the buffer is full. Defaults to true
+ config.logstash.drop_messages_on_full_buffer = true
  ```
 
  #### UDP
@@ -510,9 +536,9 @@ end
 
  Verified to work with:
 
- * MRI Ruby 1.9.3, 2.0.x, 2.1.x, 2.2.x
- * JRuby 1.7+
- * Rubinius 2.2+
+ * MRI Ruby 1.9.3, 2.0, 2.1, 2.2, 2.3
+ * JRuby 1.7, 9.0
+ * Rubinius
 
  Ruby 1.8.7 is not supported.
 
@@ -556,6 +582,16 @@ If you're using UDP output and writing to a logstash listener, you are most like
  of the logstash listener. There is no known fix at this time. See [#43](https://github.com/dwbutler/logstash-logger/issues/43)
  for more information.
 
+ ### Errno::EMSGSIZE - Message too long
+ A known drawback of using UDP is its limit on total message size. To work around
+ this issue, you will have to truncate the message by setting the max message size:
+
+ ```ruby
+ LogStashLogger.configure do |config|
+ config.max_message_size = 2000
+ end
+ ```
+
 
  ## Breaking changes
 
  ### Version 0.5+
@@ -594,6 +630,7 @@ logger = LogStashLogger.new('localhost', 5228, :tcp)
  * [Nikita Vorobei](https://github.com/Nikita-V)
  * [fireboy1919](https://github.com/fireboy1919)
  * [Mike Gunderloy](https://github.com/ffmike)
+ * [Vitaly Gorodetsky](https://github.com/vitalis)
 
  ## Contributing
 
@@ -0,0 +1,8 @@
+ # This file was generated by Appraisal
+
+ source "https://rubygems.org"
+
+ gem "codecov", :require => false, :group => :test
+ gem "rails", "~> 5.0.0"
+
+ gemspec :path => "../"
@@ -10,9 +10,12 @@ module LogStashLogger
 
  class Configuration
  attr_accessor :customize_event_block
+ attr_accessor :max_message_size
+ attr_accessor :default_error_logger
 
  def initialize(*args)
  @customize_event_block = nil
+ @default_error_logger = Logger.new($stderr)
 
  yield self if block_given?
  self
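
The two new accessors above are set through the gem's existing `configure` block. A minimal editorial sketch, not part of the diff (the size and log path are arbitrary):

```ruby
require 'logstash-logger'
require 'logger'

LogStashLogger.configure do |config|
  # Truncate each event's message field to 2000 bytes
  # (see the Errno::EMSGSIZE section of the README above).
  config.max_message_size = 2000

  # Send device write errors somewhere other than the default $stderr logger.
  config.default_error_logger = Logger.new('log/logstash_errors.log')
end
```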
@@ -6,7 +6,7 @@ module LogStashLogger
  def initialize(opts)
  @io = self
  @devices = create_devices(opts[:outputs])
- self.class.delegate_to_all(:close, :flush)
+ self.class.delegate_to_all(:close, :close!, :flush)
  self.class.delegate_to_one(:write)
  end
 
@@ -3,9 +3,11 @@ module LogStashLogger
  class Base
  attr_reader :io
  attr_accessor :sync
+ attr_accessor :error_logger
 
  def initialize(opts={})
  @sync = opts[:sync]
+ @error_logger = opts.fetch(:error_logger, LogStashLogger.configuration.default_error_logger)
  end
 
  def to_io
@@ -13,20 +15,61 @@ module LogStashLogger
  end
 
  def write(message)
+ write_one(message)
+ end
+
+ def write_one(message)
  @io.write(message)
+ rescue => e
+ if unrecoverable_error?(e)
+ log_error(e)
+ log_warning("unrecoverable error, aborting write")
+ else
+ raise
+ end
+ end
+
+ def write_batch(messages, group = nil)
+ messages.each do |message|
+ write_one(message)
+ end
  end
 
  def flush
  @io && @io.flush
  end
 
- def close
- @io && @io.close
+ def close(opts = {})
+ close!
  rescue => e
- warn "#{self.class} - #{e.class} - #{e.message}"
+ log_error(e)
+ end
+
+ def close!
+ @io && @io.close
  ensure
  @io = nil
  end
+
+ def unrecoverable_error?(e)
+ e.is_a?(JSON::GeneratorError)
+ end
+
+ private
+
+ def log_error(e)
+ error_logger.error "[#{self.class}] #{e.class} - #{e.message}"
+ end
+
+ def log_warning(message)
+ error_logger.warn("[#{self.class}] #{message}")
+ end
  end
  end
  end
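
The `Base` changes above make `write_one`, `write_batch`, `close!`, and `unrecoverable_error?` the hooks that the concrete devices below build on. A hedged sketch of a custom device written against this API (the class name and the extra exception class are illustrative, not from the gem):

```ruby
require 'logstash-logger'

# Hypothetical device that discards output. It treats encoding conversion
# failures as unrecoverable, so write_one logs them via error_logger and drops
# the message instead of re-raising (the UDP device below does the same for
# Errno::EMSGSIZE).
class NullDevice < LogStashLogger::Device::Base
  def initialize(opts = {})
    super
    @io = File.open(File::NULL, 'w')
  end

  def unrecoverable_error?(e)
    e.is_a?(Encoding::UndefinedConversionError) || super
  end
end
```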
@@ -16,33 +16,69 @@ module LogStashLogger
  warn "The :batch_timeout option is deprecated. Please use :buffer_max_interval instead"
  end
 
+ @buffer_group = nil
  @buffer_max_items = opts[:batch_events] || opts[:buffer_max_items]
  @buffer_max_interval = opts[:batch_timeout] || opts[:buffer_max_interval]
+ @drop_messages_on_flush_error =
+ if opts.key?(:drop_messages_on_flush_error)
+ opts.delete(:drop_messages_on_flush_error)
+ else
+ false
+ end
 
- buffer_initialize max_items: @buffer_max_items, max_interval: @buffer_max_interval
+ @drop_messages_on_full_buffer =
+ if opts.key?(:drop_messages_on_full_buffer)
+ opts.delete(:drop_messages_on_full_buffer)
+ else
+ true
+ end
+
+ reset_buffer
  end
 
  def write(message)
- buffer_receive message
+ buffer_receive message, @buffer_group
  buffer_flush(force: true) if @sync
+ rescue
  end
 
  def flush(*args)
  if args.empty?
  buffer_flush
  else
- write_batch(args[0])
+ messages, group = *args
+ write_batch(messages, group)
  end
+ rescue
+ if @drop_messages_on_flush_error
+ reset_buffer
+ else
+ cancel_flush
+ end
+ raise
  end
 
- def close
- buffer_flush(final: true)
+ def on_flush_error(e)
+ raise e
+ end
+
+ def on_full_buffer_receive(args)
+ if @drop_messages_on_full_buffer
+ reset_buffer
+ end
+ end
+
+ def close(opts = {})
+ if opts.fetch(:flush, true)
+ buffer_flush(final: true)
+ end
+
  super
  end
 
  def to_io
  with_connection do
- @io
+ super
  end
  end
 
@@ -50,11 +86,15 @@ module LogStashLogger
  !!@io
  end
 
- def write_batch(messages)
+ def write_one(message)
  with_connection do
- messages.each do |message|
- @io.write(message)
- end
+ super
+ end
+ end
+
+ def write_batch(messages, group = nil)
+ with_connection do
+ super
  end
  end
 
@@ -64,7 +104,7 @@ module LogStashLogger
  end
 
  def reconnect
- @io = nil
+ close(flush: false)
  connect
  end
 
@@ -73,10 +113,41 @@ module LogStashLogger
  connect unless connected?
  yield
  rescue => e
- warn "#{self.class} - #{e.class} - #{e.message}"
- @io = nil
+ log_error(e)
+ close(flush: false)
  raise
  end
+
+ private
+
+ def reset_buffer
+ buffer_initialize max_items: @buffer_max_items, max_interval: @buffer_max_interval
+ @buffer_state[:timer] = Thread.new do
+ loop do
+ sleep(@buffer_config[:max_interval])
+ begin
+ buffer_flush(:force => true)
+ rescue
+ end
+ end
+ end
+ end
+
+ def buffer_clear_outgoing
+ @buffer_state[:outgoing_items] = Hash.new { |h, k| h[k] = [] }
+ @buffer_state[:outgoing_count] = 0
+ end
+
+ def cancel_flush
+ @buffer_state[:flush_mutex].lock rescue false
+ @buffer_state[:outgoing_items].each do |group, items|
+ @buffer_state[:pending_items][group].concat items
+ end
+ @buffer_state[:pending_count] += @buffer_state[:outgoing_count]
+ buffer_clear_outgoing
+ ensure
+ @buffer_state[:flush_mutex].unlock rescue false
+ end
  end
  end
  end
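
All of the options parsed in the constructor above arrive through `LogStashLogger.new`. A hedged sketch pulling them together (editorial illustration, not part of the diff; the host, port, and values are arbitrary):

```ruby
require 'logstash-logger'

logger = LogStashLogger.new(
  type: :tcp,
  host: 'localhost',
  port: 5228,
  sync: false,                         # true forces a flush on every write
  buffer_max_items: 100,               # flush after 100 buffered events...
  buffer_max_interval: 10,             # ...or after 10 seconds, whichever comes first
  drop_messages_on_flush_error: false, # keep and retry buffered messages when a flush fails
  drop_messages_on_full_buffer: true   # discard new messages once the buffer is full
)

logger.info 'buffered until the next flush'
```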
@@ -20,59 +20,38 @@ module LogStashLogger
  @topic = opts[:path] || DEFAULT_TOPIC
  @producer = opts[:producer] || DEFAULT_PRODUCER
  @backoff = opts[:backoff] || DEFAULT_BACKOFF
+ @buffer_group = @topic
  end
 
  def connect
  @io = ::Poseidon::Producer.new(@hosts, @producer)
  end
 
- def reconnect
- @io.close
- connect
- end
-
  def with_connection
- connect unless @io
+ connect unless connected?
  yield
  rescue ::Poseidon::Errors::ChecksumError, Poseidon::Errors::UnableToFetchMetadata => e
- warn "#{self.class} - #{e.class} -> reconnect/retry"
+ log_error(e)
+ log_warning("reconnect/retry")
  sleep backoff if backoff
  reconnect
  retry
  rescue => e
- warn "#{self.class} - #{e.class} - #{e.message} -> giving up"
- @io = nil
+ log_error(e)
+ log_warning("giving up")
+ close(flush: false)
  end
 
- def write(message)
- buffer_receive Poseidon::MessageToSend.new(@topic, message)
- buffer_flush(force: true) if @sync
- end
-
- def write_batch(messages)
+ def write_batch(messages, topic = nil)
+ topic ||= @topic
  with_connection do
- @io.send_messages messages
+ @io.send_messages messages.map { |message| Poseidon::MessageToSend.new(topic, message) }
  end
  end
 
- def close
- buffer_flush(final: true)
- @io && @io.close
- rescue => e
- warn "#{self.class} - #{e.class} - #{e.message}"
- ensure
- @io = nil
+ def write_one(message, topic = nil)
+ write_batch([message], topic)
  end
-
- def flush(*args)
- if args.empty?
- buffer_flush
- else
- messages = *args.first
- write_batch(messages)
- end
- end
-
  end
  end
  end
@@ -11,7 +11,7 @@ module LogStashLogger
  def initialize(opts)
  @io = self
  @devices = create_devices(opts[:outputs])
- self.class.delegate(:write, :close, :flush)
+ self.class.delegate(:write, :close, :close!, :flush)
  end
 
  private
@@ -10,6 +10,7 @@ module LogStashLogger
  def initialize(opts)
  super
  @list = opts.delete(:list) || DEFAULT_LIST
+ @buffer_group = @list
 
  normalize_path(opts)
 
@@ -22,49 +23,37 @@ module LogStashLogger
 
  def reconnect
  @io.client.reconnect
+ rescue => e
+ log_error(e)
  end
 
  def with_connection
- connect unless @io
+ connect unless connected?
  yield
  rescue ::Redis::InheritedError
  reconnect
  retry
  rescue => e
- warn "#{self.class} - #{e.class} - #{e.message}"
- @io = nil
+ log_error(e)
+ close(flush: false)
  raise
  end
 
- def write(message)
- buffer_receive message, @list
- buffer_flush(force: true) if @sync
- end
-
  def write_batch(messages, list = nil)
+ list ||= @list
  with_connection do
  @io.rpush(list, messages)
  end
  end
 
- def close
- buffer_flush(final: true)
- @io && @io.quit
- rescue => e
- warn "#{self.class} - #{e.class} - #{e.message}"
- ensure
- @io = nil
+ def write_one(message, list = nil)
+ write_batch(message, list)
  end
 
- def flush(*args)
- if args.empty?
- buffer_flush
- else
- messages, list = *args
- write_batch(messages, list)
- end
+ def close!
+ @io && @io.quit
  end
-
+
  private
 
  def normalize_path(opts)
@@ -12,6 +12,10 @@ module LogStashLogger
  @port = opts[:port] || fail(ArgumentError, "Port is required")
  @host = opts[:host] || DEFAULT_HOST
  end
+
+ def unrecoverable_error?(e)
+ e.is_a?(Errno::EMSGSIZE) || super
+ end
  end
  end
  end
@@ -5,7 +5,7 @@ module LogStashLogger
  super({io: $stderr}.merge(opts))
  end
 
- def close
+ def close!
  # no-op
  end
  end
@@ -5,7 +5,7 @@ module LogStashLogger
  super({io: $stdout}.merge(opts))
  end
 
- def close
+ def close!
  # no-op
  # Calling $stdout.close would be a bad idea
  end
@@ -44,6 +44,10 @@ module LogStashLogger
  if event.timestamp.is_a?(Time)
  event.timestamp = event.timestamp.iso8601(3)
  end
+
+ if LogStashLogger.configuration.max_message_size
+ event['message'.freeze] = event['message'.freeze].byteslice(0, LogStashLogger.configuration.max_message_size)
+ end
 
  event
  end
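
One editorial note on the truncation above: `byteslice` counts bytes, not characters, so `max_message_size` is a byte budget. A tiny illustration (values are arbitrary):

```ruby
message = 'a' * 5_000
truncated = message.byteslice(0, 2_000)
truncated.bytesize # => 2000
```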
@@ -1,3 +1,3 @@
  module LogStashLogger
- VERSION = "0.15.2"
+ VERSION = "0.16.0"
  end
@@ -22,13 +22,20 @@ Gem::Specification.new do |gem|
  gem.add_runtime_dependency 'stud'
 
  gem.add_development_dependency 'rails'
+ if RUBY_VERSION < '2'
+ gem.add_development_dependency 'mime-types', '< 3'
+ end
  gem.add_development_dependency 'redis'
  gem.add_development_dependency 'poseidon'
 
- if RUBY_VERSION < '2'
+ if RUBY_VERSION < '2' || defined?(JRUBY_VERSION)
  gem.add_development_dependency 'SyslogLogger'
  end
 
+ if RUBY_VERSION < '2'
+ gem.add_development_dependency 'json', '~> 1.8'
+ end
+
  gem.add_development_dependency 'rspec', '>= 3'
  gem.add_development_dependency 'rake'
  gem.add_development_dependency 'pry'
@@ -1,7 +1,7 @@
  input {
  redis {
  data_type => "list"
- codec => json_lines
+ codec => json
  key => "logstash"
  }
  }
@@ -0,0 +1,74 @@
+ require 'logstash-logger'
+
+ describe LogStashLogger::Device::Connectable do
+ include_context 'device'
+
+ let(:io) { double("IO") }
+
+ subject { udp_device }
+
+ describe "#reconnect" do
+ context "with active IO connection" do
+ before do
+ subject.instance_variable_set(:@io, io)
+ end
+
+ it "closes the connection" do
+ expect(io).to receive(:close).once
+ subject.reconnect
+ end
+ end
+
+ context "with no active IO connection" do
+ before do
+ subject.instance_variable_set(:@io, nil)
+ end
+
+ it "does nothing" do
+ expect(io).to_not receive(:close)
+ subject.reconnect
+ end
+ end
+ end
+
+ describe "#with_connection" do
+ context "on exception" do
+ before do
+ allow(subject).to receive(:connected?) { raise(StandardError) }
+ allow(subject).to receive(:warn)
+ end
+
+ context "with active IO connection" do
+ before do
+ subject.instance_variable_set(:@io, io)
+ end
+
+ it "closes the connection" do
+ expect(io).to receive(:close).once
+
+ expect {
+ subject.with_connection do |connection|
+ connection
+ end
+ }.to raise_error(StandardError)
+ end
+ end
+
+ context "with no active IO connection" do
+ before do
+ subject.instance_variable_set(:@io, nil)
+ end
+
+ it "does nothing" do
+ expect(io).to_not receive(:close)
+
+ expect {
+ subject.with_connection do |connection|
+ connection
+ end
+ }.to raise_error(StandardError)
+ end
+ end
+ end
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-logger
  version: !ruby/object:Gem::Version
- version: 0.15.2
+ version: 0.16.0
  platform: ruby
  authors:
  - David Butler
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-11-06 00:00:00.000000000 Z
+ date: 2016-07-13 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: logstash-event
@@ -170,6 +170,7 @@ files:
  - gemfiles/rails_4.0.gemfile
  - gemfiles/rails_4.1.gemfile
  - gemfiles/rails_4.2.gemfile
+ - gemfiles/rails_5.0.gemfile
  - lib/logstash-logger.rb
  - lib/logstash-logger/configuration.rb
  - lib/logstash-logger/device.rb
@@ -211,6 +212,7 @@ files:
  - samples/unix.conf
  - spec/configuration_spec.rb
  - spec/device/balancer_spec.rb
+ - spec/device/connectable_spec.rb
  - spec/device/file_spec.rb
  - spec/device/io_spec.rb
  - spec/device/kafka_spec.rb
@@ -256,13 +258,14 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.2.2
+ rubygems_version: 2.5.1
  signing_key:
  specification_version: 4
  summary: LogStash Logger for ruby
  test_files:
  - spec/configuration_spec.rb
  - spec/device/balancer_spec.rb
+ - spec/device/connectable_spec.rb
  - spec/device/file_spec.rb
  - spec/device/io_spec.rb
  - spec/device/kafka_spec.rb