logstash-logger 0.17.0 → 0.18.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 1de31545fcd13dd8e1d6a5c3bcd4d69b29e4579a
- data.tar.gz: c45861aa001389601e4ef8f4352879d58d0758fb
+ metadata.gz: 416ec549b79d37b497160c4adda407080ec37350
+ data.tar.gz: 394cdb7be907401581ec546ae9579256253b4fe2
  SHA512:
- metadata.gz: 308d6259229cc7aa957cdf5390ac3f7f244b61f6eb0c181498dce44eec7dad74326efb213561824e98349d120c5f39dbe4ef0a3a32919bea292b622a44643833
- data.tar.gz: 8a746761ac4c20717bb73f21117569c038438433c6fab5200175e6d0e9b71a752e7f7cecf3e44148f7e1962e344948b97485ff6be53548bbc8bf6d3376bb1a8a
+ metadata.gz: 8a1e557351c9d65ef6a471024dfd5e86cabe1d2efac52ae55b02810cfea26dbacac6105e462304edc43b486bba62892c90acd8c08c5edaa6bde797b86a575524
+ data.tar.gz: 0ab2f410b37fb0737afe6f20d70ad1fad7ce3cff71c4a95bda2a1f8b5e4d2555f3eed66a9a68b52471192d406d1d637001dc3b57e236543c6492858f56e460b1
@@ -1,3 +1,13 @@
+ ## 0.18.0
+
+ This release removes the dependency on `stud` and vendors in a forked version
+ of `Stud::Buffer`. This improves the buffering behavior of LogStashLogger by
+ flushing all log messages in a background thread by default. This eliminates
+ blocking behavior and exceptions bubbling up to the main process.
+
+ - Fixes `Attempt to unlock a mutex which is not locked (ThreadError)`.
+ [#88](https://github.com/dwbutler/logstash-logger/issues/88)
+
  ## 0.17.0
  - Support for logger silencing. [#87](https://github.com/dwbutler/logstash-logger/pull/87)
  - Fixes Rails 5 support. [#86](https://github.com/dwbutler/logstash-logger/issues/86)
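In application code this means a plain log call returns without waiting on the network. A minimal sketch of the new default behavior, assuming a TCP device (host and port are placeholders):

```ruby
require 'logstash-logger'

logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)

# Returns immediately: the message sits in the vendored buffer and is delivered
# by its background timer thread (every buffer_max_interval seconds by default).
logger.info 'this write no longer blocks the main process'
```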
data/README.md CHANGED
@@ -241,19 +241,20 @@ This configuration would result in the following output.
  ## Buffering / Automatic Retries
 
  For devices that establish a connection to a remote service, log messages are buffered internally
- and automatically re-sent if there is a connection problem.
+ and flushed in a background thread. If there is a connection problem, the
+ messages are held in the buffer and automatically resent until delivery succeeds.
  Outputs that support batch writing (Redis and Kafka) will write log messages in bulk from the
- buffer. This functionality is implemented using
+ buffer. This functionality is implemented using a fork of
  [Stud::Buffer](https://github.com/jordansissel/ruby-stud/blob/master/lib/stud/buffer.rb).
  You can configure its behavior by passing the following options to LogStashLogger:
 
  * :buffer_max_items - Max number of items to buffer before flushing. Defaults to 50.
  * :buffer_max_interval - Max number of seconds to wait between flushes. Defaults to 5.
  * :drop_messages_on_flush_error - Drop messages when there is a flush error. Defaults to false.
- * :drop_messages_on_full_buffer - Drop messages when the buffer is full.
- Defaults to true.
+ * :drop_messages_on_full_buffer - Drop messages when the buffer is full. Defaults to true.
+ * :sync - Flush buffer every time a message is received (blocking). Defaults to false.
 
- You can turn buffering off by setting `buffer_max_items` to `1` or `sync` to `true`.
+ You can turn buffering off by setting `sync = true`.
 
  Please be aware of the following caveats to this behavior:
 
@@ -266,19 +267,39 @@ Please be aware of the following caveats to this behavior:
  immediately. In my testing, it took Ruby about 4 seconds to notice the receiving end was down
  and start raising exceptions. Since logstash listeners over TCP/UDP do not acknowledge received
  messages, it's not possible to know which log messages to re-send.
- * If your output source is unavailable long enough, the buffer will fill up
- and messages will be dropped.
- * If your application suddenly terminates (for example, by SIGKILL or a power outage), the whole
- buffer will be lost.
+
+ ## Full Buffer
 
  By default, messages are discarded when the buffer gets full. This can happen
- if the output source is down for too long.
+ if the output source is down for too long or log messages are being received
+ too quickly. If your application suddenly terminates (for example, by SIGKILL or a power outage),
+ the whole buffer will be lost.
+
  You can make message loss less likely by increasing `buffer_max_items`
- (so that more events can be held in the buffer), and increasing `buffer_max_interval` (to wait
- longer between flushes). This will increase memory pressure on your application as log messages
+ (so that more events can be held in the buffer), and decreasing `buffer_max_interval` (to wait
+ less time between flushes). This will increase memory pressure on your application as log messages
  accumulate in the buffer, so make sure you have allocated enough memory to your process.
 
- ## Error handling
+ If you don't want to lose messages when the buffer gets full, you can set
+ `drop_messages_on_full_buffer = false`. Note that if the buffer gets full, any
+ incoming log message will block, which could be undesirable.
+
+ ## Sync Mode
+
+ All logger outputs support a `sync` setting. This is analogous to the "sync mode" setting on Ruby IO
+ objects. When set to `true`, output is immediately flushed and is not buffered internally. Normally,
+ for devices that connect to a remote service, buffering is a good thing because
+ it improves performance and reduces the likelihood of errors affecting the program. For these devices,
+ `sync` defaults to `false`, and it is recommended to leave the default value.
+ You may want to turn sync mode on for testing, for example if you want to see
+ log messages immediately after they are written.
+
+ It is recommended to turn sync mode on for file and Unix socket outputs. This
+ ensures that log messages from different threads or processes are written correctly on separate lines.
+
+ See [#44](https://github.com/dwbutler/logstash-logger/issues/44) for more details.
+
+ ## Error Handling
 
  If an exception occurs while writing a message to the device, the exception is
  logged using an internal logger. By default, this logs to $stderr. You can
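Taken together, the buffering options and the new Sync Mode section translate into constructor arguments on `LogStashLogger`. A minimal sketch, assuming TCP and file devices (hosts, ports, paths, and the chosen values are placeholders):

```ruby
require 'logstash-logger'

# Buffered TCP device: tune the vendored buffer via the options documented above.
tcp_logger = LogStashLogger.new(
  type: :tcp,
  host: 'logstash.example.com',
  port: 5228,
  buffer_max_items: 100,                # max events held before a flush
  buffer_max_interval: 2,               # max seconds between flushes
  drop_messages_on_flush_error: false,  # keep and retry messages if a flush fails
  drop_messages_on_full_buffer: false   # block writers instead of shedding messages
)

# File device in sync mode, as recommended for file and Unix socket outputs:
# every message is flushed as soon as it is written.
file_logger = LogStashLogger.new(type: :file, path: 'log/production.log', sync: true)

tcp_logger.info 'buffered, flushed in the background'
file_logger.info 'written and flushed immediately'
```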
lib/logstash-logger/buffer.rb ADDED
@@ -0,0 +1,302 @@
+ # Forked from https://github.com/jordansissel/ruby-stud/blob/master/lib/stud/buffer.rb
+
+ module LogStashLogger
+
+   # @author {Alex Dean}[http://github.com/alexdean]
+   #
+   # Implements a generic framework for accepting events which are later flushed
+   # in batches. Flushing occurs whenever +:max_items+ or +:max_interval+ (seconds)
+   # has been reached.
+   #
+   # Including class must implement +flush+, which will be called with all
+   # accumulated items either when the output buffer fills (+:max_items+) or
+   # when a fixed amount of time (+:max_interval+) passes.
+   #
+   # == batch_receive and flush
+   # General receive/flush can be implemented in one of two ways.
+   #
+   # === batch_receive(event) / flush(events)
+   # +flush+ will receive an array of events which were passed to +buffer_receive+.
+   #
+   #   batch_receive('one')
+   #   batch_receive('two')
+   #
+   # will cause a flush invocation like
+   #
+   #   flush(['one', 'two'])
+   #
+   # === batch_receive(event, group) / flush(events, group)
+   # flush() will receive an array of events, plus a grouping key.
+   #
+   #   batch_receive('one', :server => 'a')
+   #   batch_receive('two', :server => 'b')
+   #   batch_receive('three', :server => 'a')
+   #   batch_receive('four', :server => 'b')
+   #
+   # will result in the following flush calls
+   #
+   #   flush(['one', 'three'], {:server => 'a'})
+   #   flush(['two', 'four'], {:server => 'b'})
+   #
+   # Grouping keys can be anything which are valid Hash keys. (They don't have to
+   # be hashes themselves.) Strings or Fixnums work fine. Use anything which you'd
+   # like to receive in your +flush+ method to help enable different handling for
+   # various groups of events.
+   #
+   # == on_flush_error
+   # Including class may implement +on_flush_error+, which will be called with an
+   # Exception instance whenever buffer_flush encounters an error.
+   #
+   # * +buffer_flush+ will automatically re-try failed flushes, so +on_flush_error+
+   #   should not try to implement retry behavior.
+   # * Exceptions occurring within +on_flush_error+ are not handled by
+   #   +buffer_flush+.
+   #
+   # == on_full_buffer_receive
+   # Including class may implement +on_full_buffer_receive+, which will be called
+   # whenever +buffer_receive+ is called while the buffer is full.
+   #
+   # +on_full_buffer_receive+ will receive a Hash like <code>{:pending => 30,
+   # :outgoing => 20}</code> which describes the internal state of the module at
+   # the moment.
+   #
+   # == final flush
+   # Including class should call <code>buffer_flush(:final => true)</code>
+   # during a teardown/shutdown routine (after the last call to buffer_receive)
+   # to ensure that all accumulated messages are flushed.
+   module Buffer
+
+     public
+     # Initialize the buffer.
+     #
+     # Call directly from your constructor if you wish to set some non-default
+     # options. Otherwise buffer_initialize will be called automatically during the
+     # first buffer_receive call.
+     #
+     # Options:
+     # * :max_items, Max number of items to buffer before flushing. Default 50.
+     # * :max_interval, Max number of seconds to wait between flushes. Default 5.
+     # * :logger, A logger to write log messages to. No default. Optional.
+     # * :autoflush, Whether to immediately flush all inbound messages. Default true.
+     # * :drop_messages_on_flush_error, Whether to drop messages when there is a flush error. Default false.
+     # * :drop_messages_on_full_buffer, Whether to drop messages when the buffer is full. Default false.
+     #
+     # @param [Hash] options
+     def buffer_initialize(options={})
+       if ! self.class.method_defined?(:flush)
+         raise ArgumentError, "Any class including Stud::Buffer must define a flush() method."
+       end
+
+       @buffer_config = {
+         :max_items => options[:max_items] || 50,
+         :max_interval => options[:max_interval] || 5,
+         :logger => options[:logger] || nil,
+         :autoflush => options.fetch(:autoflush, true),
+         :has_on_flush_error => self.class.method_defined?(:on_flush_error),
+         :has_on_full_buffer_receive => self.class.method_defined?(:on_full_buffer_receive),
+         :drop_messages_on_flush_error => options.fetch(:drop_messages_on_flush_error, false),
+         :drop_messages_on_full_buffer => options.fetch(:drop_messages_on_full_buffer, false)
+       }
+
+       reset_buffer
+     end
+
+     def reset_buffer
+       @buffer_state = {
+         # items accepted from including class
+         :pending_items => {},
+         :pending_count => 0,
+
+         # guard access to pending_items & pending_count
+         :pending_mutex => Mutex.new,
+
+         # items which are currently being flushed
+         :outgoing_items => {},
+         :outgoing_count => 0,
+
+         # ensure only 1 flush is operating at once
+         :flush_mutex => Mutex.new,
+
+         # data for timed flushes
+         :last_flush => Time.now,
+         :timer => Thread.new do
+           loop do
+             sleep(@buffer_config[:max_interval])
+             begin
+               buffer_flush(:force => true)
+             rescue
+             end
+           end
+         end
+       }
+
+       # events we've accumulated
+       buffer_clear_pending
+     end
+
+     # Determine if +:max_items+ has been reached.
+     #
+     # buffer_receive calls will block while <code>buffer_full? == true</code>.
+     #
+     # @return [bool] Is the buffer full?
+     def buffer_full?
+       @buffer_state[:pending_count] + @buffer_state[:outgoing_count] >= @buffer_config[:max_items]
+     end
+
+     # Save an event for later delivery
+     #
+     # Events are grouped by the (optional) group parameter you provide.
+     # Groups of events, plus the group name, are later passed to +flush+.
+     #
+     # This call will block if +:max_items+ has been reached.
+     #
+     # @see Stud::Buffer The overview has more information on grouping and flushing.
+     #
+     # @param event An item to buffer for flushing later.
+     # @param group Optional grouping key. All events with the same key will be
+     #   passed to +flush+ together, along with the grouping key itself.
+     def buffer_receive(event, group=nil)
+       buffer_initialize if ! @buffer_state
+
+       # block if we've accumulated too many events
+       while buffer_full? do
+         on_full_buffer_receive(
+           :pending => @buffer_state[:pending_count],
+           :outgoing => @buffer_state[:outgoing_count]
+         ) if @buffer_config[:has_on_full_buffer_receive]
+
+         if @buffer_config[:drop_messages_on_full_buffer]
+           reset_buffer
+         else
+           sleep 0.1
+         end
+       end
+
+       @buffer_state[:pending_mutex].synchronize do
+         @buffer_state[:pending_items][group] << event
+         @buffer_state[:pending_count] += 1
+       end
+
+       if @buffer_config[:autoflush]
+         buffer_flush(force: true)
+       end
+     end
+
+     # Try to flush events.
+     #
+     # Returns immediately if flushing is not necessary/possible at the moment:
+     # * :max_items have not been accumulated
+     # * :max_interval seconds have not elapsed since the last flush
+     # * another flush is in progress
+     #
+     # <code>buffer_flush(:force => true)</code> will cause a flush to occur even
+     # if +:max_items+ or +:max_interval+ have not been reached. A forced flush
+     # will still return immediately (without flushing) if another flush is
+     # currently in progress.
+     #
+     # <code>buffer_flush(:final => true)</code> is identical to <code>buffer_flush(:force => true)</code>,
+     # except that if another flush is already in progress, <code>buffer_flush(:final => true)</code>
+     # will block/wait for the other flush to finish before proceeding.
+     #
+     # @param [Hash] options Optional. May be <code>{:force => true}</code> or <code>{:final => true}</code>.
+     # @return [Fixnum] The number of items successfully passed to +flush+.
+     def buffer_flush(options={})
+       force = options[:force] || options[:final]
+       final = options[:final]
+
+       # final flush will wait for lock, so we are sure to flush out all buffered events
+       if options[:final]
+         @buffer_state[:flush_mutex].lock
+       elsif ! @buffer_state[:flush_mutex].try_lock # failed to get lock, another flush already in progress
+         return 0
+       end
+
+       items_flushed = 0
+
+       begin
+         time_since_last_flush = (Time.now - @buffer_state[:last_flush])
+
+         return 0 if @buffer_state[:pending_count] == 0
+         return 0 if (!force) &&
+            (@buffer_state[:pending_count] < @buffer_config[:max_items]) &&
+            (time_since_last_flush < @buffer_config[:max_interval])
+
+         @buffer_state[:pending_mutex].synchronize do
+           @buffer_state[:outgoing_items] = @buffer_state[:pending_items]
+           @buffer_state[:outgoing_count] = @buffer_state[:pending_count]
+           buffer_clear_pending
+         end
+
+         @buffer_config[:logger].debug("Flushing output",
+           :outgoing_count => @buffer_state[:outgoing_count],
+           :time_since_last_flush => time_since_last_flush,
+           :outgoing_events => @buffer_state[:outgoing_items],
+           :batch_timeout => @buffer_config[:max_interval],
+           :force => force,
+           :final => final
+         ) if @buffer_config[:logger]
+
+         @buffer_state[:outgoing_items].each do |group, events|
+           begin
+             if group.nil?
+               flush(events, final)
+             else
+               flush(events, group, final)
+             end
+
+             @buffer_state[:outgoing_items].delete(group)
+             events_size = events.size
+             @buffer_state[:outgoing_count] -= events_size
+             items_flushed += events_size
+             @buffer_state[:last_flush] = Time.now
+
+           rescue => e
+
+             @buffer_config[:logger].warn("Failed to flush outgoing items",
+               :outgoing_count => @buffer_state[:outgoing_count],
+               :exception => e.class.name,
+               :backtrace => e.backtrace
+             ) if @buffer_config[:logger]
+
+             if @buffer_config[:has_on_flush_error]
+               on_flush_error e
+             end
+
+             if @buffer_config[:drop_messages_on_flush_error]
+               reset_buffer
+             else
+               cancel_flush
+             end
+
+           end
+         end
+
+       ensure
+         @buffer_state[:flush_mutex].unlock
+       end
+
+       return items_flushed
+     end
+
+     private
+     def buffer_clear_pending
+       @buffer_state[:pending_items] = Hash.new { |h, k| h[k] = [] }
+       @buffer_state[:pending_count] = 0
+     end
+
+     def buffer_clear_outgoing
+       @buffer_state[:outgoing_items] = Hash.new { |h, k| h[k] = [] }
+       @buffer_state[:outgoing_count] = 0
+     end
+
+     def cancel_flush
+       @buffer_state[:pending_mutex].synchronize do
+         @buffer_state[:outgoing_items].each do |group, items|
+           @buffer_state[:pending_items][group].concat items
+         end
+         @buffer_state[:pending_count] += @buffer_state[:outgoing_count]
+       end
+       buffer_clear_outgoing
+     end
+   end
+ end
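To make the vendored module's contract concrete, here is a hedged sketch of a class that mixes in `LogStashLogger::Buffer`; the `EventSender` class and its `deliver` method are made up for the example, but the `buffer_*` calls follow the code above:

```ruby
require 'logstash-logger/buffer'

class EventSender
  include LogStashLogger::Buffer

  def initialize
    # Batch up to 100 events and flush at least every 2 seconds from the
    # buffer's timer thread; autoflush would instead force a flush per event.
    buffer_initialize(max_items: 100, max_interval: 2, autoflush: false)
  end

  def send_event(event)
    buffer_receive(event)              # blocks only if the buffer is full
  end

  # Called by buffer_flush with the accumulated events. Without a grouping
  # key, the second argument is the `final` flag.
  def flush(events, _final = nil)
    events.each { |event| deliver(event) }
  end

  def shutdown
    buffer_flush(final: true)          # drain whatever is still pending
  end

  private

  def deliver(event)
    $stdout.puts(event)                # stand-in for real I/O
  end
end

sender = EventSender.new
sender.send_event('one')
sender.send_event('two')
sender.shutdown
```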
@@ -1,9 +1,9 @@
- require 'stud/buffer'
+ require 'logstash-logger/buffer'
 
  module LogStashLogger
  module Device
  class Connectable < Base
- include Stud::Buffer
+ include LogStashLogger::Buffer
 
  def initialize(opts = {})
  super
@@ -19,7 +19,7 @@ module LogStashLogger
  @buffer_group = nil
  @buffer_max_items = opts[:batch_events] || opts[:buffer_max_items]
  @buffer_max_interval = opts[:batch_timeout] || opts[:buffer_max_interval]
- @drop_messages_on_flush_error =
+ @drop_messages_on_flush_error =
  if opts.key?(:drop_messages_on_flush_error)
  opts.delete(:drop_messages_on_flush_error)
  else
@@ -33,13 +33,17 @@ module LogStashLogger
  true
  end
 
- reset_buffer
+ buffer_initialize(
+ max_items: @buffer_max_items,
+ max_interval: @buffer_max_interval,
+ autoflush: @sync,
+ drop_messages_on_flush_error: @drop_messages_on_flush_error,
+ drop_messages_on_full_buffer: @drop_messages_on_full_buffer
+ )
  end
 
  def write(message)
  buffer_receive message, @buffer_group
- buffer_flush(force: true) if @sync
- rescue
  end
 
  def flush(*args)
@@ -49,23 +53,10 @@ module LogStashLogger
  messages, group = *args
  write_batch(messages, group)
  end
- rescue
- if @drop_messages_on_flush_error
- reset_buffer
- else
- cancel_flush
- end
- raise
- end
-
- def on_flush_error(e)
- raise e
  end
 
- def on_full_buffer_receive(args)
- if @drop_messages_on_full_buffer
- reset_buffer
- end
+ def on_full_buffer_receive(data)
+ log_warning("Buffer Full - #{data}")
  end
 
  def close(opts = {})
@@ -117,37 +108,6 @@ module LogStashLogger
  close(flush: false)
  raise
  end
-
- private
-
- def reset_buffer
- buffer_initialize max_items: @buffer_max_items, max_interval: @buffer_max_interval
- @buffer_state[:timer] = Thread.new do
- loop do
- sleep(@buffer_config[:max_interval])
- begin
- buffer_flush(:force => true)
- rescue
- end
- end
- end
- end
-
- def buffer_clear_outgoing
- @buffer_state[:outgoing_items] = Hash.new { |h, k| h[k] = [] }
- @buffer_state[:outgoing_count] = 0
- end
-
- def cancel_flush
- @buffer_state[:flush_mutex].lock rescue false
- @buffer_state[:outgoing_items].each do |group, items|
- @buffer_state[:pending_items][group].concat items
- end
- @buffer_state[:pending_count] += @buffer_state[:outgoing_count]
- buffer_clear_outgoing
- ensure
- @buffer_state[:flush_mutex].unlock rescue false
- end
  end
  end
  end
@@ -1,3 +1,3 @@
  module LogStashLogger
- VERSION = "0.17.0"
+ VERSION = "0.18.0"
  end
@@ -19,7 +19,6 @@ Gem::Specification.new do |gem|
  gem.require_paths = ["lib"]
 
  gem.add_runtime_dependency 'logstash-event', '~> 1.2'
- gem.add_runtime_dependency 'stud'
 
  gem.add_development_dependency 'rails'
  if RUBY_VERSION < '2'
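With the `stud` runtime dependency gone, nothing extra is needed in an application's Gemfile; a minimal sketch (the version constraint is illustrative):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'logstash-logger', '~> 0.18'   # no explicit 'stud' entry required anymore
```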
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-logger
  version: !ruby/object:Gem::Version
- version: 0.17.0
+ version: 0.18.0
  platform: ruby
  authors:
  - David Butler
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-07-24 00:00:00.000000000 Z
+ date: 2016-07-27 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: logstash-event
@@ -24,20 +24,6 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '1.2'
- - !ruby/object:Gem::Dependency
- name: stud
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: '0'
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: '0'
  - !ruby/object:Gem::Dependency
  name: rails
  requirement: !ruby/object:Gem::Requirement
@@ -172,6 +158,7 @@ files:
  - gemfiles/rails_4.2.gemfile
  - gemfiles/rails_5.0.gemfile
  - lib/logstash-logger.rb
+ - lib/logstash-logger/buffer.rb
  - lib/logstash-logger/configuration.rb
  - lib/logstash-logger/device.rb
  - lib/logstash-logger/device/balancer.rb