semantic_logger 4.0.0 → 4.1.0
- checksums.yaml +4 -4
- data/README.md +55 -8
- data/lib/semantic_logger.rb +1 -2
- data/lib/semantic_logger/ansi_colors.rb +1 -2
- data/lib/semantic_logger/appender.rb +17 -15
- data/lib/semantic_logger/appender/bugsnag.rb +5 -4
- data/lib/semantic_logger/appender/elasticsearch.rb +102 -16
- data/lib/semantic_logger/appender/elasticsearch_http.rb +76 -0
- data/lib/semantic_logger/appender/file.rb +9 -25
- data/lib/semantic_logger/appender/graylog.rb +43 -38
- data/lib/semantic_logger/appender/honeybadger.rb +3 -5
- data/lib/semantic_logger/appender/http.rb +12 -15
- data/lib/semantic_logger/appender/kafka.rb +183 -0
- data/lib/semantic_logger/appender/mongodb.rb +3 -3
- data/lib/semantic_logger/appender/new_relic.rb +3 -7
- data/lib/semantic_logger/appender/sentry.rb +2 -5
- data/lib/semantic_logger/appender/splunk.rb +7 -10
- data/lib/semantic_logger/appender/splunk_http.rb +16 -16
- data/lib/semantic_logger/appender/syslog.rb +43 -122
- data/lib/semantic_logger/appender/tcp.rb +28 -9
- data/lib/semantic_logger/appender/udp.rb +4 -7
- data/lib/semantic_logger/appender/wrapper.rb +3 -7
- data/lib/semantic_logger/base.rb +47 -7
- data/lib/semantic_logger/formatters/base.rb +29 -10
- data/lib/semantic_logger/formatters/color.rb +75 -45
- data/lib/semantic_logger/formatters/default.rb +53 -28
- data/lib/semantic_logger/formatters/json.rb +7 -8
- data/lib/semantic_logger/formatters/raw.rb +97 -1
- data/lib/semantic_logger/formatters/syslog.rb +46 -80
- data/lib/semantic_logger/formatters/syslog_cee.rb +57 -0
- data/lib/semantic_logger/log.rb +17 -67
- data/lib/semantic_logger/logger.rb +17 -27
- data/lib/semantic_logger/processor.rb +70 -46
- data/lib/semantic_logger/semantic_logger.rb +130 -69
- data/lib/semantic_logger/subscriber.rb +18 -32
- data/lib/semantic_logger/version.rb +1 -1
- data/test/appender/elasticsearch_http_test.rb +75 -0
- data/test/appender/elasticsearch_test.rb +34 -27
- data/test/appender/file_test.rb +2 -2
- data/test/appender/honeybadger_test.rb +1 -1
- data/test/appender/kafka_test.rb +36 -0
- data/test/appender/new_relic_test.rb +1 -1
- data/test/appender/sentry_test.rb +1 -1
- data/test/appender/syslog_test.rb +2 -2
- data/test/appender/wrapper_test.rb +1 -1
- data/test/formatters/color_test.rb +154 -0
- data/test/formatters/default_test.rb +176 -0
- data/test/loggable_test.rb +1 -1
- data/test/logger_test.rb +47 -4
- data/test/measure_test.rb +2 -2
- data/test/semantic_logger_test.rb +34 -6
- data/test/test_helper.rb +8 -0
- metadata +14 -3
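
Two of the entries above are brand-new appenders: `data/lib/semantic_logger/appender/kafka.rb` (+183 lines, body not shown in this diff) and `data/lib/semantic_logger/appender/elasticsearch_http.rb` (+76 lines, shown in full at the bottom of this page). A minimal registration sketch for the Kafka appender; the `seed_brokers`/`topic` option names mirror the `ruby-kafka` gem that the README now lists as a dependency, and are assumptions rather than something this diff confirms:

~~~ruby
require 'semantic_logger'

# Hypothetical options: seed_brokers and topic follow ruby-kafka conventions
# and are not confirmed by the diff on this page.
SemanticLogger.add_appender(
  appender:     :kafka,
  seed_brokers: ['kafka1:9092', 'kafka2:9092'],
  topic:        'log_messages'
)
~~~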
checksums.yaml CHANGED

~~~diff
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b0ad46c624c9904d87ec5b38c7fdc28e12dcc62d
+  data.tar.gz: 92ce4e537d95a70422e9662001683c82a75ce7dd
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: fb9845a97ecaed5b6e91f0a97e6fc61464411969af7798ebd04e336eea66e115df1447514a3fd80c3d1e36236f79071ebd8dfc2b9bdcf2af1100deeb2297d389
+  data.tar.gz: 23cd01a8709c925132f82ddf697aa8b29a57a35a595ad308ba772c948cef722b25d9aacbd6c89d56531f08dcfdd25727dc486223d867db7e868564e2d29e0f86
~~~
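
checksums.yaml records SHA1 and SHA512 digests for the two members of the .gem archive. A short sketch, assuming the 4.1.0 gem file sits in the current directory, that recomputes them with Ruby's stdlib for comparison against the values above:

~~~ruby
require 'digest'
require 'rubygems/package'

# A .gem is itself a tar archive containing metadata.gz and data.tar.gz;
# recompute the digests that checksums.yaml records for each member.
File.open('semantic_logger-4.1.0.gem', 'rb') do |io|
  Gem::Package::TarReader.new(io) do |tar|
    tar.each do |entry|
      next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
      bytes = entry.read
      puts "#{entry.full_name} SHA1:   #{Digest::SHA1.hexdigest(bytes)}"
      puts "#{entry.full_name} SHA512: #{Digest::SHA512.hexdigest(bytes)}"
    end
  end
end
~~~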
data/README.md CHANGED

~~~~diff
@@ -1,7 +1,7 @@
 # semantic_logger
 [![Gem Version](https://img.shields.io/gem/v/semantic_logger.svg)](https://rubygems.org/gems/semantic_logger) [![Build Status](https://travis-ci.org/rocketjob/semantic_logger.svg?branch=master)](https://travis-ci.org/rocketjob/semantic_logger) [![Downloads](https://img.shields.io/gem/dt/semantic_logger.svg)](https://rubygems.org/gems/semantic_logger) [![License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](http://opensource.org/licenses/Apache-2.0) ![](https://img.shields.io/badge/status-Production%20Ready-blue.svg) [![Gitter chat](https://img.shields.io/badge/IRC%20(gitter)-Support-brightgreen.svg)](https://gitter.im/rocketjob/support)
 
-Low latency, high throughput, enterprise-scale logging system for Ruby
+Low latency, high throughput, enterprise-scale logging system for Ruby.
 
 * http://github.com/rocketjob/semantic_logger
 
@@ -17,29 +17,44 @@ Logging to the following destinations are all supported "out-of-the-box":
 
 * File
 * Screen
-*
+* ElasticSearch. (Use with Kibana for Dashboards and Visualizations)
+* Graylog
 * BugSnag
 * NewRelic
 * Splunk
+* MongoDB
+* Honeybadger
+* Sentry
+* HTTP
+* TCP
+* UDP
 * Syslog
+* Add any existing Ruby logger as another destination.
 * Roll-your-own
 
 Semantic Logger is capable of logging thousands of lines per second without slowing
 down the application. Traditional logging systems make the application wait while
 the log information is being saved. Semantic Logger avoids this slowdown by pushing
 log events to an in-memory queue that is serviced by a separate thread that only
-handles saving log information to multiple destinations
+handles saving log information to multiple destinations / appenders.
 
 ## Rails
 
 When running Rails, use [rails_semantic_logger](http://github.com/rocketjob/rails_semantic_logger)
 instead of Semantic Logger directly since it will automatically replace the Rails default logger with Semantic Logger.
 
+## Rocket Job
+
+Checkout the sister project [Rocket Job](http://rocketjob.io): Ruby's missing batch system.
+
+Fully supports Semantic Logger when running jobs in the background. Complete support for job metrics
+sent via Semantic Logger to your favorite dashboards.
+
 ## Supports
 
 Semantic Logger is tested and supported on the following Ruby platforms:
-- Ruby 2.1
-- JRuby
+- Ruby 2.1 and higher.
+- JRuby 9.1 and higher.
 
 The following gems are only required when their corresponding appenders are being used,
 and are therefore not automatically included by this gem:
@@ -49,6 +64,8 @@ and are therefore not automatically included by this gem:
 - Syslog Appender: gem 'syslog_protocol' 0.9.2 or above
 - Syslog Appender to a remote syslogng server over TCP or UDP: gem 'net_tcp_client'
 - Splunk Appender: gem 'splunk-sdk-ruby'
+- Elasticsearch Appender: gem 'elasticsearch'
+- Kafka Appender: gem 'ruby-kafka'
 
 ## V4 Upgrade notes
 
@@ -56,9 +73,39 @@ The following changes need to be made when upgrading to V4:
 - Ruby V2.1 / JRuby V9.1 is now the minimum runtime version.
 - Replace calls to Logger#with_payload with SemanticLogger.named_tagged.
 - Replace calls to Logger#payload with SemanticLogger.named_tags.
+- MongoDB Appender requires Mongo Ruby Client V2 or greater.
 - Appenders now write payload data in a seperate :payload tag instead of mixing them
   directly into the root elements to avoid name clashes.
-
+
+As a result any calls like the following:
+
+~~~ruby
+logger.debug foo: 'foo', bar: 'bar'
+~~~
+
+Must be replaced with the following in v4:
+
+~~~ruby
+logger.debug payload: {foo: 'foo', bar: 'bar'}
+~~~
+
+Similarly, for measure blocks:
+
+~~~ruby
+logger.measure_info('How long is the sleep', foo: 'foo', bar: 'bar') { sleep 1 }
+~~~
+
+Must be replaced with the following in v4:
+
+~~~ruby
+logger.measure_info('How long is the sleep', payload: {foo: 'foo', bar: 'bar'}) { sleep 1 }
+~~~
+
+The common log call has not changed, and the payload is still logged directly:
+
+~~~ruby
+logger.debug('log this', foo: 'foo', bar: 'bar')
+~~~
 
 ## Install
 
@@ -66,7 +113,7 @@ The following changes need to be made when upgrading to V4:
 
 To configure a stand-alone application for Semantic Logger:
 
-
+~~~ruby
 require 'semantic_logger'
 
 # Set the global default log level
@@ -74,7 +121,7 @@ SemanticLogger.default_level = :trace
 
 # Log to a file, and use the colorized formatter
 SemanticLogger.add_appender(file_name: 'development.log', formatter: :color)
-
+~~~
 
 If running rails, see: [Semantic Logger Rails](http://rocketjob.github.io/semantic_logger/rails.html)
 
~~~~
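
The upgrade notes above also direct `Logger#with_payload` users to `SemanticLogger.named_tagged`; a small sketch of that migration (logger name and tag values are illustrative):

~~~ruby
require 'semantic_logger'

SemanticLogger.add_appender(io: STDOUT, formatter: :json)
logger = SemanticLogger['Orders']

# v3: logger.with_payload(user_id: 42) { ... }
# v4:
SemanticLogger.named_tagged(user_id: 42) do
  logger.info('order placed') # emitted with named_tags: {user_id: 42}
end
~~~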
data/lib/semantic_logger.rb CHANGED

~~~diff
@@ -44,6 +44,5 @@ end
 # Close and flush all appenders at exit, waiting for outstanding messages on the queue
 # to be written first
 at_exit do
-
-  SemanticLogger.flush
+  SemanticLogger.close
 end
~~~
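
The at_exit hook now calls `close` instead of `flush`: `flush` only waits for the in-memory queue to drain, while `close` drains it and then shuts down each appender's resources, such as the Elasticsearch appender's flush timer introduced later in this release. The difference in a few lines:

~~~ruby
require 'semantic_logger'

SemanticLogger.add_appender(file_name: 'development.log')
SemanticLogger['Main'].info('shutting down')

SemanticLogger.flush # drain queued log events; appenders stay open
SemanticLogger.close # drain, then close every appender (what at_exit now does)
~~~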
data/lib/semantic_logger/ansi_colors.rb CHANGED

~~~diff
@@ -12,8 +12,7 @@ module SemanticLogger
   CYAN    = "\e[36m"
   WHITE   = "\e[37m"
 
-  #
-  # Since this map is not frozen, it can be modified as needed
+  # DEPRECATED - NOT USED
   LEVEL_MAP = {
     trace: MAGENTA,
     debug: GREEN,
~~~
data/lib/semantic_logger/appender.rb CHANGED

~~~diff
@@ -1,21 +1,23 @@
 module SemanticLogger
   module Appender
     # @formatter:off
-    autoload :Bugsnag,
-    autoload :Elasticsearch,
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
+    autoload :Bugsnag,           'semantic_logger/appender/bugsnag'
+    autoload :Elasticsearch,     'semantic_logger/appender/elasticsearch'
+    autoload :ElasticsearchHttp, 'semantic_logger/appender/elasticsearch_http'
+    autoload :File,              'semantic_logger/appender/file'
+    autoload :Graylog,           'semantic_logger/appender/graylog'
+    autoload :Honeybadger,       'semantic_logger/appender/honeybadger'
+    autoload :Kafka,             'semantic_logger/appender/kafka'
+    autoload :Sentry,            'semantic_logger/appender/sentry'
+    autoload :Http,              'semantic_logger/appender/http'
+    autoload :MongoDB,           'semantic_logger/appender/mongodb'
+    autoload :NewRelic,          'semantic_logger/appender/new_relic'
+    autoload :Splunk,            'semantic_logger/appender/splunk'
+    autoload :SplunkHttp,        'semantic_logger/appender/splunk_http'
+    autoload :Syslog,            'semantic_logger/appender/syslog'
+    autoload :Tcp,               'semantic_logger/appender/tcp'
+    autoload :Udp,               'semantic_logger/appender/udp'
+    autoload :Wrapper,           'semantic_logger/appender/wrapper'
     # @formatter:on
 
     # DEPRECATED, use SemanticLogger::AnsiColors
~~~
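
The autoload table means an appender's file, and therefore its optional gem dependency, is only required the first time its constant is referenced. A self-contained sketch of the same pattern; `Demo` and the tempfile stand in for the real appender files:

~~~ruby
require 'tempfile'

# Write a stand-in "appender" file to disk.
appender_file = Tempfile.new(['kafka_appender', '.rb'])
appender_file.write("module Demo; class Kafka; end; end\n")
appender_file.close

module Demo; end
Demo.autoload :Kafka, appender_file.path # registered, nothing loaded yet

Demo::Kafka # first constant lookup triggers the require
~~~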
data/lib/semantic_logger/appender/bugsnag.rb CHANGED

~~~diff
@@ -27,19 +27,20 @@ class SemanticLogger::Appender::Bugsnag < SemanticLogger::Subscriber
   #     regular expression. All other messages will be ignored.
   #     Proc: Only include log messages where the supplied Proc returns true
   #     The Proc must return true or false.
-  def initialize(level: :error, formatter: nil, filter: nil,
-    raise 'Bugsnag only supports :info, :warn, or :error log levels' unless [:info, :warn, :error].include?(level)
+  def initialize(level: :error, formatter: nil, filter: nil, application: nil, host: nil, &block)
+    raise 'Bugsnag only supports :info, :warn, or :error log levels' unless [:info, :warn, :error, :fatal].include?(level)
 
     # Replace the Bugsnag logger so that we can identify its log messages and not forward them to Bugsnag
     Bugsnag.configure { |config| config.logger = SemanticLogger[Bugsnag] }
 
-    super(level: level, formatter: formatter, filter: filter,
+    super(level: level, formatter: formatter, filter: filter, application: application, host: host, &block)
   end
 
   # Returns [Hash] of parameters to send to Bugsnag.
   def call(log, logger)
-    h
+    h = SemanticLogger::Formatters::Raw.new.call(log, logger)
     h[:severity] = log_level(log)
+    h.delete(:message) if h[:exception] && (h[:message] == h[:exception][:message])
     h.delete(:time)
     h.delete(:exception)
     h
~~~
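
Two behavioral changes here: the guard now also accepts `:fatal` (although the raise message text still names only `:info`, `:warn`, and `:error`), and `#call` builds its hash with the `Raw` formatter, dropping `:message` when it merely duplicates the exception's own message so Bugsnag does not show the same text twice. With the widened check, a fatal-only appender becomes valid; a sketch assuming the bugsnag gem is installed and configured:

~~~ruby
require 'semantic_logger'

# Only forward fatal events to Bugsnag; lower levels go to other appenders.
SemanticLogger.add_appender(appender: :bugsnag, level: :fatal)
~~~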
data/lib/semantic_logger/appender/elasticsearch.rb CHANGED

~~~diff
@@ -1,17 +1,29 @@
+begin
+  require 'elasticsearch'
+rescue LoadError
+  raise 'Gem elasticsearch is required for logging to Elasticsearch. Please add the gem "elasticsearch" to your Gemfile.'
+end
+
 require 'date'
+
 # Forward all log messages to Elasticsearch.
 #
 # Example:
+#
 #   SemanticLogger.add_appender(
 #     appender: :elasticsearch,
 #     url:      'http://localhost:9200'
 #   )
-class SemanticLogger::Appender::Elasticsearch < SemanticLogger::Appender::Http
-  attr_accessor :index, :type
+class SemanticLogger::Appender::Elasticsearch < SemanticLogger::Subscriber
+  attr_accessor :url, :index, :type, :client, :flush_interval, :timeout_interval, :batch_size
 
   # Create Elasticsearch appender over persistent HTTP(S)
   #
   # Parameters:
+  #   url: [String]
+  #     Fully qualified address to the Elasticsearch service.
+  #     Default: 'http://localhost:9200'
+  #
   #   index: [String]
   #     Prefix of the index to store the logs in Elasticsearch.
   #     The final index appends the date so that indexes are used per day.
@@ -22,6 +34,18 @@ class SemanticLogger::Appender::Elasticsearch < SemanticLogger::Appender::Http
   #     Document type to associate with logs when they are written.
   #     Default: 'log'
   #
+  #   batch_size: [Fixnum]
+  #     Size of list when sending to Elasticsearch. May be smaller if flush is triggered early.
+  #     Default: 500
+  #
+  #   flush_interval: [Fixnum]
+  #     Seconds to wait before attempting a flush to Elasticsearch. If no messages queued it's a NOOP.
+  #     Default: 1
+  #
+  #   timeout_interval: [Fixnum]
+  #     Seconds to allow the Elasticsearch client to flush the bulk message.
+  #     Default: 10
+  #
   #   level: [:trace | :debug | :info | :warn | :error | :fatal]
   #     Override the log level for this appender.
   #     Default: SemanticLogger.default_level
@@ -29,7 +53,7 @@ class SemanticLogger::Appender::Elasticsearch < SemanticLogger::Appender::Http
   #   formatter: [Object|Proc|Symbol|Hash]
   #     An instance of a class that implements #call, or a Proc to be used to format
   #     the output from this appender
-  #     Default:
+  #     Default: :raw_json (See: #call)
   #
   #   filter: [Regexp|Proc]
   #     RegExp: Only include log messages where the class name matches the supplied.
@@ -44,26 +68,88 @@ class SemanticLogger::Appender::Elasticsearch < SemanticLogger::Appender::Http
   #   application: [String]
   #     Name of this application to appear in log messages.
   #     Default: SemanticLogger.application
-  def initialize(
-
-
-    @
-
-
-    @
-    @
+  def initialize(url: 'http://localhost:9200', index: 'semantic_logger', type: 'log', flush_interval: 1, timeout_interval: 10, batch_size: 500,
+                 level: nil, formatter: nil, filter: nil, application: nil, host: nil, &block)
+
+    @url              = url
+    @index            = index
+    @type             = type
+    @flush_interval   = flush_interval
+    @timeout_interval = timeout_interval
+    @batch_size       = batch_size
+
+    @messages_mutex = Mutex.new
+    @messages       = Array.new
+
+    super(level: level, formatter: formatter, filter: filter, application: application, host: host, &block)
+    reopen
+  end
+
+  def reopen
+    @client = Elasticsearch::Client.new(url: url, logger: SemanticLogger::Processor.logger.clone)
+
+    @messages_mutex.synchronize { @messages = [] }
+
+    @flush_task = Concurrent::TimerTask.new(execution_interval: flush_interval, timeout_interval: timeout_interval) do
+      flush
+    end.execute
+  end
+
+  def close
+    @flush_task.shutdown if @flush_task
+    @flush_task = nil
+    # No api to close connections in the elasticsearch client!
+    #@client.close if @client
+    #@client = nil
+  end
+
+  def call(log, logger)
+    h = SemanticLogger::Formatters::Raw.new.call(log, logger)
+    h.delete(:time)
+    h[:timestamp] = log.time.utc.iso8601(SemanticLogger::Formatters::Base::PRECISION)
+    h
+  end
+
+  def flush
+    collected_messages = nil
+    @messages_mutex.synchronize do
+      if @messages.length > 0
+        collected_messages = @messages
+        @messages = []
+      end
+    end
+
+    if collected_messages
+      bulk_result = @client.bulk(body: collected_messages)
+      if bulk_result["errors"]
+        failed = bulk_result["items"].select { |x| x["status"] != 201 }
+        SemanticLogger::Processor.logger.error("ElasticSearch: Write failed. Messages discarded. : #{failed}")
+      end
+    end
+  rescue Exception => exc
+    SemanticLogger::Processor.logger.error('ElasticSearch: Failed to bulk insert log messages', exc)
   end
 
   # Log to the index for today
   def log(log)
     return false unless should_log?(log)
 
-
-  end
+    daily_index = log.time.strftime("#{@index}-%Y.%m.%d")
 
-
-
-
+    bulk_index   = {'index' => {'_index' => daily_index, '_type' => @type}}
+    bulk_payload = formatter.call(log, self)
+
+    enqueue(bulk_index, bulk_payload)
   end
 
+  def enqueue(bulk_index, bulk_payload)
+    messages_len =
+      @messages_mutex.synchronize do
+        @messages.push(bulk_index)
+        @messages.push(bulk_payload)
+        @messages.length
+      end
+
+    flush if messages_len >= batch_size
+  end
 end
~~~
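
The appender is now a `Subscriber` that buffers bulk pairs, an index header plus a document per event, behind a mutex, flushing either when the buffer reaches `batch_size` entries or when the `Concurrent::TimerTask` fires every `flush_interval` seconds. A usage sketch using only option names from the new `initialize` signature:

~~~ruby
SemanticLogger.add_appender(
  appender:       :elasticsearch,
  url:            'http://localhost:9200',
  index:          'semantic_logger',
  batch_size:     500, # see the note below
  flush_interval: 1
)
~~~

Note that `@messages` grows by two entries per log event, so as written the `messages_len >= batch_size` check fires a bulk request after `batch_size / 2` events, i.e. 250 events with the default of 500.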
data/lib/semantic_logger/appender/elasticsearch_http.rb ADDED

~~~diff
@@ -0,0 +1,76 @@
+require 'date'
+# Forward all log messages to Elasticsearch one at a time via a HTTP post.
+#
+# Note:
+# * Other than in very low volume environments it is recommended to rather use the Elasticsearch appender,
+#   since it supports bulk logging.
+#
+# Example:
+#   SemanticLogger.add_appender(
+#     appender: :elasticsearch_http,
+#     url:      'http://localhost:9200'
+#   )
+class SemanticLogger::Appender::ElasticsearchHttp < SemanticLogger::Appender::Http
+  attr_accessor :index, :type
+
+  # Create Elasticsearch appender over persistent HTTP(S)
+  #
+  # Parameters:
+  #   index: [String]
+  #     Prefix of the index to store the logs in Elasticsearch.
+  #     The final index appends the date so that indexes are used per day.
+  #     I.e. The final index will look like 'semantic_logger-YYYY.MM.DD'
+  #     Default: 'semantic_logger'
+  #
+  #   type: [String]
+  #     Document type to associate with logs when they are written.
+  #     Default: 'log'
+  #
+  #   level: [:trace | :debug | :info | :warn | :error | :fatal]
+  #     Override the log level for this appender.
+  #     Default: SemanticLogger.default_level
+  #
+  #   formatter: [Object|Proc|Symbol|Hash]
+  #     An instance of a class that implements #call, or a Proc to be used to format
+  #     the output from this appender
+  #     Default: Use the built-in formatter (See: #call)
+  #
+  #   filter: [Regexp|Proc]
+  #     RegExp: Only include log messages where the class name matches the supplied.
+  #     regular expression. All other messages will be ignored.
+  #     Proc: Only include log messages where the supplied Proc returns true
+  #     The Proc must return true or false.
+  #
+  #   host: [String]
+  #     Name of this host to appear in log messages.
+  #     Default: SemanticLogger.host
+  #
+  #   application: [String]
+  #     Name of this application to appear in log messages.
+  #     Default: SemanticLogger.application
+  def initialize(index: 'semantic_logger', type: 'log',
+                 url: 'http://localhost:9200', compress: false, ssl: {}, open_timeout: 2.0, read_timeout: 1.0, continue_timeout: 1.0,
+                 level: nil, formatter: nil, filter: nil, application: nil, host: nil, &block)
+
+    @index = index
+    @type  = type
+    super(url: url, compress: compress, ssl: ssl, open_timeout: 2.0, read_timeout: open_timeout, continue_timeout: continue_timeout,
+          level: level, formatter: formatter, filter: filter, application: application, host: host, &block)
+
+    @request_path = "#{@path.end_with?('/') ? @path : "#{@path}/"}#{@index}-%Y.%m.%d"
+    @logging_path = "#{@request_path}/#{type}"
+  end
+
+  # Log to the index for today.
+  def log(log)
+    return false unless should_log?(log)
+
+    post(formatter.call(log, self), log.time.strftime(@logging_path))
+  end
+
+  # Deletes all log data captured for a day.
+  def delete_all(date = Date.today)
+    delete(date.strftime(@request_path))
+  end
+
+end
~~~
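
One caveat visible in the constructor above: `super` receives the literal `2.0` as `open_timeout` and forwards the `open_timeout` parameter as `read_timeout`, so a caller's `read_timeout:` argument is silently dropped. The new `delete_all` helper removes a whole day's index in one call; a hedged usage sketch:

~~~ruby
require 'date'
require 'semantic_logger'

appender = SemanticLogger::Appender::ElasticsearchHttp.new(url: 'http://localhost:9200')

# Issues an HTTP DELETE for yesterday's 'semantic_logger-YYYY.MM.DD' index.
appender.delete_all(Date.today - 1)
~~~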