logstash-output-jdbc 0.3.0-java
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/LICENSE.txt +21 -0
- data/README.md +84 -0
- data/lib/logstash-output-jdbc_jars.rb +5 -0
- data/lib/logstash/outputs/jdbc.rb +322 -0
- data/spec/jdbc_spec_helper.rb +135 -0
- data/spec/outputs/jdbc_derby_spec.rb +25 -0
- data/spec/outputs/jdbc_mysql_spec.rb +25 -0
- data/spec/outputs/jdbc_spec.rb +11 -0
- data/spec/outputs/jdbc_sqlite_spec.rb +27 -0
- data/vendor/jar-dependencies/runtime-jars/HikariCP-2.4.2.jar +0 -0
- data/vendor/jar-dependencies/runtime-jars/log4j-1.2.17.jar +0 -0
- data/vendor/jar-dependencies/runtime-jars/slf4j-api-1.7.12.jar +0 -0
- data/vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.21.jar +0 -0
- metadata +165 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA1:
+  metadata.gz: 35dd5805a0cdbb3e95762b7416d3d7f438af203b
+  data.tar.gz: 641c6421320eb3a3374600404f3c7b2c12759a3d
+SHA512:
+  metadata.gz: fdb52ceb04117f3a06f09960286305660a4b3349a5b81292aa9b72a7f32a42b487ebb7fa518ca62e61d55474cdb775ada7f35f9b0c05956efaae2f3ba4741d67
+  data.tar.gz: 390fe8864c37a0d27bb14fd29112e0b57230546ca2af0eeee31c5d6a07a83b40c9dcf7f1064878e0a70e1b7c0f5ed7c8648e931b9ecd93038eb5b01c6d55fdb0
data/LICENSE.txt
ADDED
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,84 @@
+# logstash-output-jdbc
+
+[](https://travis-ci.org/theangryangel/logstash-output-jdbc)
+
+This plugin is provided as an external plugin and is not part of the Logstash project.
+
+This plugin allows you to output to SQL databases, using JDBC adapters.
+See below for tested adapters and example configurations.
+
+This has not yet been extensively tested with all JDBC drivers and may not yet work for you.
+
+If you find this works for a JDBC driver without an example, let me know and provide a small example configuration if you can.
+
+This plugin does not bundle any JDBC jar files, and expects them to be in a
+particular location. Please ensure you read the installation steps below.
+
+## Changelog
+See CHANGELOG.md
+
+## Versions
+Released versions are available via rubygems, and typically tagged.
+
+For development:
+- See master branch for logstash v5 (currently **development only**)
+- See v2.x branch for logstash v2
+- See v1.5 branch for logstash v1.5
+- See v1.4 branch for logstash 1.4
+
+## Installation
+- Run `bin/logstash-plugin install logstash-output-jdbc` in your logstash installation directory
+- Now either:
+  - Use driver_jar_path in your configuration to specify a path to your jar file
+- Or:
+  - Create the directory vendor/jar/jdbc in your logstash installation (`mkdir -p vendor/jar/jdbc/`)
+  - Add JDBC jar files to vendor/jar/jdbc in your logstash installation
+- And then configure (examples can be found in the examples directory)
+
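Assuming the driver jar is in place, a minimal pipeline might look like the following sketch; the jar path, database, and table name are placeholders, not defaults:

```
output {
  jdbc {
    driver_jar_path => "/opt/jdbc/mysql-connector-java.jar"
    connection_string => "jdbc:mysql://localhost/mydb?user=root"
    statement => [ "insert into log (host, message) values(?, ?)", "host", "message" ]
  }
}
```

See the examples directory for configurations tested against specific drivers.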
+## Configuration options
+
+| Option | Type | Description | Required? | Default |
+| ------ | ---- | ----------- | --------- | ------- |
+| driver_class | String | Specify a driver class if autoloading fails | No | |
+| driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
+| driver_jar_path | String | File path to the jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
+| connection_string | String | JDBC connection URL | Yes | |
+| username | String | JDBC username - this is optional as it may be included in the connection string, for many drivers | No | |
+| password | String | JDBC password - this is optional as it may be included in the connection string, for many drivers | No | |
+| statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared; all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (e.g. "@timestamp", or "host") or a formatted string (e.g. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
+| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Please be aware that there is also a potential performance penalty, as each event must be evaluated and inserted into SQL one at a time, whereas when this is false multiple events are inserted at once. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
+| max_pool_size | Number | Maximum number of connections to open to the SQL server at any one time. Default is the same as the Logstash default number of workers | No | 24 |
+| connection_timeout | Number | Number of milliseconds before a SQL connection is closed | No | 10000 |
+| flush_size | Number | Maximum number of entries to buffer before sending to SQL | No | 1000 |
+| max_flush_exceptions | Number | Number of sequential flushes which cause an exception before the set of events is discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval if your SQL server is not highly available | No | 10 |
+| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
+| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
+| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retryable, but non-standard, SQL state codes in its exceptions. | No | [] |
+
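To illustrate the `statement` and `unsafe_statement` options described above (table, column, and field names here are hypothetical):

```
# Safe (default): the table name is fixed, and parameters are bound
statement => [ "insert into log (host, recorded_at, message) values(?, ?, ?)", "host", "@timestamp", "%{host} - %{message}" ]

# Unsafe: requires unsafe_statement => true; the statement is evaluated per event
statement => [ "insert into %{table_name_field} (message) values(?)", "message" ]
```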
+## Example configurations
+Example logstash configurations can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.
+
+If you have a working sample configuration for a DB that's not listed, pull requests are welcome.
+
+## Development and Running tests
+For development, tests are recommended to run inside a virtual machine (a Vagrantfile is included in the repo), as they require
+access to various database engines and could completely destroy any data in a live system.
+
+If you have vagrant available (this is temporary whilst I'm hacking on v5 support; I'll make this more streamlined later):
+- `vagrant up`
+- `vagrant ssh`
+- `cd /vagrant`
+- `gem install bundler`
+- `cd /vagrant && bundle install && bundle exec rake vendor && bundle exec rake install_jars`
+- `./scripts/travis-before_script.sh && source ./scripts/travis-variables.sh`
+- `bundle exec rspec`
+
+## Releasing
+- Update Changelog
+- Bump version in gemspec
+- Commit
+- Create tag `git tag v<version-number-in-gemspec>`
+- `bundle exec rake install_jars`
+- `bundle exec rake pre_release_checks`
+- `gem build logstash-output-jdbc.gemspec`
+- `gem push`
data/lib/logstash/outputs/jdbc.rb
ADDED
@@ -0,0 +1,322 @@
+# encoding: utf-8
+require 'logstash/outputs/base'
+require 'logstash/namespace'
+require 'concurrent'
+require 'stud/interval'
+require 'java'
+require 'logstash-output-jdbc_jars'
+
+# Write events to a SQL engine, using JDBC.
+#
+# It is up to the user of the plugin to correctly configure the plugin. This
+# includes correctly crafting the SQL statement, and matching the number of
+# parameters correctly.
+class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
+  declare_threadsafe! if self.respond_to?(:declare_threadsafe!)
+
+  STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze
+
+  RETRYABLE_SQLSTATE_CLASSES = [
+    # Classes of retryable SQLSTATE codes
+    # Not all in the class will be retryable. However, this is the best that
+    # we've got right now.
+    # If a custom state code is required, set it in retry_sql_states.
+    '08', # Connection Exception
+    '24', # Invalid Cursor State (Maybe retry-able in some circumstances)
+    '25', # Invalid Transaction State
+    '40', # Transaction Rollback
+    '53', # Insufficient Resources
+    '54', # Program Limit Exceeded (MAYBE)
+    '55', # Object Not In Prerequisite State
+    '57', # Operator Intervention
+    '58', # System Error
+  ].freeze
+
+  config_name 'jdbc'
+
+  # Driver class - Reintroduced for https://github.com/theangryangel/logstash-output-jdbc/issues/26
+  config :driver_class, validate: :string
+
+  # Does the JDBC driver support autocommit?
+  config :driver_auto_commit, validate: :boolean, default: true, required: true
+
+  # Where to find the jar
+  # Defaults to not required, and to the original behaviour
+  config :driver_jar_path, validate: :string, required: false
+
+  # jdbc connection string
+  config :connection_string, validate: :string, required: true
+
+  # jdbc username - optional, maybe in the connection string
+  config :username, validate: :string, required: false
+
+  # jdbc password - optional, maybe in the connection string
+  config :password, validate: :string, required: false
+
+  # [ "insert into table (message) values(?)", "%{message}" ]
+  config :statement, validate: :array, required: true
+
+  # If this is an unsafe statement, use event.sprintf
+  # This also has potential performance penalties due to having to create a
+  # new statement for each event, rather than adding to the batch and issuing
+  # multiple inserts in 1 go
+  config :unsafe_statement, validate: :boolean, default: false
+
+  # Number of connections in the pool to maintain
+  config :max_pool_size, validate: :number, default: 24
+
+  # Connection timeout
+  config :connection_timeout, validate: :number, default: 10000
+
+  # We buffer a certain number of events before flushing that out to SQL.
+  # This setting controls how many events will be buffered before sending a
+  # batch of events.
+  config :flush_size, validate: :number, default: 1000
+
+  # Set initial interval in seconds between retries. Doubled on each retry up to `retry_max_interval`
+  config :retry_initial_interval, validate: :number, default: 2
+
+  # Maximum time between retries, in seconds
+  config :retry_max_interval, validate: :number, default: 128
+
+  # Any additional custom, retryable SQL state codes.
+  # Suitable for configuring retryable custom JDBC SQL state codes.
+  config :retry_sql_states, validate: :array, default: []
+
+  # Maximum number of sequential failed attempts, before we stop retrying.
+  # If set to < 1, then it will infinitely retry.
+  # At the default values this is a little over 10 minutes
+  config :max_flush_exceptions, validate: :number, default: 10
+
+  config :max_repeat_exceptions, obsolete: 'This has been replaced by max_flush_exceptions - which behaves slightly differently. Please check the documentation.'
+  config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
+  config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'
+
+  def register
+    @logger.info('JDBC - Starting up')
+
+    LogStash::Logger.setup_log4j(@logger)
+    load_jar_files!
+
+    @stopping = Concurrent::AtomicBoolean.new(false)
+
+    @logger.warn('JDBC - Flush size is set to > 1000') if @flush_size > 1000
+
+    if @statement.empty?
+      @logger.error('JDBC - No statement provided. Configuration error.')
+    end
+
+    if !@unsafe_statement && @statement.length < 2
+      @logger.error("JDBC - Statement has no parameters. No events will be inserted into SQL as you're not passing any event data. Likely configuration error.")
+    end
+
+    setup_and_test_pool!
+  end
+
+  def multi_receive(events)
+    events.each_slice(@flush_size) do |slice|
+      retrying_submit(slice)
+    end
+  end
+
+  def receive(event)
+    retrying_submit([event])
+  end
+
+  def close
+    @stopping.make_true
+    @pool.close
+    super
+  end
+
+  private
+
+  def setup_and_test_pool!
+    # Setup pool
+    @pool = Java::ComZaxxerHikari::HikariDataSource.new
+
+    @pool.setAutoCommit(@driver_auto_commit)
+    @pool.setDriverClassName(@driver_class) if @driver_class
+
+    @pool.setJdbcUrl(@connection_string)
+
+    @pool.setUsername(@username) if @username
+    @pool.setPassword(@password) if @password
+
+    @pool.setMaximumPoolSize(@max_pool_size)
+    @pool.setConnectionTimeout(@connection_timeout)
+
+    validate_connection_timeout = (@connection_timeout / 1000) / 2
+
+    # Test connection
+    test_connection = @pool.getConnection
+    unless test_connection.isValid(validate_connection_timeout)
+      @logger.error('JDBC - Connection is not valid. Please check connection string or that your JDBC endpoint is available.')
+    end
+    test_connection.close
+  end
+
+  def load_jar_files!
+    # Load jar from driver path
+    unless @driver_jar_path.nil?
+      raise LogStash::ConfigurationError, 'JDBC - Could not find jar file at given path. Check config.' unless File.exist? @driver_jar_path
+      require @driver_jar_path
+      return
+    end
+
+    # Revert original behaviour of loading from vendor directory
+    # if no path given
+    jarpath = if ENV['LOGSTASH_HOME']
+                File.join(ENV['LOGSTASH_HOME'], '/vendor/jar/jdbc/*.jar')
+              else
+                File.join(File.dirname(__FILE__), '../../../vendor/jar/jdbc/*.jar')
+              end
+
+    @logger.debug('JDBC - jarpath', path: jarpath)
+
+    jars = Dir[jarpath]
+    raise LogStash::ConfigurationError, 'JDBC - No jars found. Have you read the README?' if jars.empty?
+
+    jars.each do |jar|
+      @logger.debug('JDBC - Loaded jar', jar: jar)
+      require jar
+    end
+  end
+
+  def submit(events)
+    connection = nil
+    statement = nil
+    events_to_retry = []
+
+    begin
+      connection = @pool.getConnection
+    rescue => e
+      log_jdbc_exception(e, true)
+      # If a connection is not available, then the server has gone away
+      # We're not counting that towards our retry count.
+      return events, false
+    end
+
+    events.each do |event|
+      begin
+        statement = connection.prepareStatement(
+          (@unsafe_statement == true) ? event.sprintf(@statement[0]) : @statement[0]
+        )
+        statement = add_statement_event_params(statement, event) if @statement.length > 1
+        statement.execute
+      rescue => e
+        if retry_exception?(e)
+          events_to_retry.push(event)
+        end
+      ensure
+        statement.close unless statement.nil?
+      end
+    end
+
+    connection.close unless connection.nil?
+
+    return events_to_retry, true
+  end
+
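The submit method above returns the subset of events worth retrying, together with a flag saying whether the attempt should count towards `max_flush_exceptions` (a failure to even get a connection does not count). A standalone sketch of that contract, outside the plugin; `RetryableError`, `FatalError`, and `submit_sketch` are illustrative names, not part of the plugin:

```ruby
# Illustrative stand-ins for "retryable" vs "non-retryable" JDBC failures.
RetryableError = Class.new(StandardError)
FatalError = Class.new(StandardError)

# Run the given block once per event; collect only retryable failures.
# Returns [events_to_retry, counts_as_attempt], mirroring submit().
def submit_sketch(events)
  events_to_retry = []
  events.each do |event|
    begin
      yield event
    rescue RetryableError
      events_to_retry.push(event)
    rescue FatalError
      # Non-retryable: the event is dropped, as when retry_exception? is false.
    end
  end
  [events_to_retry, true]
end
```

With this shape, the caller can loop on the returned list until it is empty or the attempt budget is spent.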
+  def retrying_submit(actions)
+    # Initially we submit the full list of actions
+    submit_actions = actions
+    count_as_attempt = true
+
+    attempts = 1
+
+    sleep_interval = @retry_initial_interval
+    while @stopping.false? and (submit_actions and !submit_actions.empty?)
+      return if !submit_actions || submit_actions.empty? # If everything's a success we move along
+      # We retry whatever didn't succeed
+      submit_actions, count_as_attempt = submit(submit_actions)
+
+      # Everything was a success!
+      break if !submit_actions || submit_actions.empty?
+
+      if @max_flush_exceptions > 0 and count_as_attempt == true
+        attempts += 1
+
+        if attempts > @max_flush_exceptions
+          @logger.error("JDBC - max_flush_exceptions has been reached. #{submit_actions.length} events have been unable to be sent to SQL and are being dropped. See previously logged exceptions for details.")
+          break
+        end
+      end
+
+      # If we're retrying the action sleep for the recommended interval
+      # Double the interval for the next time through to achieve exponential backoff
+      Stud.stoppable_sleep(sleep_interval) { @stopping.true? }
+      sleep_interval = next_sleep_interval(sleep_interval)
+    end
+  end
+
+  def add_statement_event_params(statement, event)
+    @statement[1..-1].each_with_index do |i, idx|
+      if i.is_a? String
+        value = event[i]
+        if value.nil? and i =~ /%\{/
+          value = event.sprintf(i)
+        end
+      else
+        value = i
+      end
+
+      case value
+      when Time
+        # See LogStash::Timestamp, below, for the why behind strftime.
+        statement.setString(idx + 1, value.strftime(STRFTIME_FMT))
+      when LogStash::Timestamp
+        # XXX: Using setString as opposed to setTimestamp, because setTimestamp
+        # doesn't behave correctly in some drivers (Known: sqlite)
+        #
+        # Additionally this does not use `to_iso8601`, since some SQL databases
+        # choke on the 'T' in the string (Known: Derby).
+        #
+        # strftime appears to be the most reliable across drivers.
+        statement.setString(idx + 1, value.time.strftime(STRFTIME_FMT))
+      when Fixnum, Integer
+        statement.setInt(idx + 1, value)
+      when Float
+        statement.setFloat(idx + 1, value)
+      when String
+        statement.setString(idx + 1, value)
+      when true, false
+        statement.setBoolean(idx + 1, value)
+      else
+        statement.setString(idx + 1, nil)
+      end
+    end
+
+    statement
+  end
+
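The case statement above maps Ruby value types onto `java.sql.PreparedStatement` setters, falling back to a SQL NULL string for anything unrecognised. A plain-Ruby sketch of that dispatch, returning the setter name and converted value as data for illustration; `jdbc_setter_for` is a hypothetical helper, not part of the plugin:

```ruby
# Format matching the plugin's STRFTIME_FMT ('%T' is '%H:%M:%S').
STRFTIME_FMT_SKETCH = '%Y-%m-%d %T.%L'.freeze

# Decide which PreparedStatement setter a parameter value would use.
# Timestamps become strings (not setTimestamp) for driver compatibility.
def jdbc_setter_for(value)
  case value
  when Time
    [:setString, value.strftime(STRFTIME_FMT_SKETCH)]
  when Integer
    [:setInt, value]
  when Float
    [:setFloat, value]
  when String
    [:setString, value]
  when true, false
    [:setBoolean, value]
  else
    [:setString, nil] # unknown types are nulled, as in the plugin
  end
end
```

Note that booleans must be matched with `when true, false`, since Ruby has no common Boolean superclass.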
+  def retry_exception?(exception)
+    retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0,2]) or @retry_sql_states.include?(exception.getSQLState)))
+    log_jdbc_exception(exception, retrying)
+
+    retrying
+  end
+
+  def log_jdbc_exception(exception, retrying)
+    current_exception = exception
+    log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying') + '.'
+    log_method = (retrying ? 'warn' : 'error')
+
+    loop do
+      @logger.send(log_method, log_text, :exception => current_exception, :backtrace => current_exception.backtrace)
+
+      if current_exception.respond_to? 'getNextException'
+        current_exception = current_exception.getNextException()
+      else
+        current_exception = nil
+      end
+
+      break if current_exception == nil
+    end
+  end
+
+  def next_sleep_interval(current_interval)
+    doubled = current_interval * 2
+    doubled > @retry_max_interval ? @retry_max_interval : doubled
+  end
+end # class LogStash::Outputs::jdbc
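The retry policy above combines capped exponential backoff with retryability decided by the two-character SQLSTATE class prefix (plus any user-supplied `retry_sql_states`). A standalone sketch of both pieces, outside the plugin class; the function names and defaults mirror the code but are illustrative:

```ruby
# SQLSTATE classes the plugin treats as retryable (first two characters).
RETRYABLE_CLASSES_SKETCH = %w[08 24 25 40 53 54 55 57 58].freeze

# Double the sleep interval, capping at retry_max_interval (default 128s).
def next_sleep_interval_sketch(current, max = 128)
  doubled = current * 2
  doubled > max ? max : doubled
end

# An exception is retryable if its SQLSTATE falls in a retryable class,
# or matches a custom code from retry_sql_states exactly.
def retryable_sqlstate?(sqlstate, custom = [])
  RETRYABLE_CLASSES_SKETCH.include?(sqlstate.to_s[0, 2]) || custom.include?(sqlstate)
end
```

Starting from the default `retry_initial_interval` of 2 seconds, the sleep sequence is 2, 4, 8, ..., 128, 128, ..., which for the default 10 attempts is the "little over 10 minutes" mentioned in the `max_flush_exceptions` comment.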
data/spec/jdbc_spec_helper.rb
ADDED
@@ -0,0 +1,135 @@
+require 'logstash/devutils/rspec/spec_helper'
+require 'logstash/outputs/jdbc'
+require 'stud/temporary'
+require 'java'
+require 'securerandom'
+
+RSpec.shared_context 'rspec setup' do
+  it 'ensure jar is available' do
+    expect(ENV[jdbc_jar_env]).not_to be_nil, "#{jdbc_jar_env} not defined, required to run tests"
+    expect(File.exist?(ENV[jdbc_jar_env])).to eq(true), "#{jdbc_jar_env} defined, but not valid"
+  end
+end
+
+RSpec.shared_context 'when initializing' do
+  it 'shouldn\'t register with a missing jar file' do
+    jdbc_settings['driver_jar_path'] = nil
+    plugin = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
+    expect { plugin.register }.to raise_error(LogStash::ConfigurationError)
+  end
+end
+
+RSpec.shared_context 'when outputting messages' do
+  let(:logger) { double("logger") }
+
+  let(:jdbc_test_table) do
+    'logstash_output_jdbc_test'
+  end
+
+  let(:jdbc_drop_table) do
+    "DROP TABLE #{jdbc_test_table}"
+  end
+
+  let(:jdbc_create_table) do
+    "CREATE table #{jdbc_test_table} (created_at datetime not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit bit not null)"
+  end
+
+  let(:jdbc_statement) do
+    ["insert into #{jdbc_test_table} (created_at, message, message_sprintf, static_int, static_bit) values(?, ?, ?, ?, ?)", '@timestamp', 'message', 'sprintf-%{message}', 1, true]
+  end
+
+  let(:systemd_database_service) do
+    nil
+  end
+
+  let(:event_fields) do
+    { 'message' => "test-message #{SecureRandom.uuid}" }
+  end
+
+  let(:event) { LogStash::Event.new(event_fields) }
+
+  let(:plugin) do
+    # Setup plugin
+    output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
+    output.register
+    output.logger = logger
+
+    # Setup table
+    c = output.instance_variable_get(:@pool).getConnection
+
+    # Derby doesn't support IF EXISTS.
+    # Seems like the quickest solution. Bleurgh.
+    begin
+      stmt = c.createStatement
+      stmt.executeUpdate(jdbc_drop_table)
+    rescue
+      # noop
+    ensure
+      stmt.close
+
+      stmt = c.createStatement
+      stmt.executeUpdate(jdbc_create_table)
+      stmt.close
+      c.close
+    end
+
+    output
+  end
+
+  it 'should save an event' do
+    expect { plugin.multi_receive([event]) }.to_not raise_error
+
+    # Verify the number of items in the output table
+    c = plugin.instance_variable_get(:@pool).getConnection
+    stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
+    stmt.setString(1, event['message'])
+    rs = stmt.executeQuery
+    count = 0
+    count = rs.getInt('total') while rs.next
+    stmt.close
+    c.close
+
+    expect(count).to eq(1)
+  end
+
+  it 'should not save event, and log an unretryable exception' do
+    e = LogStash::Event.new({})
+
+    expect(logger).to receive(:error).once.with(/JDBC - Exception. Not retrying/, Hash)
+    expect { plugin.multi_receive([e]) }.to_not raise_error
+  end
+
+  it 'should retry after a connection loss, and log a warning' do
+    skip "does not run as a service" if systemd_database_service.nil?
+
+    p = plugin
+
+    # Check that everything is fine right now
+    expect { p.multi_receive([event]) }.not_to raise_error
+
+    # Start a thread to stop and restart the service.
+    t = Thread.new(systemd_database_service) { |systemd_database_service|
+      start_stop_cmd = 'sudo /etc/init.d/%<service>s* %<action>s'
+
+      `which systemctl`
+      if $?.success?
+        start_stop_cmd = 'sudo systemctl %<action>s %<service>s'
+      end
+
+      cmd = start_stop_cmd % { action: 'stop', service: systemd_database_service }
+      `#{cmd}`
+      sleep 10
+
+      cmd = start_stop_cmd % { action: 'start', service: systemd_database_service }
+      `#{cmd}`
+    }
+
+    # Wait a few seconds for the service to stop
+    sleep 5
+
+    expect(logger).to receive(:warn).at_least(:once).with(/JDBC - Exception. Retrying/, Hash)
+    expect { p.multi_receive([event]) }.to_not raise_error
+
+    t.join
+  end
+end
data/spec/outputs/jdbc_derby_spec.rb
ADDED
@@ -0,0 +1,25 @@
+require_relative '../jdbc_spec_helper'
+
+describe 'logstash-output-jdbc: derby', if: ENV['JDBC_DERBY_JAR'] do
+  include_context 'rspec setup'
+  include_context 'when initializing'
+  include_context 'when outputting messages'
+
+  let(:jdbc_jar_env) do
+    'JDBC_DERBY_JAR'
+  end
+
+  let(:jdbc_create_table) do
+    "CREATE table #{jdbc_test_table} (created_at timestamp not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit boolean not null)"
+  end
+
+  let(:jdbc_settings) do
+    {
+      'driver_class' => 'org.apache.derby.jdbc.EmbeddedDriver',
+      'connection_string' => 'jdbc:derby:memory:testdb;create=true',
+      'driver_jar_path' => ENV[jdbc_jar_env],
+      'statement' => jdbc_statement,
+      'max_flush_exceptions' => 1
+    }
+  end
+end
data/spec/outputs/jdbc_mysql_spec.rb
ADDED
@@ -0,0 +1,25 @@
+require_relative '../jdbc_spec_helper'
+
+describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
+  include_context 'rspec setup'
+  include_context 'when initializing'
+  include_context 'when outputting messages'
+
+  let(:jdbc_jar_env) do
+    'JDBC_MYSQL_JAR'
+  end
+
+  let(:systemd_database_service) do
+    'mysql'
+  end
+
+  let(:jdbc_settings) do
+    {
+      'driver_class' => 'com.mysql.jdbc.Driver',
+      'connection_string' => 'jdbc:mysql://localhost/logstash_output_jdbc_test?user=root',
+      'driver_jar_path' => ENV[jdbc_jar_env],
+      'statement' => jdbc_statement,
+      'max_flush_exceptions' => 1
+    }
+  end
+end
data/spec/outputs/jdbc_spec.rb
ADDED
@@ -0,0 +1,11 @@
+require_relative '../jdbc_spec_helper'
+
+describe LogStash::Outputs::Jdbc do
+  context 'when initializing' do
+    it 'shouldn\'t register without a config' do
+      expect do
+        LogStash::Plugin.lookup('output', 'jdbc').new
+      end.to raise_error(LogStash::ConfigurationError)
+    end
+  end
+end
data/spec/outputs/jdbc_sqlite_spec.rb
ADDED
@@ -0,0 +1,27 @@
+require_relative '../jdbc_spec_helper'
+
+describe 'logstash-output-jdbc: sqlite', if: ENV['JDBC_SQLITE_JAR'] do
+  JDBC_SQLITE_FILE = '/tmp/logstash_output_jdbc_test.db'.freeze
+
+  before(:context) do
+    File.delete(JDBC_SQLITE_FILE) if File.exist? JDBC_SQLITE_FILE
+  end
+
+  include_context 'rspec setup'
+  include_context 'when initializing'
+  include_context 'when outputting messages'
+
+  let(:jdbc_jar_env) do
+    'JDBC_SQLITE_JAR'
+  end
+
+  let(:jdbc_settings) do
+    {
+      'driver_class' => 'org.sqlite.JDBC',
+      'connection_string' => "jdbc:sqlite:#{JDBC_SQLITE_FILE}",
+      'driver_jar_path' => ENV[jdbc_jar_env],
+      'statement' => jdbc_statement,
+      'max_flush_exceptions' => 1
+    }
+  end
+end
data/vendor/jar-dependencies/runtime-jars/HikariCP-2.4.2.jar
Binary file
data/vendor/jar-dependencies/runtime-jars/log4j-1.2.17.jar
Binary file
data/vendor/jar-dependencies/runtime-jars/slf4j-api-1.7.12.jar
Binary file
data/vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.21.jar
Binary file
metadata
ADDED
@@ -0,0 +1,165 @@
+--- !ruby/object:Gem::Specification
+name: logstash-output-jdbc
+version: !ruby/object:Gem::Version
+  version: 0.3.0
+platform: java
+authors:
+- the_angry_angel
+autorequire:
+bindir: bin
+cert_chain: []
+date: 2016-07-24 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: logstash-core-plugin-api
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '1.0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '1.0'
+- !ruby/object:Gem::Dependency
+  name: stud
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: logstash-codec-plain
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: jar-dependencies
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: ruby-maven
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '3.3'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '3.3'
+- !ruby/object:Gem::Dependency
+  name: logstash-devutils
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: rubocop
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+description: This gem is a logstash plugin required to be installed on top of the
+  Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not
+  a stand-alone program
+email: karl+github@theangryangel.co.uk
+executables: []
+extensions: []
+extra_rdoc_files: []
+files:
+- lib/logstash/outputs/jdbc.rb
+- lib/logstash-output-jdbc_jars.rb
+- spec/jdbc_spec_helper.rb
+- spec/outputs/jdbc_derby_spec.rb
+- spec/outputs/jdbc_mysql_spec.rb
+- spec/outputs/jdbc_spec.rb
+- spec/outputs/jdbc_sqlite_spec.rb
+- vendor/jar-dependencies/runtime-jars/HikariCP-2.4.2.jar
+- vendor/jar-dependencies/runtime-jars/log4j-1.2.17.jar
+- vendor/jar-dependencies/runtime-jars/slf4j-api-1.7.12.jar
+- vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.21.jar
+- LICENSE.txt
+- README.md
+homepage: https://github.com/theangryangel/logstash-output-jdbc
+licenses:
+- Apache License (2.0)
+metadata:
+  logstash_plugin: 'true'
+  logstash_group: output
+post_install_message:
+rdoc_options: []
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - '>='
+    - !ruby/object:Gem::Version
+      version: '0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - '>='
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements:
+- jar 'com.zaxxer:HikariCP', '2.4.2'
+- jar 'org.slf4j:slf4j-log4j12', '1.7.21'
+rubyforge_project:
+rubygems_version: 2.0.14.1
+signing_key:
+specification_version: 4
+summary: This plugin allows you to output to SQL, via JDBC
+test_files:
+- spec/jdbc_spec_helper.rb
+- spec/outputs/jdbc_derby_spec.rb
+- spec/outputs/jdbc_mysql_spec.rb
+- spec/outputs/jdbc_spec.rb
+- spec/outputs/jdbc_sqlite_spec.rb