logstash-filter-jdbc_static 1.0.1 → 1.0.2
- checksums.yaml +4 -4
- data/CHANGELOG.md +7 -2
- data/Gemfile +9 -0
- data/docs/index.asciidoc +577 -0
- data/lib/logstash-filter-jdbc_static_jars.rb +4 -4
- data/lib/logstash/filters/jdbc/basic_database.rb +15 -1
- data/lib/logstash/filters/jdbc/column.rb +1 -0
- data/lib/logstash/filters/jdbc/db_object.rb +4 -6
- data/lib/logstash/filters/jdbc/loader.rb +8 -3
- data/lib/logstash/filters/jdbc/loader_schedule.rb +30 -4
- data/lib/logstash/filters/jdbc/lookup.rb +1 -0
- data/lib/logstash/filters/jdbc/lookup_processor.rb +5 -6
- data/lib/logstash/filters/jdbc/lookup_result.rb +1 -0
- data/lib/logstash/filters/jdbc/read_write_database.rb +28 -16
- data/lib/logstash/filters/jdbc/repeating_load_runner.rb +2 -0
- data/lib/logstash/filters/jdbc/single_load_runner.rb +8 -5
- data/lib/logstash/filters/jdbc/validatable.rb +2 -5
- data/lib/logstash/filters/jdbc_static.rb +26 -7
- data/logstash-filter-jdbc_static.gemspec +7 -12
- data/spec/filters/jdbc/column_spec.rb +2 -2
- data/spec/filters/jdbc/loader_spec.rb +1 -0
- data/spec/filters/jdbc/read_only_database_spec.rb +26 -2
- data/spec/filters/jdbc/read_write_database_spec.rb +18 -17
- data/spec/filters/jdbc/repeating_load_runner_spec.rb +1 -1
- data/spec/filters/jdbc_static_spec.rb +95 -17
- data/spec/filters/shared_helpers.rb +3 -4
- data/vendor/jar-dependencies/{runtime-jars → org/apache/derby/derby/10.14.1.0}/derby-10.14.1.0.jar +0 -0
- data/vendor/jar-dependencies/{runtime-jars → org/apache/derby/derbyclient/10.14.1.0}/derbyclient-10.14.1.0.jar +0 -0
- metadata +22 -9
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 14fbe648ee79593894c1735cc82823cc34dc423db0eb36e7d8f400e00290fbed
+  data.tar.gz: e8d9c13670e105dea7c59544d92e8eed15fb7bdcb7a0d2fc8e4069c4b139e20d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 89fbedbb6c4258964e1a947830b95e3551118e04262637a5807d16b1dc8063517351d84ffc7ce31d72ed4eb16f352d20496f9f009bd570c70024d51827b84740
+  data.tar.gz: a132cdb916d82f0ea1eed798363bf50c092b1713a68df623ded472538e2bc5310c3b07a6de89c71223e20eeae99add66a6b413fecfa53518614dc0aa54bcdc64
data/CHANGELOG.md CHANGED
@@ -1,5 +1,10 @@
+## 1.0.2
+ - Fix [jdbc_static filter - #22](https://github.com/logstash-plugins/logstash-filter-jdbc_static/issues/22) Support multiple driver libraries.
+ - Fixes for [jdbc_static filter - #18](https://github.com/logstash-plugins/logstash-filter-jdbc_static/issues/18), [jdbc_static filter - #17](https://github.com/logstash-plugins/logstash-filter-jdbc_static/issues/17), [jdbc_static filter - #12](https://github.com/logstash-plugins/logstash-filter-jdbc_static/issues/12) Use the Java classloader to load the driver jar. Use system import from file to load the local database. Prevent locking errors when no records are returned.
+ - Fix [jdbc_static filter - #8](https://github.com/logstash-plugins/logstash-filter-jdbc_static/issues/8) loader_schedule now works as designed.
+
 ## 1.0.1
   - Docs: Edit documentation
-
+
 ## 1.0.0
-  - Initial commit
+ - Initial commit
data/Gemfile CHANGED
@@ -1,2 +1,11 @@
 source 'https://rubygems.org'
 gemspec
+
+logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
+use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
+
+if Dir.exist?(logstash_path) && use_logstash_source
+  gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
+  gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
+end
data/docs/index.asciidoc ADDED
@@ -0,0 +1,577 @@
:plugin: jdbc_static
:type: filter

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: %VERSION%
:release_date: %RELEASE_DATE%
:changelog_url: %CHANGELOG_URL%
:include_path: ../../../../logstash/docs/include
///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////

[id="plugins-{type}s-{plugin}"]

=== Jdbc_static filter plugin

include::{include_path}/plugin_header.asciidoc[]

==== Description

This filter enriches events with data pre-loaded from a remote database.

This filter is best suited for enriching events with reference data that is
static or does not change very often, such as environments, users, and products.

This filter works by fetching data from a remote database, caching it in a
local, in-memory https://db.apache.org/derby/manuals/#docs_10.14[Apache Derby]
database, and using lookups to enrich events with data cached in the local
database. You can set up the filter to load the remote data once (for static
data), or you can schedule remote loading to run periodically (for data that
needs to be refreshed).

To define the filter, you specify three main sections: local_db_objects, loaders,
and lookups.

*local_db_objects*::

Define the columns, types, and indexes used to build the local database
structure. The column names and types should match the external database.
Define as many of these objects as needed to build the local database
structure.

*loaders*::

Query the external database to fetch the dataset that will be cached locally.
Define as many loaders as needed to fetch the remote data. Each
loader should fill a table defined by `local_db_objects`. Make sure
the column names and datatypes in the loader SQL statement match the
columns defined under `local_db_objects`. Each loader has an independent remote
database connection.

*lookups*::

Perform lookup queries on the local database to enrich the events.
Define as many lookups as needed to enhance the event from all
lookup tables in one pass. Ideally the SQL statement should only
return one row. Any rows are converted to Hash objects and are
stored in a target field that is an Array.

The following example config fetches data from a remote database, caches it in a
local database, and uses lookups to enrich events with data cached in the local
database.
+
["source","json",subs="callouts"]
-----
filter {
  jdbc_static {
    loaders => [ <1>
      {
        id => "remote-servers"
        query => "select ip, descr from ref.local_ips order by ip"
        local_table => "servers"
      },
      {
        id => "remote-users"
        query => "select firstname, lastname, userid from ref.local_users order by userid"
        local_table => "users"
      }
    ]
    local_db_objects => [ <2>
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],
          ["descr", "varchar(255)"]
        ]
      },
      {
        name => "users"
        index_columns => ["userid"]
        columns => [
          ["firstname", "varchar(255)"],
          ["lastname", "varchar(255)"],
          ["userid", "int"]
        ]
      }
    ]
    local_lookups => [ <3>
      {
        id => "local-servers"
        query => "select descr as description from servers WHERE ip = :ip"
        parameters => {ip => "[from_ip]"}
        target => "server"
      },
      {
        id => "local-users"
        query => "select firstname, lastname from users WHERE userid = :id"
        parameters => {id => "[loggedin_userid]"}
        target => "user" <4>
      }
    ]
    # using add_field here to add & rename values to the event root
    add_field => { server_name => "%{[server][0][description]}" }
    add_field => { user_firstname => "%{[user][0][firstname]}" } <5>
    add_field => { user_lastname => "%{[user][0][lastname]}" } <5>
    remove_field => ["server", "user"]
    staging_directory => "/tmp/logstash/jdbc_static/import_data"
    loader_schedule => "* */2 * * *" # run loaders every 2 hours
    jdbc_user => "logstash"
    jdbc_password => "example"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "/tmp/logstash/vendor/postgresql-42.1.4.jar"
    jdbc_connection_string => "jdbc:postgresql://remotedb:5432/ls_test_2"
  }
}
-----
<1> Queries an external database to fetch the dataset that will be cached
locally.
<2> Defines the columns, types, and indexes used to build the local database
structure. The column names and types should match the external database.
<3> Performs lookup queries on the local database to enrich the events.
<4> Specifies the event field that will store the looked-up data. If the lookup
returns multiple columns, the data is stored as a JSON object within the field.
<5> Takes data from the JSON object and stores it in top-level event fields for
easier analysis in Kibana.

Here's a full example:

[source,json]
-----
input {
  generator {
    lines => [
      '{"from_ip": "10.2.3.20", "app": "foobar", "amount": 32.95}',
      '{"from_ip": "10.2.3.30", "app": "barfoo", "amount": 82.95}',
      '{"from_ip": "10.2.3.40", "app": "bazfoo", "amount": 22.95}'
    ]
    count => 200
  }
}

filter {
  json {
    source => "message"
  }

  jdbc_static {
    loaders => [
      {
        id => "servers"
        query => "select ip, descr from ref.local_ips order by ip"
        local_table => "servers"
      }
    ]
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],
          ["descr", "varchar(255)"]
        ]
      }
    ]
    local_lookups => [
      {
        query => "select descr as description from servers WHERE ip = :ip"
        parameters => {ip => "[from_ip]"}
        target => "server"
      }
    ]
    staging_directory => "/tmp/logstash/jdbc_static/import_data"
    loader_schedule => "*/30 * * * *"
    jdbc_user => "logstash"
    jdbc_password => "logstash??"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "/Users/guy/tmp/logstash-6.0.0/vendor/postgresql-42.1.4.jar"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/ls_test_2"
  }
}

output {
  stdout {
    codec => rubydebug {metadata => true}
  }
}
-----

Assuming the loader fetches the following data from a Postgres database:

[source,shell]
select * from ref.local_ips order by ip;
    ip     |         descr
-----------+-----------------------
 10.2.3.10 | Authentication Server
 10.2.3.20 | Payments Server
 10.2.3.30 | Events Server
 10.2.3.40 | Payroll Server
 10.2.3.50 | Uploads Server


The events are enriched with a description of the server based on the value of
the IP:

[source,shell]
{
           "app" => "bazfoo",
      "sequence" => 0,
        "server" => [
        [0] {
            "description" => "Payroll Server"
        }
    ],
        "amount" => 22.95,
    "@timestamp" => 2017-11-30T18:08:15.694Z,
      "@version" => "1",
          "host" => "Elastics-MacBook-Pro.local",
       "message" => "{\"from_ip\": \"10.2.3.40\", \"app\": \"bazfoo\", \"amount\": 22.95}",
       "from_ip" => "10.2.3.40"
}


==== Using this plugin with multiple pipelines

[IMPORTANT]
===============================
Logstash uses a single, in-memory Apache Derby instance as the lookup database
engine for the entire JVM. Because each plugin instance uses a unique database
inside the shared Derby engine, there should be no conflicts with plugins
attempting to create and populate the same tables. This is true regardless of
whether the plugins are defined in a single pipeline or in multiple pipelines.
However, after setting up the filter, you should watch the lookup results and
view the logs to verify correct operation.
===============================

[id="plugins-{type}s-{plugin}-options"]
|
250
|
+
==== Jdbc_static Filter Configuration Options
|
251
|
+
|
252
|
+
This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
|
253
|
+
|
254
|
+
[cols="<,<,<",options="header",]
|
255
|
+
|=======================================================================
|
256
|
+
|Setting |Input type|Required
|
257
|
+
| <<plugins-{type}s-{plugin}-jdbc_connection_string>> |<<string,string>>|Yes
|
258
|
+
| <<plugins-{type}s-{plugin}-jdbc_driver_class>> |<<string,string>>|Yes
|
259
|
+
| <<plugins-{type}s-{plugin}-jdbc_driver_library>> |a valid filesystem path|No
|
260
|
+
| <<plugins-{type}s-{plugin}-jdbc_password>> |<<password,password>>|No
|
261
|
+
| <<plugins-{type}s-{plugin}-jdbc_user>> |<<string,string>>|No
|
262
|
+
| <<plugins-{type}s-{plugin}-tag_on_failure>> |<<array,array>>|No
|
263
|
+
| <<plugins-{type}s-{plugin}-tag_on_default_use>> |<<array,array>>|No
|
264
|
+
| <<plugins-{type}s-{plugin}-staging_directory>> |<<string,string>>|No
|
265
|
+
| <<plugins-{type}s-{plugin}-loader_schedule>>|<<string,string>>|No
|
266
|
+
| <<plugins-{type}s-{plugin}-loaders>>|<<array,array>>|No
|
267
|
+
| <<plugins-{type}s-{plugin}-local_db_objects>>|<<array,array>>|No
|
268
|
+
| <<plugins-{type}s-{plugin}-local_lookups>>|<<array,array>>|No
|
269
|
+
|=======================================================================
|
270
|
+
|
271
|
+
Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
|
272
|
+
filter plugins.
|
273
|
+
|
274
|
+
|
275
|
+
|
276
|
+
[id="plugins-{type}s-{plugin}-jdbc_connection_string"]
|
277
|
+
===== `jdbc_connection_string`
|
278
|
+
|
279
|
+
* This is a required setting.
|
280
|
+
* Value type is <<string,string>>
|
281
|
+
* There is no default value for this setting.
|
282
|
+
|
283
|
+
JDBC connection string.
|
284
|
+
|
285
|
+
[id="plugins-{type}s-{plugin}-jdbc_driver_class"]
|
286
|
+
===== `jdbc_driver_class`
|
287
|
+
|
288
|
+
* This is a required setting.
|
289
|
+
* Value type is <<string,string>>
|
290
|
+
* There is no default value for this setting.
|
291
|
+
|
292
|
+
JDBC driver class to load, for example, "org.apache.derby.jdbc.ClientDriver".
|
293
|
+
|
294
|
+
NOTE: According to https://github.com/logstash-plugins/logstash-input-jdbc/issues/43[Issue 43],
|
295
|
+
if you are using the Oracle JDBC driver (ojdbc6.jar), the correct
|
296
|
+
`jdbc_driver_class` is `"Java::oracle.jdbc.driver.OracleDriver"`.
|
297
|
+
|
298
|
+
[id="plugins-{type}s-{plugin}-jdbc_driver_library"]
|
299
|
+
===== `jdbc_driver_library`
|
300
|
+
|
301
|
+
* Value type is <<string,string>>
|
302
|
+
* There is no default value for this setting.
|
303
|
+
|
304
|
+
JDBC driver library path to third-party driver library. Use comma separated paths
|
305
|
+
in one string if you need multiple libraries.
|
306
|
+
|
307
|
+
If the driver class is not provided, the plugin looks for it in the Logstash
|
308
|
+
Java classpath.
|
309
|
+
|
310
|
+
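
For example, a minimal sketch of loading a driver that ships as more than one
jar; the paths, driver class, and connection string below are illustrative and
not part of the original example:

[source,json]
-----
filter {
  jdbc_static {
    # illustrative paths - list every jar the driver needs in one comma-separated string
    jdbc_driver_library => "/opt/drivers/mssql-jdbc-6.2.2.jre8.jar,/opt/drivers/sqljdbc_auth.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://remotedb:1433;databaseName=ref"
    jdbc_user => "logstash"
    jdbc_password => "example"
  }
}
-----
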
[id="plugins-{type}s-{plugin}-jdbc_password"]
|
311
|
+
===== `jdbc_password`
|
312
|
+
|
313
|
+
* Value type is <<password,password>>
|
314
|
+
* There is no default value for this setting.
|
315
|
+
|
316
|
+
JDBC password.
|
317
|
+
|
318
|
+
[id="plugins-{type}s-{plugin}-jdbc_user"]
|
319
|
+
===== `jdbc_user`
|
320
|
+
|
321
|
+
* This is a required setting.
|
322
|
+
* Value type is <<string,string>>
|
323
|
+
* There is no default value for this setting.
|
324
|
+
|
325
|
+
JDBC user.
|
326
|
+
|
327
|
+
[id="plugins-{type}s-{plugin}-tag_on_default_use"]
|
328
|
+
===== `tag_on_default_use`
|
329
|
+
|
330
|
+
* Value type is <<array,array>>
|
331
|
+
* Default value is `["_jdbcstaticdefaultsused"]`
|
332
|
+
|
333
|
+
Append values to the `tags` field if no record was found and default values were used.
|
334
|
+
|
335
|
+
[id="plugins-{type}s-{plugin}-tag_on_failure"]
|
336
|
+
===== `tag_on_failure`
|
337
|
+
|
338
|
+
* Value type is <<array,array>>
|
339
|
+
* Default value is `["_jdbcstaticfailure"]`
|
340
|
+
|
341
|
+
Append values to the `tags` field if a SQL error occurred.
|
342
|
+
|
343
|
+
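
These tags can drive conditional routing later in the pipeline. A minimal
sketch using the default tag values; the output index names are illustrative:

[source,json]
-----
output {
  if "_jdbcstaticfailure" in [tags] or "_jdbcstaticdefaultsused" in [tags] {
    # route unenriched events to a separate index for inspection
    elasticsearch { index => "unenriched-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { index => "enriched-%{+YYYY.MM.dd}" }
  }
}
-----
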
[id="plugins-{type}s-{plugin}-staging_directory"]
|
344
|
+
===== `staging_directory`
|
345
|
+
|
346
|
+
* Value type is <<string,string>>
|
347
|
+
* Default value is derived from the Ruby temp directory + plugin_name + "import_data"
|
348
|
+
* e.g. `"/tmp/logstash/jdbc_static/import_data"`
|
349
|
+
|
350
|
+
The directory used stage the data for bulk loading, there should be sufficient
|
351
|
+
disk space to handle the data you wish to use to enrich events.
|
352
|
+
Previous versions of this plugin did not handle loading datasets of more than
|
353
|
+
several thousand rows well due to an open bug in Apache Derby. This setting
|
354
|
+
introduces an alternative way of loading large recordsets. As each row is
|
355
|
+
received it is spooled to file and then that file is imported using a
|
356
|
+
system 'import table' system call.
|
357
|
+
|
358
|
+
Append values to the `tags` field if a SQL error occurred.
|
359
|
+
|
360
|
+
[id="plugins-{type}s-{plugin}-loader_schedule"]
|
361
|
+
===== `loader_schedule`
|
362
|
+
|
363
|
+
* Value type is <<string,string>>
|
364
|
+
* There is no default value for this setting.
|
365
|
+
|
366
|
+
You can schedule remote loading to run periodically according to a
|
367
|
+
specific schedule. This scheduling syntax is powered by
|
368
|
+
https://github.com/jmettraux/rufus-scheduler[rufus-scheduler]. The
|
369
|
+
syntax is cron-like with some extensions specific to Rufus
|
370
|
+
(for example, timezone support). For more about this syntax, see
|
371
|
+
https://github.com/jmettraux/rufus-scheduler#parsing-cronlines-and-time-strings[parsing cronlines and time strings].
|
372
|
+
|
373
|
+
Examples:
|
374
|
+
|
375
|
+
|==========================================================
|
376
|
+
| `*/30 * * * *` | will execute on the 0th and 30th minute of every hour every day.
|
377
|
+
| `* 5 * 1-3 *` | will execute every minute of 5am every day of January through March.
|
378
|
+
| `0 * * * *` | will execute on the 0th minute of every hour every day.
|
379
|
+
| `0 6 * * * America/Chicago` | will execute at 6:00am (UTC/GMT -5) every day.
|
380
|
+
|==========================================================
|
381
|
+
|
382
|
+
Debugging using the Logstash interactive shell:
|
383
|
+
[source,shell]
|
384
|
+
bin/logstash -i irb
|
385
|
+
irb(main):001:0> require 'rufus-scheduler'
|
386
|
+
=> true
|
387
|
+
irb(main):002:0> Rufus::Scheduler.parse('*/10 * * * *')
|
388
|
+
=> #<Rufus::Scheduler::CronLine:0x230f8709 @timezone=nil, @weekdays=nil, @days=nil, @seconds=[0], @minutes=[0, 10, 20, 30, 40, 50], @hours=nil, @months=nil, @monthdays=nil, @original="*/10 * * * *">
|
389
|
+
irb(main):003:0> exit
|
390
|
+
|
391
|
+
|
392
|
+
The object returned by the above call, an instance of `Rufus::Scheduler::CronLine` shows the seconds, minutes etc. of execution.
|
393
|
+
|
394
|
+
[id="plugins-{type}s-{plugin}-loaders"]
|
395
|
+
===== `loaders`
|
396
|
+
|
397
|
+
* Value type is <<array,array>>
|
398
|
+
* Default value is `[]`
|
399
|
+
|
400
|
+
The array should contain one or more Hashes. Each Hash is validated
|
401
|
+
according to the table below.
|
402
|
+
|
403
|
+
[cols="<,<,<",options="header",]
|
404
|
+
|=======================================================================
|
405
|
+
|Setting |Input type|Required
|
406
|
+
| id|string|No
|
407
|
+
| table|string|Yes
|
408
|
+
| query|string|Yes
|
409
|
+
| max_rows|number|No
|
410
|
+
| jdbc_connection_string|string|No
|
411
|
+
| jdbc_driver_class|string|No
|
412
|
+
| jdbc_driver_library|a valid filesystem path|No
|
413
|
+
| jdbc_password|password|No
|
414
|
+
| jdbc_user|string|No
|
415
|
+
|=======================================================================
|
416
|
+
|
417
|
+
*Loader Field Descriptions:*
|
418
|
+
|
419
|
+
id::
|
420
|
+
An optional identifier. This is used to identify the loader that is
|
421
|
+
generating error messages and log lines.
|
422
|
+
|
423
|
+
table::
|
424
|
+
The destination table in the local lookup database that the loader will fill.
|
425
|
+
|
426
|
+
query::
|
427
|
+
The SQL statement that is executed to fetch the remote records. Use SQL
|
428
|
+
aliases and casts to ensure that the record's columns and datatype match the
|
429
|
+
table structure in the local database as defined in the `local_db_objects`.
|
430
|
+
|
431
|
+
max_rows::
|
432
|
+
The default for this setting is 1 million. Because the lookup database is
|
433
|
+
in-memory, it will take up JVM heap space. If the query returns many millions
|
434
|
+
of rows, you should increase the JVM memory given to Logstash or limit the
|
435
|
+
number of rows returned, perhaps to those most frequently found in the
|
436
|
+
event data.
|
437
|
+
|
438
|
+
jdbc_connection_string::
|
439
|
+
If not set in a loader, this setting defaults to the plugin-level
|
440
|
+
`jdbc_connection_string` setting.
|
441
|
+
|
442
|
+
jdbc_driver_class::
|
443
|
+
If not set in a loader, this setting defaults to the plugin-level
|
444
|
+
`jdbc_driver_class` setting.
|
445
|
+
|
446
|
+
jdbc_driver_library::
|
447
|
+
If not set in a loader, this setting defaults to the plugin-level
|
448
|
+
`jdbc_driver_library` setting.
|
449
|
+
|
450
|
+
jdbc_password::
|
451
|
+
If not set in a loader, this setting defaults to the plugin-level
|
452
|
+
`jdbc_password` setting.
|
453
|
+
|
454
|
+
jdbc_user::
|
455
|
+
If not set in a loader, this setting defaults to the plugin-level
|
456
|
+
`jdbc_user` setting.
|
457
|
+
|
458
|
+
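
As a sketch of how the per-loader settings compose with the plugin-level
defaults, the loader below caps the cached rows and reads from a second
database. Every value is illustrative; it assumes a `servers` table defined in
`local_db_objects`, and it follows the examples above in using `local_table`:

[source,json]
-----
filter {
  jdbc_static {
    loaders => [
      {
        id => "mysql-servers"     # appears in error messages and log lines
        query => "select ip, descr from ref.local_ips order by ip"
        local_table => "servers"  # fills the servers table defined in local_db_objects
        max_rows => 100000        # limit the heap used by the cached table
        # these override the plugin-level jdbc_* settings for this loader only
        jdbc_connection_string => "jdbc:mysql://otherdb:3306/ref"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_driver_library => "/opt/drivers/mysql-connector-java-8.0.11.jar"
        jdbc_user => "logstash"
        jdbc_password => "example"
      }
    ]
  }
}
-----
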
[id="plugins-{type}s-{plugin}-local_db_objects"]
|
459
|
+
===== `local_db_objects`
|
460
|
+
|
461
|
+
* Value type is <<array,array>>
|
462
|
+
* Default value is `[]`
|
463
|
+
|
464
|
+
The array should contain one or more Hashes. Each Hash represents a table
|
465
|
+
schema for the local lookups database. Each Hash is validated
|
466
|
+
according to the table below.
|
467
|
+
|
468
|
+
[cols="<,<,<",options="header",]
|
469
|
+
|=======================================================================
|
470
|
+
|Setting |Input type|Required
|
471
|
+
| name|string|Yes
|
472
|
+
| columns|array|Yes
|
473
|
+
| index_columns|number|No
|
474
|
+
| preserve_existing|boolean|No
|
475
|
+
|=======================================================================
|
476
|
+
|
477
|
+
*Local_db_objects Field Descriptions:*
|
478
|
+
|
479
|
+
name::
|
480
|
+
The name of the table to be created in the database.
|
481
|
+
|
482
|
+
columns::
|
483
|
+
An array of column specifications. Each column specification is an array
|
484
|
+
of exactly two elements, for example `["ip", "varchar(15)"]`. The first
|
485
|
+
element is the column name string. The second element is a string that
|
486
|
+
is an
|
487
|
+
https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.html[Apache Derby SQL type].
|
488
|
+
The string content is checked when the local lookup tables are built, not when
|
489
|
+
the settings are validated. Therefore, any misspelled SQL type strings result in
|
490
|
+
errors.
|
491
|
+
|
492
|
+
index_columns::
|
493
|
+
An array of strings. Each string must be defined in the `columns` setting. The
|
494
|
+
index name will be generated internally. Unique or sorted indexes are not
|
495
|
+
supported.
|
496
|
+
|
497
|
+
preserve_existing::
|
498
|
+
This setting, when `true`, checks whether the table already exists in the local
|
499
|
+
lookup database. If you have multiple pipelines running in the same
|
500
|
+
instance of Logstash, and more than one pipeline is using this plugin, then you
|
501
|
+
must read the important multiple pipeline notice at the top of the page.
|
502
|
+
|
503
|
+
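
For instance, a sketch of a table definition that two pipelines could share
without rebuilding each other's data; `preserve_existing` is the only addition
to the earlier `servers` example:

[source,json]
-----
filter {
  jdbc_static {
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],
          ["descr", "varchar(255)"]
        ]
        preserve_existing => true  # keep the table if another pipeline already created it
      }
    ]
  }
}
-----
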
[id="plugins-{type}s-{plugin}-local_lookups"]
|
504
|
+
===== `local_lookups`
|
505
|
+
|
506
|
+
* Value type is <<array,array>>
|
507
|
+
* Default value is `[]`
|
508
|
+
|
509
|
+
The array should contain one or more Hashes. Each Hash represents a lookup
|
510
|
+
enhancement. Each Hash is validated according to the table below.
|
511
|
+
|
512
|
+
[cols="<,<,<",options="header",]
|
513
|
+
|=======================================================================
|
514
|
+
|Setting |Input type|Required
|
515
|
+
| id|string|No
|
516
|
+
| query|string|Yes
|
517
|
+
| parameters|hash|Yes
|
518
|
+
| target|string|No
|
519
|
+
| default_hash|hash|No
|
520
|
+
| tag_on_failure|string|No
|
521
|
+
| tag_on_default_use|string|No
|
522
|
+
|=======================================================================
|
523
|
+
|
524
|
+
*Local_lookups Field Descriptions:*
|
525
|
+
|
526
|
+
id::
|
527
|
+
An optional identifier. This is used to identify the lookup that is
|
528
|
+
generating error messages and log lines. If you omit this setting then a
|
529
|
+
default id is used instead.
|
530
|
+
|
531
|
+
query::
|
532
|
+
A SQL SELECT statement that is executed to achieve the lookup. To use
|
533
|
+
parameters, use named parameter syntax, for example
|
534
|
+
`"SELECT * FROM MYTABLE WHERE ID = :id"`.
|
535
|
+
|
536
|
+
parameters::
|
537
|
+
A key/value Hash or dictionary. The key (LHS) is the text that is
|
538
|
+
substituted for in the SQL statement
|
539
|
+
`SELECT * FROM sensors WHERE reference = :p1`. The value (RHS)
|
540
|
+
is the field name in your event. The plugin reads the value from
|
541
|
+
this key out of the event and substitutes that value into the
|
542
|
+
statement, for example, `parameters => { "p1" => "ref" }`. Quoting is
|
543
|
+
automatic - you do not need to put quotes in the statement.
|
544
|
+
Only use the field interpolation syntax on the RHS if you need to
|
545
|
+
add a prefix/suffix or join two event field values together to build
|
546
|
+
the substitution value. For example, imagine an IOT message that has
|
547
|
+
an id and a location, and you have a table of sensors that have a
|
548
|
+
column of `id-loc_id`. In this case your parameter hash would look
|
549
|
+
like this: `parameters => { "p1" => "%{[id]}-%{[loc_id]}" }`.
|
550
|
+
|
551
|
+
target::
|
552
|
+
An optional name for the field that will receive the looked-up data.
|
553
|
+
If you omit this setting then the `id` setting (or the default id) is
|
554
|
+
used. The looked-up data, an array of results converted to Hashes, is
|
555
|
+
never added to the root of the event. If you want to do this, you
|
556
|
+
should use the `add_field` setting. This means that
|
557
|
+
you are in full control of how the fields/values are put in the root
|
558
|
+
of the event, for example,
|
559
|
+
`add_field => { user_firstname => "%{[user][0][firstname]}" }` -
|
560
|
+
where `[user]` is the target field, `[0]` is the first result in the
|
561
|
+
array, and `[firstname]` is the key in the result hash.
|
562
|
+
|
563
|
+
default_hash::
|
564
|
+
An optional hash that will be put in the target field array when the
|
565
|
+
lookup returns no results. Use this setting if you need to ensure that later
|
566
|
+
references in other parts of the config actually refer to something.
|
567
|
+
|
568
|
+
tag_on_failure::
|
569
|
+
An optional string that overrides the plugin-level setting. This is
|
570
|
+
useful when defining multiple lookups.
|
571
|
+
|
572
|
+
tag_on_default_use::
|
573
|
+
An optional string that overrides the plugin-level setting. This is
|
574
|
+
useful when defining multiple lookups.
|
575
|
+
|
576
|
+
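
Putting these fields together, a sketch of a lookup that falls back to a
labeled default row when no record matches; the default value and tag names
are illustrative:

[source,json]
-----
filter {
  jdbc_static {
    local_lookups => [
      {
        id => "local-servers"
        query => "select descr as description from servers WHERE ip = :ip"
        parameters => { ip => "[from_ip]" }
        target => "server"
        default_hash => { description => "unknown server" }  # used when the lookup returns no rows
        tag_on_default_use => "_server_default_used"         # overrides the plugin-level tag
        tag_on_failure => "_server_lookup_failed"
      }
    ]
  }
}
-----
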
[id="plugins-{type}s-{plugin}-common-options"]
|
577
|
+
include::{include_path}/{type}.asciidoc[]
|