prometheus-splash 0.8.3 → 0.8.4
- checksums.yaml +4 -4
- data/CHANGELOG.md +22 -4
- data/README.md +400 -178
- data/config/splash.yml +1 -1
- data/lib/splash/backends.rb +7 -1
- data/lib/splash/backends/file.rb +1 -1
- data/lib/splash/cli/commands.rb +22 -58
- data/lib/splash/cli/config.rb +12 -1
- data/lib/splash/cli/logs.rb +34 -2
- data/lib/splash/cli/process.rb +33 -0
- data/lib/splash/config.rb +11 -8
- data/lib/splash/config/flush.rb +2 -2
- data/lib/splash/constants.rb +1 -1
- data/lib/splash/logs.rb +5 -1
- data/lib/splash/processes.rb +4 -0
- data/lib/splash/webadmin/api/routes/config.rb +50 -8
- data/lib/splash/webadmin/api/routes/process.rb +21 -9
- data/lib/splash/webadmin/main.rb +2 -0
- data/lib/splash/webadmin/portal/controllers/logs.rb +15 -2
- data/lib/splash/webadmin/portal/controllers/processes.rb +60 -4
- data/lib/splash/webadmin/portal/views/{logs_form.slim → log_form.slim} +0 -0
- data/lib/splash/webadmin/portal/views/log_history.slim +24 -0
- data/lib/splash/webadmin/portal/views/logs.slim +26 -1
- data/lib/splash/webadmin/portal/views/process_form.slim +21 -0
- data/lib/splash/webadmin/portal/views/process_history.slim +24 -0
- data/lib/splash/webadmin/portal/views/processes.slim +72 -2
- data/prometheus-splash.gemspec +1 -1
- metadata +12 -9
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 285fc8b213dd02cccc8f43153f9e63ba261d96a6f29dd87b0c7f3194130b09d5
+  data.tar.gz: be38d7d3804b186ed62f9e3a50b18dd46ccf37c2fe9406b22fbab8df6ccc7af7
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9a91bf0cd11d39881f49cb81e3b070da161d8cf24664025360dd1036842c448ce4b86d2498f6d881ec190213780c93b6b15b90a19a4b0410da3a72aa961b2250
+  data.tar.gz: 1c86c9ba9933d8031e5df00543e254cce43732f2f400f729bd2740f4a7f7ec765bc7d14610bd811cf1b8ceeab2d039d791267e9837db4132e6e193e069a2c2fd
data/CHANGELOG.md CHANGED

@@ -127,7 +127,7 @@
 
 ## V 0.6.0 2020/10/10
 
-###
+### FEATURES
 
 * read only WebAdmin (Major update)
 * API REST
@@ -135,7 +135,7 @@
 
 ## V 0.7.0
 
-###
+### FEATURES
 
 * sequences
 * API sequences
@@ -146,14 +146,16 @@
 
 ## V 0.8.0
 
-###
+### FEATURES
+
 * orchestrator rebuild
 * reshash config and reset + grammar and Cli
 * refacto config
 
 ## V 0.8.1
 
-###
+### FEATURES
+
 * full Web UI features for Logs (R/W)
 * API Logs Full
 
@@ -167,3 +169,19 @@
 
 ### SECURITY
 * kramdown dependencies update
+
+## V 0.8.4
+
+### FEATURES
+
+* backends flushs #64
+* get_results refactoring #62
+* full process UI and API R/W
+
+### CHANGES
+
+* delete_record purge
+
+### FIX
+
+* always missing status for log history #65
data/README.md CHANGED

@@ -9,7 +9,7 @@ SPLASH is **Supervision with Prometheus of Logs and Asynchronous tasks orchestra
 * Web : http://www.ultragreen.net
 * Github : https://github.com/Ultragreen/prometheus-splash
 * Rubygems : https://rubygems.org/gems/prometheus-splash
-* DOC yardoc : https://www.rubydoc.info/gems/prometheus-splash/0.8.
+* DOC yardoc : https://www.rubydoc.info/gems/prometheus-splash/0.8.4
 
 Prometheus Logs and Batchs supervision over PushGateway
 
@@ -31,7 +31,7 @@ Splash is succesfully tested with Ruby 2.7.0, but it should works correctly with
 
 On Ubuntu :
 
-# apt install ruby
+# apt install ruby ruby-dev
 
 In some use case, Splash also require some other components :
 
@@ -39,7 +39,7 @@ In some use case, Splash also require some other components :
 - RabbitMQ
 
 It's not strictly required, Redis is a real option for backend; you could configure backend to flat file, but
-RabbitMQ is required by the Splash Daemon when using host2host sequence execution.
+RabbitMQ is required by the Splash Daemon when using host2host commands/sequence execution.
 
 Redis, is usefull when you need a centralized Splash management.
 
@@ -55,7 +55,7 @@ See Backends Configuration and Transports Configuration to specify this service
 
 Install with gem command :
 
-$ gem install splash
+$ gem install prometheus-splash
 
 
 ## Configuration
@@ -68,7 +68,8 @@ As root or with rvmsudo, if you use RVM.
 👍 Splash Initialisation
 👍 Installing template file : /etc/splash_execution_report.tpl
 👍 Creating/Checking pid file path : /var/run/splash
-👍 Creating/Checking trace file path : /var/run/
+👍 Creating/Checking trace file path : /var/run/splash/traces :
+💪 Splash Setup terminated successfully
 
 *NOTE : you can just type 'splash' withou any arguments, for the first setup because, Splash come with an automatic recovery mode, when configuration file is missing, run at the very beginnning of his the execution*
 
@@ -80,15 +81,12 @@ As root, edit /etc/splash.conf and adapt Prometheus Pushgateway Configuration :
 # vi /etc/splash.yml
 [..]
 :prometheus:
-  :pushgateway:
-
-
-[..]
+  :pushgateway: 'http://localhost:9091'
+  :url: 'http://localhost:9090'
+  :alertmanager: 'http://localhost:9093'
 
-
+[..]
 
-- SERVER : IP or fqdn of the Gateway.
-- PORT : the specific TCP port of the Gateway.
 
 If you have already setup, you could use --preserve option to keep your active configuration and report file on place
 This is usefull for automatique Idempotent installation like with Ansible :
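To make the new three-key layout concrete, here is a minimal Ruby sketch (hypothetical, not Splash code) that parses the :prometheus section added in this hunk and checks the three service URLs are present:

```ruby
require 'yaml'

# Minimal sketch (not Splash code) : parse the :prometheus section shown
# in the hunk above and verify the three service URLs are all present.
yaml = <<~CONF
  :prometheus:
    :pushgateway: 'http://localhost:9091'
    :url: 'http://localhost:9090'
    :alertmanager: 'http://localhost:9093'
CONF

# Symbol keys need unsafe_load on Psych >= 4 ; fall back for older Rubies
conf = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(yaml) : YAML.load(yaml)

prom    = conf[:prometheus]
missing = [:pushgateway, :url, :alertmanager].reject { |k| prom.key?(k) }
puts missing.empty? ? 'prometheus configuration complete' : "missing keys : #{missing}"
```

The old SERVER/PORT pair is replaced by full URLs, so a check like this catches a config written for the pre-0.8.4 layout.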
@@ -101,12 +99,12 @@ This is usefull for automatique Idempotent installation like with Ansible :
 As root or with rvmsudo, if you use RVM.
 
 # splash conf san
-Splash -> sanitycheck :
-
-
-
-
-
+ℹ Splash -> sanitycheck :
+👍 Config file : /etc/splash.yml
+👍 PID Path : /var/run/splash
+👍 Trace Path : /var/run/splash/traces
+👍 Prometheus PushGateway Service running
+💪 Splash Sanitycheck terminated successfully
 
 *WARNING* : setup or Sanitycheck could precises errors if path defined in configuration is *Symbolic links*, type :mode.
 But it's not a problem for Splash to be operational.
@@ -123,8 +121,8 @@ For file/folders if problems is detected, it could be such as :
 run :
 
 $ splash config version
-Splash version : 0.
-Ultragreen (c) 2020 BSD-2-Clause
+ℹ Splash version : 0.8.2, Author : Romain GEORGES <gems@ultragreen.net>
+ℹ Ultragreen (c) 2020 BSD-2-Clause
 
 
 ## Usage
@@ -140,18 +138,25 @@ In the /etc/splash.yml, you need to adapt default config to monitor your logs.
 [..]
 ### configuration of monitored logs
 :logs:
-  - :
+  - :label: :a_label
+    :log: /a/log/path.log
     :pattern: <regexp pattern>
-
-
+    :retention:
+      :hours: 10
+  - :label: :an_other_label
+    :log: /an/other/log/path.log
+    :pattern: <regexp pattern>
+    :retention:
+      :days: 1
   - <etc...>
 [..]
 
 Config for log is a YAML list of Hash, with keys :
 
+- :label : a Symbol like ':xxxxxx' used in Splash internaly to identify logs records
 - :log : a log absolut paths
 - :pattern : a regular expression splash need to detect
-
+- :retention : a hash with keys like (:days or :hours) and a periode in value
 
 #### Prerequisite
 
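The new :retention key maps naturally to a cutoff time. A hypothetical helper (not Splash internals) sketching how a `{ :days => ... }` or `{ :hours => ... }` hash could be turned into a purge cutoff:

```ruby
# Hypothetical sketch (not Splash internals) : turn a :retention hash
# like { :hours => 10 } or { :days => 1 } into a cutoff Time before
# which monitoring records would be purged.
def retention_cutoff(retention, now = Time.now)
  seconds = (retention[:days].to_i * 86_400) + (retention[:hours].to_i * 3_600)
  now - seconds
end

# 10 hours before an arbitrary reference instant
cutoff = retention_cutoff({ hours: 10 }, Time.at(100_000))
puts cutoff.to_i
```

Absent keys fall back to zero via `nil.to_i`, so `{ :days => 1 }` and `{ :hours => 10 }` are both valid shapes, matching the two list entries shown above.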
@@ -169,11 +174,12 @@ or
 
 # slash logs help
 Commands:
-  splash logs analyse            # analyze logs in config
+  splash logs analyse            # analyze logs defined in Splash config
   splash logs help [COMMAND]     # Describe subcommands or one specific subcommand
-  splash logs
-  splash logs
-  splash logs
+  splash logs history LABEL      # show logs monitoring history
+  splash logs list               # List all Splash configured logs monitoring
+  splash logs monitor            # monitor logs defined in Splash config
+  splash logs show LOG           # show Splash configured log monitoring for LOG
 
 *Typicallly, the way work with all Splash commands or subcommands*
 
@@ -188,9 +194,9 @@ Verify /tmp/test and /tmp/test2 not existence
 Verify configured logs :
 
 # splash logs list
-Splash configured log monitoring :
-
-
+ℹ Splash configured log monitoring :
+🔹 log monitor : /tmp/test label : log_app_1
+🔹 log monitor : /tmp/test2 label : log_app_2
 
 You could run list commands with --detail option , verify it with :
 
@@ -199,29 +205,31 @@ You could run list commands with --detail option , verify it with :
 like :
 
 # splash logs list --detail
-Splash configured log monitoring :
-
-
-
-
+ℹ Splash configured log monitoring :
+🔹 log monitor : /tmp/test label : log_app_1
+➡ pattern : /ERROR/
+🔹 log monitor : /tmp/test2 label : log_app_2
+➡ pattern : /ERROR/
 
-You cloud view a specific logs record detail with
 
-
-
-
+You cloud view a specific logs record detail with :
+
+# splash logs show /tmp/test
+ℹ Splash log monitor : /tmp/test
+🔹 pattern : /ERROR/
+🔹 label : log_app_1
+
+*this command Work with a logname or the label*
 
 Run a first analyse, you would see :
 
 # splash logs analyse
-SPlash Configured
-
-
-
-
-
-- detailled Status : missing
-Global Status : [KO]
+ℹ SPlash Configured log monitors :
+👎 Log : /tmp/test with label : log_app_1 : missing !
+🔹 Detected pattern : ERROR
+👎 Log : /tmp/test2 with label : log_app_2 : missing !
+🔹 Detected pattern : ERROR
+🚫 Global status : some error found
 
 Create empty Files, or without ERROR string in.
 
@@ -231,16 +239,14 @@ Create empty Files, or without ERROR string in.
 Re-run analyse :
 
 # splash log an
-SPlash Configured
-
-
-
-
-
-
-
-nb lines = 0
-Global Status : [OK]
+ℹ SPlash Configured log monitors :
+👍 Log : /tmp/test with label : log_app_1 : no errors
+🔹 Detected pattern : ERROR
+🔹 Nb lines = 1
+👍 Log : /tmp/test2 with label : log_app_2 : no errors
+🔹 Detected pattern : ERROR
+🔹 Nb lines = 0
+👍 Global status : no error found
 
 It's alright, log monitoring work fine.
 
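The analyse step shown in these two runs boils down to scanning each monitored file for lines matching the configured pattern. An illustrative sketch (hypothetical, not Splash code) of that core check:

```ruby
require 'tempfile'

# Illustrative sketch (not Splash code) : count the lines of a monitored
# file matching the configured pattern, like the /ERROR/ example above.
def count_errors(path, pattern)
  File.foreach(path).count { |line| line.match?(pattern) }
end

# Demo on a throwaway file containing one matching line
Tempfile.create('splash_demo') do |f|
  f.puts 'all good'
  f.puts 'ERROR: disk full'
  f.flush
  puts count_errors(f.path, /ERROR/)
end
```

A count of zero corresponds to the 👍 "no errors" outcome; any match, or a missing file, feeds the 🚫 global status.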
@@ -249,15 +255,13 @@ It's alright, log monitoring work fine.
 Splash is made to run a specific daemon to do this job, but you could do one time, with :
 
 # splash logs monitor
-Sending metrics to Prometheus Pushgateway
-
-
-Sending done.
+ℹ Sending metrics to Prometheus Pushgateway
+👍 Sending metrics for log /tmp/test to Prometheus Pushgateway
+👍 Sending metrics for log /tmp/test2 to Prometheus Pushgateway
 
 if Prometheus Gateway is not running or misconfigured, you could see :
 
-
-Exit without notification.
+⛔ Splash Service dependence missing : Prometheus Notification not send.
 
 Otherwise Prometheus PushGateway have received the metrics :
 
@@ -293,12 +297,16 @@ To see all the commands in the 'commands' submenu :
 
 $ splash commands
 Commands:
-  splash commands
-  splash commands
-  splash commands
-  splash commands
-  splash commands
-  splash commands
+  splash commands execute NAME                     # run for command/sequence or ack result
+  splash commands getreportlist                    # list all executions report results
+  splash commands help [COMMAND]                   # Describe subcommands or one specific subcommand
+  splash commands history LABEL                    # show commands executions history
+  splash commands lastrun COMMAND                  # Show last running result for specific configured command COMMAND
+  splash commands list                             # Show configured commands
+  splash commands onerun COMMAND -D, --date=DATE   # Show running result for specific configured command COMMAND
+  splash commands schedule NAME                    # Schedule excution of command on Splash daemon
+  splash commands show COMMAND                     # Show specific configured command COMMAND
+  splash commands treeview                         # Show commands sequence tree
 
 #### Prepare test with default configuration
 
@@ -400,40 +408,44 @@ if you want to inject default configuration, again as root :
 You could list the defined commands, in your case :
 
 $ splash commands list
-Splash configured commands :
-
-
-
-
-
-
-
-
+ℹ Splash configured commands :
+🔹 id_root
+🔹 true_test
+🔹 false_test
+🔹 ls_slash_tmp
+🔹 pwd
+🔹 echo1
+🔹 echo2
+🔹 echo3
+🔹 rand_sleep_5
+🔹 test_remote_call
+
 
 #### Show specific commands
 
 You could show a specific command :
 
 $ splash com show pwd
-Splash command : pwd
-
-
-
-
+ℹ Splash command : pwd
+🔹 command line : 'pwd'
+🔹 command description : 'run pwd'
+🔹 command failure callback : 'echo2'
+🔹 command success callback : 'echo1'
+
 
 #### View Sequence execution for commands
 
 You could trace execution sequence for a commands as a tree, with :
 
 # splash com treeview
-Command : true_test
-
-
-
-
-
-
-
+ℹ Command : true_test
+ * on failure => ls_slash_tmp
+ * on success => echo1
+ * on failure => echo3
+ * on success => pwd
+ * on failure => echo2
+ * on success => echo1
+ * on failure => echo3
 
 In your sample, in all case :
 - :true_test return 0
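The callback chain the treeview displays can be sketched schematically. The structures below are hypothetical stand-ins (not Splash internals): each command carries optional :on_success / :on_failure callbacks, and the next command is picked from the exit code:

```ruby
# Schematic sketch (hypothetical structures, not Splash internals) :
# resolve the :on_success / :on_failure chain shown in the treeview.
COMMANDS = {
  true_test:    { exec: -> { 0 }, on_success: :pwd,   on_failure: :ls_slash_tmp },
  pwd:          { exec: -> { 0 }, on_success: :echo1, on_failure: :echo2 },
  echo1:        { exec: -> { 0 }, on_failure: :echo3 },
  ls_slash_tmp: { exec: -> { 0 } },
  echo2:        { exec: -> { 0 } },
  echo3:        { exec: -> { 0 } }
}.freeze

def run_chain(name, trace = [])
  cmd = COMMANDS.fetch(name)
  trace << name
  # exit code 0 selects the success callback, anything else the failure one
  nxt = cmd[:exec].call.zero? ? cmd[:on_success] : cmd[:on_failure]
  nxt ? run_chain(nxt, trace) : trace
end

p run_chain(:true_test)
```

Since every stub here exits 0, the resolved sequence is `true_test → pwd → echo1`, which is exactly the success path the README walks through next.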
@@ -451,10 +463,12 @@ commands execution sequence will be :
 Running a standalone command with ONLY as root
 
 # splash com execute echo1
-Executing command : 'echo1'
-
-
-
+ℹ Executing command : 'echo1'
+🔹Tracefull execution
+👍 Command executed
+➡ exitcode 0
+👍 Sending metrics to Prometheus Pushgateway
+
 
 This command :
 
@@ -485,54 +499,59 @@ Splash allow execution of callback (:on_failure, :on_success), you have already
 In our example, we have see :true_test have a execution sequence, we're going to test this, as root :
 
 # splash com exe true_test
-Executing command : 'true_test'
-
-
-
-
-
-
-
-
-
-
-
-
+ℹ Executing command : 'true_test'
+🔹 Tracefull execution
+👍 Command executed
+➡ exitcode 0
+👍 Sending metrics to Prometheus Pushgateway
+🔹 On success callback : pwd
+ℹ Executing command : 'pwd'
+🔹 Tracefull execution
+👍 Command executed
+➡ exitcode 0
+👍 Sending metrics to Prometheus Pushgateway
+🔹 On success callback : echo1
+ℹ Executing command : 'echo1'
+🔹 Tracefull execution
+👍 Command executed
+➡ exitcode 0
+👍 Sending metrics to Prometheus Pushgateway
 
 We could verify the sequence determined with lastrun command.
 
 If you want to prevent callback execution, as root :
 
 # splash com exe true_test --no-callback
-Executing command : 'true_test'
-
-
-
-
+ℹ Executing command : 'true_test'
+🔹 Tracefull execution
+👍 Command executed
+➡ exitcode 0
+👍 Sending metrics to Prometheus Pushgateway
+🔹 Without callbacks sequences
 
 #### Display the last execution trace for a command
 
 If you want to view the last execution trace for commande, (only if executed with --trace : default)
 
-# splash com lastrun
-Splash command pwd previous execution report:
+# splash com lastrun pwd
+ℹ Splash command pwd previous execution report:
 
 Command Execution report
 ========================
 
-Date START: 2020-
-Date END: 2020-
+Date START: 2020-10-28T13:38:36+01:00
+Date END: 2020-10-28T13:38:36+01:00
 Command : pwd
 full command line : pwd
 Description : run pwd
-errorcode : pid
-Execution time (sec) : 0.
+errorcode : pid 10958 exit 0
+Execution time (sec) : 0.00737092
 
 STDOUT:
 -------
 
-/
+/home/xxx/prometheus-splash
 
 
 
@@ -540,6 +559,8 @@ If you want to view the last execution trace for commande, (only if executed wi
 -------
 
 
+
+
 Lastrun could receive the --hostname option to get the execution report of command
 
 
@@ -554,6 +575,9 @@ For the moment Splash come with two types of backend :
 backend are usable for :
 
 - execution trace
+- transfers_trace
+- logs_trace
+- process_trace
 
 ##### File backend
 
@@ -597,20 +621,22 @@ Edit /etc/splash.yml, as root :
 - :base must be set as the Redis base number (default: 1)
 - :auth should be set if Redis need an simple authentification key <mykey>
 
-##### Prometheus
+##### Prometheus configuration
 
-Prometheus
+Prometheus services could be configured in /etc/splash.yaml
 
 # vi /etc/splash.yml
 [...]
 :prometheus:
-  :pushgateway:
-
-
+  :pushgateway: http://localhost:9091
+  :url: http://localhost:9090
+  :alertmanager: http://localhost:9093
+
 [...]
 
-- :
-- :
+- :pushgateway should be set as the Prometheus PushGateway url (default: http://localhost:9091 )
+- :url should be set as the Prometheus main service (default: http://localhost:9090)
+- :alertmanager should be set as the Prometheus Alertmanager service (default: http://localhost:9093)
 
 ### The Splash daemon
 
@@ -621,15 +647,144 @@ We're going to discover the Big part of Splash the Daemon, usefull to :
 - orchestration
 - scheduling
 - Log monitoring (without CRON scheduling)
+- Process monitoring (without CRON scheduling)
+- Transfers scheduling (TODO)
 - host2host sequences execution (optionnal )
 
+
+#### Prerequisite
+
+Splash Daemon requiere Rabbitmq Configured and launched
+if you try to run Splash with Rabbitmq, it will be failed :
+
+# sudo splash dae start
+⛔ Splash Service dependence missing : RabbitMQ Transport not available.
+
+*WARNING : if RabbitMQ service shutdown, Splash will shutdown also !*
+
+You cloud configure RabbitMQ in the /etc/splash.yml :
+
+[...]
+:transports:
+  :active: :rabbitmq
+  :rabbitmq:
+    :vhost: "/"
+    :port: 5672
+    :host: localhost
+[...]
+
+*RabbitMQ, is the only transport service usable actually in Splash*
+
+Where :
+* vhost: is the RabbitMQ vhost used to store Splash Queues
+* port : the TCP RabbitMQ port (default : 5672)
+* Host : the hostname or IP of the RabbitMQ service (default : localhost)
+
|
+
#### the Daemon Splash subcommand
|
684
|
+
|
685
|
+
run this command :
|
686
|
+
|
687
|
+
# splash daemon
|
688
|
+
Commands:
|
689
|
+
splash daemon getjobs # send a get_jobs verb to HOSTNAME daemon over transport (need an active tranport), Typicallly Ra...
|
690
|
+
splash daemon getjobs # send a reset verb to HOSTNAME daemon over transport (need an active tranport), Typicallly RabbitMQ
|
691
|
+
splash daemon help [COMMAND] # Describe subcommands or one specific subcommand
|
692
|
+
splash daemon ping HOSTNAME # send a ping to HOSTNAME daemon over transport (need an active tranport), Typicallly RabbitMQ
|
693
|
+
splash daemon purge # Purge Transport Input queue of Daemon
|
694
|
+
splash daemon start # Starting Splash Daemon
|
695
|
+
splash daemon status # Splash Daemon status
|
696
|
+
splash daemon stop # Stopping Splash Daemon
|
697
|
+
|
626
698
|
#### Controlling the daemon
|
627
699
|
|
628
|
-
|
700
|
+
##### Running Daemon
|
701
|
+
|
702
|
+
# sudo splash dae start
|
703
|
+
ℹ Queue : splash.live.input purged
|
704
|
+
👍 Splash Daemon Started, with PID : 16904
|
705
|
+
💪 Splash Daemon successfully loaded.
|
706
|
+
|
707
|
+
Start command support multiples options, you cloud see it by typing :
|
708
|
+
|
709
|
+
# sudo splash dae help start
|
710
|
+
Usage:
|
711
|
+
splash daemon start
|
712
|
+
|
713
|
+
Options:
|
714
|
+
-F, [--foreground], [--no-foreground]
|
715
|
+
[--purge], [--no-purge]
|
716
|
+
# Default: true
|
717
|
+
[--scheduling], [--no-scheduling]
|
718
|
+
# Default: true
|
719
|
+
|
720
|
+
Description:
|
721
|
+
Starting Splash Daemon
|
722
|
+
|
723
|
+
With --foreground, run Splash in foreground
|
724
|
+
|
725
|
+
With --no-scheduling, inhibit commands scheduling
|
726
|
+
|
727
|
+
With --no-purge, inhibit purge Input Queue for Splash Daemon
|
728
|
+
|
729
|
+
|
730
|
+
##### Status Daemon
|
731
|
+
|
732
|
+
if daemon is stopped :
|
733
|
+
|
734
|
+
# sudo splash dae status
|
735
|
+
🔹 Splash Process not found
|
736
|
+
🔹 and PID file don't exist
|
737
|
+
💪 Status OK
|
738
|
+
|
739
|
+
|
740
|
+
If daemon is running :
|
741
|
+
|
742
|
+
# splash dae status
|
743
|
+
🔹 Splash Process is running with PID 974
|
744
|
+
🔹 and PID file exist with PID 974
|
745
|
+
💪 Status OK
|
746
|
+
|
747
|
+
|
748
|
+
##### Stopping Daemon
|
749
|
+
|
750
|
+
# sudo splash dae stop
|
751
|
+
💪 Splash stopped succesfully
|
752
|
+
|
629
753
|
|
630
754
|
#### Configuring the daemon
|
631
755
|
|
632
|
-
|
756
|
+
the configuration of the daemon could be done in the /etc/splash.yml
|
757
|
+
[...]
|
758
|
+
:daemon:
|
759
|
+
:logmon_scheduling:
|
760
|
+
:every: 20s
|
761
|
+
:metrics_scheduling:
|
762
|
+
:every: 15s
|
763
|
+
:procmon_scheduling:
|
764
|
+
:every: 20s
|
765
|
+
:process_name: 'Splash : daemon.'
|
766
|
+
:files:
|
767
|
+
:stdout_trace: stdout.txt
|
768
|
+
:stderr_trace: stderr.txt
|
769
|
+
:pid_file: splash.pid
|
770
|
+
[...]
|
771
|
+
|
772
|
+
Where :
|
773
|
+
|
774
|
+
* logmon_scheduling : (Hash) a scheduling for Log monitoring, (default: every 20s) it support :
|
775
|
+
* :every: "<timing>" ex: "1s", "3m", "2h"
|
776
|
+
* :at: "<date/time>" ex: "2030/12/12 23:30:00"
|
777
|
+
* :cron: * * * * * a cron format
|
778
|
+
|
779
|
+
|
780
|
+
* metrics_scheduling : (Hash) a scheduling for internals metrics for daemon, (default: every 20s), scheduled as logmon_scheduling
|
781
|
+
|
782
|
+
* procmon_scheduling : (Hash) a scheduling for Process monitoring, (default: every 20s), scheduled as logmon_scheduling
|
783
|
+
|
784
|
+
[Rufus Scheduler Doc](https://github.com/jmettraux/rufus-scheduler)
|
785
|
+
|
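The scheduling hashes above map one-to-one onto rufus-scheduler calls (per the link in the hunk). A hedged sketch of that mapping, using a hypothetical helper so the gem itself is not required:

```ruby
# Hedged sketch (hypothetical helper) : translate a scheduling Hash like
# { :every => '20s' } into the rufus-scheduler method and argument it
# implies, e.g. scheduler.every('20s') { ... } or scheduler.cron(...).
def scheduling_call(criteria)
  kind, value = criteria.first
  case kind
  when :every, :at, :cron then [kind, value]
  else raise ArgumentError, "unsupported scheduling criteria: #{kind}"
  end
end

p scheduling_call({ every: '20s' })
p scheduling_call({ cron: '0 * * * *' })
```

So `:logmon_scheduling: { :every: 20s }` becomes an `every('20s')` job, while an `:at` or `:cron` key selects the corresponding one-shot or cron-style scheduling.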
+
+#### Daemon metrics
 
 
 ### Ecosystem
@@ -656,74 +811,141 @@ TODO
 
 
 
-
-
+# Current splash version
+VERSION = "0.8.3"
+# the path to th config file, not overridable by config
+CONFIG_FILE = "/etc/splash.yml"
+# the default execution trace_path if backend file
+TRACE_PATH="/var/run/splash"
+# the default pid file path
+PID_PATH="/var/run"
+
+
+# default scheduling criteria for log monitoring
+DAEMON_LOGMON_SCHEDULING={ :every => '20s'}
+# default scheduling criteria for metrics notifications
+DAEMON_METRICS_SCHEDULING={ :every => '15s'}
+# default scheduling criteria for process monitoring
+DAEMON_PROCMON_SCHEDULING={ :every => '20s'}
+
+# the display name of daemon in proc info (ps/top)
+DAEMON_PROCESS_NAME="Splash : daemon."
+# the default pid file name
+DAEMON_PID_FILE="splash.pid"
+# the default sdtout trace file
+DAEMON_STDOUT_TRACE="stdout.txt"
+# the default sdterr trace file
+DAEMON_STDERR_TRACE="stderr.txt"
+
+# the Author name
+AUTHOR="Romain GEORGES"
+# the maintainer mail
+EMAIL = "gems@ultragreen.net"
+# legal Copyright (c) 2020 Copyright Utragreen All Rights Reserved.
+COPYRIGHT="Ultragreen (c) 2020"
+# type of licence
+LICENSE="BSD-2-Clause"
+
+# the default prometheus pushgateway URL
+PROMETHEUS_PUSHGATEWAY_URL = 'http://localhost:9091/'
+
+# the default prometheus Alertmanager URL
+PROMETHEUS_ALERTMANAGER_URL = 'http://localhost:9092/'
+
+# the default prometheus URL
+PROMETHEUS_URL = "http://localhost:9090/"
+
+# the default path fo execution report template
+EXECUTION_TEMPLATE="/etc/splash_execution_report.tpl"
+
+# the list of authorized tokens for template, carefull override,
+EXECUTION_TEMPLATE_TOKENS_LIST = [:end_date,:start_date,:cmd_name,:cmd_line,:stdout,:stderr,:desc,:status,:exec_time]
+
+# backends default settings
+BACKENDS_STRUCT = { :list => [:file,:redis],
+                    :stores => { :execution_trace => { :type => :file, :path => "/var/run/splash" }}}
+# transports default settings
+TRANSPORTS_STRUCT = { :list => [:rabbitmq],
+                      :active => :rabbitmq,
+                      :rabbitmq => { :port => 5672, :host => "localhost", :vhost => '/'} }
+
+# loggers default settings
+LOGGERS_STRUCT = { :list => [:cli,:daemon, :dual, :web],
+                   :default => :cli,
+                   :level => :info,
+                   :daemon => {:file => '/var/log/splash.log'},
+                   :web => {:file => '/var/log/splash_web.log'},
+                   :cli => {:color => true, :emoji => true } }
+
+WEBADMIN_IP = "127.0.0.1"
+WEBADMIN_PORT = "9234"
+WEBADMIN_PROXY = false
+# the display name of daemon in proc info (ps/top)
+WEBADMIN_PROCESS_NAME="Splash : WebAdmin."
+# the default pid file path
+WEBADMIN_PID_PATH="/var/run"
+# the default pid file name
+WEBADMIN_PID_FILE="splash.pid"
+# the default sdtout trace file
+WEBADMIN_STDOUT_TRACE="stdout.txt"
+# the default sdterr trace file
+WEBADMIN_STDERR_TRACE="stderr.txt"
+
+# default retention for trace
+DEFAULT_RETENTION=1
 
 
-TRACE_PATH="/var/run/splash"
 
-
-DAEMON_PROCESS_NAME="Splash : daemon."
-DAEMON_PID_PATH="/var/run"
-DAEMON_PID_FILE="splash.pid"
-DAEMON_STDOUT_TRACE="stdout.txt"
-DAEMON_STDERR_TRACE="stderr.txt"
+#### Splash CLI return code significations
 
-AUTHOR="Romain GEORGES"
-EMAIL = "gems@ultragreen.net"
-COPYRIGHT="Ultragreen (c) 2020"
-LICENSE="BSD-2-Clause"
 
-
-PROMETHEUS_PUSHGATEWAY_PORT = "9091"
+EXIT_MAP= {
 
-
-
+# context execution
+:not_root => {:message => "This operation need to be run as root (use sudo or rvmsudo)", :code => 10},
+:options_incompatibility => {:message => "Options incompatibility", :code => 40},
+:service_dependence_missing => {:message => "Splash Service dependence missing", :code => 60},
 
-
-
-
-
-
+# config
+:specific_config_required => {:message => "Specific configuration required", :code => 30},
+:splash_setup_error => {:message => "Splash Setup terminated unsuccessfully", :code => 25},
+:splash_setup_success => {:message => "Splash Setup terminated successfully", :code => 0},
+:splash_sanitycheck_error => {:message => "Splash Sanitycheck terminated unsuccessfully", :code => 20},
+:splash_sanitycheck_success => {:message => "Splash Sanitycheck terminated successfully", :code => 0},
+:configuration_error => {:message => "Splash Configuration Error", :code => 50},
 
 
-
+# global
+:quiet_exit => {:code => 0},
+:error_exit => {:code => 99, :message => "Operation failure"},
 
+# events
+:interrupt => {:message => "Splash user operation interrupted", :code => 33},
 
-
-
-
-:service_dependence_missing => {:message => "Splash Service dependence missing", :code => 60},
+# request
+:not_found => {:message => "Object not found", :code => 44},
+:already_exist => {:message => "Object already exist", :code => 48},
 
-
-
-
-:splash_setup_success => {:message => "Splash Setup terminated successfully", :code => 0},
-:splash_sanitycheck_error => {:message => "Splash Sanitycheck terminated unsuccessfully", :code => 20},
-:splash_sanitycheck_success => {:message => "Splash Sanitycheck terminated successfully", :code => 0},
-:configuration_error => {:message => "Splash Configuration Error", :code => 50},
+# daemon
+:status_ok => {:message => "Status OK", :code => 0},
+:status_ko => {:message => "Status KO", :code => 31}
 
+}
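The regrouped EXIT_MAP above drives the CLI's exit behaviour. A hedged sketch (hypothetical helper, not Splash code) of how such an entry could be turned into a message plus process exit code:

```ruby
# Hedged sketch (hypothetical helper) : resolve an EXIT_MAP entry into a
# message on stderr and the numeric exit code. Only two entries from the
# map above are reproduced here for the demo.
EXIT_MAP = {
  quiet_exit: { code: 0 },
  not_found:  { message: 'Object not found', code: 44 }
}.freeze

def exit_case(key)
  entry = EXIT_MAP.fetch(key)
  warn entry[:message] if entry[:message]   # entries like :quiet_exit have no message
  entry[:code]
end

p exit_case(:not_found)
```

A real CLI would end with `exit(exit_case(key))`; returning the code instead keeps the sketch testable.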
 
-# global
-:quiet_exit => {:code => 0},
 
-
-:interrupt => {:message => "Splash user operation interrupted", :code => 33},
+### The Splash WebAdmin
 
-
-:not_found => {:message => "Object not found", :code => 44},
-:already_exist => {:message => "Object already exist", :code => 48},
+#### Controlling WebAdmin
 
-
-
-
+#### Starting
+#### Stopping
+#### Status
 
 
-
+#### Accessing WebAdmin
 
-- IHM
-- Webservice
 
+### the SPlash API
 
 
 ## Contributing
|