influxdb-plugin-fluent 1.0.0.pre.183

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,60 @@
+ #!/usr/bin/env bash
+ #
+ # The MIT License
+ #
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
+ # of this software and associated documentation files (the "Software"), to deal
+ # in the Software without restriction, including without limitation the rights
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ # copies of the Software, and to permit persons to whom the Software is
+ # furnished to do so, subject to the following conditions:
+ #
+ # The above copyright notice and this permission notice shall be included in
+ # all copies or substantial portions of the Software.
+ #
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ # THE SOFTWARE.
+ #
+
+ set -e
+
+ DEFAULT_DOCKER_REGISTRY="quay.io/influxdb/"
+ DOCKER_REGISTRY="${DOCKER_REGISTRY:-$DEFAULT_DOCKER_REGISTRY}"
+
+ DEFAULT_INFLUXDB_V2_REPOSITORY="influxdb"
+ DEFAULT_INFLUXDB_V2_VERSION="2.0.0-beta"
+ INFLUXDB_V2_REPOSITORY="${INFLUXDB_V2_REPOSITORY:-$DEFAULT_INFLUXDB_V2_REPOSITORY}"
+ INFLUXDB_V2_VERSION="${INFLUXDB_V2_VERSION:-$DEFAULT_INFLUXDB_V2_VERSION}"
+ INFLUXDB_V2_IMAGE=${DOCKER_REGISTRY}${INFLUXDB_V2_REPOSITORY}:${INFLUXDB_V2_VERSION}
+
+ SCRIPT_PATH="$( cd "$(dirname "$0")" ; pwd -P )"
+
+ docker kill influxdb_v2 || true
+ docker rm influxdb_v2 || true
+ docker network rm influx_network || true
+ docker network create -d bridge influx_network --subnet 192.168.0.0/24 --gateway 192.168.0.1
+
+ #
+ # InfluxDB 2.0
+ #
+ echo
+ echo "Restarting InfluxDB 2.0 [${INFLUXDB_V2_IMAGE}] ... "
+ echo
+
+ docker pull "${INFLUXDB_V2_IMAGE}" || true
+ docker run \
+   --detach \
+   --name influxdb_v2 \
+   --network influx_network \
+   --publish 9999:9999 \
+   "${INFLUXDB_V2_IMAGE}"
+
+ #
+ # Post onBoarding request to InfluxDB 2
+ #
+ "${SCRIPT_PATH}"/influxdb-onboarding.sh
@@ -0,0 +1,190 @@
+ # InfluxDB 2 + Fluentd
+
+ InfluxDB 2 and Fluentd together can collect large amounts of logs and transform them into useful metrics.
+ InfluxDB 2 provides a solution for real-time analysis and alerting over the collected metrics.
+
+ <img src="architecture.png" height="400px">
+
+ ## Introduction
+
+ [Fluentd](https://www.fluentd.org/architecture) is an open source data collector, which lets you unify the data collection and consumption for a better use and understanding of data.
+
+ [InfluxDB](https://www.influxdata.com) is an open source time series database, purpose-built by InfluxData for monitoring metrics and events. It provides real-time visibility into stacks, sensors, and systems.
+
+ ## Demo
+
+ The following demo shows how to analyze logs (Apache access logs) from a dockerized environment.
+
+ > Steps 1 to 6 can be skipped by using the script:
+ >
+ > [`run-example.sh`](run-example.sh)
+
+ ### Prerequisites
+
+ - Docker installed on your computer
+
+ ### Step 1 — Create Docker Network
+
+ Create a bridge network that allows smooth communication between containers:
+
+ ```bash
+ docker network create -d bridge influx_network --subnet 192.168.0.0/24 --gateway 192.168.0.1
+ ```
+
+ ### Step 2 — Start InfluxDB
+
+ Start the latest `InfluxDB 2`:
+
+ ```bash
+ docker run \
+   --detach \
+   --name influxdb_v2 \
+   --network influx_network \
+   --publish 9999:9999 \
+   quay.io/influxdb/influxdb:2.0.0-beta
+ ```
+
+ Create the default organization, user, and bucket:
+
+ ```bash
+ curl -i -X POST http://localhost:9999/api/v2/setup -H 'accept: application/json' \
+   -d '{
+         "username": "my-user",
+         "password": "my-password",
+         "org": "my-org",
+         "bucket": "my-bucket",
+         "token": "my-token"
+       }'
+ ```
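+
+ If you want to verify that the onboarding succeeded, one optional check (assuming InfluxDB is reachable on `localhost:9999` as above) is to call the health and setup endpoints; once the instance is onboarded, the setup endpoint reports `"allowed": false`:
+
+ ```bash
+ # Optional sanity check: the instance is healthy and already onboarded.
+ curl -s http://localhost:9999/health
+ curl -s http://localhost:9999/api/v2/setup   # expected: {"allowed": false}
+ ```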
+
+ ### Step 3 — Prepare Fluentd Docker
+
+ We have to prepare a Docker image that contains Fluentd with the [InfluxDB 2 output plugin](https://github.com/bonitoo-io/influxdb-plugin-fluent/) configured.
+
+ Fluentd is configured to parse incoming events with the tag `httpd.access` by the regexp `/^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)$/` into a structured event with `time`, `host`, `user`, `method`, `path`, `code` and `size`.
+ These structured events are routed to the InfluxDB 2 output plugin.
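+
+ For illustration only, the snippet below evaluates the same named-group regexp against a made-up access-log line (not output from this demo) and prints the fields it extracts; it uses Ruby only because Ruby is already available wherever Fluentd runs:
+
+ ```bash
+ # Hypothetical access-log line and the fields the regexp above extracts from it.
+ ruby -e 'line = %q{192.168.0.1 - frank [30/Oct/2019:21:46:09 +0000] "GET /index.html HTTP/1.1" 200 45}
+ re = /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)$/
+ puts re.match(line).named_captures.map { |k, v| "#{k}=#{v}" }.join(", ")'
+ # => host=192.168.0.1, user=frank, time=30/Oct/2019:21:46:09 +0000, method=GET, path=/index.html, code=200, size=45
+ ```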
+
+ #### Required files
+
+ ##### Dockerfile
+
+ ```dockerfile
+ FROM fluent/fluentd:edge-debian
+
+ USER root
+
+ RUN fluent-gem install influxdb-plugin-fluent
+
+ COPY ./fluent.conf /fluentd/etc/
+ COPY entrypoint.sh /bin/
+
+ USER fluent
+ ```
+
+ ##### fluent.conf
+
+ ```xml
+ <source>
+   @type forward
+   port 24224
+   bind 0.0.0.0
+ </source>
+ <source>
+   @type monitor_agent
+   bind 0.0.0.0
+   port 24220
+ </source>
+ <filter httpd.access>
+   @type parser
+   key_name log
+   <parse>
+     @type regexp
+     expression /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)$/
+     time_format %d/%b/%Y:%H:%M:%S %z
+   </parse>
+ </filter>
+ <match httpd.access>
+   @type copy
+   <store>
+     @type influxdb2
+     url http://influxdb_v2:9999
+     token my-token
+     bucket my-bucket
+     org my-org
+     use_ssl false
+     time_precision s
+     tag_keys ["method", "host", "path"]
+     <buffer tag>
+       @type memory
+       flush_interval 5
+     </buffer>
+   </store>
+ </match>
+ ```
+
+ Build the image:
+
+ ```bash
+ docker build -t fluentd_influx .
+ ```
+
+ ### Step 4 — Start the Fluentd Image
+
+ ```bash
+ docker run \
+   --detach \
+   --name fluentd_influx \
+   --network influx_network \
+   --publish 24224:24224 \
+   --publish 24220:24220 \
+   fluentd_influx
+ ```
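+
+ The `monitor_agent` source in `fluent.conf` exposes Fluentd's monitoring API on port 24220, which is published above. As an optional check, you can confirm that the container is up and that the `influxdb2` output plugin is loaded:
+
+ ```bash
+ # Lists the plugins Fluentd has loaded; the output should include "influxdb2".
+ curl -s http://localhost:24220/api/plugins.json
+ ```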
+
+ ### Step 5 — Start Apache HTTP Server
+
+ Docker includes multiple logging mechanisms to help you get information from running containers and services.
+
+ We will use the [Fluentd](https://docs.docker.com/config/containers/logging/fluentd/) logging driver with the tag configured as `httpd.access`:
+
+ ```bash
+ docker run \
+   --detach \
+   --name web \
+   --network influx_network \
+   --publish 8080:80 \
+   --log-driver fluentd \
+   --log-opt tag=httpd.access \
+   httpd
+ ```
+
+ ### Step 6 — Generate httpd Access Logs
+
+ Generate some access logs with curl:
+
+ ```bash
+ curl http://localhost:8080/
+ curl http://localhost:8080/not_exists
+ ```
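+
+ At this point the parsed access-log events should be arriving in the `my-bucket` bucket. Optionally, you can query them back through the InfluxDB 2 query API with a short Flux script (this assumes the organization, bucket, and token created in Step 2):
+
+ ```bash
+ # Returns the recently written "httpd.access" records as annotated CSV.
+ curl -s -X POST 'http://localhost:9999/api/v2/query?org=my-org' \
+   -H 'Authorization: Token my-token' \
+   -H 'Content-Type: application/vnd.flux' \
+   -H 'Accept: application/csv' \
+   -d 'from(bucket: "my-bucket") |> range(start: -15m) |> filter(fn: (r) => r._measurement == "httpd.access")'
+ ```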
+
+ ### Step 7 — Import Dashboard
+
+ Open [InfluxDB](http://localhost:9999), sign in with the credentials below, and import the dashboard [web_app_access.json](influxdb/web_app_access.json) by following these steps:
+
+ ```
+ username: my-user
+ password: my-password
+ ```
+
+ 1. Click the **Dashboards** icon in the navigation bar.
+ 1. Click the **Create Dashboard** menu in the upper right and select **Import Dashboard**.
+ 1. Select **Upload File** to drag and drop or browse for the **web_app_access.json** file.
+ 1. Click **Import JSON as Dashboard**.
+
+ The imported dashboard should look like this:
+
+ <img src="dashboard.png" height="400px">
+
+ ## Conclusion
+
+ Analyzing Apache access logs is just one way to use the power of InfluxDB and Fluentd.
+ There are other things you can do with InfluxDB and Fluentd, such as [monitoring and alerting](https://v2.docs.influxdata.com/v2.0/monitor-alert/#manage-your-monitoring-and-alerting-pipeline).
+
+ ## Links
+
+ - https://www.digitalocean.com/community/tutorials/how-to-centralize-your-docker-logs-with-fluentd-and-elasticsearch-on-ubuntu-16-04
+ - https://stackoverflow.com/questions/58563760/docker-compose-pulls-an-two-images-an-app-and-fluentd-but-no-logs-are-sent-to-s
+ - https://docs.fluentd.org/v/0.12/container-deployment/docker-compose
@@ -0,0 +1,25 @@
+ #!/usr/bin/env bash
+ #
+ # The MIT License
+ #
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
+ # of this software and associated documentation files (the "Software"), to deal
+ # in the Software without restriction, including without limitation the rights
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ # copies of the Software, and to permit persons to whom the Software is
+ # furnished to do so, subject to the following conditions:
+ #
+ # The above copyright notice and this permission notice shall be included in
+ # all copies or substantial portions of the Software.
+ #
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ # THE SOFTWARE.
+ #
+
+ # For every URL in /tmp/urls.txt, re-run Apache Bench every 5 seconds
+ # (1000 requests, concurrency 10) to generate continuous access-log traffic.
+ apk add parallel
+ cat /tmp/urls.txt | parallel "watch -n 5 ab -n 1000 -c 10 {}"
@@ -0,0 +1,4 @@
+ http://web:80/foo/bar/
+ http://web:80/
+ http://web:80/foo.html
+ http://web:80/parallelfoo/bar/?v=1
Binary file
Binary file
@@ -0,0 +1,12 @@
+ FROM fluent/fluentd:edge-debian
+
+ USER root
+
+ COPY ./influxdb-plugin-fluent.gem /fluentd/plugins
+
+ RUN fluent-gem install /fluentd/plugins/influxdb-plugin-fluent.gem
+
+ COPY ./fluent.conf /fluentd/etc/
+ COPY entrypoint.sh /bin/
+
+ USER fluent
@@ -0,0 +1,28 @@
+ #!/bin/sh
+
+ # source vars if file exists
+ DEFAULT=/etc/default/fluentd
+
+ if [ -r $DEFAULT ]; then
+   set -o allexport
+   . $DEFAULT
+   set +o allexport
+ fi
+
+ # If the user has supplied only arguments append them to `fluentd` command
+ if [ "${1#-}" != "$1" ]; then
+   set -- fluentd "$@"
+ fi
+
+ # If user does not supply config file or plugins, use the default
+ if [ "$1" = "fluentd" ]; then
+   if ! echo $@ | grep ' \-c' ; then
+     set -- "$@" -c /fluentd/etc/${FLUENTD_CONF}
+   fi
+
+   if ! echo $@ | grep ' \-p' ; then
+     set -- "$@" -p /fluentd/plugins
+   fi
+ fi
+
+ exec "$@"
@@ -0,0 +1,40 @@
+ <source>
+   @type forward
+   port 24224
+   bind 0.0.0.0
+ </source>
+ <source>
+   @type monitor_agent
+   bind 0.0.0.0
+   port 24220
+ </source>
+ <filter httpd.access>
+   @type parser
+   key_name log
+   <parse>
+     @type regexp
+     expression /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)$/
+     time_format %d/%b/%Y:%H:%M:%S %z
+   </parse>
+ </filter>
+ <match httpd.access>
+   @type copy
+   <store>
+     @type influxdb2
+     # @log_level trace
+     url http://influxdb_v2:9999
+     token my-token
+     bucket my-bucket
+     org my-org
+     use_ssl false
+     time_precision s
+     tag_keys ["method", "host", "path"]
+     <buffer tag>
+       @type memory
+       flush_interval 5
+     </buffer>
+   </store>
+   # <store>
+   #   @type stdout
+   # </store>
+ </match>
@@ -0,0 +1,365 @@
+ {
+   "meta": {
+     "version": "1",
+     "type": "dashboard",
+     "name": "Web App Access-Template",
+     "description": "template created from dashboard: Web App Access"
+   },
+   "content": {
+     "data": {
+       "type": "dashboard",
+       "attributes": {
+         "name": "Web App Access",
+         "description": ""
+       },
+       "relationships": {
+         "label": {
+           "data": []
+         },
+         "cell": {
+           "data": [
+             {
+               "type": "cell",
+               "id": "051559b2aa2a2000"
+             },
+             {
+               "type": "cell",
+               "id": "051559b2ae2a2000"
+             },
+             {
+               "type": "cell",
+               "id": "051559b2af6a2000"
+             },
+             {
+               "type": "cell",
+               "id": "05155aef3daa2000"
+             }
+           ]
+         },
+         "variable": {
+           "data": []
+         }
+       }
+     },
+     "included": [
+       {
+         "id": "051559b2aa2a2000",
+         "type": "cell",
+         "attributes": {
+           "x": 0,
+           "y": 0,
+           "w": 4,
+           "h": 4
+         },
+         "relationships": {
+           "view": {
+             "data": {
+               "type": "view",
+               "id": "051559b2aa2a2000"
+             }
+           }
+         }
+       },
+       {
+         "id": "051559b2ae2a2000",
+         "type": "cell",
+         "attributes": {
+           "x": 0,
+           "y": 4,
+           "w": 12,
+           "h": 4
+         },
+         "relationships": {
+           "view": {
+             "data": {
+               "type": "view",
+               "id": "051559b2ae2a2000"
+             }
+           }
+         }
+       },
+       {
+         "id": "051559b2af6a2000",
+         "type": "cell",
+         "attributes": {
+           "x": 4,
+           "y": 0,
+           "w": 4,
+           "h": 4
+         },
+         "relationships": {
+           "view": {
+             "data": {
+               "type": "view",
+               "id": "051559b2af6a2000"
+             }
+           }
+         }
+       },
+       {
+         "id": "05155aef3daa2000",
+         "type": "cell",
+         "attributes": {
+           "x": 8,
+           "y": 0,
+           "w": 4,
+           "h": 4
+         },
+         "relationships": {
+           "view": {
+             "data": {
+               "type": "view",
+               "id": "05155aef3daa2000"
+             }
+           }
+         }
+       },
+       {
+         "type": "view",
+         "id": "051559b2aa2a2000",
+         "attributes": {
+           "name": "Requests Count",
+           "properties": {
+             "shape": "chronograf-v2",
+             "type": "single-stat",
+             "queries": [
+               {
+                 "text": "from(bucket: \"my-bucket\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r._measurement == \"httpd.access\")\n |> filter(fn: (r) => r._field == \"size\")\n |> drop(columns: [\"host\", \"method\", \"path\"])\n |> count()",
+                 "editMode": "advanced",
+                 "name": "",
+                 "builderConfig": {
+                   "buckets": [],
+                   "tags": [
+                     {
+                       "key": "_measurement",
+                       "values": []
+                     }
+                   ],
+                   "functions": [],
+                   "aggregateWindow": {
+                     "period": "auto"
+                   }
+                 }
+               }
+             ],
+             "prefix": "",
+             "suffix": "",
+             "colors": [
+               {
+                 "id": "base",
+                 "type": "text",
+                 "hex": "#00C9FF",
+                 "name": "laser",
+                 "value": 0
+               }
+             ],
+             "decimalPlaces": {
+               "isEnforced": true,
+               "digits": 2
+             },
+             "note": "",
+             "showNoteWhenEmpty": false
+           }
+         }
+       },
+       {
+         "type": "view",
+         "id": "051559b2ae2a2000",
+         "attributes": {
+           "name": "Requests",
+           "properties": {
+             "shape": "chronograf-v2",
+             "queries": [
+               {
+                 "text": "from(bucket: \"my-bucket\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r._measurement == \"httpd.access\")\n |> filter(fn: (r) => r._field == \"code\")\n |> drop(columns: [\"method\", \"host\"])\n |> aggregateWindow(every: 1m, fn: count)",
+                 "editMode": "advanced",
+                 "name": "",
+                 "builderConfig": {
+                   "buckets": [],
+                   "tags": [
+                     {
+                       "key": "_measurement",
+                       "values": []
+                     }
+                   ],
+                   "functions": [],
+                   "aggregateWindow": {
+                     "period": "auto"
+                   }
+                 }
+               }
+             ],
+             "axes": {
+               "x": {
+                 "bounds": [
+                   "",
+                   ""
+                 ],
+                 "label": "",
+                 "prefix": "",
+                 "suffix": "",
+                 "base": "10",
+                 "scale": "linear"
+               },
+               "y": {
+                 "bounds": [
+                   "",
+                   ""
+                 ],
+                 "label": "",
+                 "prefix": "",
+                 "suffix": "",
+                 "base": "10",
+                 "scale": "linear"
+               }
+             },
+             "type": "xy",
+             "legend": {},
+             "geom": "line",
+             "colors": [
+               {
+                 "id": "56d087db-1d89-453c-8787-d5b5368b608f",
+                 "type": "scale",
+                 "hex": "#31C0F6",
+                 "name": "Nineteen Eighty Four",
+                 "value": 0
+               },
+               {
+                 "id": "e5d0bd7c-cded-4ed2-aa55-94e920105d6e",
+                 "type": "scale",
+                 "hex": "#A500A5",
+                 "name": "Nineteen Eighty Four",
+                 "value": 0
+               },
+               {
+                 "id": "e5f1b958-099c-4d84-8e78-8a460b8193e5",
+                 "type": "scale",
+                 "hex": "#FF7E27",
+                 "name": "Nineteen Eighty Four",
+                 "value": 0
+               }
+             ],
+             "note": "",
+             "showNoteWhenEmpty": false,
+             "xColumn": "_time",
+             "yColumn": "_value",
+             "shadeBelow": true,
+             "position": "overlaid",
+             "timeFormat": ""
+           }
+         }
+       },
+       {
+         "type": "view",
+         "id": "051559b2af6a2000",
+         "attributes": {
+           "name": "Request Size",
+           "properties": {
+             "shape": "chronograf-v2",
+             "type": "single-stat",
+             "queries": [
+               {
+                 "text": "from(bucket: \"my-bucket\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r._measurement == \"httpd.access\")\n |> filter(fn: (r) => r.method == \"GET\")\n |> filter(fn: (r) => r._field == \"size\")\n |> drop(columns: [\"path\"])\n |> toInt()\n |> sum(column: \"_value\")\n |> map(fn: (r) => ({\n r with\n _value: r._value / 1024\n })\n )",
+                 "editMode": "advanced",
+                 "name": "",
+                 "builderConfig": {
+                   "buckets": [],
+                   "tags": [
+                     {
+                       "key": "_measurement",
+                       "values": []
+                     }
+                   ],
+                   "functions": [],
+                   "aggregateWindow": {
+                     "period": "auto"
+                   }
+                 }
+               }
+             ],
+             "prefix": "",
+             "suffix": " KiB",
+             "colors": [
+               {
+                 "id": "base",
+                 "type": "text",
+                 "hex": "#F48D38",
+                 "name": "tiger",
+                 "value": 0
+               }
+             ],
+             "decimalPlaces": {
+               "isEnforced": true,
+               "digits": 2
+             },
+             "note": "",
+             "showNoteWhenEmpty": false
+           }
+         }
+       },
+       {
+         "type": "view",
+         "id": "05155aef3daa2000",
+         "attributes": {
+           "name": "Success rate",
+           "properties": {
+             "shape": "chronograf-v2",
+             "type": "single-stat",
+             "queries": [
+               {
+                 "text": "from(bucket: \"my-bucket\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r._measurement == \"httpd.access\")\n |> filter(fn: (r) => r._field == \"code\")\n |> drop(columns: [\"path\", \"method\", \"host\"])\n |> toInt()\n |> map(fn: (r) => ({ \n _time: r._start,\n _success:\n if r._value < 300 then 1\n else 0,\n _failure:\n if r._value >= 300 then 1\n else 0\n })\n )\n |> cumulativeSum(columns: [\"_failure\", \"_success\"])\n |> drop(columns: [\"_time\"])\n |> map(fn: (r) => ({\n r with\n _value: (float(v: r._success) / (float(v: r._success) + float(v: r._failure))) * 100.0\n })\n)\n |> last()",
+                 "editMode": "advanced",
+                 "name": "",
+                 "builderConfig": {
+                   "buckets": [],
+                   "tags": [
+                     {
+                       "key": "_measurement",
+                       "values": []
+                     }
+                   ],
+                   "functions": [],
+                   "aggregateWindow": {
+                     "period": "auto"
+                   }
+                 }
+               }
+             ],
+             "prefix": "",
+             "suffix": "%",
+             "colors": [
+               {
+                 "id": "base",
+                 "type": "text",
+                 "hex": "#00C9FF",
+                 "name": "laser",
+                 "value": 0
+               },
+               {
+                 "id": "5e50c5ba-51cc-4139-870f-663050d6e79d",
+                 "type": "text",
+                 "hex": "#F48D38",
+                 "name": "tiger",
+                 "value": 80
+               },
+               {
+                 "id": "305d340f-09e5-4137-b88b-0a6599209dc0",
+                 "type": "text",
+                 "hex": "#7CE490",
+                 "name": "honeydew",
+                 "value": 90
+               }
+             ],
+             "decimalPlaces": {
+               "isEnforced": true,
+               "digits": 2
+             },
+             "note": "",
+             "showNoteWhenEmpty": false
+           }
+         }
+       }
+     ]
+   },
+   "labels": []
+ }