logstash-output-kusto 0.1.6 → 0.2.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +5 -1
- data/README.md +44 -68
- data/lib/kusto/{KustoClient-0.1.6.jar → kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar} +0 -0
- data/lib/logstash/outputs/kusto.rb +26 -24
- data/lib/logstash/outputs/kusto/ingestor.rb +116 -94
- data/lib/logstash/outputs/kusto/interval.rb +81 -81
- data/logstash-output-kusto.gemspec +8 -8
- data/spec/outputs/kusto/ingestor_spec.rb +109 -0
- data/spec/outputs/kusto_spec.rb +45 -13
- data/spec/spec_helpers.rb +5 -0
- metadata +30 -27
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 606d298b80da2887ad2223db406fd23e38351b46ef7ec87861468828f1b3ec24
+  data.tar.gz: b7f420f9f84c886460c145874625400198acbcde1aa05cafac9ac4a03e7da008
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a4e96bcb287915d805ef9fddc71f63399f4d9299888ac4da8f06d5baf6d7594825ec503b657ddae2812daa581cdc7465c2882575a236f0b9e28cd339dae2702a
+  data.tar.gz: 17041bdad667f37658a9246bc2068f5dcf53ec3cd41c6a665e9b100727194e7cf81baaa6d2927b942c1592cb6b3f15b7382f43e17669ccdbd039a1ad2c85b132
data/CHANGELOG.md
CHANGED
@@ -1,4 +1,8 @@
 ## 0.1.0
 - Plugin created with the logstash plugin generator
 ## 0.1.6
-- plugin published to the public. supports json events
+- plugin published to the public. supports ingestion of json events into a specific table-database (without dynamic routing currently)
+## 0.1.7
+- fixed app_key (password) bug, include 0.1.7 of the kusto-java-sdk to allow working through a proxy
+## 0.2.0
+- move to version 1.0.0-BETA-01 of the azure-kusto-java sdk
data/README.md
CHANGED
@@ -1,86 +1,62 @@
-# Logstash Plugin
-
-It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
-
-## Documentation
-
-Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation are placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
-
-- For formatting code or config example, you can use the asciidoc `[source,ruby]` directive
-- For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide
-
-## Need Help?
-
-Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
-
-## Developing
-
-#### Code
-- To get started, you'll need JRuby with the Bundler gem installed.
-
-```sh
-bundle install
-```
-
-- Edit Logstash `Gemfile` and add the local plugin path, for example:
-```ruby
+# Logstash Output Plugin for Azure Data Explorer (Kusto)
+
+master: [![Build Status](https://travis-ci.org/Azure/logstash-output-kusto.svg)](https://travis-ci.org/Azure/logstash-output-kusto)
+dev: [![Build Status](https://travis-ci.org/Azure/logstash-output-kusto.svg?branch=dev)](https://travis-ci.org/Azure/logstash-output-kusto)
+
+This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+It is fully free and open source. The license is Apache 2.0.
+
+This Azure Data Explorer (ADX) Logstash plugin enables you to process events from Logstash into an **Azure Data Explorer** database for later analysis.
+
+## Requirements
+
+- Logstash version 6+. [Installation instructions](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html)
+- Azure Data Explorer cluster with a database. Read [Create a cluster and database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal) for more information.
+- AAD Application credentials with permission to ingest data into Azure Data Explorer. Read [Creating an AAD Application](https://docs.microsoft.com/en-us/azure/kusto/management/access-control/how-to-provision-aad-app) for more information.
+
+## Installation
+
+To make the Azure Data Explorer plugin available in your Logstash environment, run the following command:
+```sh
+bin/logstash-plugin install logstash-output-kusto
+```
+
+## Configuration
+
+Perform configuration before sending events from Logstash to Azure Data Explorer. The following example shows the minimum you need to provide, and should be enough for most use cases:
+```ruby
+output {
+  kusto {
+    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"
+    ingest_url => "https://ingest-<cluster-name>.kusto.windows.net/"
+    app_id => "<application id>"
+    app_key => "<application key/secret>"
+    app_tenant => "<tenant id>"
+    database => "<database name>"
+    table => "<target table>"
+    mapping => "<mapping name>"
+  }
+}
+```
+
+### Available Configuration Keys
+
+| Parameter Name | Description | Notes |
+| --- | --- | --- |
+| **path** | The plugin writes events to temporary files before sending them to ADX. This parameter includes a path where files should be written and a time expression for file rotation to trigger an upload to the ADX service. The example above rotates the files every minute; check the Logstash docs for more information on time expressions. | Required |
+| **ingest_url** | The Kusto endpoint for ingestion-related communication. See it on the Azure Portal. | Required |
+| **app_id, app_key, app_tenant** | Credentials required to connect to the ADX service. Be sure to use an application with 'ingest' privileges. | Required |
+| **database** | Database name to place events | Required |
+| **table** | Target table name to place events | Required |
+| **mapping** | Mapping is used to map an incoming event json string into the correct row format (which property goes into which column) | Required |
+| **recovery** | If set to true (default), the plugin will attempt to resend pre-existing temp files found in the path upon startup | |
+| **delete_temp_files** | Determines if temp files will be deleted after a successful upload (true is default; set to false for debug purposes only) | |
+| **flush_interval** | The time (in seconds) for flushing writes to temporary files. Default is 2 seconds; 0 will flush on every event. Increase this value to reduce IO calls, but keep in mind that events in the buffer will be lost in case of abrupt failure. | |
 
 ## Contributing
 
-All contributions are welcome: ideas, patches, documentation, bug reports,
-
-Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
-
-It is more important to the community that you are able to contribute.
+All contributions are welcome: ideas, patches, documentation, bug reports, and complaints.
+Programming is not a required skill. It is more important to the community that you are able to contribute.
 
 For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
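For context, a minimal end-to-end pipeline built around the configuration block above might look like the sketch below; the file input, its path, and the placeholder credentials are illustrative assumptions, not part of the plugin's own documentation.

```ruby
input {
  # Any input that emits JSON events will do; a file input is used here purely for illustration.
  file {
    path => "/var/log/myapp/*.json"
    codec => "json"
  }
}

output {
  kusto {
    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"
    ingest_url => "https://ingest-<cluster-name>.kusto.windows.net/"
    app_id => "<application id>"
    app_key => "<application key/secret>"
    app_tenant => "<tenant id>"
    database => "<database name>"
    table => "<target table>"
    mapping => "<mapping name>"
  }
}
```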
data/lib/kusto/{KustoClient-0.1.6.jar → kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar}
RENAMED
Binary file

data/lib/logstash/outputs/kusto.rb
CHANGED
@@ -8,7 +8,7 @@ require 'logstash/outputs/kusto/ingestor'
 require 'logstash/outputs/kusto/interval'
 
 ##
-# This plugin
+# This plugin sends messages to Azure Kusto in batches.
 #
 class LogStash::Outputs::Kusto < LogStash::Outputs::Base
   config_name 'kusto'
@@ -30,8 +30,9 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
   # E.g: `/%{myfield}/`, `/test-%{myfield}/` are not valid paths
   config :path, validate: :string, required: true
 
-  # Flush interval (in seconds) for flushing writes to
-  # 0 will flush on every message.
+  # Flush interval (in seconds) for flushing writes to files.
+  # 0 will flush on every message. Increase this value to reduce IO calls but keep
+  # in mind that events buffered before flush can be lost in case of abrupt failure.
   config :flush_interval, validate: :number, default: 2
 
   # If the generated path is invalid, the events will be saved
@@ -54,15 +55,6 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
   # Example: `"file_mode" => 0640`
   config :file_mode, validate: :number, default: -1
 
-  # How should the file be written?
-  #
-  # If `append`, the file will be opened for appending and each new event will
-  # be written at the end of the file.
-  #
-  # If `overwrite`, the file will be truncated before writing and only the most
-  # recent event will appear in the file.
-  config :write_behavior, validate: %w[overwrite append], default: 'append'
-
   # TODO: fix the interval type...
   config :stale_cleanup_interval, validate: :number, default: 10
   config :stale_cleanup_type, validate: %w[events interval], default: 'events'
@@ -75,16 +67,27 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
   # If `false`, the plugin will disregard temp files found
   config :recovery, validate: :boolean, default: true
 
-
+
+  # The Kusto endpoint for ingestion related communication. You can see it on the Azure Portal.
   config :ingest_url, validate: :string, required: true
+
+  # The following are the credentials used to connect to the Kusto service
+  # application id
   config :app_id, validate: :string, required: true
-
+  # application key (secret)
+  config :app_key, validate: :password, required: true
+  # aad tenant id
   config :app_tenant, validate: :string, default: nil
 
+  # The following are the data settings that impact where events are written to
+  # Database name
   config :database, validate: :string, required: true
+  # Target table name
   config :table, validate: :string, required: true
+  # Mapping name - used by kusto to map an incoming event to the right row format (what value goes into which column)
   config :mapping, validate: :string
 
+
   # Determines if local files used for temporary storage will be deleted
   # after upload is successful
   config :delete_temp_files, validate: :boolean, default: true
@@ -149,9 +152,14 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
   private
   def validate_path
     if (root_directory =~ FIELD_REF) != nil
-      @logger.error('
+      @logger.error('The starting part of the path should not be dynamic.', path: @path)
       raise LogStash::ConfigurationError.new('The starting part of the path should not be dynamic.')
     end
+
+    if !path_with_field_ref?
+      @logger.error('Path should include some time related fields to allow for file rotation.', path: @path)
+      raise LogStash::ConfigurationError.new('Path should include some time related fields to allow for file rotation.')
+    end
   end
 
   private
@@ -177,14 +185,8 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
     @io_mutex.synchronize do
       encoded_by_path.each do |path, chunks|
         fd = open(path)
-
-
-          fd.seek(0, IO::SEEK_SET)
-          fd.write(chunks.last)
-        else
-          # append to the file
-          chunks.each { |chunk| fd.write(chunk) }
-        end
+        # append to the file
+        chunks.each { |chunk| fd.write(chunk) }
         fd.flush unless @flusher && @flusher.alive?
       end
 
@@ -210,7 +212,7 @@ class LogStash::Outputs::Kusto < LogStash::Outputs::Base
       end
     end
 
-    @ingestor.stop
+    @ingestor.stop unless @ingestor.nil?
   end
 
   private
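A quick sketch of how the two rules added to `validate_path` above play out; the example paths are hypothetical:

```ruby
# path => "/%{cluster}/%{+YYYY-MM-dd}.txt"       # rejected: the path must not *start* with a dynamic field
# path => "/tmp/kusto/events.txt"                # rejected: no field reference at all, so files would never rotate
# path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"  # accepted: static root plus a time expression that drives rotation
```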

data/lib/logstash/outputs/kusto/ingestor.rb
CHANGED
@@ -1,94 +1,116 @@
+# encoding: utf-8
+
+require 'logstash/outputs/base'
+require 'logstash/namespace'
+require 'logstash/errors'
+
+class LogStash::Outputs::Kusto < LogStash::Outputs::Base
+  ##
+  # This handles the overall logic and communication with Kusto
+  #
+  class Ingestor
+    require 'kusto/kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar'
+
+    RETRY_DELAY_SECONDS = 3
+    DEFAULT_THREADPOOL = Concurrent::ThreadPoolExecutor.new(
+      min_threads: 1,
+      max_threads: 8,
+      max_queue: 1,
+      fallback_policy: :caller_runs
+    )
+    LOW_QUEUE_LENGTH = 3
+    FIELD_REF = /%\{[^}]+\}/
+
+    def initialize(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger, threadpool = DEFAULT_THREADPOOL)
+      @workers_pool = threadpool
+      @logger = logger
+
+      validate_config(database, table, mapping)
+
+      @logger.debug('Preparing Kusto resources.')
+
+      kusto_connection_string = Java::com.microsoft.azure.kusto.data.ConnectionStringBuilder.createWithAadApplicationCredentials(ingest_url, app_id, app_key.value, app_tenant)
+
+      @kusto_client = Java::com.microsoft.azure.kusto.ingest.IngestClientFactory.createClient(kusto_connection_string)
+
+      @ingestion_properties = Java::com.microsoft.azure.kusto.ingest.IngestionProperties.new(database, table)
+      @ingestion_properties.setJsonMappingName(mapping)
+
+      @delete_local = delete_local
+
+      @logger.debug('Kusto resources are ready.')
+    end
+
+    def validate_config(database, table, mapping)
+      if database =~ FIELD_REF
+        @logger.error('database config value should not be dynamic.', database)
+        raise LogStash::ConfigurationError.new('database config value should not be dynamic.')
+      end
+
+      if table =~ FIELD_REF
+        @logger.error('table config value should not be dynamic.', table)
+        raise LogStash::ConfigurationError.new('table config value should not be dynamic.')
+      end
+
+      if mapping =~ FIELD_REF
+        @logger.error('mapping config value should not be dynamic.', mapping)
+        raise LogStash::ConfigurationError.new('mapping config value should not be dynamic.')
+      end
+    end
+
+    def upload_async(path, delete_on_success)
+      if @workers_pool.remaining_capacity <= LOW_QUEUE_LENGTH
+        @logger.warn("Ingestor queue capacity is running low with #{@workers_pool.remaining_capacity} free slots.")
+      end
+
+      @workers_pool.post do
+        LogStash::Util.set_thread_name("Kusto to ingest file: #{path}")
+        upload(path, delete_on_success)
+      end
+    rescue Exception => e
+      @logger.error('StandardError.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+      raise e
+    end
+
+    def upload(path, delete_on_success)
+      file_size = File.size(path)
+      @logger.debug("Sending file to kusto: #{path}. size: #{file_size}")
+
+      # TODO: dynamic routing
+      # file_metadata = path.partition('.kusto.').last
+      # file_metadata_parts = file_metadata.split('.')
+
+      # if file_metadata_parts.length == 3
+      #   # this is the number we expect - database, table, mapping
+      #   database = file_metadata_parts[0]
+      #   table = file_metadata_parts[1]
+      #   mapping = file_metadata_parts[2]
+
+      #   local_ingestion_properties = Java::KustoIngestionProperties.new(database, table)
+      #   local_ingestion_properties.addJsonMappingName(mapping)
+      # end
+
+      file_source_info = Java::com.microsoft.azure.kusto.ingest.source.FileSourceInfo.new(path, 0) # 0 - let the sdk figure out the size of the file
+      @kusto_client.ingestFromFile(file_source_info, @ingestion_properties)
+
+      File.delete(path) if delete_on_success
+
+      @logger.debug("File #{path} sent to kusto.")
+    rescue Java::JavaNioFile::NoSuchFileException => e
+      @logger.error("File doesn't exist! Unrecoverable error.", exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+    rescue => e
+      # When the retry limit is reached or another error happens we will wait and retry.
+      #
+      # Thread might be stuck here, but I think it's better than losing anything,
+      # since it's either a transient error or something bad really happened.
+      @logger.error('Uploading failed, retrying.', exception: e.class, message: e.message, path: path, backtrace: e.backtrace)
+      sleep RETRY_DELAY_SECONDS
+      retry
+    end
+
+    def stop
+      @workers_pool.shutdown
+      @workers_pool.wait_for_termination(nil) # block until it's done
+    end
+  end
+end
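To make the flow above concrete, here is a minimal sketch of how the plugin layer is expected to drive this class, assuming a Logstash logger object and an already-rotated temp file; the variable names and the sample file path are illustrative:

```ruby
# One Ingestor per pipeline; the default worker pool is reused.
ingestor = LogStash::Outputs::Kusto::Ingestor.new(
  'https://ingest-mycluster.kusto.windows.net/',  # ingest_url
  'myid',                                         # app_id
  LogStash::Util::Password.new('mykey'),          # app_key (wrapped, since the constructor reads app_key.value)
  'mytenant',                                     # app_tenant
  'mydatabase', 'mytable', 'mymapping',
  true,                                           # delete_local
  logger
)

# Queue a rotated temp file for ingestion; the upload runs on a worker thread
# and is retried after RETRY_DELAY_SECONDS on transient failures.
ingestor.upload_async('/tmp/kusto/2019-01-16-10-00.txt', true)

# Drain the worker pool before the pipeline shuts down.
ingestor.stop
```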

data/lib/logstash/outputs/kusto/interval.rb
CHANGED
@@ -1,81 +1,81 @@
+# encoding: utf-8
+
+require 'logstash/outputs/base'
+require 'logstash/namespace'
+require 'logstash/errors'
+
+class LogStash::Outputs::Kusto < LogStash::Outputs::Base
+  ##
+  # Bare-bones utility for running a block of code at an interval.
+  #
+  class Interval
+    ##
+    # Initializes a new Interval with the given arguments and starts it
+    # before returning it.
+    #
+    # @param interval [Integer] (see: Interval#initialize)
+    # @param procsy [#call] (see: Interval#initialize)
+    #
+    # @return [Interval]
+    #
+    def self.start(interval, procsy)
+      new(interval, procsy).tap(&:start)
+    end
+
+    ##
+    # @param interval [Integer]: time in seconds to wait between calling the given proc
+    # @param procsy [#call]: proc or lambda to call periodically; must not raise exceptions.
+    def initialize(interval, procsy)
+      @interval = interval
+      @procsy = procsy
+
+      # Mutex, ConditionVariable, etc.
+      @mutex = Mutex.new
+      @sleeper = ConditionVariable.new
+    end
+
+    ##
+    # Starts the interval, or returns if it has already been started.
+    #
+    # @return [void]
+    def start
+      @mutex.synchronize do
+        return if @thread && @thread.alive?
+
+        @thread = Thread.new { run }
+      end
+    end
+
+    ##
+    # Stop the interval.
+    # Does not interrupt if execution is in-progress.
+    def stop
+      @mutex.synchronize do
+        @stopped = true
+      end
+
+      @thread && @thread.join
+    end
+
+    ##
+    # @return [Boolean]
+    def alive?
+      @thread && @thread.alive?
+    end
+
+    private
+
+    def run
+      @mutex.synchronize do
+        loop do
+          @sleeper.wait(@mutex, @interval)
+          break if @stopped
+
+          @procsy.call
+        end
+      end
+    ensure
+      @sleeper.broadcast
+    end
+  end
+end
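A small usage sketch of this utility, which the plugin appears to use for its periodic flush (`@flusher`); the interval value and the flush proc here are illustrative:

```ruby
# Build and start a background thread that calls the proc every 2 seconds until stopped.
flusher = LogStash::Outputs::Kusto::Interval.start(2, -> { puts 'flush buffered events to the temp files' })

flusher.alive?  # => true while the background thread is running

# stop lets an in-progress call finish, then joins the thread.
flusher.stop
```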

data/logstash-output-kusto.gemspec
CHANGED
@@ -1,17 +1,17 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-output-kusto'
-  s.version = '0.1.6'
+  s.version = '0.2.0'
   s.licenses = ['Apache-2.0']
-  s.summary = 'Writes events to Azure
-  s.description = 'This is a logstash output plugin used to write events to an Azure
-  s.homepage = 'https://github.com/Azure/
+  s.summary = 'Writes events to Azure Data Explorer (Kusto)'
+  s.description = 'This is a logstash output plugin used to write events to an Azure Data Explorer (a.k.a Kusto)'
+  s.homepage = 'https://github.com/Azure/logstash-output-kusto'
   s.authors = ['Tamir Kamara']
-  s.email = '
+  s.email = 'nugetkusto@microsoft.com'
   s.require_paths = ['lib']
 
   # Files
-  s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
-
+  s.files = Dir['lib/**/*', 'spec/**/*', 'vendor/**/*', '*.gemspec', '*.md', 'CONTRIBUTORS', 'Gemfile', 'LICENSE', 'NOTICE.TXT']
+
   # Tests
   s.test_files = s.files.grep(%r{^(test|spec|features)/})
 
@@ -19,7 +19,7 @@ Gem::Specification.new do |s|
   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
 
   # Gem dependencies
-  s.add_runtime_dependency
+  s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2.0'
   s.add_runtime_dependency 'logstash-codec-json_lines'
   s.add_runtime_dependency 'logstash-codec-line'
 

data/spec/outputs/kusto/ingestor_spec.rb
ADDED
@@ -0,0 +1,109 @@
+# encoding: utf-8
+require_relative "../../spec_helpers.rb"
+require 'logstash/outputs/kusto'
+require 'logstash/outputs/kusto/ingestor'
+
+describe LogStash::Outputs::Kusto::Ingestor do
+
+  let(:ingest_url) { "mycluster" }
+  let(:app_id) { "myid" }
+  let(:app_key) { LogStash::Util::Password.new("mykey") }
+  let(:app_tenant) { "mytenant" }
+  let(:database) { "mydatabase" }
+  let(:table) { "mytable" }
+  let(:mapping) { "mymapping" }
+  let(:delete_local) { false }
+  let(:logger) { spy('logger') }
+
+  describe '#initialize' do
+
+    it 'does not throw an error when initializing' do
+      # note that this will cause an internal error since connection is being tried.
+      # however we still want to test that all the java stuff is working as expected
+      expect {
+        ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, table, mapping, delete_local, logger)
+        ingestor.stop
+      }.not_to raise_error
+    end
+
+    dynamic_name_array = ['/a%{name}/', '/a %{name}/', '/a- %{name}/', '/a- %{name}']
+
+    context 'doesnt allow database to have some dynamic part' do
+      dynamic_name_array.each do |test_database|
+        it "with database: #{test_database}" do
+          expect {
+            ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, test_database, table, mapping, delete_local, logger)
+            ingestor.stop
+          }.to raise_error(LogStash::ConfigurationError)
+        end
+      end
+    end
+
+    context 'doesnt allow table to have some dynamic part' do
+      dynamic_name_array.each do |test_table|
+        it "with table: #{test_table}" do
+          expect {
+            ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, test_table, mapping, delete_local, logger)
+            ingestor.stop
+          }.to raise_error(LogStash::ConfigurationError)
+        end
+      end
+    end
+
+    context 'doesnt allow mapping to have some dynamic part' do
+      dynamic_name_array.each do |test_mapping|
+        it "with mapping: #{test_mapping}" do
+          expect {
+            ingestor = described_class.new(ingest_url, app_id, app_key, app_tenant, database, table, test_mapping, delete_local, logger)
+            ingestor.stop
+          }.to raise_error(LogStash::ConfigurationError)
+        end
+      end
+    end
+
+  end
+
+  # describe 'receiving events' do
+  #
+  #   context 'with non-zero flush interval' do
+  #     let(:temporary_output_file) { Stud::Temporary.pathname }
+  #
+  #     let(:event_count) { 100 }
+  #     let(:flush_interval) { 5 }
+  #
+  #     let(:events) do
+  #       event_count.times.map do |idx|
+  #         LogStash::Event.new('subject' => idx)
+  #       end
+  #     end
+  #
+  #     let(:output) { described_class.new(options.merge( {'path' => temporary_output_file, 'flush_interval' => flush_interval, 'delete_temp_files' => false } )) }
+  #
+  #     before(:each) { output.register }
+  #
+  #     after(:each) do
+  #       output.close
+  #       File.exist?(temporary_output_file) && File.unlink(temporary_output_file)
+  #       File.exist?(temporary_output_file + '.kusto') && File.unlink(temporary_output_file + '.kusto')
+  #     end
+  #
+  #     it 'eventually flushes without receiving additional events' do
+  #       output.multi_receive_encoded(events)
+  #
+  #       # events should not all be flushed just yet...
+  #       expect(File.read(temporary_output_file)).to satisfy("have less than #{event_count} lines") do |contents|
+  #         contents && contents.lines.count < event_count
+  #       end
+  #
+  #       # wait for the flusher to run...
+  #       sleep(flush_interval + 1)
+  #
+  #       # events should all be flushed
+  #       expect(File.read(temporary_output_file)).to satisfy("have exactly #{event_count} lines") do |contents|
+  #         contents && contents.lines.count == event_count
+  #       end
+  #     end
+  #   end
+  # end
+end
data/spec/outputs/kusto_spec.rb
CHANGED
@@ -1,22 +1,54 @@
 # encoding: utf-8
-require
-require
-require
-require
+require 'logstash/devutils/rspec/spec_helper'
+require 'logstash/outputs/kusto'
+require 'logstash/codecs/plain'
+require 'logstash/event'
 
 describe LogStash::Outputs::Kusto do
-  let(:sample_event) { LogStash::Event.new }
-  let(:output) { LogStash::Outputs::Kusto.new }
 
+  let(:options) { { "path" => "./kusto_tst/%{+YYYY-MM-dd-HH-mm}",
+                    "ingest_url" => "mycluster",
+                    "app_id" => "myid",
+                    "app_key" => "mykey",
+                    "app_tenant" => "mytenant",
+                    "database" => "mydatabase",
+                    "table" => "mytable",
+                    "mapping" => "mymapping"
+                  } }
+
+  describe '#register' do
+
+    it 'doesnt allow the path to start with a dynamic string' do
+      kusto = described_class.new(options.merge( {'path' => '/%{name}'} ))
+      expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+      kusto.close
+    end
+
+    it 'path must include a dynamic string to allow file rotation' do
+      kusto = described_class.new(options.merge( {'path' => '/{name}'} ))
+      expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+      kusto.close
+    end
 
-  subject { output.receive(sample_event) }
+    dynamic_name_array = ['/a%{name}/', '/a %{name}/', '/a- %{name}/', '/a- %{name}']
 
+    context 'doesnt allow the root directory to have some dynamic part' do
+      dynamic_name_array.each do |test_path|
+        it "with path: #{test_path}" do
+          kusto = described_class.new(options.merge( {'path' => test_path} ))
+          expect { kusto.register }.to raise_error(LogStash::ConfigurationError)
+          kusto.close
+        end
+      end
     end
+
+    it 'allow to have dynamic part after the file root' do
+      kusto = described_class.new(options.merge({'path' => '/tmp/%{name}'}))
+      expect { kusto.register }.not_to raise_error
+      kusto.close
+    end
+
   end
+
 end
metadata
CHANGED
@@ -1,116 +1,116 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-kusto
 version: !ruby/object:Gem::Version
-  version: 0.1.6
+  version: 0.2.0
 platform: ruby
 authors:
 - Tamir Kamara
-autorequire:
+autorequire: 
 bindir: bin
 cert_chain: []
-date:
+date: 2019-01-16 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
+  name: logstash-core-plugin-api
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '2.0'
-  name: logstash-core-plugin-api
-  prerelease: false
   type: :runtime
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '2.0'
 - !ruby/object:Gem::Dependency
+  name: logstash-codec-json_lines
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: logstash-codec-json_lines
-  prerelease: false
   type: :runtime
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
+  name: logstash-codec-line
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: logstash-codec-line
-  prerelease: false
   type: :runtime
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
+  name: logstash-devutils
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: logstash-devutils
-  prerelease: false
   type: :development
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
+  name: flores
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: flores
-  prerelease: false
   type: :development
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
+  name: logstash-input-generator
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: logstash-input-generator
-  prerelease: false
   type: :development
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
+  name: jar-dependencies
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-  name: jar-dependencies
-  prerelease: false
   type: :runtime
+  prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-description: This is a logstash output plugin used to write events to an Azure
-
-email:
+description: This is a logstash output plugin used to write events to an Azure Data
+  Explorer (a.k.a Kusto)
+email: nugetkusto@microsoft.com
 executables: []
 extensions: []
 extra_rdoc_files: []
@@ -120,19 +120,21 @@ files:
 - Gemfile
 - LICENSE
 - README.md
-- lib/kusto/KustoClient-0.1.6.jar
+- lib/kusto/kusto-ingest-1.0.0-BETA-01-jar-with-dependencies.jar
 - lib/logstash/outputs/kusto.rb
 - lib/logstash/outputs/kusto/ingestor.rb
 - lib/logstash/outputs/kusto/interval.rb
 - logstash-output-kusto.gemspec
+- spec/outputs/kusto/ingestor_spec.rb
 - spec/outputs/kusto_spec.rb
-homepage:
+- spec/spec_helpers.rb
+homepage: https://github.com/Azure/logstash-output-kusto
 licenses:
 - Apache-2.0
 metadata:
   logstash_plugin: 'true'
   logstash_group: output
-post_install_message:
+post_install_message: 
 rdoc_options: []
 require_paths:
 - lib
@@ -147,10 +149,11 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-
-
-signing_key:
+rubygems_version: 3.0.2
+signing_key: 
 specification_version: 4
-summary: Writes events to Azure
+summary: Writes events to Azure Data Explorer (Kusto)
 test_files:
+- spec/outputs/kusto/ingestor_spec.rb
 - spec/outputs/kusto_spec.rb
+- spec/spec_helpers.rb