dogstatsd-ruby 4.8.1 → 5.1.0
- checksums.yaml +4 -4
- data/README.md +100 -1
- data/lib/datadog/statsd.rb +86 -56
- data/lib/datadog/statsd/connection.rb +5 -8
- data/lib/datadog/statsd/forwarder.rb +120 -0
- data/lib/datadog/statsd/message_buffer.rb +88 -0
- data/lib/datadog/statsd/sender.rb +117 -0
- data/lib/datadog/statsd/serialization/event_serializer.rb +5 -1
- data/lib/datadog/statsd/serialization/stat_serializer.rb +22 -29
- data/lib/datadog/statsd/serialization/tag_serializer.rb +20 -15
- data/lib/datadog/statsd/telemetry.rb +21 -23
- data/lib/datadog/statsd/udp_connection.rb +3 -3
- data/lib/datadog/statsd/uds_connection.rb +3 -3
- data/lib/datadog/statsd/version.rb +1 -1
- metadata +15 -11
- data/lib/datadog/statsd/batch.rb +0 -56
checksums.yaml CHANGED

````diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 264d78a169508453151e7a5c3260c2299ad05c727dc9c72cfa52f8be721809d0
+  data.tar.gz: 8e0726168a20f7efcb2bdcf72ab3d748184cd652de4fd25871f9fdf676020baf
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1875c615a043db68c69d855af2b454e45ef41af1d7e1f726a6067c35e260eb0dbc9b706a0588608e31bd5087e500b1b77b1b604f3dc22750f1376b6583b78c9a
+  data.tar.gz: 36f60d62312f6844efa9ce1b53a86cde380140582e009b5e5bf07d3a4a859e4d762e92a2ebcfe38a668145255f82f72c84504caabb93f25475e895a5166a9969
````
data/README.md CHANGED

````diff
@@ -24,15 +24,49 @@ require 'datadog/statsd'
 
 # Create a DogStatsD client instance.
 statsd = Datadog::Statsd.new('localhost', 8125)
+...
+# release resources used by the client instance
+statsd.close()
 ```
 Or if you want to connect over Unix Domain Socket:
 ```ruby
 # Connection over Unix Domain Socket
 statsd = Datadog::Statsd.new(socket_path: '/path/to/socket/file')
+...
+# release resources used by the client instance
+statsd.close()
 ```
 
 Find a list of all the available options for your DogStatsD Client in the [DogStatsD-ruby rubydoc](https://www.rubydoc.info/github/DataDog/dogstatsd-ruby/master/Datadog/Statsd) or in the [Datadog public DogStatsD documentation](https://docs.datadoghq.com/developers/dogstatsd/?tab=ruby#client-instantiation-parameters).
 
+### Migrating from v4.x to v5.x
+
+If you are already using DogStatsD-ruby v4.x and you want to migrate to a version v5.x, the major
+change concerning you is the new threading model (please see section Threading model):
+
+In practice, it means two things:
+
+1. Now that the client is buffering metrics before sending them, you have to manually
+call the method `Datadog::Statsd#flush` if you want to force the sending of metrics. Note that the companion thread will automatically flush the buffered metrics if the buffer gets full or when you are closing the instance.
+
+2. You have to make sure you are either:
+
+  * using singletons instances of the DogStatsD client and not allocating one each time you need one, letting the buffering mechanism flush metrics, or,
+  * properly closing your DogStatsD client instance when it is not needed anymore using the method `Datadog::Statsd#close` to release the resources used by the instance and to close the socket
+
+### v5.x Common Pitfalls
+
+Version v5.x of `dogstatsd-ruby` is using a companion thread for preemptive flushing, it brings better performances for application having a high-throughput of statsd metrics, but it comes with new pitfalls:
+
+* Applications forking after having created the dogstatsd instance: forking a process can't duplicate the existing threads, meaning that one of the processes won't have a companion thread to flush the metrics and will lead to missing metrics.
+* Applications creating a lot of different instances of the client without closing them: it is important to close the instance to free the thread and the socket it is using or it will lead to thread leaks.
+
+If you are using [Sidekiq](https://github.com/mperham/sidekiq), please make sure to close the client instances that are instantiated. [See this example on using DogStatsD-ruby v5.x with Sidekiq](https://github.com/DataDog/dogstatsd-ruby/blob/master/examples/sidekiq_example.rb).
+
+If you are using [Puma](https://github.com/puma/puma) or [Unicorn](https://yhbt.net/unicorn.git), please make sure to create the instance of DogStatsD in the workers, not in the main process before it forks to create its workers. See [this comment for more details](https://github.com/DataDog/dogstatsd-ruby/issues/179#issuecomment-845570345).
+
+Applications that are in these situations and can't apply these recommendations should pin dogstatsd-ruby v4.x using `gem 'dogstatsd-ruby', '~> 4.0'`. Note that v4.x will continue to be maintained until a future v5.x version can more easily fit these use cases.
+
 ### Origin detection over UDP
 
 Origin detection is a method to detect which pod DogStatsD packets are coming from in order to add the pod's tags to the tag list.
@@ -71,9 +105,74 @@ After the client is created, you can start sending events to your Datadog Event
 
 After the client is created, you can start sending Service Checks to Datadog. See the dedicated [Service Check Submission: DogStatsD documentation](https://docs.datadoghq.com/developers/service_checks/dogstatsd_service_checks_submission/?tab=ruby) to see how to submit a Service Check to Datadog.
 
+### Maximum packets size in high-throughput scenarios
+
+In order to have the most efficient use of this library in high-throughput scenarios,
+default values for the maximum packets size have already been set for both UDS (8192 bytes)
+and UDP (1432 bytes) in order to have the best usage of the underlying network.
+However, if you perfectly know your network and you know that a different value for the maximum packets
+size should be used, you can set it with the parameter `buffer_max_payload_size`. Example:
+
+```ruby
+# Create a DogStatsD client instance.
+statsd = Datadog::Statsd.new('localhost', 8125, buffer_max_payload_size: 4096)
+```
+
+## Threading model
+
+On versions greater than 5.0, we changed the threading model of the library so that one instance of `Datadog::Statsd` could be shared between threads and so that the writes in the socket are non blocking.
+
+When you instantiate a `Datadog::Statsd`, a companion thread is spawned. This thread will be called the Sender thread, as it is modeled by the [Sender](../lib/datadog/statsd/sender.rb) class.
+
+This thread is stopped when you close the statsd client (`Datadog::Statsd#close`). It also means that allocating a lot of statsd clients without closing them properly when not used anymore
+could lead to a thread leak (even though they will be sleeping, blocked on IO).
+The communication between the current thread is managed through a standard Ruby Queue.
+
+The sender thread has the following logic (Code present in the method `Datadog::Statsd::Sender#send_loop`):
+
+```
+while the sender message queue is not closed do
+  read message from sender message queue
+
+  if message is a Control message to flush
+    flush buffer in connection
+  else if message is a Control message to synchronize
+    synchronize with calling thread
+  else
+    add message to the buffer
+  end
+end while
+```
+
+Most of the time, the sender thread is blocked and sleeping when doing a blocking read from the sender message queue.
+
+We can see that there is 3 different kind of messages:
+
+* a control message to flush the buffer in the connection
+* a control message to synchronize any thread with the sender thread
+* a message to append to the buffer
+
+There is also an implicit message which is closing the queue as it will stop blocking read from the message queue (if happening) and thus, stop the sender thread.
+
+### Usual workflow
+
+You push metrics to the statsd client which writes them quickly to the sender message queue. The sender thread receives those message, buffers them and flushes them to the connection when the buffer limit is reached.
+
+### Flushing
+
+When calling a flush, a specific control message (the `:flush` symbol) is sent to the sender thread. When finding it, it flushes its internal buffer into the connection.
+
+### Rendez-vous
+
+It is possible to ensure a message has been consumed by the sender thread and written to the buffer by simply calling a rendez-vous right after. This is done when you are doing a synchronized flush (calling `Datadog::Statsd#flush` with the `sync: true` option).
+
+This means the current thread is going to sleep and wait for a Queue which is given to the sender thread. When the sender thread reads this queue from its own message queue, it puts a placeholder message in it so that it wakes up the calling thread.
+
+This is useful when closing the application or when checking unit tests.
+
 ## Credits
 
-dogstatsd-ruby is forked from
+dogstatsd-ruby is forked from Rein Henrichs [original Statsd
 client](https://github.com/reinh/statsd).
 
 Copyright (c) 2011 Rein Henrichs. See LICENSE.txt for
````
data/lib/datadog/statsd.rb CHANGED

```diff
@@ -5,8 +5,10 @@ require_relative 'statsd/version'
 require_relative 'statsd/telemetry'
 require_relative 'statsd/udp_connection'
 require_relative 'statsd/uds_connection'
-require_relative 'statsd/
+require_relative 'statsd/message_buffer'
 require_relative 'statsd/serialization'
+require_relative 'statsd/sender'
+require_relative 'statsd/forwarder'
 
 # = Datadog::Statsd: A DogStatsd client (https://www.datadoghq.com)
 #
@@ -26,12 +28,17 @@ require_relative 'statsd/serialization'
 #   statsd = Datadog::Statsd.new 'localhost', 8125, tags: 'tag1:true'
 module Datadog
   class Statsd
+    class Error < StandardError
+    end
+
     OK = 0
     WARNING = 1
     CRITICAL = 2
     UNKNOWN = 3
 
-
+    UDP_DEFAULT_BUFFER_SIZE = 1_432
+    UDS_DEFAULT_BUFFER_SIZE = 8_192
+    DEFAULT_BUFFER_POOL_SIZE = Float::INFINITY
     MAX_EVENT_SIZE = 8 * 1_024
     # minimum flush interval for the telemetry in seconds
     DEFAULT_TELEMETRY_FLUSH_INTERVAL = 10
@@ -51,67 +58,59 @@ module Datadog
       serializer.global_tags
     end
 
-    # Buffer containing the statsd message before they are sent in batch
-    attr_reader :buffer
-
-    # Maximum buffer size in bytes before it is flushed
-    attr_reader :max_buffer_bytes
-
     # Default sample rate
     attr_reader :sample_rate
 
-    # Connection
-    attr_reader :connection
-
     # @param [String] host your statsd host
     # @param [Integer] port your statsd port
     # @option [String] namespace set a namespace to be prepended to every metric name
     # @option [Array<String>|Hash] tags tags to be added to every metric
     # @option [Logger] logger for debugging
-    # @option [Integer]
+    # @option [Integer] buffer_max_payload_size max bytes to buffer
+    # @option [Integer] buffer_max_pool_size max messages to buffer
     # @option [String] socket_path unix socket path
     # @option [Float] default sample rate if not overridden
     def initialize(
       host = nil,
       port = nil,
+      socket_path: nil,
+
       namespace: nil,
       tags: nil,
-      max_buffer_bytes: DEFAULT_BUFFER_SIZE,
-      socket_path: nil,
-      logger: nil,
       sample_rate: nil,
-
+
+      buffer_max_payload_size: nil,
+      buffer_max_pool_size: nil,
+      buffer_overflowing_stategy: :drop,
+
+      logger: nil,
+
+      telemetry_enable: true,
       telemetry_flush_interval: DEFAULT_TELEMETRY_FLUSH_INTERVAL
     )
       unless tags.nil? || tags.is_a?(Array) || tags.is_a?(Hash)
-        raise ArgumentError, 'tags must be
+        raise ArgumentError, 'tags must be an array of string tags or a Hash'
       end
 
       @namespace = namespace
       @prefix = @namespace ? "#{@namespace}.".freeze : nil
-
       @serializer = Serialization::Serializer.new(prefix: @prefix, global_tags: tags)
+      @sample_rate = sample_rate
 
-
+      @forwarder = Forwarder.new(
+        host: host,
+        port: port,
+        socket_path: socket_path,
 
-      @telemetry = Telemetry.new(disable_telemetry, telemetry_flush_interval,
         global_tags: tags,
-
-      )
+        logger: logger,
 
-
-
-
-      when :uds
-        UDSConnection.new(socket_path, logger, telemetry)
-      end
+        buffer_max_payload_size: buffer_max_payload_size,
+        buffer_max_pool_size: buffer_max_pool_size,
+        buffer_overflowing_stategy: buffer_overflowing_stategy,
 
-
-
-      @sample_rate = sample_rate
-
-      # we reduce max_buffer_bytes by a the rough estimate of the telemetry payload
-      @batch = Batch.new(connection, (max_buffer_bytes - telemetry.estimate_max_size))
+        telemetry_flush_interval: telemetry_enable ? telemetry_flush_interval : nil,
+      )
     end
 
     # yield a new instance to a block and close it when done
@@ -270,9 +269,9 @@ module Datadog
     # @example Report a critical service check status
     #   $statsd.service_check('my.service.check', Statsd::CRITICAL, :tags=>['urgent'])
     def service_check(name, status, opts = EMPTY_OPTIONS)
-      telemetry.sent(service_checks: 1)
+      telemetry.sent(service_checks: 1) if telemetry
 
-
+      forwarder.send_message(serializer.to_service_check(name, status, opts))
     end
 
     # This end point allows you to post events to the stream. You can tag them, set priority and even aggregate them with other events.
@@ -290,17 +289,25 @@ module Datadog
     # @option opts [String, nil] :priority ('normal') Can be "normal" or "low"
     # @option opts [String, nil] :source_type_name (nil) Assign a source type to the event
     # @option opts [String, nil] :alert_type ('info') Can be "error", "warning", "info" or "success".
+    # @option opts [Boolean, false] :truncate_if_too_long (false) Truncate the event if it is too long
     # @option opts [Array<String>] :tags tags to be added to every metric
    # @example Report an awful event:
     #   $statsd.event('Something terrible happened', 'The end is near if we do nothing', :alert_type=>'warning', :tags=>['end_of_times','urgent'])
     def event(title, text, opts = EMPTY_OPTIONS)
-      telemetry.sent(events: 1)
+      telemetry.sent(events: 1) if telemetry
 
-
+      forwarder.send_message(serializer.to_event(title, text, opts))
     end
 
-    # Send several metrics in the same
-    # They will be buffered and flushed when the block finishes
+    # Send several metrics in the same packet.
+    # They will be buffered and flushed when the block finishes.
+    #
+    # This method exists for compatibility with v4.x versions, it is not needed
+    # anymore since the batching is now automatically done internally.
+    # It also means that an automatic flush could occur if the buffer is filled
+    # during the execution of the batch block.
+    #
+    # This method is DEPRECATED and will be removed in future v6.x API.
     #
     # @example Send several metrics in one packet:
     #   $statsd.batch do |s|
@@ -308,19 +315,50 @@ module Datadog
     #     s.increment('page.views')
     #   end
     def batch
-
-
-    end
+      yield self
+      flush(sync: true)
     end
 
     # Close the underlying socket
-
-
+    #
+    # @param [Boolean, true] flush Should we flush the metrics before closing
+    def close(flush: true)
+      flush(sync: true) if flush
+      forwarder.close
+    end
+
+    def sync_with_outbound_io
+      forwarder.sync_with_outbound_io
+    end
+
+    # Flush the buffer into the connection
+    def flush(flush_telemetry: false, sync: false)
+      forwarder.flush(flush_telemetry: flush_telemetry, sync: sync)
+    end
+
+    def telemetry
+      forwarder.telemetry
+    end
+
+    def host
+      forwarder.host
+    end
+
+    def port
+      forwarder.port
+    end
+
+    def socket_path
+      forwarder.socket_path
+    end
+
+    def transport_type
+      forwarder.transport_type
     end
 
     private
     attr_reader :serializer
-    attr_reader :
+    attr_reader :forwarder
 
     PROCESS_TIME_SUPPORTED = (RUBY_VERSION >= '2.1.0')
     EMPTY_OPTIONS = {}.freeze
@@ -336,22 +374,14 @@ module Datadog
     end
 
     def send_stats(stat, delta, type, opts = EMPTY_OPTIONS)
-      telemetry.sent(metrics: 1)
+      telemetry.sent(metrics: 1) if telemetry
 
      sample_rate = opts[:sample_rate] || @sample_rate || 1
 
      if sample_rate == 1 || rand <= sample_rate
        full_stat = serializer.to_stat(stat, delta, type, tags: opts[:tags], sample_rate: sample_rate)
 
-
-      end
-    end
-
-    def send_stat(message)
-      if @batch.open?
-        @batch.add(message)
-      else
-        @connection.write(message)
+        forwarder.send_message(full_stat)
       end
     end
   end
```
data/lib/datadog/statsd/connection.rb CHANGED

```diff
@@ -3,8 +3,9 @@
 module Datadog
   class Statsd
     class Connection
-      def initialize(telemetry)
+      def initialize(telemetry: nil, logger: nil)
         @telemetry = telemetry
+        @logger = logger
       end
 
       # Close the underlying socket
@@ -20,15 +21,11 @@ module Datadog
       def write(payload)
         logger.debug { "Statsd: #{payload}" } if logger
 
-        flush_telemetry = telemetry.flush?
-
-        payload += telemetry.flush if flush_telemetry
-
         send_message(payload)
 
-        telemetry.
+        telemetry.sent(packets: 1, bytes: payload.length) if telemetry
 
-
+        true
       rescue StandardError => boom
         # Try once to reconnect if the socket has been closed
         retries ||= 1
@@ -45,7 +42,7 @@ module Datadog
          end
        end
 
-        telemetry.dropped(packets: 1, bytes: payload.length)
+        telemetry.dropped(packets: 1, bytes: payload.length) if telemetry
        logger.error { "Statsd: #{boom.class} #{boom}" } if logger
        nil
      end
```
data/lib/datadog/statsd/forwarder.rb ADDED

```diff
@@ -0,0 +1,120 @@
+# frozen_string_literal: true
+
+module Datadog
+  class Statsd
+    class Forwarder
+      attr_reader :telemetry
+      attr_reader :transport_type
+
+      def initialize(
+        host: nil,
+        port: nil,
+        socket_path: nil,
+
+        buffer_max_payload_size: nil,
+        buffer_max_pool_size: nil,
+        buffer_overflowing_stategy: :drop,
+
+        telemetry_flush_interval: nil,
+        global_tags: [],
+
+        logger: nil
+      )
+        @transport_type = socket_path.nil? ? :udp : :uds
+
+        if telemetry_flush_interval
+          @telemetry = Telemetry.new(telemetry_flush_interval,
+            global_tags: global_tags,
+            transport_type: transport_type
+          )
+        end
+
+        @connection = case transport_type
+                      when :udp
+                        UDPConnection.new(host, port, logger: logger, telemetry: telemetry)
+                      when :uds
+                        UDSConnection.new(socket_path, logger: logger, telemetry: telemetry)
+                      end
+
+        # Initialize buffer
+        buffer_max_payload_size ||= (transport_type == :udp ? UDP_DEFAULT_BUFFER_SIZE : UDS_DEFAULT_BUFFER_SIZE)
+
+        if buffer_max_payload_size <= 0
+          raise ArgumentError, 'buffer_max_payload_size cannot be <= 0'
+        end
+
+        unless telemetry.nil? || telemetry.would_fit_in?(buffer_max_payload_size)
+          raise ArgumentError, "buffer_max_payload_size is not high enough to use telemetry (tags=(#{global_tags.inspect}))"
+        end
+
+        @buffer = MessageBuffer.new(@connection,
+          max_payload_size: buffer_max_payload_size,
+          max_pool_size: buffer_max_pool_size || DEFAULT_BUFFER_POOL_SIZE,
+          overflowing_stategy: buffer_overflowing_stategy,
+        )
+
+        @sender = Sender.new(buffer)
+        @sender.start
+      end
+
+      def send_message(message)
+        sender.add(message)
+
+        tick_telemetry
+      end
+
+      def sync_with_outbound_io
+        sender.rendez_vous
+      end
+
+      def flush(flush_telemetry: false, sync: false)
+        do_flush_telemetry if telemetry && flush_telemetry
+
+        sender.flush(sync: sync)
+      end
+
+      def host
+        return nil unless transport_type == :udp
+
+        connection.host
+      end
+
+      def port
+        return nil unless transport_type == :udp
+
+        connection.port
+      end
+
+      def socket_path
+        return nil unless transport_type == :uds
+
+        connection.socket_path
+      end
+
+      def close
+        sender.stop
+        connection.close
+      end
+
+      private
+      attr_reader :buffer
+      attr_reader :sender
+      attr_reader :connection
+
+      def do_flush_telemetry
+        telemetry_snapshot = telemetry.flush
+        telemetry.reset
+
+        telemetry_snapshot.each do |message|
+          sender.add(message)
+        end
+      end
+
+      def tick_telemetry
+        return nil unless telemetry
+
+        do_flush_telemetry if telemetry.should_flush?
+      end
+    end
+  end
+end
```
data/lib/datadog/statsd/message_buffer.rb ADDED

```diff
@@ -0,0 +1,88 @@
+# frozen_string_literal: true
+
+module Datadog
+  class Statsd
+    class MessageBuffer
+      PAYLOAD_SIZE_TOLERANCE = 0.05
+
+      def initialize(connection,
+        max_payload_size: nil,
+        max_pool_size: DEFAULT_BUFFER_POOL_SIZE,
+        overflowing_stategy: :drop
+      )
+        raise ArgumentError, 'max_payload_size keyword argument must be provided' unless max_payload_size
+        raise ArgumentError, 'max_pool_size keyword argument must be provided' unless max_pool_size
+
+        @connection = connection
+        @max_payload_size = max_payload_size
+        @max_pool_size = max_pool_size
+        @overflowing_stategy = overflowing_stategy
+
+        @buffer = String.new
+        @message_count = 0
+      end
+
+      def add(message)
+        message_size = message.bytesize
+
+        return nil unless message_size > 0 # to avoid adding empty messages to the buffer
+        return nil unless ensure_sendable!(message_size)
+
+        flush if should_flush?(message_size)
+
+        buffer << "\n" unless buffer.empty?
+        buffer << message
+
+        @message_count += 1
+
+        # flush when we're pretty sure that we won't be able
+        # to add another message to the buffer
+        flush if preemptive_flush?
+
+        true
+      end
+
+      def flush
+        return if buffer.empty?
+
+        connection.write(buffer)
+
+        buffer.clear
+        @message_count = 0
+      end
+
+      private
+      attr :max_payload_size
+      attr :max_pool_size
+
+      attr :overflowing_stategy
+
+      attr :connection
+      attr :buffer
+
+      def should_flush?(message_size)
+        return true if buffer.bytesize + 1 + message_size >= max_payload_size
+
+        false
+      end
+
+      def preemptive_flush?
+        @message_count == max_pool_size || buffer.bytesize > bytesize_threshold
+      end
+
+      def ensure_sendable!(message_size)
+        return true if message_size <= max_payload_size
+
+        if overflowing_stategy == :raise
+          raise Error, 'Message too big for payload limit'
+        end
+
+        false
+      end
+
+      def bytesize_threshold
+        @bytesize_threshold ||= (max_payload_size - PAYLOAD_SIZE_TOLERANCE * max_payload_size).to_i
+      end
+    end
+  end
+end
```
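The preemptive flush in `MessageBuffer` above triggers once the buffer is within `PAYLOAD_SIZE_TOLERANCE` (5%) of `max_payload_size`. For the gem's default payload sizes (1432 bytes for UDP, 8192 bytes for UDS), the thresholds work out as:

```ruby
# Reproduces the bytesize_threshold arithmetic from MessageBuffer above.
PAYLOAD_SIZE_TOLERANCE = 0.05

def bytesize_threshold(max_payload_size)
  (max_payload_size - PAYLOAD_SIZE_TOLERANCE * max_payload_size).to_i
end

puts bytesize_threshold(1_432)  # UDP default => 1360
puts bytesize_threshold(8_192)  # UDS default => 7782
```

So a UDP buffer is flushed as soon as it holds more than 1360 bytes, leaving headroom for one more small metric rather than risking an oversized datagram.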
data/lib/datadog/statsd/sender.rb ADDED

```diff
@@ -0,0 +1,117 @@
+# frozen_string_literal: true
+
+module Datadog
+  class Statsd
+    class Sender
+      CLOSEABLE_QUEUES = Queue.instance_methods.include?(:close)
+
+      def initialize(message_buffer)
+        @message_buffer = message_buffer
+      end
+
+      def flush(sync: false)
+        # don't try to flush if there is no message_queue instantiated
+        return unless message_queue
+
+        message_queue.push(:flush)
+
+        rendez_vous if sync
+      end
+
+      def rendez_vous
+        # Initialize and get the thread's sync queue
+        queue = (Thread.current[:statsd_sync_queue] ||= Queue.new)
+        # tell sender-thread to notify us in the current
+        # thread's queue
+        message_queue.push(queue)
+        # wait for the sender thread to send a message
+        # once the flush is done
+        queue.pop
+      end
+
+      def add(message)
+        raise ArgumentError, 'Start sender first' unless message_queue
+
+        message_queue << message
+      end
+
+      def start
+        raise ArgumentError, 'Sender already started' if message_queue
+
+        # initialize message queue for background thread
+        @message_queue = Queue.new
+        # start background thread
+        @sender_thread = Thread.new(&method(:send_loop))
+      end
+
+      if CLOSEABLE_QUEUES
+        def stop(join_worker: true)
+          message_queue = @message_queue
+          message_queue.close if message_queue
+
+          sender_thread = @sender_thread
+          sender_thread.join if sender_thread && join_worker
+        end
+      else
+        def stop(join_worker: true)
+          message_queue = @message_queue
+          message_queue << :close if message_queue
+
+          sender_thread = @sender_thread
+          sender_thread.join if sender_thread && join_worker
+        end
+      end
+
+      private
+
+      attr_reader :message_buffer
+
+      attr_reader :message_queue
+      attr_reader :sender_thread
+
+      if CLOSEABLE_QUEUES
+        def send_loop
+          until (message = message_queue.pop).nil? && message_queue.closed?
+            # skip if message is nil, e.g. when message_queue
+            # is empty and closed
+            next unless message
+
+            case message
+            when :flush
+              message_buffer.flush
+            when Queue
+              message.push(:go_on)
+            else
+              message_buffer.add(message)
+            end
+          end
+
+          @message_queue = nil
+          @sender_thread = nil
+        end
+      else
+        def send_loop
+          loop do
+            message = message_queue.pop
+
+            next unless message
+
+            case message
+            when :close
+              break
+            when :flush
+              message_buffer.flush
+            when Queue
+              message.push(:go_on)
+            else
+              message_buffer.add(message)
+            end
+          end
+
+          @message_queue = nil
+          @sender_thread = nil
+        end
+      end
+    end
+  end
+end
```
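On Rubies whose `Queue` lacks `#close` (before 2.3), the `Sender` above falls back to pushing a `:close` sentinel instead of closing the queue, and the worker loop breaks when it sees it. The fallback pattern in isolation, with illustrative variable names:

```ruby
# Sentinel-based shutdown of a worker thread, as in Sender's non-CLOSEABLE_QUEUES path.
queue = Queue.new
processed = []

consumer = Thread.new do
  loop do
    message = queue.pop
    break if message == :close  # the sentinel ends the worker loop
    processed << message
  end
end

queue << 'page.views:1|c'
queue << 'page.hits:1|c'
queue << :close               # like Sender#stop pushing :close
consumer.join
p processed # => ["page.views:1|c", "page.hits:1|c"]
```

The trade-off versus `Queue#close` is that the sentinel is an in-band value: a real message equal to `:close` could never be transported, which is safe here because buffered statsd payloads are always Strings.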
data/lib/datadog/statsd/serialization/event_serializer.rb CHANGED

```diff
@@ -48,7 +48,11 @@ module Datadog
         end
 
         if event.bytesize > MAX_EVENT_SIZE
-
+          if options[:truncate_if_too_long]
+            event.slice!(MAX_EVENT_SIZE..event.length)
+          else
+            raise "Event #{title} payload is too big (more that 8KB), event discarded"
+          end
         end
       end
     end
```
data/lib/datadog/statsd/serialization/stat_serializer.rb CHANGED

```diff
@@ -6,34 +6,24 @@ module Datadog
       class StatSerializer
         def initialize(prefix, global_tags: [])
           @prefix = prefix
+          @prefix_str = prefix.to_s
           @tag_serializer = TagSerializer.new(global_tags)
         end
 
         def format(name, delta, type, tags: [], sample_rate: 1)
-
-          stat << prefix if prefix
-
-          # stat value
-          stat << formated_name(name)
-          stat << ':'
-          stat << delta.to_s
-
-          # stat type
-          stat << '|'
-          stat << type
-
-          # sample_rate
-          if sample_rate != 1
-            stat << '|'
-            stat << '@'
-            stat << sample_rate.to_s
-          end
+          name = formated_name(name)
 
-
+          if sample_rate != 1
+            if tags_list = tag_serializer.format(tags)
+              "#{@prefix_str}#{name}:#{delta}|#{type}|@#{sample_rate}|##{tags_list}"
+            else
+              "#{@prefix_str}#{name}:#{delta}|#{type}|@#{sample_rate}"
+            end
+          else
             if tags_list = tag_serializer.format(tags)
-
-
-
+              "#{@prefix_str}#{name}:#{delta}|#{type}|##{tags_list}"
+            else
+              "#{@prefix_str}#{name}:#{delta}|#{type}"
             end
           end
         end
@@ -43,18 +33,21 @@ module Datadog
         end
 
         private
+
         attr_reader :prefix
         attr_reader :tag_serializer
 
         def formated_name(name)
-
-
-
-
-
-
-          f.tr!(':|@', '_')
+          if name.is_a?(String)
+            # DEV: gsub is faster than dup.gsub!
+            formated = name.gsub('::', '.')
+          else
+            formated = name.to_s
+            formated.gsub!('::', '.')
           end
+
+          formated.tr!(':|@', '_')
+          formated
         end
       end
     end
```
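The strings built by `StatSerializer#format` above follow the DogStatsD datagram layout `name:delta|type[|@sample_rate][|#tags]`. A self-contained re-implementation of that format for illustration (`format_stat` is a hypothetical helper, not part of the gem):

```ruby
# Illustrative sketch of the serialized stat format produced above.
def format_stat(name, delta, type, prefix: nil, sample_rate: 1, tags: nil)
  # As in formated_name: '::' becomes '.', reserved chars become '_'
  name = name.to_s.gsub('::', '.').tr(':|@', '_')
  stat = "#{prefix}#{name}:#{delta}|#{type}"
  stat << "|@#{sample_rate}" if sample_rate != 1
  stat << "|##{tags.join(',')}" if tags && !tags.empty?
  stat
end

puts format_stat('page.views', 1, 'c')
# => page.views:1|c
puts format_stat('Cache::Hits', 1, 'c', prefix: 'app.', sample_rate: 0.5, tags: ['env:prod'])
# => app.Cache.Hits:1|c|@0.5|#env:prod
```

Returning one interpolated string per branch, as the new `format` does, avoids the repeated `<<` appends of the v4 implementation that this hunk removes.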
@@ -13,18 +13,25 @@ module Datadog
 
           # Convert to tag list and set
           @global_tags = to_tags_list(global_tags)
+          if @global_tags.any?
+            @global_tags_formatted = @global_tags.join(',')
+          else
+            @global_tags_formatted = nil
+          end
         end
 
         def format(message_tags)
-          tags = if message_tags && message_tags.any?
-            global_tags + to_tags_list(message_tags)
-          else
-            global_tags
-          end
+          if !message_tags || message_tags.empty?
+            return @global_tags_formatted
+          end
 
-          tags.join(',') if tags.any?
+          tags = if @global_tags_formatted
+            [@global_tags_formatted, to_tags_list(message_tags)]
+          else
+            to_tags_list(message_tags)
+          end
+
+          tags.join(',')
         end
 
         attr_reader :global_tags
@@ -51,19 +58,17 @@ module Datadog
 
         def to_tags_list(tags)
           case tags
           when Hash
-            tags.map do |name, value|
-              if value
-                "#{name}:#{value}"
+            tags.map do |name, value|
+              if value
+                escape_tag_content("#{name}:#{value}")
               else
-                "#{name}"
+                escape_tag_content(name)
               end
             end
           when Array
-            tags.dup
+            tags.map { |tag| escape_tag_content(tag) }
           else
             []
-          end.map! do |tag|
-            escape_tag_content(tag)
           end
         end
 
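The tag serializer's job, condensed: turn a Hash or Array of tags into escaped `name:value` strings joined by commas. A self-contained sketch of that normalization (helper names mirror the diff, but this is an illustration, not the gem's public API):

```ruby
# Strip the characters that would corrupt the datagram's tag section.
def escape_tag_content(tag)
  tag.to_s.delete('|,')
end

# Hash entries become "name:value" (or just "name" when the value is nil);
# Array entries are escaped as-is; anything else yields no tags.
def to_tags_list(tags)
  case tags
  when Hash
    tags.map { |name, value| escape_tag_content(value ? "#{name}:#{value}" : name) }
  when Array
    tags.map { |tag| escape_tag_content(tag) }
  else
    []
  end
end

to_tags_list(env: 'prod', debug: nil)  # => ["env:prod", "debug"]
to_tags_list(['region:eu|1'])          # => ["region:eu1"]
```

Precomputing `@global_tags_formatted` in the diff above means this per-message work is skipped entirely when a stat carries no tags of its own.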
@@ -11,10 +11,11 @@ module Datadog
 
     attr_reader :bytes_dropped
     attr_reader :packets_sent
     attr_reader :packets_dropped
-    attr_reader :estimate_max_size
 
-    def initialize(disabled, flush_interval, global_tags: [], transport_type: :udp)
-      @disabled = disabled
+    # Rough estimation of maximum telemetry message size without tags
+    MAX_TELEMETRY_MESSAGE_SIZE_WT_TAGS = 50 # bytes
+
+    def initialize(flush_interval, global_tags: [], transport_type: :udp)
       @flush_interval = flush_interval
       @global_tags = global_tags
       @transport_type = transport_type
@@ -27,15 +28,10 @@ module Datadog
 
        client_version: VERSION,
        client_transport: transport_type,
      ).format(global_tags)
+    end
 
-      # estimate_max_size is an estimation of the maximum size of the
-      # telemetry payload. Since we don't want our packet to go over
-      # 'max_buffer_bytes', we have to adjust with the size of the telemetry
-      # (and any tags used). The telemetry payload size will change depending
-      # on the actual value of metrics: metrics received, packet dropped,
-      # etc. This is why we add a 63bytes margin: 9 bytes for each of the 7
-      # telemetry metrics.
-      @estimate_max_size = disabled ? 0 : flush.length + 9 * 7
+    def would_fit_in?(max_buffer_payload_size)
+      MAX_TELEMETRY_MESSAGE_SIZE_WT_TAGS + serialized_tags.size < max_buffer_payload_size
     end
 
     def reset
@@ -63,27 +59,29 @@ module Datadog
 
      @packets_dropped += packets
    end
 
-    def flush?
+    def should_flush?
      @next_flush_time < now_in_s
    end
 
    def flush
-      %Q(
-datadog.dogstatsd.client.metrics:#{@metrics}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.events:#{@events}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.service_checks:#{@service_checks}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.bytes_sent:#{@bytes_sent}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.bytes_dropped:#{@bytes_dropped}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.packets_sent:#{@packets_sent}|#{COUNTER_TYPE}|##{serialized_tags}
-datadog.dogstatsd.client.packets_dropped:#{@packets_dropped}|#{COUNTER_TYPE}|##{serialized_tags})
+      [
+        sprintf(pattern, 'metrics', @metrics),
+        sprintf(pattern, 'events', @events),
+        sprintf(pattern, 'service_checks', @service_checks),
+        sprintf(pattern, 'bytes_sent', @bytes_sent),
+        sprintf(pattern, 'bytes_dropped', @bytes_dropped),
+        sprintf(pattern, 'packets_sent', @packets_sent),
+        sprintf(pattern, 'packets_dropped', @packets_dropped),
+      ]
    end
 
    private
    attr_reader :serialized_tags
 
+    def pattern
+      @pattern ||= "datadog.dogstatsd.client.%s:%d|#{COUNTER_TYPE}|##{serialized_tags}"
+    end
+
    if Kernel.const_defined?('Process') && Process.respond_to?(:clock_gettime)
      def now_in_s
        Process.clock_gettime(Process::CLOCK_MONOTONIC, :second)
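The `flush` rewrite above trades one large interpolated heredoc-style string for a shared `sprintf` pattern, producing one array element per counter. A minimal sketch of what each serialized telemetry line looks like (the tag string below is made up for illustration):

```ruby
# One '%s'/'%d' pattern shared by all seven telemetry counters.
COUNTER_TYPE = 'c'
serialized_tags = 'client:ruby,client_version:5.1.0'  # illustrative tags
pattern = "datadog.dogstatsd.client.%s:%d|#{COUNTER_TYPE}|##{serialized_tags}"

line = sprintf(pattern, 'metrics', 42)
# => "datadog.dogstatsd.client.metrics:42|c|#client:ruby,client_version:5.1.0"
```

Memoizing the pattern (`@pattern ||= ...`) means the tag suffix is interpolated once per client rather than once per flush.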
@@ -14,11 +14,11 @@ module Datadog
 
       # StatsD port. Defaults to 8125.
       attr_reader :port
 
-      def initialize(host, port, logger, telemetry)
-        super(telemetry)
+      def initialize(host, port, **kwargs)
+        super(**kwargs)
+
         @host = host || ENV.fetch('DD_AGENT_HOST', DEFAULT_HOST)
         @port = port || ENV.fetch('DD_DOGSTATSD_PORT', DEFAULT_PORT).to_i
-        @logger = logger
       end
 
       private
@@ -10,10 +10,10 @@ module Datadog
 
       # DogStatsd unix socket path
       attr_reader :socket_path
 
-      def initialize(socket_path, logger, telemetry)
-        super(telemetry)
+      def initialize(socket_path, **kwargs)
+        super(**kwargs)
+
         @socket_path = socket_path
-        @logger = logger
       end
 
       private
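Both connection classes now accept `**kwargs` and forward them to the parent `Connection`, instead of naming each option (`logger`, etc.) in every subclass signature. A simplified sketch of the pattern (these toy classes are illustrative, not the gem's actual classes):

```ruby
# Parent owns the shared options; subclasses forward them blindly.
class Connection
  attr_reader :logger

  def initialize(logger: nil)
    @logger = logger
  end
end

class UDPConnection < Connection
  attr_reader :host, :port

  def initialize(host, port, **kwargs)
    super(**kwargs) # forward whatever options the parent understands

    @host = host || 'localhost'
    @port = port || 8125
  end
end

conn = UDPConnection.new(nil, nil, logger: :my_logger)
conn.logger # => :my_logger
conn.host   # => "localhost"
```

The payoff: adding a new shared option to `Connection` no longer requires touching the UDP and UDS subclasses.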
metadata
CHANGED
@@ -1,16 +1,17 @@
 --- !ruby/object:Gem::Specification
 name: dogstatsd-ruby
 version: !ruby/object:Gem::Version
-  version: 4.8.1
+  version: 5.1.0
 platform: ruby
 authors:
 - Rein Henrichs
-autorequire:
+- Karim Bogtob
+autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2021-06-17 00:00:00.000000000 Z
 dependencies: []
-description: A Ruby
+description: A Ruby DogStatsd client
 email: code@datadoghq.com
 executables: []
 extensions: []
@@ -21,8 +22,10 @@ files:
 - LICENSE.txt
 - README.md
 - lib/datadog/statsd.rb
-- lib/datadog/statsd/batch.rb
 - lib/datadog/statsd/connection.rb
+- lib/datadog/statsd/forwarder.rb
+- lib/datadog/statsd/message_buffer.rb
+- lib/datadog/statsd/sender.rb
 - lib/datadog/statsd/serialization.rb
 - lib/datadog/statsd/serialization/event_serializer.rb
 - lib/datadog/statsd/serialization/serializer.rb
@@ -38,10 +41,10 @@ licenses:
 - MIT
 metadata:
   bug_tracker_uri: https://github.com/DataDog/dogstatsd-ruby/issues
-  changelog_uri: https://github.com/DataDog/dogstatsd-ruby/blob/v4.8.1/CHANGELOG.md
-  documentation_uri: https://www.rubydoc.info/gems/dogstatsd-ruby/4.8.1
-  source_code_uri: https://github.com/DataDog/dogstatsd-ruby/tree/v4.8.1
-post_install_message:
+  changelog_uri: https://github.com/DataDog/dogstatsd-ruby/blob/v5.1.0/CHANGELOG.md
+  documentation_uri: https://www.rubydoc.info/gems/dogstatsd-ruby/5.1.0
+  source_code_uri: https://github.com/DataDog/dogstatsd-ruby/tree/v5.1.0
+post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -56,8 +59,9 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubyforge_project:
-rubygems_version:
+rubyforge_project:
+rubygems_version: 2.7.10
+signing_key:
 specification_version: 4
 summary: A Ruby DogStatsd client
 test_files: []
data/lib/datadog/statsd/batch.rb
DELETED
@@ -1,56 +0,0 @@
-# frozen_string_literal: true
-
-module Datadog
-  class Statsd
-    class Batch
-      def initialize(connection, max_buffer_bytes)
-        @connection = connection
-        @max_buffer_bytes = max_buffer_bytes
-        @depth = 0
-        reset
-      end
-
-      def open
-        @depth += 1
-
-        yield
-      ensure
-        @depth -= 1
-        flush if !open?
-      end
-
-      def open?
-        @depth > 0
-      end
-
-      def add(message)
-        message_bytes = message.bytesize
-
-        unless @buffer_bytes == 0
-          if @buffer_bytes + 1 + message_bytes >= @max_buffer_bytes
-            flush
-          else
-            @buffer << "\n"
-            @buffer_bytes += 1
-          end
-        end
-
-        @buffer << message
-        @buffer_bytes += message_bytes
-      end
-
-      def flush
-        return if @buffer_bytes == 0
-        @connection.write(@buffer)
-        reset
-      end
-
-      private
-
-      def reset
-        @buffer = String.new
-        @buffer_bytes = 0
-      end
-    end
-  end
-end
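The deleted `Batch` class's core buffering rule, joining messages with `"\n"` and flushing before the buffer would exceed `max_buffer_bytes`, is a role now played by the new `MessageBuffer`. A condensed, self-contained sketch of that rule (the `TinyBuffer` class is hypothetical, not the gem's implementation):

```ruby
# Accumulates newline-separated messages and flushes via a callback
# whenever appending the next message would overflow the byte budget.
class TinyBuffer
  def initialize(max_buffer_bytes, &on_flush)
    @max_buffer_bytes = max_buffer_bytes
    @on_flush = on_flush
    reset
  end

  def add(message)
    message_bytes = message.bytesize

    unless @buffer_bytes == 0
      # +1 accounts for the "\n" separator between messages
      if @buffer_bytes + 1 + message_bytes >= @max_buffer_bytes
        flush
      else
        @buffer << "\n"
        @buffer_bytes += 1
      end
    end

    @buffer << message
    @buffer_bytes += message_bytes
  end

  def flush
    return if @buffer_bytes == 0
    @on_flush.call(@buffer)
    reset
  end

  private

  def reset
    @buffer = String.new
    @buffer_bytes = 0
  end
end

sent = []
buf = TinyBuffer.new(10) { |payload| sent << payload.dup }
buf.add('a:1|c')  # buffered (5 bytes)
buf.add('b:2|c')  # 5 + 1 + 5 >= 10, so "a:1|c" is flushed first
buf.flush
sent # => ["a:1|c", "b:2|c"]
```

With a larger budget the two messages travel in a single newline-joined payload, which is the whole point of batching datagrams.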