statsd 0.0.4

data/LICENSE ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2010 Etsy
+
+ Permission is hereby granted, free of charge, to any person
+ obtaining a copy of this software and associated documentation
+ files (the "Software"), to deal in the Software without
+ restriction, including without limitation the rights to use,
+ copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the
+ Software is furnished to do so, subject to the following
+ conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,137 @@
+ StatsD
+ ======
+
+ A network daemon for aggregating statistics (counters and timers), rolling them up, then sending them to [graphite][graphite].
+
+ We ([Etsy][etsy]) [blogged][blog post] about how it works and why we created it.
+
+
+ Concepts
+ --------
+
+ * *buckets*
+   Each stat is in its own "bucket". They are not predefined anywhere. Buckets can be named anything that will translate to Graphite (periods make folders, etc.)
+
+ * *values*
+   Each stat will have a value. How it is interpreted depends on modifiers.
+
+ * *flush*
+   After the flush interval timeout (default 10 seconds), stats are munged and sent over to Graphite.
+
+ Counting
+ --------
+
+     gorets:1|c
+
+ This is a simple counter. Add 1 to the "gorets" bucket. It stays in memory until the flush interval.
+
+
+ Timing
+ ------
+
+     glork:320|ms
+
+ The glork took 320ms to complete this time. StatsD figures out the 90th percentile, average (mean), and lower and upper bounds for the flush interval.
+
+ Sampling
+ --------
+
+     gorets:1|c|@0.1
+
+ Tells StatsD that this counter is being sent sampled every 1/10th of the time.
+
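+ As a quick mental model, these wire formats map one-to-one onto the Ruby client calls documented below (illustrative; the client only appends the `|@rate` suffix when a packet survives sampling):
+
+     STATSD.increment('gorets')        # sends "gorets:1|c"
+     STATSD.timing('glork', 320)       # sends "glork:320|ms"
+     STATSD.increment('gorets', 0.1)   # sends "gorets:1|c|@0.1" when the 10% sample fires
+
+ On the server side, a counter received at sample rate 0.1 is scaled back up by 1/0.1, so one sampled packet counts as 10 (see `Statsd::Server#receive_data` later in this gem).
+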
+
+ Guts
+ ----
+
+ * [UDP][udp]
+   Client libraries use UDP to send information to the StatsD daemon.
+
+ * [NodeJS][node]
+ * [Graphite][graphite]
+
+ Graphite uses "schemas" to define the different round robin datasets it houses (analogous to RRAs in rrdtool). Here's what Etsy is using for the stats databases:
+
+     [stats]
+     priority = 110
+     pattern = ^stats\..*
+     retentions = 10:2160,60:10080,600:262974
+
+ That translates to (a quick arithmetic check follows the list):
+
+ * 6 hours of 10 second data (what we consider "near-realtime")
+ * 1 week of 1 minute data
+ * 5 years of 10 minute data
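+ Each retention is `seconds_per_point:points`, so the coverage can be sanity-checked directly (my arithmetic, not part of the upstream README):
+
+     10  * 2160    # => 21_600 s      ≈ 6 hours
+     60  * 10080   # => 604_800 s     = 1 week
+     600 * 262974  # => 157_784_400 s ≈ 5 years
+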
+
+ This has been a good tradeoff so far between size-of-file (round robin databases are fixed size) and data we care about. Each "stats" database is about 3.2 megs with these retentions.
+
+ Ruby
+ ----
+ A Ruby version of StatsD.
+
+ ### Installation
+
+     gem install statsd
+
+ ### Configuration
+
+ Edit the example config.yml and em-server.rb to your liking. There are two flush protocols: Graphite and MongoDB (experimental).
+
+ ### Server
+ Run the server:
+
+     ruby em-server.rb
+
+ ### Client
+ In your client code:
+
+     require 'rubygems'
+     require 'statsd'
+     STATSD = Statsd::Client.new('localhost', 8125)
+
+     STATSD.increment('some_counter')               # basic incrementing
+     STATSD.increment('system.nested_counter', 0.1) # incrementing with sampling (10%)
+
+     STATSD.decrement(:some_other_counter)          # basic decrementing using a symbol
+     STATSD.decrement('system.nested_counter', 0.1) # decrementing with sampling (10%)
+
+     STATSD.timing('some_job_time', 20)             # reporting a job that took 20ms
+     STATSD.timing('some_job_time', 20, 0.05)       # reporting a job that took 20ms, sampled at 5%
+
+
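+ A small convenience that often helps here is wrapping a block so its wall-clock time is reported automatically. This helper is hypothetical (not part of the gem); it just builds on `Statsd::Client#timing`, and `do_some_work` stands in for your own code:
+
+     # times a block and reports the elapsed milliseconds (hypothetical helper)
+     def timed(stat, sample_rate = 1)
+       start  = Time.now
+       result = yield
+       STATSD.timing(stat, ((Time.now - start) * 1000).round, sample_rate)
+       result
+     end
+
+     timed('some_job_time') { do_some_work }
+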
+ Inspiration
+ -----------
+
+ StatsD was inspired (heavily) by the project (of the same name) at Flickr. Here's a post where Cal Henderson described it in depth:
+ [Counting and timing](http://code.flickr.com/blog/2008/10/27/counting-timing/). Cal re-released the code recently: [Perl StatsD](https://github.com/iamcal/Flickr-StatsD)
+
+
+ Contribute
+ ----------
+
+ You're interested in contributing to StatsD? *AWESOME*. Here are the basic steps:
+
+ Fork StatsD from here: http://github.com/etsy/statsd
+
+ 1. Clone your fork
+ 2. Hack away
+ 3. If you are adding new functionality, document it in the README
+ 4. If necessary, rebase your commits into logical chunks, without errors
+ 5. Push the branch up to GitHub
+ 6. Send a pull request to the etsy/statsd project
+
+ We'll do our best to get your changes in!
+
+ [graphite]: http://graphite.wikidot.com
+ [etsy]: http://www.etsy.com
+ [blog post]: http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/
+ [node]: http://nodejs.org
+ [udp]: http://enwp.org/udp
+ [eventmachine]: http://rubyeventmachine.com/
+ [mongodb]: http://www.mongodb.org/
+
+ Contributors
+ ------------
+
+ In lieu of a list of contributors, check out the commit history for the project:
+ http://github.com/etsy/statsd/commits/master
data/config.js ADDED
@@ -0,0 +1,39 @@
+ var fs = require('fs')
+   , sys = require('sys')
+
+ var Configurator = function (file) {
+
+   var self = this;
+   this.config = {};
+   this.oldConfig = {};
+
+   this.updateConfig = function () {
+     sys.log('reading config file: ' + file);
+
+     fs.readFile(file, function (err, data) {
+       if (err) { throw err; }
+       // keep the previous config around so listeners can diff old vs. new
+       self.oldConfig = self.config;
+
+       self.config = process.compile('config = ' + data, file);
+       self.emit('configChanged', self.config);
+     });
+   };
+
+   this.updateConfig();
+
+   // re-read the file whenever it is replaced on disk
+   fs.watchFile(file, function (curr, prev) {
+     if (curr.ino != prev.ino) { self.updateConfig(); }
+   });
+ };
+
+ sys.inherits(Configurator, process.EventEmitter);
+
+ exports.Configurator = Configurator;
+
+ exports.configFile = function(file, callbackFunc) {
+   var config = new Configurator(file);
+   config.on('configChanged', function() {
+     callbackFunc(config.config, config.oldConfig);
+   });
+ };
+
data/config.yml ADDED
@@ -0,0 +1,34 @@
+ ---
+ # Flush interval should be your finest retention in seconds
+ flush_interval: 10
+
+ # Graphite
+ graphite_host: localhost
+ graphite_port: 8125  # matches the bundled dummy echo server below; a real carbon listener usually sits on 2003
+
+ # Mongo
+ mongo_host: statsd.example.com
+ mongo_database: statsdb
+
+ # If you change these, you need to delete the capped collections yourself!
+ # Average mongo record size is 152 bytes
+ # 10s or 1min data is transient, so we use MongoDB's capped collections. These collections are fixed in size.
+ # 5min and 1d data is interesting to preserve long-term. These collections are not capped.
+ retentions:
+   - name: stats_per_10s
+     seconds: 10
+     capped: true
+     cap_bytes: 268_435_456 # 2**28
+   - name: stats_per_1min
+     seconds: 60
+     capped: true
+     cap_bytes: 1_073_741_824 # 2**30
+   - name: stats_per_5min
+     seconds: 600
+     cap_bytes: 0
+     capped: false
+   - name: stats_per_day
+     seconds: 86400
+     cap_bytes: 0
+     capped: false
+
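A rough capacity check for the capped collections, using the 152-byte average record size noted in the comments above (my arithmetic, purely illustrative):

    (2**28) / 152          # => ~1_766_000 documents fit in stats_per_10s
    1_766_000 / (6 * 60 * 24)  # => ~204 days, if a single stat wrote one doc per 10 s flush
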
data/em-server.rb ADDED
@@ -0,0 +1,51 @@
+ require 'eventmachine'
+ require 'statsd'
+ require 'statsd/server'
+ require 'statsd/mongo'
+ require 'statsd/graphite'
+
+ require 'yaml'
+ require 'erb'
+ ROOT = File.expand_path(File.dirname(__FILE__))
+ APP_CONFIG = YAML::load(ERB.new(IO.read(File.join(ROOT, 'config.yml'))).result)
+
+ # Set up the retention store: one MongoDB collection per retention level
+ db = Mongo::Connection.new(APP_CONFIG['mongo_host']).db(APP_CONFIG['mongo_database'])
+ APP_CONFIG['retentions'].each do |retention|
+   collection_name = retention['name']
+   unless db.collection_names.include?(collection_name)
+     db.create_collection(collection_name, :capped => retention['capped'], :size => retention['cap_bytes'])
+   end
+   db.collection(collection_name).ensure_index([['ts', Mongo::ASCENDING]])
+ end
+
+ # Start the server
+ Statsd::Mongo.hostname = APP_CONFIG['mongo_host']
+ Statsd::Mongo.database = APP_CONFIG['mongo_database']
+ Statsd::Mongo.retentions = APP_CONFIG['retentions']
+ Statsd::Mongo.flush_interval = APP_CONFIG['flush_interval']
+ EventMachine::run do
+   EventMachine::open_datagram_socket('127.0.0.1', 8125, Statsd::Server)
+   EventMachine::add_periodic_timer(APP_CONFIG['flush_interval']) do
+     counters, timers = Statsd::Server.get_and_clear_stats!
+
+     #
+     # Flush Adapters
+     #
+     # Mongo
+     # EM.defer do
+     #   Statsd::Mongo.flush_stats(counters, timers)
+     # end
+     #
+
+     # Graphite
+     EventMachine.connect APP_CONFIG['graphite_host'], APP_CONFIG['graphite_port'], Statsd::Graphite do |conn|
+       conn.counters = counters
+       conn.timers = timers
+       conn.flush_interval = APP_CONFIG['flush_interval'] # was hard-coded to 10; keep in sync with the timer above
+       conn.flush_stats
+     end
+   end
+
+
+ end
data/exampleConfig.js ADDED
@@ -0,0 +1,8 @@
+ {
+   debug: true
+ , dumpMessages: true
+ , graphitePort: 2003
+ , graphiteHost: "graphite.host.com"
+ , port: 8125
+ }
+
@@ -0,0 +1,21 @@
+ #!/usr/bin/env ruby
+ #
+
+ require 'rubygems'
+ require 'eventmachine'
+
+ # A dummy TCP server that prints and echoes whatever it receives;
+ # useful for watching what StatsD would send to Graphite.
+ module EchoServer
+   def post_init
+     puts "-- someone connected to the server!"
+   end
+
+   def receive_data data
+     puts data
+     send_data ">>> you sent: #{data}"
+   end
+ end
+
+ EventMachine::run {
+   EventMachine::start_server "127.0.0.1", 8125, EchoServer
+   puts 'running dummy graphite echo server on 8125'
+ }
data/lib/statsd/graphite.rb ADDED
@@ -0,0 +1,91 @@
+ require 'benchmark'
+ require 'eventmachine'
+ module Statsd
+   class Graphite < EM::Connection
+     attr_accessor :counters, :timers, :flush_interval
+
+     def initialize(*args)
+       puts args
+       super
+       # stuff here...
+     end
+
+     def post_init
+       # puts counters.size
+       # send_data 'Hello'
+       # puts 'hello'
+       # close_connection_after_writing
+     end
+
+     def receive_data(data)
+       p data
+     end
+
+     # def unbind
+     #   p ' connection totally closed'
+     #   EventMachine::stop_event_loop
+     # end
+
+     def flush_stats
+       print "#{Time.now} Flushing #{counters.count} counters and #{timers.count} timers to Graphite"
+       stat_string = ''
+       time = ::Benchmark.realtime do
+         ts = Time.now.to_i
+         num_stats = 0
+
+         # store counters, normalized to per-second rates over the flush interval
+         counters.each_pair do |key, value|
+           value /= flush_interval
+           message = "stats.#{key} #{value} #{ts}\n"
+           stat_string += message
+           counters[key] = 0
+
+           num_stats += 1
+         end
+
+         # store timers
+         timers.each_pair do |key, values|
+           if (values.length > 0)
+             pct_threshold = 90
+             values.sort!
+             count = values.count
+             min = values.first
+             max = values.last
+
+             mean = min
+             max_at_threshold = max
+
+             if (count > 1)
+               # strip off the top (100 - threshold) percent of values
+               threshold_index = (((100 - pct_threshold) / 100.0) * count).round
+               values = values[0...(count - threshold_index)]
+               max_at_threshold = values.last
+
+               # average the remaining timings
+               sum = values.inject(0) { |s, x| s + x }
+               mean = sum / values.count
+             end
+
+             message = ""
+             message += "stats.timers.#{key}.mean #{mean} #{ts}\n"
+             message += "stats.timers.#{key}.upper #{max} #{ts}\n"
+             message += "stats.timers.#{key}.upper_#{pct_threshold} #{max_at_threshold} #{ts}\n"
+             message += "stats.timers.#{key}.lower #{min} #{ts}\n"
+             message += "stats.timers.#{key}.count #{count} #{ts}\n"
+             stat_string += message
+
+             timers[key] = []
+
+             num_stats += 1
+           end
+         end
+         stat_string += "statsd.numStats #{num_stats} #{ts}\n"
+
+       end
+       # send to graphite
+       send_data stat_string
+       puts "complete. (#{time.round(3)}s)"
+       close_connection_after_writing
+     end
+   end
+ end
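What flush_stats actually writes is Graphite's plaintext protocol, one `<path> <value> <timestamp>` line per metric. For one counter (42 increments over a 10 s interval) and one timer, the payload would look roughly like this (illustrative values):

    stats.gorets 4.2 1302637965
    stats.timers.glork.mean 320 1302637965
    stats.timers.glork.upper 320 1302637965
    stats.timers.glork.upper_90 320 1302637965
    stats.timers.glork.lower 320 1302637965
    stats.timers.glork.count 1 1302637965
    statsd.numStats 2 1302637965
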
data/lib/statsd/mongo.rb ADDED
@@ -0,0 +1,146 @@
+ require 'benchmark'
+ require 'mongo'
+ module Statsd
+   class Mongo
+     class << self
+       attr_accessor :database, :hostname, :retentions, :flush_interval
+     end
+
+     def self.flush_stats(counters, timers)
+       raise 'Invalid retention config' if retentions.empty?
+       print "#{Time.now} Flushing #{counters.count} counters and #{timers.count} timers to MongoDB"
+       stat_string = ''
+       time = ::Benchmark.realtime do
+         docs = []
+         ts = Time.now.to_i
+         num_stats = 0
+         retention = retentions.first # always write at the finest granularity
+         ts_bucket = ts / retention['seconds'].to_i * retention['seconds'].to_i
+
+         # connect to store
+         db = ::Mongo::Connection.new(hostname).db(database)
+         coll = db.collection(retention['name'])
+
+         # store counters
+         counters.each_pair do |key, value|
+           value /= flush_interval
+           doc = { :stat => key, :value => value, :ts => ts_bucket, :type => "counter" }
+           docs.push(doc)
+           message = "stats.#{key} #{value} #{ts}\n"
+           stat_string += message
+           counters[key] = 0
+
+           num_stats += 1
+         end
+
+         # store timers
+         timers.each_pair do |key, values|
+           if (values.length > 0)
+             pct_threshold = 90
+             values.sort!
+             count = values.count
+             min = values.first
+             max = values.last
+
+             mean = min
+             max_at_threshold = max
+
+             if (count > 1)
+               # strip off the top (100 - threshold) percent of values
+               threshold_index = (((100 - pct_threshold) / 100.0) * count).round
+               values = values[0...(count - threshold_index)]
+               max_at_threshold = values.last
+
+               # average the remaining timings
+               sum = values.inject(0) { |s, x| s + x }
+               mean = sum / values.count
+             end
+
+             timers[key] = []
+
+             # Flush Values to Store
+             doc = { :stat => key,
+                     :values => {
+                       :mean => mean,
+                       :max => max,
+                       :min => min,
+                       "upper_#{pct_threshold}".to_sym => max_at_threshold,
+                       :count => count
+                     },
+                     :type => "timer",
+                     :ts => ts_bucket
+                   }
+             docs.push(doc)
+
+             num_stats += 1
+           end
+         end
+         stat_string += "statsd.numStats #{num_stats} #{ts}\n"
+         coll.insert(docs)
+
+         aggregate(ts_bucket)
+       end
+       puts "complete. (#{time.round(3)}s)"
+     end
+
+     # For each coarser granularity of retention,
+     # look up the previous bucket; if there is no data there yet,
+     # fill it by aggregating the finest-grained data.
+     # TODO consider doing this inside Mongo with M/R
+     def self.aggregate(current_bucket)
+       db = ::Mongo::Connection.new(hostname).db(database)
+       retentions.sort_by! { |r| r['seconds'] }
+       fine_stats_collection = db.collection(retentions.first['name']) # Use the finest granularity for now
+       retentions[1..-1].each_with_index do |retention, index|
+         # fine_stats_collection = db.collection(retentions[index]['name'])
+         coarse_stats_collection = db.collection(retention['name'])
+         step = retention['seconds']
+         current_coarse_bucket = current_bucket / step * step - step
+         previous_coarse_bucket = current_coarse_bucket - step
+         # Look up previous bucket
+         if coarse_stats_collection.find({ :ts => previous_coarse_bucket }).count == 0
+           # Aggregate
+           print '.'
+           docs = [] # reset per retention level so earlier batches aren't re-inserted
+           stats_to_aggregate = fine_stats_collection.find(
+             { :ts => { "$gte" => previous_coarse_bucket, "$lt" => current_coarse_bucket } })
+           rows = stats_to_aggregate.to_a
+           count = stats_to_aggregate.count
+           rows.group_by { |r| r["stat"] }.each_pair do |name, stats|
+             case stats.first['type']
+             when 'timer'
+               mean = stats.collect { |stat| stat['values']['mean'] }.inject(0) { |s, x| s + x } / stats.count
+               max = stats.collect { |stat| stat['values']['max'] }.max
+               min = stats.collect { |stat| stat['values']['min'] }.min
+               upper_key = stats.first['values'].keys.find { |k| k =~ /upper_/ }
+               max_at_threshold = stats.collect { |stat| stat['values'][upper_key] }.max
+               total_stats = stats.collect { |stat| stat['values']['count'] }.inject(0) { |s, x| s + x }
+               doc = { :stat => name,
+                       :values => {
+                         :mean => mean,
+                         :max => max,
+                         :min => min,
+                         upper_key.to_sym => max_at_threshold,
+                         :count => total_stats
+                       },
+                       :type => "timer",
+                       :ts => previous_coarse_bucket
+                     }
+             when 'counter'
+               doc = { :stat => name,
+                       :value => stats.collect { |stat| stat['value'] }.inject(0) { |s, x| s + x },
+                       :ts => previous_coarse_bucket,
+                       :type => "counter"
+                     }
+             else
+               raise "unknown type #{stats.first['type']}"
+             end
+             docs.push(doc)
+           end
+           coarse_stats_collection.insert(docs) unless docs.empty?
+         end
+       end
+
+     end
+   end
+ end
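Once data is in Mongo, reading a stat back is a plain range query against the `ts` index that em-server.rb creates. A sketch against the pre-1.0-era mongo gem API this code targets (host, database, and collection names come from config.yml):

    require 'mongo'

    db = Mongo::Connection.new('statsd.example.com').db('statsdb')
    # last hour of 10-second counter samples for one bucket
    cursor = db.collection('stats_per_10s').find(
      :stat => 'gorets', :ts => { '$gte' => Time.now.to_i - 3600 })
    cursor.sort([['ts', Mongo::ASCENDING]]).each do |doc|
      puts "#{doc['ts']} #{doc['value']}"
    end
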
data/lib/statsd/server.rb ADDED
@@ -0,0 +1,41 @@
+ require 'eventmachine'
+ module Statsd
+   module Server #< EM::Connection
+     Version = '0.0.4'
+
+     FLUSH_INTERVAL = 10
+     COUNTERS = {}
+     TIMERS = {}
+     def post_init
+       puts "statsd server started!"
+     end
+     def self.get_and_clear_stats!
+       counters = COUNTERS.dup
+       timers = TIMERS.dup
+       COUNTERS.clear
+       TIMERS.clear
+       [counters, timers]
+     end
+     def receive_data(msg)
+       msg.split("\n").each do |row|
+         # puts row
+         bits = row.split(':')
+         # sanitize the bucket name for Graphite: spaces to underscores, slashes to dashes
+         key = bits.shift.gsub(/\s+/, '_').gsub(/\//, '-').gsub(/[^a-zA-Z_\-0-9\.]/, '')
+         bits.each do |record|
+           sample_rate = 1
+           fields = record.split("|")
+           if (fields[1].strip == "ms")
+             TIMERS[key] ||= []
+             TIMERS[key].push(fields[0].to_i)
+           else
+             if (fields[2] && fields[2].match(/^@([\d\.]+)/))
+               sample_rate = fields[2].match(/^@([\d\.]+)/)[1]
+             end
+             COUNTERS[key] ||= 0
+             # scale sampled counters back up by 1/sample_rate
+             COUNTERS[key] += (fields[0].to_i || 1) * (1.0 / sample_rate.to_f)
+           end
+         end
+       end
+     end
+   end
+ end
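Because receive_data is an ordinary mixin method, the parsing can be exercised without EventMachine at all. A throwaway sketch (FakeConn is hypothetical, it just hosts the mixin):

    require 'statsd/server'

    class FakeConn
      include Statsd::Server
    end

    FakeConn.new.receive_data("gorets:1|c|@0.1")
    Statsd::Server::COUNTERS['gorets']  # => 10.0, one packet at 10% sampling
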
data/lib/statsd.rb ADDED
@@ -0,0 +1,75 @@
+ # encoding: utf-8
+ require 'socket' # for UDPSocket, used in send_stats below
+ module Statsd
+
+   #
+   # Statsd::Client by Ben VandenBos
+   # http://github.com/bvandenbos/statsd-client
+   #
+   class Client
+
+     Version = '0.0.4'
+     attr_accessor :host, :port
+
+     def initialize(host='localhost', port=8125)
+       @host = host
+       @port = port
+     end
+
+     # +stat+ to log timing for
+     # +time+ is the time to log in ms
+     def timing(stat, time, sample_rate = 1)
+       send_stats "#{stat}:#{time}|ms", sample_rate
+     end
+
+     # +stats+ can be a string or an array of strings
+     def increment(stats, sample_rate = 1)
+       update_counter stats, 1, sample_rate
+     end
+
+     # +stats+ can be a string or an array of strings
+     def decrement(stats, sample_rate = 1)
+       update_counter stats, -1, sample_rate
+     end
+
+     # +stats+ can be a string or array of strings
+     def update_counter(stats, delta = 1, sample_rate = 1)
+       stats = Array(stats)
+       send_stats(stats.map { |s| "#{s}:#{delta}|c" }, sample_rate)
+     end
+
+     private
+
+     def send_stats(data, sample_rate = 1)
+       data = Array(data)
+       sampled_data = []
+
+       # Apply sample rate if less than one
+       if sample_rate < 1
+         data.each do |d|
+           if rand <= sample_rate
+             # tag the packet ("|@rate") so the server can scale the count back up
+             sampled_data << "#{d}|@#{sample_rate}"
+           end
+         end
+         data = sampled_data
+       end
+
+       return if data.empty?
+
+       raise "host and port must be set" unless host && port
+
+       begin
+         sock = UDPSocket.new
+         data.each do |d|
+           sock.send(d, 0, host, port)
+         end
+       rescue # silent but deadly
+       ensure
+         sock.close if sock
+       end
+       true
+     end
+
+
+   end
+ end
+
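Since update_counter wraps its argument in Array(), several buckets can be bumped by the same delta in one call (a usage sketch; the bucket names are made up):

    require 'statsd'

    statsd = Statsd::Client.new('localhost', 8125)
    statsd.update_counter(%w(hits misses), 5)  # sends "hits:5|c" and "misses:5|c"
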
data/netcat-example.sh ADDED
@@ -0,0 +1,5 @@
+ nc -w 1 -u 127.0.0.1 8125 << EOF
+ globs:1|c
+ gorets:1|c|@0.1
+ glork:320|ms
+ EOF