lookout-statsd 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/.gitignore ADDED
@@ -0,0 +1,3 @@
1
+ /pkg
2
+ .rvmrc
3
+ Gemfile.lock
data/Gemfile ADDED
@@ -0,0 +1,16 @@
1
+ source 'https://rubygems.org'
2
+
3
+ gem 'rake'
4
+
5
+ group :test do
6
+ if RUBY_VERSION > "1.9"
7
+ gem 'debugger'
8
+ else
9
+ gem 'ruby-debug'
10
+ end
11
+
12
+ gem 'rspec'
13
+ gem 'cucumber'
14
+ end
15
+
16
+ gemspec
data/README.md ADDED
@@ -0,0 +1,126 @@
1
+ # StatsD
2
+
3
+ A network daemon for aggregating statistics (counters and timers), rolling them up, then sending them to [graphite][graphite].
4
+
5
+
6
+ ### Installation
7
+
8
+ gem install lookout-statsd
9
+
10
+ ### Configuration
11
+
12
+ Create a config.yml to your liking.
13
+
14
+ Example config.yml
15
+ ---
16
+ bind: 127.0.0.1
17
+ port: 8125
18
+
19
+ # Flush interval should be your finest retention in seconds
20
+ flush_interval: 10
21
+
22
+ # Graphite
23
+ graphite_host: localhost
24
+ graphite_port: 2003
25
+
26
+
27
+
28
+ ### Server
29
+ Run the server:
30
+
31
+ Flush to Graphite (default):
32
+ statsd -c config.yml
33
+
34
+ ### Client
35
+ In your client code:
36
+
37
+ require 'rubygems'
38
+ require 'statsd'
39
+ STATSD = Statsd::Client.new(:host => 'localhost', :port => 8125)
40
+
41
+ STATSD.increment('some_counter') # basic incrementing
42
+ STATSD.increment('system.nested_counter', 0.1) # incrementing with sampling (10%)
43
+
44
+ STATSD.decrement(:some_other_counter) # basic decrementing using a symbol
45
+ STATSD.decrement('system.nested_counter', 0.1) # decrementing with sampling (10%)
46
+
47
+ STATSD.timing('some_job_time', 20) # reporting job that took 20ms
48
+ STATSD.timing('some_job_time', 20, 0.05) # reporting job that took 20ms with sampling (5%)
49
+
50
+ Concepts
51
+ --------
52
+
53
+ * *buckets*
54
+ Each stat is in its own "bucket". Buckets are not predefined anywhere; they can be named anything that will translate to Graphite (periods make folders, etc.)
55
+
56
+ * *values*
57
+ Each stat has a value. How the value is interpreted depends on its modifiers.
58
+
59
+ * *flush*
60
+ After the flush interval timeout (default 10 seconds), stats are munged and sent over to Graphite.
61
+
62
+ Counting
63
+ --------
64
+
65
+ gorets:1|c
66
+
67
+ This is a simple counter. Add 1 to the "gorets" bucket. It stays in memory until the flush interval.
68
+
69
+
70
+ Timing
71
+ ------
72
+
73
+ glork:320|ms
74
+
75
+ The glork took 320ms to complete this time. StatsD computes the 90th percentile, average (mean), and lower and upper bounds for each flush interval.
76
+
77
+ Sampling
78
+ --------
79
+
80
+ gorets:1|c|@0.1
81
+
82
+ Tells StatsD that this counter is being sent sampled every 1/10th of the time. On flush, StatsD scales the value back up (by 10x here), so the aggregated count stays accurate.
83
+
84
+
85
+ Guts
86
+ ----
87
+
88
+ * [UDP][udp]
89
+ Client libraries use UDP to send information to the StatsD daemon.
90
+
91
+ * [EventMachine][eventmachine]
92
+ * [Graphite][graphite]
93
+
94
+
95
+ Graphite
96
+ --------
97
+
98
+ Graphite uses "schemas" to define the different round robin datasets it houses (analogous to RRAs in rrdtool):
99
+
100
+ [stats]
101
+ priority = 110
102
+ pattern = ^stats\..*
103
+ retentions = 10:2160,60:10080,600:262974
104
+
105
+ That translates to:
106
+
107
+ * 6 hours of 10 second data (what we consider "near-realtime")
108
+ * 1 week of 1 minute data
109
+ * 5 years of 10 minute data
110
+
111
+ This has been a good tradeoff so far between size-of-file (round robin databases are fixed size) and data we care about. Each "stats" database is about 3.2 megs with these retentions.
112
+
113
+
114
+ Inspiration
115
+ -----------
116
+ [Etsy's][etsy] [blog post][blog post].
117
+
118
+ StatsD was inspired (heavily) by the project (of the same name) at Flickr. Here's a post where Cal Henderson described it in depth:
119
+ [Counting and timing](http://code.flickr.com/blog/2008/10/27/counting-timing/). Cal re-released the code recently: [Perl StatsD](https://github.com/iamcal/Flickr-StatsD)
120
+
121
+
122
+ [graphite]: http://graphite.wikidot.com
123
+ [etsy]: http://www.etsy.com
124
+ [blog post]: http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/
125
+ [udp]: http://enwp.org/udp
126
+ [eventmachine]: http://rubyeventmachine.com/
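
The client shipped in lib/statsd.rb also supports gauges and block-form timings, which the README above does not cover. A minimal sketch of both (assuming a StatsD daemon on localhost:8125; the prefix and stat names are illustrative):

    require 'statsd'

    statsd = Statsd::Client.new(:host => 'localhost', :port => 8125, :prefix => 'app')

    # Gauges take a hash of name => value; each entry is sent as "app.<name>:<value>|g".
    statsd.gauge('queue_depth' => 42, 'workers' => 8)

    # Block-form timing: the block runs, its wall-clock duration is reported in ms,
    # and the block's return value is passed through.
    result = statsd.timing('db.slow_query') do
      sleep 0.05   # stand-in for the real work being timed
      :done
    end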
data/Rakefile ADDED
@@ -0,0 +1,10 @@
1
+ require 'bundler'
2
+ require 'rspec/core/rake_task'
3
+
4
+ Bundler::GemHelper.install_tasks
5
+
6
+ task :default => [:test]
7
+
8
+ RSpec::Core::RakeTask.new(:test) do |t|
9
+ t.rspec_opts = '--order random --fail-fast --color --format d'
10
+ end
data/bin/statsd ADDED
@@ -0,0 +1,45 @@
1
+ #!/usr/bin/env ruby
2
+
3
+ $LOAD_PATH.unshift File.expand_path(File.dirname(__FILE__) + '/../lib')
4
+ require 'yaml'
5
+ require 'optparse'
6
+
7
+ begin
8
+ ORIGINAL_ARGV = ARGV.dup
9
+ options = {}
10
+
11
+ parser = OptionParser.new do |opts|
12
+ opts.banner = "Usage: statsd [options]"
13
+
14
+ opts.separator ""
15
+ opts.separator "Options:"
16
+
17
+ opts.on("-cCONFIG", "--config-file CONFIG", "Configuration file") do |x|
18
+ options[:config] = x
19
+ end
20
+
21
+ opts.on("-h", "--help", "Show this message") do
22
+ puts opts
23
+ exit
24
+ end
25
+ end
26
+
27
+ parser.parse!
28
+
29
+ # dispatch
30
+ if !options[:config]
31
+ puts parser.help
32
+ else
33
+ require 'statsd'
34
+ require 'statsd/server'
35
+ Statsd::Server::Daemon.new.run(options)
36
+ end
37
+ rescue Exception => e
38
+ if e.instance_of?(SystemExit)
39
+ raise
40
+ else
41
+ puts 'Uncaught exception'
42
+ puts e.message
43
+ puts e.backtrace.join("\n")
44
+ end
45
+ end
data/config.yml ADDED
@@ -0,0 +1,10 @@
1
+ ---
2
+ bind: 127.0.0.1
3
+ port: 8125
4
+
5
+ # Flush interval should be your finest retention in seconds
6
+ flush_interval: 10
7
+
8
+ # Graphite
9
+ graphite_host: localhost
10
+ graphite_port: 2003
data/lib/statsd.rb ADDED
@@ -0,0 +1,130 @@
1
+ require 'socket'
2
+ require 'resolv'
3
+
4
+ module Statsd
5
+ # initialize singleton instance in an initializer
6
+ def self.create_instance(opts={})
7
+ raise "Already initialized Statsd" if defined? @@instance
8
+ @@instance ||= Client.new(opts)
9
+ end
10
+
11
+ # access singleton instance, which must have been initialized with #create_instance
12
+ def self.instance
13
+ raise "Statsd has not been initialized" unless @@instance
14
+ @@instance
15
+ end
16
+
17
+ class Client
18
+ attr_accessor :host, :port, :prefix
19
+
20
+ def initialize(opts={})
21
+ @host = opts[:host] || 'localhost'
22
+ @port = opts[:port] || 8125
23
+ @prefix = opts[:prefix]
24
+ end
25
+
26
+ def host_ip_addr
27
+ @host_ip_addr ||= Resolv.getaddress(host)
28
+ end
29
+
30
+ def host=(h)
31
+ @host_ip_addr = nil
32
+ @host = h
33
+ end
34
+
35
+ # +stat+ to log timing for
36
+ # +time+ is the time to log in ms
37
+ def timing(stat, time = nil, sample_rate = 1)
38
+ value = nil
39
+ if block_given?
40
+ start_time = Time.now.to_f
41
+ value = yield
42
+ time = ((Time.now.to_f - start_time) * 1000).floor
43
+ end
44
+
45
+ if @prefix
46
+ stat = "#{@prefix}.#{stat}"
47
+ end
48
+
49
+ send_stats("#{stat}:#{time}|ms", sample_rate)
50
+ value
51
+ end
52
+
53
+ # +stats+ can be a string or an array of strings
54
+ def increment(stats, sample_rate = 1)
55
+ if @prefix
56
+ stats = "#{@prefix}.#{stats}"
57
+ end
58
+ update_counter stats, 1, sample_rate
59
+ end
60
+
61
+ # +stats+ can be a string or an array of strings
62
+ def decrement(stats, sample_rate = 1)
63
+ if @prefix
64
+ stats = "#{@prefix}.#{stats}"
65
+ end
66
+ update_counter stats, -1, sample_rate
67
+ end
68
+
69
+ # +stats+ can be a string or array of strings
70
+ def update_counter(stats, delta = 1, sample_rate = 1)
71
+ stats = Array(stats)
72
+ send_stats(stats.map { |s| "#{s}:#{delta}|c" }, sample_rate)
73
+ end
74
+
75
+ # +stats+ is a hash
76
+ def gauge(stats)
77
+ send_stats(stats.map { |s,val|
78
+ if @prefix
79
+ s = "#{@prefix}.#{s}"
80
+ end
81
+ "#{s}:#{val}|g"
82
+ })
83
+ end
84
+
85
+ private
86
+
87
+ def send_stats(data, sample_rate = 1)
88
+ data = Array(data)
89
+ sampled_data = []
90
+
91
+ # Apply sample rate if less than one
92
+ if sample_rate < 1
93
+ data.each do |d|
94
+ if rand <= sample_rate
95
+ sampled_data << "#{d}|@#{sample_rate}" # e.g. "gorets:1|c|@0.1"
96
+ end
97
+ end
98
+ data = sampled_data
99
+ end
100
+
101
+ return if data.empty?
102
+
103
+ raise "host and port must be set" unless host && port
104
+
105
+ begin
106
+ sock = UDPSocket.new
107
+ data.each do |d|
108
+ sock.send(d, 0, host, port)
109
+ end
110
+ rescue # silent but deadly
111
+ ensure
112
+ sock.close if sock
113
+ end
114
+ true
115
+ end
116
+
117
+ end
118
+
119
+ module Rails
120
+ # to monitor all actions for this controller (and its descendants) with Graphite,
121
+ # use "around_filter Statsd::Rails::ActionTimerFilter"
122
+ class ActionTimerFilter
123
+ def self.filter(controller, &block)
124
+ key = "requests.#{controller.controller_name}.#{controller.params[:action]}"
125
+ Statsd.instance.timing(key, &block)
126
+ end
127
+ end
128
+ end
129
+
130
+ end
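
A sketch of the intended wiring for the singleton helpers above in an application boot file (the host, prefix, and stat names are illustrative):

    require 'statsd'

    # In an initializer: create the process-wide client exactly once.
    Statsd.create_instance(:host => 'statsd.internal', :port => 8125, :prefix => 'myapp')

    # Anywhere else in the process:
    Statsd.instance.increment('jobs.started')
    Statsd.instance.timing('jobs.run_time') do
      # ... the work being timed ...
    end

    # In a Rails controller, every action can be timed with the bundled filter:
    #   around_filter Statsd::Rails::ActionTimerFilter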
data/lib/statsd/echos.rb ADDED
@@ -0,0 +1,21 @@
1
+ #!/usr/bin/env ruby
2
+ #
3
+
4
+ require 'rubygems'
5
+ require 'eventmachine'
6
+
7
+ module EchoServer
8
+ def post_init
9
+ puts "-- someone connected to the server!"
10
+ end
11
+
12
+ def receive_data data
13
+ puts data
14
+ send_data ">>> you sent: #{data}"
15
+ end
16
+ end
17
+
18
+ EventMachine::run {
19
+ EventMachine::start_server "127.0.0.1", 2003, EchoServer
20
+ puts 'running dummy graphite echo server on 2003'
21
+ }
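
The echo server above stands in for Graphite during local testing: with it running, set graphite_host to localhost and graphite_port to 2003 in config.yml and the flushed lines are printed to the console. It can also be poked directly over TCP (a sketch, assuming the echo server is already running):

    require 'socket'

    sock = TCPSocket.new('127.0.0.1', 2003)
    sock.write("stats.example 42 #{Time.now.to_i}\n")
    puts sock.gets   # the server echoes the line back, prefixed with ">>> you sent:"
    sock.close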
data/lib/statsd/graphite.rb ADDED
@@ -0,0 +1,70 @@
1
+ require 'benchmark'
2
+ require 'eventmachine'
3
+
4
+
5
+ module Statsd
6
+ class Graphite < EM::Connection
7
+ attr_accessor :counters, :timers, :flush_interval
8
+
9
+ def flush_stats
10
+ puts "#{Time.now} Flushing #{counters.count} counters and #{timers.count} timers to Graphite."
11
+
12
+ stat_string = ''
13
+
14
+ ts = Time.now.to_i
15
+ num_stats = 0
16
+
17
+ # store counters
18
+ counters.each_pair do |key,value|
19
+ message = "#{key} #{value} #{ts}\n"
20
+ stat_string += message
21
+ counters[key] = 0
22
+
23
+ num_stats += 1
24
+ end
25
+
26
+ # store timers
27
+ timers.each_pair do |key, values|
28
+ if (values.length > 0)
29
+ pct_threshold = 90
30
+ values.sort!
31
+ count = values.count
32
+ min = values.first
33
+ max = values.last
34
+
35
+ mean = min
36
+ max_at_threshold = max
37
+
38
+ if (count > 1)
39
+ # average all the timing data
40
+ sum = values.inject( 0 ) { |s,x| s+x }
41
+ mean = sum / values.count
42
+
43
+ # strip off the slowest (100 - threshold) percent of samples
44
+ threshold_index = (((100 - pct_threshold) / 100.0) * count).round
45
+ values = values[0...-threshold_index] if threshold_index > 0
46
+ max_at_threshold = values.last
47
+ end
48
+
49
+ message = ""
50
+ message += "#{key}.mean #{mean} #{ts}\n"
51
+ message += "#{key}.upper #{max} #{ts}\n"
52
+ message += "#{key}.upper_#{pct_threshold} #{max_at_threshold} #{ts}\n"
53
+ message += "#{key}.lower #{min} #{ts}\n"
54
+ message += "#{key}.count #{count} #{ts}\n"
55
+ stat_string += message
56
+
57
+ timers[key] = []
58
+
59
+ num_stats += 1
60
+ end
61
+ end
62
+
63
+ stat_string += "statsd.numStats #{num_stats} #{ts}\n"
64
+
65
+ # send to graphite
66
+ send_data stat_string
67
+ close_connection_after_writing
68
+ end
69
+ end
70
+ end
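
To make the timer roll-up above concrete, a small worked example of the arithmetic flush_stats performs for a single key (the sample values are illustrative):

    # One flush interval's worth of timings (ms) for a single key, e.g. "glork".
    values = [5, 10, 15, 20, 100].sort
    count = values.count                                # 5
    min, max = values.first, values.last                # 5, 100
    mean = values.inject(0) { |s, x| s + x } / count    # 150 / 5 => 30 (integer division)

    # 90th percentile: drop the slowest 10% of samples (here 1 of 5), keep the largest survivor.
    drop = (((100 - 90) / 100.0) * count).round         # 1
    upper_90 = values[0...-drop].last                   # [5, 10, 15, 20].last => 20

    # The lines sent to Graphite for this key would then look like:
    #   glork.mean 30 <timestamp>
    #   glork.upper 100 <timestamp>
    #   glork.upper_90 20 <timestamp>
    #   glork.lower 5 <timestamp>
    #   glork.count 5 <timestamp>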
data/lib/statsd/server.rb ADDED
@@ -0,0 +1,82 @@
1
+ require 'eventmachine'
2
+ require 'yaml'
3
+ require 'erb'
4
+
5
+ require 'statsd/graphite'
6
+
7
+ module Statsd
8
+ module Server
9
+ Version = '0.5.5'
10
+
11
+ FLUSH_INTERVAL = 10
12
+ COUNTERS = {}
13
+ TIMERS = {}
14
+ GAUGES = {}
15
+
16
+ def post_init
17
+ puts "statsd server started!"
18
+ end
19
+
20
+ def self.get_and_clear_stats!
21
+ counters = COUNTERS.dup
22
+ timers = TIMERS.dup
23
+ gauges = GAUGES.dup
24
+ COUNTERS.clear
25
+ TIMERS.clear
26
+ GAUGES.clear
27
+ [counters,timers,gauges]
28
+ end
29
+
30
+ def receive_data(msg)
31
+ msg.split("\n").each do |row|
32
+ bits = row.split(':')
33
+ key = bits.shift.gsub(/\s+/, '_').gsub(/\//, '-').gsub(/[^a-zA-Z_\-0-9\.]/, '')
34
+ bits.each do |record|
35
+ sample_rate = 1
36
+ fields = record.split("|")
37
+ if fields.nil? || fields.count < 2
38
+ next
39
+ end
40
+ if (fields[1].strip == "ms")
41
+ TIMERS[key] ||= []
42
+ TIMERS[key].push(fields[0].to_i)
43
+ elsif (fields[1].strip == "c")
44
+ if (fields[2] && fields[2].match(/^@([\d\.]+)/))
45
+ sample_rate = fields[2].match(/^@([\d\.]+)/)[1]
46
+ end
47
+ COUNTERS[key] ||= 0
48
+ COUNTERS[key] += (fields[0].to_i || 1) * (1.0 / sample_rate.to_f)
49
+ elsif (fields[1].strip == "g")
50
+ GAUGES[key] ||= (fields[0].to_i || 0)
51
+ else
52
+ puts "Invalid statistic #{fields.inspect} received; ignoring"
53
+ end
54
+ end
55
+ end
56
+ end
57
+
58
+ class Daemon
59
+ def run(options)
60
+ config = YAML::load(ERB.new(IO.read(options[:config])).result)
61
+
62
+ EventMachine::run do
63
+ EventMachine::open_datagram_socket(config['bind'], config['port'], Statsd::Server)
64
+ puts "Listening on #{config['bind']}:#{config['port']}"
65
+
66
+ # Periodically Flush
67
+ EventMachine::add_periodic_timer(config['flush_interval']) do
68
+ counters,timers = Statsd::Server.get_and_clear_stats!
69
+
70
+ EventMachine.connect config['graphite_host'], config['graphite_port'], Statsd::Graphite do |conn|
71
+ conn.counters = counters
72
+ conn.timers = timers
73
+ conn.flush_interval = config['flush_interval']
74
+ conn.flush_stats
75
+ end
76
+ end
77
+ end
78
+ end
79
+ end
80
+
81
+ end
82
+ end
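
A sketch of how receive_data above decodes a single sampled counter packet (the packet matches the README's sampling example; the real code also sanitises the key with the gsub calls shown):

    row = "gorets:1|c|@0.1"

    bits   = row.split(':')              # ["gorets", "1|c|@0.1"]
    key    = bits.shift                  # "gorets"
    fields = bits.first.split('|')       # ["1", "c", "@0.1"]

    sample_rate = fields[2][1..-1].to_f                  # 0.1
    increment   = fields[0].to_i * (1.0 / sample_rate)   # 1 * 10.0 => 10.0

    # COUNTERS["gorets"] grows by 10, compensating for the 10% client-side sampling.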
data/lib/statsd/test.rb ADDED
@@ -0,0 +1,3 @@
1
+ require './graphite'
2
+ counters = timers = []
3
+ #Statsd::Graphite.flush_stats(counters,timers)
data/netcat-example.sh ADDED
@@ -0,0 +1,5 @@
1
+ nc -w 1 -u 127.0.0.1 8125 << EOF
2
+ globs:1|c
3
+ gorets:1|c|@0.1
4
+ glork:320|ms
5
+ EOF
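
The same three packets can be sent from Ruby with a plain UDPSocket, which is essentially all the client library does under the hood (a sketch, assuming the daemon is listening on 127.0.0.1:8125):

    require 'socket'

    sock = UDPSocket.new
    ['globs:1|c', 'gorets:1|c|@0.1', 'glork:320|ms'].each do |packet|
      sock.send(packet, 0, '127.0.0.1', 8125)
    end
    sock.close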
data/spec/spec_helper.rb ADDED
@@ -0,0 +1,4 @@
1
+ $:.unshift File.expand_path(File.dirname(__FILE__) + '/../lib')
2
+
3
+ require 'statsd'
4
+ require 'statsd/server'
data/spec/statsd/server_spec.rb ADDED
@@ -0,0 +1,16 @@
1
+ require 'spec_helper'
2
+
3
+
4
+ describe Statsd::Server do
5
+ include Statsd::Server
6
+
7
+ describe :receive_data do
8
+ it 'should not vomit on bad data' do
9
+ bad_data = "dev.rwygand.app.flexd.exception.no action responded to index. actions: authenticate, authentication_request, authorization, bubble_stacktrace?, decode_credentials, encode_credentials, not_found, and user_name_and_password:1|c"
10
+
11
+ expect {
12
+ receive_data(bad_data)
13
+ }.not_to raise_error
14
+ end
15
+ end
16
+ end
data/spec/statsd_spec.rb ADDED
@@ -0,0 +1,130 @@
1
+ require 'spec_helper'
2
+
3
+ describe Statsd do
4
+ describe '#create_instance' do
5
+ after(:each) do
6
+ Statsd.send(:remove_class_variable, :@@instance)
7
+ end
8
+
9
+ it 'should create an instance' do
10
+ Statsd.create_instance
11
+ Statsd.instance.should_not be nil
12
+ end
13
+
14
+ it 'should raise if called twice' do
15
+ Statsd.create_instance
16
+ expect { Statsd.create_instance }.to raise_error
17
+ end
18
+ end
19
+
20
+ describe '#instance' do
21
+ it 'should raise if not created' do
22
+ expect { Statsd.instance }.to raise_error
23
+ end
24
+ end
25
+ end
26
+
27
+ describe Statsd::Client do
28
+ describe '#initialize' do
29
+ it 'should work without arguments' do
30
+ c = Statsd::Client.new
31
+ c.should_not be nil
32
+ end
33
+
34
+ it 'should accept a :host keyword argument' do
35
+ host = 'zombo.com'
36
+ c = Statsd::Client.new(:host => host)
37
+ c.host.should match(host)
38
+ end
39
+
40
+ it 'should accept a :port keyword argument' do
41
+ port = 1337
42
+ c = Statsd::Client.new(:port => port)
43
+ c.port.should == port
44
+ end
45
+
46
+ it 'should accept a :prefix keyword argument' do
47
+ prefix = 'dev'
48
+ c = Statsd::Client.new(:prefix => prefix)
49
+ c.prefix.should match(prefix)
50
+ end
51
+ end
52
+
53
+ describe '#timing' do
54
+ let(:c) { Statsd::Client.new }
55
+
56
+ it 'should pass the sample rate along' do
57
+ sample = 10
58
+ c.should_receive(:send_stats).with(anything(), sample)
59
+ c.timing('foo', 1, sample)
60
+ end
61
+
62
+ it 'should use the right stat name' do
63
+ c.should_receive(:send_stats).with('foo:1|ms', anything())
64
+ c.timing('foo', 1)
65
+ end
66
+
67
+ it 'should prefix its stats if it has a prefix' do
68
+ c.should_receive(:send_stats).with('dev.foo:1|ms', anything())
69
+ c.prefix = 'dev'
70
+ c.timing('foo', 1)
71
+ end
72
+
73
+ it 'should wrap a block correctly' do
74
+ # Pretend our block took one second
75
+ c.should_receive(:send_stats).with('foo:1000|ms', anything())
76
+ Time.stub_chain(:now, :to_f).and_return(1, 2)
77
+
78
+ c.timing('foo') do
79
+ true.should be true
80
+ end
81
+ end
82
+
83
+ it 'should return the return value from the block' do
84
+ # Pretend our block took one second
85
+ c.should_receive(:send_stats).with('foo:1000|ms', anything())
86
+ Time.stub_chain(:now, :to_f).and_return(1, 2)
87
+
88
+ value = c.timing('foo') { 1337 }
89
+ value.should == 1337
90
+ end
91
+ end
92
+
93
+ describe '#increment' do
94
+ let(:c) { Statsd::Client.new }
95
+
96
+ it 'should prepend the prefix if it has one' do
97
+ c.prefix = 'dev'
98
+ c.should_receive(:update_counter).with('dev.foo', anything(), anything())
99
+ c.increment('foo')
100
+ end
101
+ end
102
+
103
+ describe '#decrement' do
104
+ let(:c) { Statsd::Client.new }
105
+
106
+ it 'should prepend the prefix if it has one' do
107
+ c.prefix = 'dev'
108
+ c.should_receive(:update_counter).with('dev.foo', anything(), anything())
109
+ c.decrement('foo')
110
+ end
111
+ end
112
+
113
+ describe '#gauge' do
114
+ let(:c) { Statsd::Client.new }
115
+
116
+ it 'should encode the values correctly' do
117
+ c.should_receive(:send_stats).with do |array|
118
+ array.should include('foo:1|g')
119
+ array.should include('bar:2|g')
120
+ end
121
+ c.gauge('foo' => 1, 'bar' => 2)
122
+ end
123
+
124
+ it 'should prepend the prefix if it has one' do
125
+ c.prefix = 'dev'
126
+ c.should_receive(:send_stats).with(['dev.foo:1|g'])
127
+ c.gauge('foo' => 1)
128
+ end
129
+ end
130
+ end
data/stats.rb ADDED
@@ -0,0 +1,28 @@
1
+ require 'eventmachine'
2
+ require 'statsd'
3
+ require 'statsd/server'
4
+ require 'statsd/graphite'
5
+
6
+ require 'yaml'
7
+ require 'erb'
8
+
9
+ ROOT = File.expand_path(File.dirname(__FILE__))
10
+ APP_CONFIG = YAML::load(ERB.new(IO.read(File.join(ROOT,'config.yml'))).result)
11
+
12
+ # Start the server
13
+ EventMachine::run do
14
+ EventMachine::open_datagram_socket('127.0.0.1', 8125, Statsd::Server)
15
+ EventMachine::add_periodic_timer(APP_CONFIG['flush_interval']) do
16
+ counters,timers = Statsd::Server.get_and_clear_stats!
17
+
18
+ # Graphite
19
+ EventMachine.connect APP_CONFIG['graphite_host'], APP_CONFIG['graphite_port'], Statsd::Graphite do |conn|
20
+ conn.counters = counters
21
+ conn.timers = timers
22
+ conn.flush_interval = 10
23
+ conn.flush_stats
24
+ end
25
+ end
26
+
27
+
28
+ end
data/statsd.gemspec ADDED
@@ -0,0 +1,24 @@
1
+ # -*- encoding: utf-8 -*-
2
+
3
+ Gem::Specification.new do |s|
4
+ s.name = "lookout-statsd"
5
+ s.version = "0.7.#{ENV['BUILD_NUMBER'] || 'dev'}"
6
+ s.platform = Gem::Platform::RUBY
7
+
8
+ s.authors = ['R. Tyler Croy', 'Andrew Coldham', 'Ben VandenBos']
9
+ s.email = ['rtyler.croy@mylookout.com']
10
+ s.homepage = "https://github.com/lookout/statsd"
11
+
12
+ s.summary = "Ruby version of statsd."
13
+ s.description = "A network daemon for aggregating statistics (counters and timers), rolling them up, then sending them to graphite."
14
+
15
+ s.required_rubygems_version = ">= 1.3.6"
16
+
17
+ s.add_dependency "eventmachine", ">= 0.12.10"
18
+ s.add_dependency "erubis", ">= 2.6.6"
19
+
20
+ s.files = `git ls-files`.split("\n")
21
+ s.executables = `git ls-files`.split("\n").map{|f| f =~ /^bin\/(.*)/ ? $1 : nil}.compact
22
+ s.require_path = 'lib'
23
+ end
24
+
metadata ADDED
@@ -0,0 +1,117 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: lookout-statsd
3
+ version: !ruby/object:Gem::Version
4
+ hash: 3
5
+ prerelease:
6
+ segments:
7
+ - 0
8
+ - 7
9
+ - 0
10
+ version: 0.7.0
11
+ platform: ruby
12
+ authors:
13
+ - R. Tyler Croy
14
+ - Andrew Coldham
15
+ - Ben VandenBos
16
+ autorequire:
17
+ bindir: bin
18
+ cert_chain: []
19
+
20
+ date: 2013-04-08 00:00:00 Z
21
+ dependencies:
22
+ - !ruby/object:Gem::Dependency
23
+ version_requirements: &id001 !ruby/object:Gem::Requirement
24
+ none: false
25
+ requirements:
26
+ - - ">="
27
+ - !ruby/object:Gem::Version
28
+ hash: 59
29
+ segments:
30
+ - 0
31
+ - 12
32
+ - 10
33
+ version: 0.12.10
34
+ prerelease: false
35
+ type: :runtime
36
+ requirement: *id001
37
+ name: eventmachine
38
+ - !ruby/object:Gem::Dependency
39
+ version_requirements: &id002 !ruby/object:Gem::Requirement
40
+ none: false
41
+ requirements:
42
+ - - ">="
43
+ - !ruby/object:Gem::Version
44
+ hash: 27
45
+ segments:
46
+ - 2
47
+ - 6
48
+ - 6
49
+ version: 2.6.6
50
+ prerelease: false
51
+ type: :runtime
52
+ requirement: *id002
53
+ name: erubis
54
+ description: A network daemon for aggregating statistics (counters and timers), rolling them up, then sending them to graphite.
55
+ email:
56
+ - rtyler.croy@mylookout.com
57
+ executables:
58
+ - statsd
59
+ extensions: []
60
+
61
+ extra_rdoc_files: []
62
+
63
+ files:
64
+ - .gitignore
65
+ - Gemfile
66
+ - README.md
67
+ - Rakefile
68
+ - bin/statsd
69
+ - config.yml
70
+ - lib/statsd.rb
71
+ - lib/statsd/echos.rb
72
+ - lib/statsd/graphite.rb
73
+ - lib/statsd/server.rb
74
+ - lib/statsd/test.rb
75
+ - netcat-example.sh
76
+ - spec/spec_helper.rb
77
+ - spec/statsd/server_spec.rb
78
+ - spec/statsd_spec.rb
79
+ - stats.rb
80
+ - statsd.gemspec
81
+ homepage: https://github.com/lookout/statsd
82
+ licenses: []
83
+
84
+ post_install_message:
85
+ rdoc_options: []
86
+
87
+ require_paths:
88
+ - lib
89
+ required_ruby_version: !ruby/object:Gem::Requirement
90
+ none: false
91
+ requirements:
92
+ - - ">="
93
+ - !ruby/object:Gem::Version
94
+ hash: 3
95
+ segments:
96
+ - 0
97
+ version: "0"
98
+ required_rubygems_version: !ruby/object:Gem::Requirement
99
+ none: false
100
+ requirements:
101
+ - - ">="
102
+ - !ruby/object:Gem::Version
103
+ hash: 23
104
+ segments:
105
+ - 1
106
+ - 3
107
+ - 6
108
+ version: 1.3.6
109
+ requirements: []
110
+
111
+ rubyforge_project:
112
+ rubygems_version: 1.8.25
113
+ signing_key:
114
+ specification_version: 3
115
+ summary: Ruby version of statsd.
116
+ test_files: []
117
+