pingdom-to-graphite 0.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/.gitignore ADDED
@@ -0,0 +1,21 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ coverage
+ InstalledFiles
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+
+ # YARD artifacts
+ .yardoc
+ _yardoc
+ doc/
+
+ # No Gemfile.lock for gem projects
+ Gemfile.lock
data/Gemfile ADDED
@@ -0,0 +1,11 @@
+ # A sample Gemfile
+ source "https://rubygems.org"
+
+ gemspec
+
+ group :development do
+   gem "rake"
+   gem "aruba"
+   gem "cucumber"
+   gem "ronn"
+ end
data/README.md ADDED
@@ -0,0 +1,96 @@
+ # pingdom-to-graphite
+
+ A tool for copying metrics from Pingdom to graphite. Pingdom, although allowing
+ access to effectively all your metrics through the API, does have some limits in
+ place to prevent abuse. This script tries to be mindful of that, although it does
+ provide a "backfill" option if you care to burn up your daily API limit in one
+ fell swoop.
+
+ ## Usage
+
+ The utility itself will provide detailed help for all the commands if you just
+ invoke it by itself:
+
+     pingdom-to-graphite
+
+ ### For the Impatient
+
+ Ok, so you don't like reading command line help. Here's how to get up and
+ running quickly:
+
+     pingdom-to-graphite init
+
+ Will place a sample config file into the default location `~/.p2g/config.json`.
+ Don't worry, scripters: this location can be overridden with the `-c` switch. Drop
+ your Pingdom credentials and graphite settings in there to enable the script
+ to do any actual work.
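
For reference, the config that `init` writes has this shape (taken directly from the defaults in `lib/pingdom-to-graphite/cli.rb`; every value is a placeholder to replace with your own):

    {
      "pingdom": {
        "username": "YOUR_USERNAME",
        "password": "YOUR_PASSWORD",
        "key": "YOUR_API_KEY",
        "checks": ["CHECK_ID_1","CHECK_ID_2"]
      },
      "graphite": {
        "host": "YOUR_SERVER",
        "port": "2003",
        "prefix": "pingdom"
      }
    }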
+
+     pingdom-to-graphite init_checks
+
+ This will pre-fill the pingdom->checks setting in your config file with a list
+ of all your available check ids. Since you're curious:
+
+     pingdom-to-graphite list
+
+ Will list them all, as well as their current status. Ok, back to business:
+
+     pingdom-to-graphite update
+
+ Will pull the 100 most recent results for each check specified in your config
+ file and create a `~/.p2g/state.json` file storing a few key timestamps about
+ what data you've successfully sent to graphite. Similar to the config file, this
+ location can be overridden with the `-s` switch.
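
The state file itself is tiny: per `pull_and_push` in cli.rb it records, for each check id, the newest and oldest result timestamps already shipped. Roughly like this (the check id and timestamps here are invented):

    {
      "123456": {
        "latest_ts": 1344556800,
        "earliest_ts": 1344470400
      }
    }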
+
+     pingdom-to-graphite update
+
+ *"Hey, that's the same command!"* you say. Indeed it is, and it will run through your
+ checks, picking up where we left off from the last update. If you stopped to check
+ graphite after that last step you might even see a few new data points. The idea
+ is just to schedule this job via cron and let your metrics roll in on a fixed
+ schedule. How often? Well, you can't run any checks at better than one-minute
+ resolution, so going any more frequently than that is wasteful. The actual
+ limiting factor is that updating each metric is a separate API call, so the more
+ checks you want to pipe to graphite, the less frequently you'll be able to run
+ the script. Want some numbers?
+
+     pingdom-to-graphite advice
+
+ Will give you the rough API numbers you'd consume per day given your number of
+ monitored checks, in five-minute increments until it finds something that works.
+ However, keep in mind that you're not using this to do alerting (right? I mean,
+ that's what you're paying Pingdom for) and that graphite doesn't have any
+ issues backfilling content, so picking the most aggressive value you can isn't
+ necessarily the best approach! Speaking of the API, how about some more historical
+ data? Because, you know, charts and stuff! Got that covered, but you're going to have
+ to pick the specific check you'd like some more data for.
+
+     pingdom-to-graphite backfill CHECK_ID
+
+ Will use up a number of your remaining API calls to get historical data for that specific
+ check. How many? Well, it will ask, and you can tell it. You can also specify with the
+ `-l` flag.
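
For a rough feel for the arithmetic behind `advice` (cli.rb budgets 2 + number-of-checks API calls per run, that is, the probe and check list lookups plus one results call per check, against a 48,000/day limit), here is a back-of-the-envelope sketch assuming 20 monitored checks:

    checks        = 20            # assumed; substitute your own count
    calls_per_run = 2 + checks    # probe list + check list + one results call per check
    runs_per_day  = 60 * 24 / 5   # running `update` every 5 minutes
    calls_per_run * runs_per_day  # => 6336, comfortably under 48000/day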
+
+ ## License
+
+ The MIT License
+
+ Copyright (c) 2012 Lewis J. Goettner, III
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
+
data/Rakefile ADDED
@@ -0,0 +1,3 @@
+ task :console do
+   exec %(irb -Ilib)
+ end
data/bin/pingdom-to-graphite ADDED
@@ -0,0 +1,10 @@
+ #!/usr/bin/env ruby
+
+ $:.unshift File.expand_path("../../lib", __FILE__)
+
+ require 'rubygems'
+ require 'bundler/setup'
+
+ require "pingdom-to-graphite/cli"
+
+ PingdomToGraphite::CLI.start
data/features/copy.feature ADDED
@@ -0,0 +1,66 @@
+ Feature: Copy
+   In order to populate graphite
+   As a valid user of the Pingdom API
+   I should be able to copy metrics from Pingdom
+
+   Scenario: No argument
+     When I run `pingdom-to-graphite`
+     Then the output should contain:
+       """
+       Tasks:
+       """
+
+   Scenario: Initialize config file
+     When I run `pingdom-to-graphite init -c configtest.json`
+     Then the file "configtest.json" should contain "pingdom"
+
+
+   Scenario: Initialize the checks data
+     When I run `pingdom-to-graphite init_checks`
+     Then the exit status should be 0
+     Then the output should match /Added [\d]+ checks to/
+
+   Scenario: Get a list of your checks
+     When I run `pingdom-to-graphite list`
+     Then the exit status should be 0
+     Then the output should match /^.* \([\d]+\) - (up|down)/
+
+   Scenario: Get some advice on API limits
+     When I run `pingdom-to-graphite advice`
+     Then the exit status should be 0
+     Then the output should match /- WORKS$/
+
+   Scenario: Get the list of pingdom probes
+     When I run `pingdom-to-graphite probes`
+     Then the exit status should be 0
+     Then the output should match /^[A-Z]{2} - .*$/
+
+   Scenario: Get the results for a specific check
+     When I run `pingdom-to-graphite results` with a valid check id
+     Then the exit status should be 0
+     Then the output should match /^[\d-]{10} [\d:]{8} (-|\+)[\d]{4}: (up|down) - [\d]+ms \(.*\)$/
+
+   @graphite
+   Scenario: Update the current checks without a state file
+     Given A mock graphite server is running
+     When I run `pingdom-to-graphite update -s teststate.json -c ../mockgraphite.json`
+     Then the exit status should be 0
+     Then graphite should have received results
+     Then the file "teststate.json" should contain "latest_ts"
+     Then the output should match /[\d]+ metrics sent to graphite for check [\d]+\./
+
+   Scenario: Try to backfill a check that's never been updated
+     When I run `pingdom-to-graphite backfill 1`
+     Then the exit status should be 1
+     Then the output should contain:
+       """
+       You can't backfill a check you've never run an update on.
+       """
+
+   @graphite @copystate
+   Scenario: Backfill a specific check
+     Given A mock graphite server is running
+     When I run `pingdom-to-graphite backfill -c ../mockgraphite.json -s ../copiedstate.json -l 1` with a valid check id
+     Then the exit status should be 0
+     Then graphite should have received results
+     Then the output should match /[\d]+ metrics sent to graphite for check [\d]+\./
data/features/step_definitions/copy_steps.rb ADDED
@@ -0,0 +1,30 @@
+ require 'json'
+
+ Given /^A mock graphite server is running$/ do
+   @mock_received = Queue.new
+   @lew = "Goettner"
+   @mock_socket = TCPServer.new 20003
+   @server = Thread.new do
+     loop do
+       client = @mock_socket.accept
+       data = ""
+       recv_length = 100
+       while (tmp = client.recv(recv_length))
+         data += tmp
+         break if tmp.empty?
+       end
+       @mock_received << data
+     end
+   end
+ end
+
+ When /^I run `([^`]*)` with a valid check id$/ do |cmd|
+   config_file = File.expand_path('~/.p2g/config.json')
+   @config = JSON::parse(File.read(config_file));
+   step "I run `#{cmd} #{@config["pingdom"]["checks"][0]}`"
+ end
+
+ Then /^graphite should have received results$/ do
+   last_entry = @mock_received.pop
+   last_entry.should match /^[^ ]* [\d]+ [\d]{10,}$/
+ end
data/features/support/env.rb ADDED
@@ -0,0 +1,41 @@
+ require "aruba/cucumber"
+ require "socket"
+ require "thread"
+
+ Before do
+   @aruba_timeout_seconds = 60
+ end
+
+ Before("@graphite") do
+   # Create a hybrid config (real pingdom / mock graphite)
+   config_file = File.expand_path('~/.p2g/config.json')
+   @config = JSON::parse(File.read(config_file))
+   @config["pingdom"]["checks"] = [ @config["pingdom"]["checks"][0] ]
+   @config["graphite"] = {
+     "host" => "localhost",
+     "port" => "20003",
+     "prefix" => "pingdom"
+   }
+   File.open('tmp/mockgraphite.json',"w",0600) do |f|
+     f.write(JSON.generate(@config))
+   end
+ end
+
+ After("@graphite") do
+   File.delete('tmp/mockgraphite.json')
+   @mock_socket.close
+ end
+
+
+ Before("@copystate") do
+   # Copy the real statefile (so we don't mess it up)
+   state_file = File.expand_path('~/.p2g/state.json')
+   File.open('tmp/copiedstate.json',"w",0600) do |f|
+     f.write(File.read(state_file))
+   end
+ end
+
+ After("@copystate") do
+   File.delete('tmp/copiedstate.json')
+ end
+
data/lib/pingdom-to-graphite.rb ADDED
@@ -0,0 +1,3 @@
+ module PingdomToGraphite
+
+ end
data/lib/pingdom-to-graphite/cli.rb ADDED
@@ -0,0 +1,306 @@
+ require "pingdom-to-graphite"
+ require "pingdom-to-graphite/data-pull"
+ require "pingdom-to-graphite/data-push"
+ require "graphite-metric"
+ require "thor"
+ require "json"
+ require "fileutils"
+ require "logger"
+
+ class PingdomToGraphite::CLI < Thor
+
+   class_option :config,
+     :desc => "The path to your config file.",
+     :type => :string,
+     :aliases => "-c",
+     :default => "~/.p2g/config.json"
+
+   class_option :state,
+     :desc => "The path to your state file.",
+     :type => :string,
+     :aliases => "-s",
+     :default => "~/.p2g/state.json"
+
+   class_option :verbose, :type => :boolean, :aliases => "-v", :desc => "Increase verbosity."
+
+   desc "init", "Create an empty config JSON file if missing."
+   def init
+     config_path = File.expand_path(options.config)
+
+     if File.exists?(config_path)
+       error("Config file already exists. (#{options.config})")
+     else
+
+       # Make sure we have a directory to put the config in
+       unless File.directory?(File.dirname(config_path))
+         FileUtils.mkdir_p(File.dirname(config_path), :mode => 0700)
+       end
+
+       # A nice little defaults file.
+       settings = {
+         "pingdom" => {
+           "username" => "YOUR_USERNAME",
+           "password" => "YOUR_PASSWORD",
+           "key" => "YOUR_API_KEY",
+           "checks" => ["CHECK_ID_1","CHECK_ID_2"]
+         },
+         "graphite" => {
+           "host" => "YOUR_SERVER",
+           "port" => "2003",
+           "prefix" => "pingdom"
+         }
+       }
+       File.open(File.expand_path(options.config),"w",0600) do |f|
+         f.write(JSON.pretty_generate(settings))
+       end
+
+     end
+
+   end
+
+   desc "init_checks", "Add all your checks to your config. (Will overwrite existing list.)"
+   def init_checks
+     load_config!
+     load_check_list!
+     @config["pingdom"]["checks"] = @checks.keys
+     File.open(File.expand_path(options.config),"w",0600) do |f|
+       f.write(JSON.pretty_generate(@config))
+     end
+     puts "Added #{@checks.count} checks to #{options.config}"
+   end
+
+   desc "advice", "Gives you some advice about update frequency."
+   def advice
+     load_config!
+     total_checks = @config["pingdom"]["checks"].count
+     calls_per_check = 2 + (total_checks)
+     puts "You have #{total_checks} monitored checks. Given a 48000/day API limit:"
+     every_minutes = 5
+     begin
+       daily_calls = 60*24 / every_minutes * calls_per_check
+       puts "Every #{every_minutes} Minutes: #{daily_calls}/day - #{daily_calls < 48000 ? "WORKS" : "won't work"}"
+       every_minutes += 5
+     end until (daily_calls < 48000)
+   end
+
+   desc "list", "List all your available Pingdom checks."
+   def list
+     load_check_list!
+     @checks.each do |check_id, check|
+       puts "#{check.name} (#{check.id}) - #{check.status}"
+     end
+   end
+
+   desc "probes", "List all the pingdom probes."
+   def probes
+     load_probe_list!
+     @probes.each do |probe_id, probe|
+       puts "#{probe.countryiso} - #{probe.city}"
+     end
+   end
+
+   desc "results [CHECK_ID]", "List results for a specific check. The Pingdom API limits results to 100."
+
+   method_option :start_time,
+     :desc => "Beginning time for the checks, in any format supported by ruby's Time.parse(). Default will give you the last hour of checks.",
+     :type => :string,
+     :aliases => "-b"
+
+   method_option :end_time,
+     :desc => "End time for the checks, in any format supported by ruby's Time.parse(). Default to right now.",
+     :aliases => "-e"
+
+   def results(check_id)
+     load_config!
+     load_probe_list!
+     start_time = (options.start_time) ? DateTime.parse(options.start_time).to_i : 1.hour.ago.to_i
+     end_time = (options.end_time) ? DateTime.parse(options.end_time).to_i : DateTime.now.to_i
+     if end_time - start_time > 2764800
+       error("Date range must be less than 32 days.")
+     end
+     datapull = get_datapull
+     datapull.results(check_id, start_time, end_time).each do |result|
+       #<Pingdom::Result probeid: 33 time: 1343945109 status: "up" responsetime: 1103 statusdesc: "OK" statusdesclong: "OK">
+       puts "#{Time.at(result.time)}: #{result.status} - #{result.responsetime}ms (#{@probes[result.probeid].name})"
+     end
+     puts datapull.friendly_limit
+   end
+
+
+   desc "update", "Attempt to bring the checks defined in your config file up to date in graphite. If a check has never been polled before it will start with the last 100 results."
+   def update
+     load_config!
+     load_state!
+     load_probe_list!
+     load_check_list!
+     datapull = get_datapull
+
+     @config["pingdom"]["checks"].each do |check_id|
+       puts "Check #{check_id}: " if options.verbose
+       # Check the state file
+       check_state = @state.has_key?(check_id.to_s) ? @state[check_id.to_s] : Hash.new
+       latest_ts = check_state.has_key?("latest_ts") ? check_state["latest_ts"] : 1.hour.ago.to_i
+       new_records = pull_and_push(check_id, latest_ts)
+       puts "#{new_records} metrics sent to graphite for check #{check_id}."
+     end
+     puts datapull.friendly_limit
+   end
+
+   desc 'backfill [CHECK_ID]', "Work backwards from the oldest result sent to graphite, grabbing more historical data."
+
+   method_option :limit,
+     :desc => "Number of API calls to use while backfilling. If you don't provide one, I'll ask!",
+     :type => :numeric,
+     :aliases => "-l"
+
+   def backfill(check_id)
+     load_config!
+     load_state!
+     # Check the state file
+     if @state.has_key?(check_id) && @state[check_id].has_key?("earliest_ts")
+       earliest_ts = @state[check_id.to_s]["earliest_ts"]
+     else
+       error("You can't backfill a check you've never run an update on.")
+     end
+     load_probe_list!
+     load_check_list!
+     datapull = get_datapull
+     chunk = 10
+     unless limit = options.limit
+       limit = ask("You have #{datapull.effective_limit} API calls remaining. How many would you like to use?").to_i
+     end
+     created_ts = datapull.check(check_id).created
+
+     # Keep within the API limits
+     working_towards = (earliest_ts - created_ts) > 2678400 ? 31.days.ago.to_i : created_ts
+     puts "Backfilling from #{Time.at(earliest_ts)} working towards #{Time.at(working_towards)}. Check began on #{Time.at(created_ts)}"
+     # Break it into chunks
+     additions = 0
+     (limit.to_i.div(chunk)+1).times do
+       batch_count = pull_and_push(check_id, working_towards, earliest_ts, chunk)
+       puts "#{batch_count} metrics pushed in this batch." if options.verbose
+       additions += batch_count
+     end
+     puts "#{additions} metrics sent to graphite for check #{check_id}."
+   end
+
+   private
+
+   def get_datapull
+     if @datapull.nil?
+       load_config!
+       @datapull = PingdomToGraphite::DataPull.new(@config["pingdom"]["username"], @config["pingdom"]["password"], @config["pingdom"]["key"], log_level)
+     end
+     @datapull
+   end
+
+   def get_datapush
+     load_config!
+     datapush = PingdomToGraphite::DataPush.new(@config["graphite"]["host"], @config["graphite"]["port"])
+   end
+
+   def load_config!
+     if @config.nil?
+       config_file = File.expand_path(options.config)
+       unless File.exists?(config_file)
+         error("Missing config file (#{options.config})")
+       end
+
+       @config = JSON::parse(File.read(config_file));
+     end
+
+   end
+
+   def load_state!
+     state_file = File.expand_path(options.state)
+     if File.exists?(state_file)
+       @state = JSON::parse(File.read(state_file))
+     else
+       @state = Hash.new
+     end
+   end
+
+   # Write the state to disk
+   def write_state!
+     state_file = File.expand_path(options.state)
+     File.open(state_file,"w",0600) do |f|
+       f.write(JSON.generate(@state))
+     end
+   end
+
+
+   # Store the list in the object for reference (fewer API calls)
+   def load_probe_list!
+     config_file = File.expand_path(options.config)
+     datapull = get_datapull
+     @probes = Hash.new
+     datapull.probes.each do |probe|
+       # {"city"=>"Manchester", "name"=>"Manchester, UK", "country"=>"United Kingdom",
+       # "countryiso"=>"GB", "id"=>46, "ip"=>"212.84.74.156", "hostname"=>"s424.pingdom.com", "active"=>true}
+       @probes[probe.id] = probe
+     end
+   end
+
+   # Store the check list in the object for reference (fewer API calls)
+   def load_check_list!
+     load_config!
+     datapull = get_datapull
+     @checks = Hash.new
+     datapull.checks.each do |check|
+       # {"name"=>"Autocomplete", "id"=>259103, "type"=>"http", "lastresponsetime"=>203173,
+       # "status"=>"up", "lasttesttime"=>1298102416}
+       @checks[check.id] = check
+     end
+   end
+
+   # Take a pingdom check, and return an Array of metrics to be passed to graphite
+   def parse_result(check_id, result)
+     results = Array.new
+     prefix = "#{@config["graphite"]["prefix"]}.#{@checks[check_id.to_i].type}."
+     prefix += @checks[check_id.to_i].name.gsub(/ /,"_").gsub(/\./,"")
+     check_status = result.status.eql?("up") ? 1 : 0
+     check_time = Time.at(result.time).to_i
+     check_city = @probes[result.probe_id].city.gsub(/ /,"_").gsub(/\./,"")
+     results << GraphiteMetric::Plaintext.new("#{prefix}.status.#{@probes[result.probe_id].countryiso}.#{check_city}", check_status, check_time)
+     results << GraphiteMetric::Plaintext.new("#{prefix}.responsetime.#{@probes[result.probe_id].countryiso}.#{check_city}", result.responsetime, check_time)
+     results.each { |metric| puts metric } if options.verbose
+     results
+   end
+
+   def pull_and_push(check_id, latest_ts = nil, earliest_ts = nil, limit = nil)
+     datapull = get_datapull
+     datapush = get_datapush
+     load_state!
+     # Check the state file
+     check_state = @state.has_key?(check_id.to_s) ? @state[check_id.to_s] : Hash.new
+     latest_stored = check_state.has_key?("latest_ts") ? check_state["latest_ts"] : nil
+     earliest_stored = check_state.has_key?("earliest_ts") ? check_state["earliest_ts"] : nil
+     # Pull the data
+     rec_count = 0
+     result_list = Array.new
+     datapull.full_results(check_id, latest_ts, earliest_ts, limit).each do |result|
+       result_list += parse_result(check_id, result)
+       latest_stored = result.time if latest_stored.nil? || result.time > latest_stored
+       earliest_stored = result.time if earliest_stored.nil? || result.time < earliest_stored
+       rec_count += 1
+     end
+     # Push to graphite
+     datapush.to_graphite(result_list) unless result_list.empty?
+     # Store the state
+     @state[check_id] = Hash.new
+     @state[check_id]["latest_ts"] = latest_stored
+     @state[check_id]["earliest_ts"] = earliest_stored
+     write_state!
+     rec_count
+   end
+
+   def error(message)
+     STDERR.puts "ERROR: #{message}"
+     exit 1
+   end
+
+   def log_level
+     options.verbose ? Logger::DEBUG : Logger::ERROR
+   end
+
+ end
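
To make the metric naming in `parse_result` concrete, these are the two plaintext lines it would build for the probe ("GB", "Manchester") and result (up, 1103 ms, time 1343945109) shown in the comments above, assuming the default `pingdom` prefix and a hypothetical HTTP check named "My Site":

    pingdom.http.My_Site.status.GB.Manchester 1 1343945109
    pingdom.http.My_Site.responsetime.GB.Manchester 1103 1343945109

The `path value timestamp` layout is the graphite plaintext format that the mock server in the features asserts against.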
data/lib/pingdom-to-graphite/data-pull.rb ADDED
@@ -0,0 +1,97 @@
+ require "pingdom-to-graphite"
+ require "pingdom-client"
+
+ require "logger"
+
+ class PingdomToGraphite::DataPull
+
+   def initialize(username, password, key, log_level = Logger::ERROR)
+     @username = username
+     @password = password
+     @key = key
+     @log_level = log_level
+
+     @client = connect
+
+   end
+
+   # Return the lower of the two API limits
+   def effective_limit
+     # Catch-22: We want to maximize our API calls, but we don't have our limits until we make an API call.
+     unless @client.limit
+       @client.contacts
+     end
+     limit = @client.limit[:short][:remaining] > @client.limit[:long][:remaining] ? @client.limit[:long][:remaining] : @client.limit[:short][:remaining]
+   end
+
+   # A "Printer-friendly" version of the current limits
+   def friendly_limit
+     limit = @client.limit
+     short_time = Time.at(limit[:short][:resets_at] - Time.now).gmtime.strftime('%R:%S')
+     long_time = Time.at(limit[:long][:resets_at] - Time.now).gmtime.strftime('%R:%S')
+     "You can make #{limit[:short][:remaining]} requests in the next #{short_time} and #{limit[:long][:remaining]} requests in the next #{long_time}."
+   end
+
+   def checks
+     check_list = @client.checks
+   end
+
+   def check(id)
+     check_details = @client.check(id)
+   end
+
+   def probes
+     probe_list = @client.probes
+   end
+
+   def results(check_id, start_ts = nil, end_ts = nil, offset = nil)
+
+     check_options = {}
+
+     unless start_ts.nil?
+       check_options['from'] = start_ts
+     end
+
+     unless end_ts.nil?
+       check_options['to'] = end_ts
+     end
+
+     unless offset.nil?
+       check_options['offset'] = offset
+     end
+
+     results = @client.check(check_id).results(check_options)
+   end
+
+   # Get the full results for the range, looping over the API limits as necessary.
+   def full_results(check_id, start_ts, end_ts = nil, api_call_limit = 0)
+     offset = 0
+     full_set = Array.new
+     api_calls = 0
+     # Loop until we either grab the full data set, run out of API calls, or hit the first check
+     begin
+       api_calls += 1
+       result_set = self.results(check_id, start_ts, end_ts, offset)
+       full_set = full_set.concat(result_set)
+       offset += 100
+     end until result_set.count < 100 || effective_limit < 10 || api_calls >= api_call_limit.to_i
+     full_set
+   end
+
+
+   private
+
+   def connect
+     log = Logger.new(STDOUT)
+     log.level = @log_level
+     begin
+       client = Pingdom::Client.new :username => @username, :password => @password, :key => @key, :logger => log
+     rescue
+       error("There was a problem connecting to pingdom.")
+     end
+
+     client
+   end
+
+
+ end
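
A minimal sketch of how the CLI drives this class (the credentials and check id are placeholders; with the default `api_call_limit` of 0, `full_results` makes a single API call and so returns at most 100 results):

    require "pingdom-to-graphite/data-pull"

    pull = PingdomToGraphite::DataPull.new("PINGDOM_USER", "PINGDOM_PASSWORD", "API_KEY")
    puts pull.effective_limit                                   # lower of the short/long remaining limits
    results = pull.full_results(123456, Time.now.to_i - 3600)   # placeholder check id, last hour of results
    puts "#{results.count} results pulled"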
data/lib/pingdom-to-graphite/data-push.rb ADDED
@@ -0,0 +1,20 @@
+ require "graphite-metric"
+ require "socket"
+
+ class PingdomToGraphite::DataPush
+
+   def initialize(graphite_host, graphite_port)
+     @graphite_host = graphite_host
+     @graphite_port = graphite_port
+   end
+
+   # Send an array of graphite metrics to graphite
+   def to_graphite(metric_array)
+     graphite = TCPSocket.new(@graphite_host, @graphite_port)
+     metric_array.each do |metric|
+       graphite.puts metric.to_s
+     end
+     graphite.close
+   end
+
+ end
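
And the sending side, again as an illustrative sketch (the host and metric name are placeholders; `GraphiteMetric::Plaintext` is the same class cli.rb's `parse_result` uses, and each metric renders as one `path value timestamp` line):

    require "pingdom-to-graphite/data-push"
    require "graphite-metric"

    push   = PingdomToGraphite::DataPush.new("graphite.example.com", 2003)
    metric = GraphiteMetric::Plaintext.new("pingdom.http.My_Site.status.GB.Manchester", 1, Time.now.to_i)
    push.to_graphite([metric])   # opens a TCP socket, writes one line per metric, then closes it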
data/lib/pingdom-to-graphite/version.rb ADDED
@@ -0,0 +1,3 @@
+ module PingdomToGraphite
+   VERSION = "0.0.2"
+ end
data/pingdom-to-graphite.gemspec ADDED
@@ -0,0 +1,27 @@
+ # -*- encoding: utf-8 -*-
+ $:.unshift File.expand_path("../lib", __FILE__)
+ require "pingdom-to-graphite/version"
+
+ Gem::Specification.new do |s|
+   s.name = "pingdom-to-graphite"
+   s.version = PingdomToGraphite::VERSION
+   s.authors = ["Lew Goettner"]
+   s.email = ["lew@goettner.net"]
+   s.homepage = "http://lewg.github.com/pingdom-to-graphite"
+   s.summary = %q{A command line tool for pulling stats from pingdom and shipping them to graphite.}
+   s.description = %q{A tool for copying metrics from Pingdom to graphite. Pingdom, although
+     allowing access to effectively all your metrics through the API, does have some limits
+     in place to prevent abuse. This tool tries to be mindful of that, although does provide a
+     "backfill" option if you care to burn up your daily api limit in one fell swoop.}
+
+   s.files = `git ls-files`.split("\n")
+   s.test_files = `git ls-files -- {features}/*`.split("\n")
+   s.executables = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
+   s.require_paths = ["lib"]
+
+   s.add_runtime_dependency "thor"
+   s.add_runtime_dependency "pingdom-client"
+   s.add_runtime_dependency "json"
+   s.add_runtime_dependency "graphite-metric"
+
+ end
metadata ADDED
@@ -0,0 +1,129 @@
+ --- !ruby/object:Gem::Specification
+ name: pingdom-to-graphite
+ version: !ruby/object:Gem::Version
+   version: 0.0.2
+ prerelease:
+ platform: ruby
+ authors:
+ - Lew Goettner
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2012-08-10 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: thor
+   requirement: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: pingdom-client
+   requirement: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: json
+   requirement: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: graphite-metric
+   requirement: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: ! "A tool for copying metrics from Pingdom to graphite. Pingdom, although
+   \n allowing access to effectively all your metrics through the API, does have
+   some limits \n in place to prevent abuse. This tool tries to be mindful of that,
+   although does provide a \n \"backfill\" option if you care to burn up your daily
+   api limit in one fell swoop."
+ email:
+ - lew@goettner.net
+ executables:
+ - pingdom-to-graphite
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - .gitignore
+ - Gemfile
+ - README.md
+ - Rakefile
+ - bin/pingdom-to-graphite
+ - features/copy.feature
+ - features/step_definitions/copy_steps.rb
+ - features/support/env.rb
+ - lib/pingdom-to-graphite.rb
+ - lib/pingdom-to-graphite/cli.rb
+ - lib/pingdom-to-graphite/data-pull.rb
+ - lib/pingdom-to-graphite/data-push.rb
+ - lib/pingdom-to-graphite/version.rb
+ - pingdom-to-graphite.gemspec
+ homepage: http://lewg.github.com/pingdom-to-graphite
+ licenses: []
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 1.8.24
+ signing_key:
+ specification_version: 3
+ summary: A command line tool for pulling stats from pingdom and shipping them to graphite.
+ test_files: []
+ has_rdoc: