librato-metrics-taps 0.3.7 → 0.3.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 733e9fa30670ae136a4e48449f6f31f37c7d305e
- data.tar.gz: 567d1545148a99222bc662d5a7f11822a39e27e9
+ metadata.gz: 3e29495d0f66b5b56bcdc03ff973812145379ea2
+ data.tar.gz: 67fcbff0a5824c5825f60124117f2326b143ce3e
  SHA512:
- metadata.gz: 2d636a85986a909aa9fd6ad720240cb03a70bad5658a6971712deb39a20efb04ef9ba116466938e69b9cc2b9c5e5290a8204868d8fc94dc09d83934fd8dd4e21
- data.tar.gz: 1027f1e2d5b2ae9638d81f6fe97262477f51399962aaf2ee618064524b181a2267e773561f78a52ec98c8f29071cee688729036b1c400aad7464a49c4a3f7285
+ metadata.gz: 25b57a166b0292f6808de05f262e1d3572af7b4cd63e9dc256eb0afeebb077d1a5bfc536e9ef3761f03d2377685e882a44e18622ee3070210ae93439dc246f47
+ data.tar.gz: 52db5bdacad21a636e04ff454179c1e3b13d3b94c5fb2a4c509668bf3ee9871e449c216ff71e51e9503969002b31672cae9bd045e0a2366c4ba177121182d1d0
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- librato-metrics-taps (0.3.7)
+ librato-metrics-taps (0.3.9)
  jmx4r (= 0.1.4)
  jruby-openssl (= 0.8.5)
  librato-metrics (~> 1.0.4)
data/README.md CHANGED
@@ -1,5 +1,15 @@
  # librato-metrics-taps

+ ## Deprecation Notice
+
+ **This project is being officially deprecated by the maintainers.**
+
+ We recommend alternatives such as [JMXTrans](https://github.com/jmxtrans/embedded-jmxtrans) and collectd's [JMX plugin](https://collectd.org/wiki/index.php/Plugin:GenericJMX) for submitting your JMX metrics to Librato. These have more comprehensive coverage, require fewer external dependencies, and are more actively developed than this project.
+
+ We will keep this project available on GitHub for now, but there will be no ongoing support or development of the project from this point forward.
+
+ ## Overview
+
  Collection of helper scripts and library routines to tap into external
  metric sources and pump those metrics into Librato's Metric Service.

@@ -132,6 +142,26 @@ librato-metrics-tap-jmxbeans --email "$EMAIL" --token "$TOKEN" \
  --measure-time "$MEASURE_TIME"
  ```

+ ### Daemon Mode
+
+ #### Single Interval
+
+ To poll and publish to Librato every N seconds, specify the `--interval <poll-interval>` option.
+
+ #### Multiple Intervals
+
+ To publish multiple YAML bean definition files at different intervals,
+ supply an interval file: a YAML map from each data file to the period, in seconds,
+ at which that file's metrics should be collected and published to Librato. For example:
+
+ ```
+ metrics_1min.yaml: 60
+ metrics_5min.yaml: 300
+ metrics_1h.yaml: 3600
+ ```
+
+ The interval file is specified with the `--interval-file <interval-file-path>` option. A sample interval file is included in the examples folder.
+
  ## Contributing to librato-metrics-taps

  * Check out the latest master to make sure the feature hasn't been implemented or the bug hasn't been fixed yet
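For reference, a daemon-mode invocation that exercises the new option might look like the following. This is a sketch, reusing the `$EMAIL`/`$TOKEN` variables from the README's earlier examples and the `examples/interval-file.yaml` sample added in this release; the JMX connection options shown earlier in the README still apply to your environment:

```
librato-metrics-tap-jmxbeans --email "$EMAIL" --token "$TOKEN" \
  --interval-file examples/interval-file.yaml
```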
data/VERSION CHANGED
@@ -1 +1 @@
- 0.3.7
+ 0.3.9
@@ -42,6 +42,63 @@ def publish_beans(publisher, beans, opts)
  return r
  end

+ def load_yaml(filename)
+ begin
+ yaml_file= File.open(filename, "r")
+ rescue => err
+ puts "Failed to open yaml file #{filename}: #{err.message}"
+ exit 1
+ end
+
+ begin
+ yaml = YAML::load(yaml_file)
+ rescue => err
+ puts "Failed to parse yaml #{filename}: #{err.message}"
+ exit 1
+ end
+ yaml_file.close
+ return yaml
+ end
+
+ def publish_loop(opts, publisher, interval, beans)
+ # If --interval or --interval-file has been specified,
+ # broadcast every interval seconds. We wait for
+ # interval seconds each time to ensure we broadcast
+ # on the interval
+ #
+ # We use a random stagger to ensure that we are measuring and
+ # publishing our metrics at a random point within the period. This
+ # ensures that multiple entities are not measuring and reporting
+ # at the same exact points in time.
+
+ stagger = rand(interval)
+ begin
+ t = Time.now.tv_sec
+
+ # Floor the time to the current interval
+ t2 = (t / interval) * interval
+
+ # Offset by the stagger
+ t2 += stagger
+
+ # If our stagger is < interval/2, it is possible that we
+ # went back in time. In that case, simply skip another interval
+ #
+ if t2 <= t
+ t2 += interval
+ end
+
+ sleep (t2 - t)
+ t = Time.now
+
+ # We report our measure time as the nearest interval
+ tsecs = ((t.tv_sec + (interval / 2)) / interval) * interval
+
+ publish_beans(publisher, beans, opts.merge(:measure_time => tsecs))
+ end while true
+ end
+
+
  opts = Trollop::options do
  version "Version: #{Taps::version}"
  banner <<EOS
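The scheduling logic added in `publish_loop` above is worth unpacking: wakeups are aligned to interval boundaries and then offset by a per-process random stagger so that many taps do not all report at the same instant. A minimal standalone sketch of that arithmetic, using an assumed 60-second interval (illustrative values only, not part of the gem):

```ruby
# Reproduce publish_loop's wakeup arithmetic for a single iteration.
interval = 60                    # assumed poll interval, in seconds
stagger  = rand(interval)        # fixed random offset within the interval

now     = Time.now.tv_sec
aligned = (now / interval) * interval      # floor to the current interval boundary
wakeup  = aligned + stagger                # shift by the stagger
wakeup += interval if wakeup <= now        # never schedule a wakeup in the past

# measure_time is reported as the interval boundary nearest the wakeup
measure_time = ((wakeup + interval / 2) / interval) * interval

puts "sleep #{wakeup - now}s, then report with measure_time=#{measure_time}"
```

In the gem itself this runs inside a `begin ... end while true` loop, with the clock re-read after the sleep before `measure_time` is computed.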
@@ -67,6 +124,7 @@ EOS
  opt :metrics_url, "Metrics URL", :short => "-r", :default => 'https://metrics-api.librato.com'

  opt :interval, "Run as a daemon and poll every N seconds", :short => "-i", :type => :int
+ opt :interval_file, "YAML file specifying different intervals for multiple data files (daemon mode)", :type => :string

  opt :ignore_missing, "Ignore missing beans/attributes", :short => "-g"

@@ -118,89 +176,42 @@ end
  # Load full definition
  #
  if opts[:data_file_full]
- filename = opts[:data_file_full]
- begin
- beanf = File.open(filename, "r")
- rescue => err
- puts "Failed to open bean file #{filename}: #{err.message}"
- exit 1
- end
-
- begin
- beans = YAML::load(beanf)
- rescue => err
- puts "Failed to parse #{filename}: #{err.message}"
- exit 1
- end
- beanf.close
+ beans = load_yaml(opts[:data_file_full])
  elsif opts[:bean_name] && opts[:data_file_attributes]
- # Load attributes from file
- #
- filename = opts[:data_file_attributes]
- begin
- beanf = File.open(filename, "r")
- rescue => err
- puts "Failed to open attributes file #{filename}: #{err.message}"
- exit 1
- end
-
- begin
- attrs = YAML::load(beanf)
- rescue => err
- puts "Failed to parse #{filename}: #{err.message}"
- exit 1
- end
- beanf.close
+ attrs = load_yaml(opts[:data_file_attributes])

  beans = {}
  beannames = get_beans(opts[:bean_name])
  beannames.each do |name|
  beans[name] = attrs
  end
+ elsif opts[:interval_file]
+ intervals = load_yaml(opts[:interval_file])
  else
- err "Must specify --data-file-full or --data-file-attributes"
+ err "Must specify --data-file-full or --data-file-attributes or --interval-file"
  end

- unless opts[:interval]
+ unless opts[:interval] or opts[:interval_file]
  r = publish_beans(publisher, beans, opts)
  exit(r ? 0 : 1)
  end

- # If --interval has been specified, broadcast every interval
- # seconds. We wait for interval seconds each time to ensure we
- # broadcast on the interval
- #
- # We use a random stagger to ensure that we are measuring and
- # publishing our metrics at a random point within the period. This
- # ensures that multiple entities are not measuring and reporting
- # at the same exact points in time.
-
- interval = opts[:interval]
- stagger = rand(interval)
- begin
- t = Time.now.tv_sec
-
- # Floor the time to the current interval
- t2 = (t / interval) * interval
-
- # Offset by the stagger
- t2 += stagger
-
- # If our stagger is < interval/2, it is possible that we
- # went back in time. In that case, simply skip another interval
- #
- if t2 <= t
- t2 += interval
+ if opts[:interval]
+ #single interval, single datafile
+ interval = opts[:interval]
+ publish_loop(opts, publisher, interval, beans)
+ elsif opts[:interval_file]
+ #multiple intervals, multiple datafiles
+ base_dir = File.dirname(opts[:interval_file])
+ workers = []
+ intervals.each do |data_file, interval|
+ data_file_path = "#{base_dir}/#{data_file}"
+ p "Starting publisher for data file #{data_file_path} with #{interval}s interval."
+ jmx_beans = load_yaml(data_file_path)
+ workers << Thread.new{publish_loop(opts, publisher, interval.to_i, jmx_beans)}
  end
-
- sleep (t2 - t)
- t = Time.now
-
- # We report our measure time as the nearest interval
- tsecs = ((t.tv_sec + (interval / 2)) / interval) * interval
-
- publish_beans(publisher, beans, opts.merge(:measure_time => tsecs))
- end while true
+ workers.map(&:join)
+ end

  exit 1

@@ -0,0 +1,31 @@
+ # Set of attributes to grab for each CF.
+ # TODO: Add comment description for each
+ #
+ ---
+ BloomFilterDiskSpaceUsed: counter
+ BloomFilterFalsePositives: counter
+ BloomFilterFalseRatio:
+ CompressionRatio:
+ DroppableTombstoneRatio:
+ LiveCellsPerSlice:
+ LiveDiskSpaceUsed:
+ LiveSSTableCount:
+ MaxRowSize:
+ MeanRowSize:
+ MinRowSize:
+ MaximumCompactionThreshold:
+ MinimumCompactionThreshold:
+ MemtableColumnsCount: counter
+ MemtableDataSize:
+ MemtableSwitchCount: counter
+ PendingTasks:
+ ReadCount: counter
+ RecentBloomFilterFalsePositives:
+ RecentBloomFilterFalseRatio:
+ RecentReadLatencyMicros:
+ RecentWriteLatencyMicros:
+ TotalDiskSpaceUsed: counter
+ TotalReadLatencyMicros: counter
+ TotalWriteLatencyMicros: counter
+ UnleveledSSTables:
+ WriteCount: counter
@@ -41,7 +41,7 @@ org.apache.cassandra.db:type=StorageProxy:
  org.apache.cassandra.db:type=StorageService:
  Load:
  CompactionThroughputMbPerSec:
- ExceptionCount: counter
+ ExceptionCount:
  StreamThroughputMbPerSec:
  org.apache.cassandra.internal:type=AntiEntropyStage:
  ActiveCount:
@@ -0,0 +1,187 @@
+ # Set of non-CF specific Cassandra JMX knobs
+ #
+ ---
+ org.apache.cassandra.db:type=BatchlogManager:
+ TotalBatchesReplayed: counter
+ org.apache.cassandra.db:type=Caches:
+ KeyCacheCapacityInBytes:
+ KeyCacheEntries:
+ KeyCacheHits: counter
+ KeyCacheRecentHitRate:
+ KeyCacheRequests: counter
+ KeyCacheSize:
+ RowCacheCapacityInBytes:
+ RowCacheEntries:
+ RowCacheHits: counter
+ RowCacheRecentHitRate:
+ RowCacheRequests:
+ RowCacheSize:
+ org.apache.cassandra.db:type=Commitlog:
+ CompletedTasks: counter
+ PendingTasks:
+ TotalCommitlogSize:
+ org.apache.cassandra.db:type=CompactionManager:
+ CompletedTasks: counter
+ PendingTasks:
+ TotalBytesCompacted: counter
+ TotalCompactionsCompleted: counter
+ org.apache.cassandra.db:type=StorageProxy:
+ HintsInProgress:
+ RecentRangeLatencyMicros:
+ RecentReadLatencyMicros:
+ RecentWriteLatencyMicros:
+ TotalRangeLatencyMicros: counter
+ TotalReadLatencyMicros: counter
+ TotalWriteLatencyMicros: counter
+ TotalHints: counter
+ RangeOperations: counter
+ ReadOperations: counter
+ WriteOperations: counter
+ ReadRepairAttempted:
+ ReadRepairRepairedBackground:
+ ReadRepairRepairedBlocking:
+ org.apache.cassandra.db:type=StorageService:
+ Load:
+ CompactionThroughputMbPerSec:
+ ExceptionCount:
+ StreamThroughputMbPerSec:
+ org.apache.cassandra.internal:type=AntiEntropyStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=CacheCleanupExecutor:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=CompactionExecutor:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=FlushWriter:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=GossipStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=HintedHandoff:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=InternalResponseStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=MemoryMeter:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=MemtablePostFlusher:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=MigrationStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=MiscStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=PendingRangeCalculator:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=ValidationExecutor:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.internal:type=StreamStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.request:type=MutationStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.request:type=ReadRepairStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.request:type=ReadStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.request:type=ReplicateOnWriteStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.request:type=RequestResponseStage:
+ ActiveCount:
+ CompletedTasks: counter
+ PendingTasks:
+ CurrentlyBlockedTasks:
+ TotalBlockedTasks: counter
+ org.apache.cassandra.net:type=MessagingService:
+ RecentTotalTimouts:
+ TotalTimeouts: counter
+ org.apache.cassandra.metrics:type=ClientRequestMetrics,name=ReadTimeouts:
+ Count: counter
+ org.apache.cassandra.metrics:type=ClientRequestMetrics,name=ReadUnavailables:
+ Count: counter
+ org.apache.cassandra.metrics:type=ClientRequestMetrics,name=WriteTimeouts:
+ Count: counter
+ org.apache.cassandra.metrics:type=ClientRequestMetrics,name=WriteUnavailables:
+ Count: counter
+ org.apache.cassandra.metrics:type=Client,name=connectedNativeClients:
+ Value:
+ org.apache.cassandra.metrics:type=Client,name=connectedThriftClients:
+ Value:
+ org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency:
+ Mean:
+ 95thPercentile:
+ 99thPercentile:
+ org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency:
+ Mean:
+ 95thPercentile:
+ 99thPercentile:
+ org.apache.cassandra.net:type=MessagingService:
+ RecentTotalTimouts:
+ TotalTimeouts: counter
@@ -0,0 +1,3 @@
+ cassandra/cfstats-2_0_9.yaml: 300 #5 minutes resolution
+ cassandra/tpstats-2_0_9.yaml: 120 #2 minutes resolution
+ java-lang.yaml: 60 #1 minute resolution
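As seen in the `--interval-file` branch of the tap script earlier in this diff, each key in this file is resolved relative to the directory containing the interval file itself (via `File.dirname`). A small sketch of that resolution, using this example file's own paths:

```ruby
require 'yaml'

interval_file = "examples/interval-file.yaml"
base_dir = File.dirname(interval_file)   # => "examples"

YAML.load_file(interval_file).each do |data_file, interval|
  # Mirrors the tap script: data files are joined to the interval file's directory
  puts "#{base_dir}/#{data_file} every #{interval.to_i}s"
end
```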
@@ -5,7 +5,7 @@ Gem::Specification.new do |s|
  s.version = File.read(File.join(File.dirname(__FILE__), 'VERSION')).chomp

  s.authors = [%q{Librato, Inc.}]
- s.date = %q{2012-05-15}
+ s.date = Time.now
  s.description = %q{Taps for extracting metrics data and publishing to Librato Metrics}
  s.email = %q{mike@librato.com}
  s.summary = %q{Librato Metrics Taps}
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: librato-metrics-taps
  version: !ruby/object:Gem::Version
- version: 0.3.7
+ version: 0.3.9
  platform: ruby
  authors:
  - Librato, Inc.
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2012-05-15 00:00:00.000000000 Z
+ date: 2015-01-21 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: jmx4r
@@ -129,11 +129,14 @@ files:
  - examples/cassandra/cfstats-0_8_8.yaml
  - examples/cassandra/cfstats-1_1_6.yaml
  - examples/cassandra/cfstats-1_2_5.yaml
+ - examples/cassandra/cfstats-2_0_9.yaml
  - examples/cassandra/tpstats-0_7_6-2.yaml
  - examples/cassandra/tpstats-0_8_1.yaml
  - examples/cassandra/tpstats-0_8_8.yaml
  - examples/cassandra/tpstats-1_1_6.yaml
  - examples/cassandra/tpstats-1_2_5.yaml
+ - examples/cassandra/tpstats-2_0_9.yaml
+ - examples/interval-file.yaml
  - examples/java-lang.yaml
  - lib/librato-metrics-taps.rb
  - lib/librato/metrics/taps.rb