custom_benchmarks 0.0.1

Files changed (3)
  1. data/README +88 -0
  2. data/lib/custom_benchmarks.rb +108 -0
  3. metadata +44 -0
data/README ADDED
@@ -0,0 +1,88 @@
+ =Custom Benchmarks
+
+ == About
+
+ Custom Benchmarks allow you to easily add your own information to the
+ benchmark line logged at the end of each request, e.g.:
+
+   Completed in 0.40306 (2 reqs/sec) | Rendering: 0.05003 (12%) | DB: 0.04571 (11%) | Search: 0.16429,1 (40%) | PID: 22426 | 200 OK [http://www.zvents.com/welcome/index]
+
+ Typically, the log line includes the latency associated with executing
+ specific parts of a request. In the example above, we have added a
+ measurement of search latency. But you can use Custom Benchmarks to add
+ any information to the log line. The example above also shows the ID of
+ the process (PID) that served the request. The PID is useful when parsing
+ logs that contain data from multiple processes.
+
+ == Installation
+
+ 1. Install the plugin or the gem
+
+      $ script/plugin install svn://rubyforge.org/var/svn/zventstools/projects/custom_benchmarks
+
+    - OR -
+
+      # gem install custom_benchmarks
+
+ == Simple Example: Logging the Process ID
+
+ To add the PID as a custom benchmark field, simply add a custom_benchmark
+ line like the following to your ApplicationController:
+
+   class ApplicationController < ActionController::Base
+     custom_benchmark {|runtime| " | PID: #{$$}" }
+     ...
+   end
+
+ Declare your custom_benchmark with a block that expects an input parameter
+ called runtime. runtime, which isn't used in this example, contains the
+ overall latency of the entire request; we'll use it to calculate
+ percentage latency in the next example. custom_benchmark expects your
+ block to return a string, which is inserted into the log line
+ immediately before the status (e.g., 200 OK [http://www.zvents.com/]).
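The block contract can be sketched outside Rails as a plain lambda (a hypothetical stand-in, not part of the plugin): it receives the total request runtime in seconds and returns the string fragment to append.

```ruby
# Hypothetical stand-in for a custom_benchmark block, outside Rails:
# it receives the total request runtime (seconds, as a Float) and
# returns the string fragment appended to the benchmark log line.
pid_benchmark = lambda { |runtime| " | PID: #{Process.pid}" }

fragment = pid_benchmark.call(0.40306)
# fragment looks like " | PID: 22426" (the PID varies per process)
```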
+
+ == Complex Example: Logging Arbitrary Latency
+
+ Let's say that your application includes a search function that is powered
+ by Lucene. Like SQL queries issued to a database, calls to Lucene can take
+ a while, so you want to log your search latency.
+
+ The first step is to set up a mechanism that records your search latency
+ for each request. You can do that with something like this:
+
+   class MySearch
+     @@latency = 0.0
+     cattr_accessor :latency
+
+     def run_search
+       @@latency = Benchmark::measure{
+         # execute the call to Lucene here
+       }.real
+     end
+
+     def self.get_timing_summary(runtime)
+       summary = " | Search: #{sprintf("%.5f",@@latency)} (#{sprintf("%d", (@@latency * 100) / runtime)}%)"
+       @@latency = 0.0
+       summary
+     end
+   end
+
+ The run_search method uses Benchmark::measure to record the latency of the
+ search. The get_timing_summary class method, which will be invoked by
+ a custom_benchmark, returns a formatted string summarizing the search
+ latency in absolute and percentage terms. It also resets the value
+ of @@latency to avoid affecting subsequent requests.
+
+ Finally, we just need to add a custom_benchmark statement to the
+ ApplicationController:
+
+   custom_benchmark {|runtime| MySearch.get_timing_summary(runtime) }
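To sanity-check the formatting, here is get_timing_summary's string interpolation worked through with the numbers from the sample log line at the top (a standalone sketch with hard-coded values):

```ruby
# Work get_timing_summary's formatting through by hand using the
# sample numbers: 0.16429s of search latency in a 0.40306s request.
latency = 0.16429
runtime = 0.40306
summary = " | Search: #{sprintf("%.5f", latency)} (#{sprintf("%d", (latency * 100) / runtime)}%)"
# 0.16429 * 100 / 0.40306 is ~40.76, which "%d" truncates to 40
```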
+
+ == Bugs, Code and Contributing
+
+ There's a RubyForge project set up at:
+
+   http://rubyforge.org/projects/zventstools/
+
+ Anonymous SVN access:
+
+   $ svn checkout svn://rubyforge.org/var/svn/zventstools
+
+ Author: Tyler Kovacs (tyler dot kovacs at gmail dot com)
data/lib/custom_benchmarks.rb ADDED
@@ -0,0 +1,108 @@
+ # Custom Benchmarks
+ #
+ # Custom Benchmarks allow you to easily add your own information to the
+ # benchmark line logged at the end of each request, e.g.:
+ #
+ #   Completed in 0.40306 (2 reqs/sec) | Rendering: 0.05003 (12%) | DB: 0.04571 (11%) | Search: 0.16429,1 (40%) | PID: 22426 | 200 OK [http://www.zvents.com/welcome/index]
+ #
+ # Typically, the log line includes the latency associated with executing
+ # specific parts of a request. In the example above, we have added a
+ # measurement of search latency. But you can use Custom Benchmarks to add
+ # any information to the log line. The example above also shows the ID of
+ # the process (PID) that served the request. The PID is useful when parsing
+ # logs that contain data from multiple processes.
+ #
+ # Simple Example: Logging the Process ID
+ #
+ # To add the PID as a custom benchmark field, simply add a custom_benchmark
+ # line like the following to your ApplicationController:
+ #
+ #   class ApplicationController < ActionController::Base
+ #     custom_benchmark {|runtime| " | PID: #{$$}" }
+ #     ...
+ #   end
+ #
+ # Declare your custom_benchmark with a block that expects an input parameter
+ # called runtime. runtime, which isn't used in this example, contains the
+ # overall latency of the entire request; we'll use it to calculate
+ # percentage latency in the next example. custom_benchmark expects your
+ # block to return a string, which is inserted into the log line
+ # immediately before the status (e.g., 200 OK [http://www.zvents.com/]).
+ #
+ # Complex Example: Logging Arbitrary Latency
+ #
+ # Let's say that your application includes a search function that is powered
+ # by Lucene. Like SQL queries issued to a database, calls to Lucene can take
+ # a while, so you want to log your search latency.
+ #
+ # The first step is to set up a mechanism that records your search latency
+ # for each request. You can do that with something like this:
+ #
+ #   class MySearch
+ #     @@latency = 0.0
+ #     cattr_accessor :latency
+ #
+ #     def run_search
+ #       @@latency = Benchmark::measure{
+ #         # execute the call to Lucene here
+ #       }.real
+ #     end
+ #
+ #     def self.get_timing_summary(runtime)
+ #       summary = " | Search: #{sprintf("%.5f",@@latency)} (#{sprintf("%d", (@@latency * 100) / runtime)}%)"
+ #       @@latency = 0.0
+ #       summary
+ #     end
+ #   end
+ #
+ # The run_search method uses Benchmark::measure to record the latency of the
+ # search. The get_timing_summary class method, which will be invoked by
+ # a custom_benchmark, returns a formatted string summarizing the search
+ # latency in absolute and percentage terms. It also resets the value
+ # of @@latency to avoid affecting subsequent requests.
+ #
+ # Finally, we just need to add a custom_benchmark statement to the
+ # ApplicationController:
+ #
+ #   custom_benchmark {|runtime| MySearch.get_timing_summary(runtime) }
+
+ module ActionController #:nodoc:
+   module CustomBenchmarking #:nodoc:
+     def self.included(base)
+       base.class_eval do
+         alias_method :perform_action, :perform_action_with_custom_benchmark
+       end
+       base.extend(ClassMethods)
+     end
+
+     module ClassMethods
+       def custom_benchmark(*benchmark, &block)
+         if block_given?
+           write_inheritable_attribute(:custom_benchmarks,
+             (read_inheritable_attribute(:custom_benchmarks) || []) << block)
+         end
+       end
+
+       def custom_benchmarks
+         @custom_benchmarks ||= read_inheritable_attribute(:custom_benchmarks) || []
+       end
+     end
+
+     def perform_action_with_custom_benchmark
+       unless logger
+         perform_action_without_benchmark
+       else
+         runtime = [Benchmark::measure{ perform_action_without_benchmark }.real, 0.0001].max
+         log_message = "Completed in #{sprintf("%.5f", runtime)} (#{(1 / runtime).floor} reqs/sec)"
+         log_message << rendering_runtime(runtime) if @rendering_runtime
+         log_message << active_record_runtime(runtime) if Object.const_defined?("ActiveRecord") && ActiveRecord::Base.connected?
+         self.class.custom_benchmarks.each do |benchmark|
+           log_message << benchmark.call(runtime)
+         end
+         log_message << " | #{headers["Status"]}"
+         log_message << " [#{complete_request_uri rescue "unknown"}]"
+         logger.info(log_message)
+       end
+     end
+   end
+ end
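The flow of perform_action_with_custom_benchmark can be sketched outside Rails as a simplified, hypothetical simulation: the registered blocks are called in order with the total runtime, and their return values are appended to the message before the status.

```ruby
# Simplified simulation of how the log line is assembled: each
# registered block is called with the total runtime and its return
# value is appended to the message, before the status.
benchmarks = []
benchmarks << lambda { |runtime| " | PID: #{Process.pid}" }
benchmarks << lambda { |runtime| " | Search: 0.16429 (40%)" }

runtime = 0.40306
log_message = "Completed in #{sprintf("%.5f", runtime)} (#{(1 / runtime).floor} reqs/sec)"
benchmarks.each { |b| log_message << b.call(runtime) }
log_message << " | 200 OK"
```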
metadata ADDED
@@ -0,0 +1,44 @@
+ --- !ruby/object:Gem::Specification
+ rubygems_version: 0.8.10
+ specification_version: 1
+ name: custom_benchmarks
+ version: !ruby/object:Gem::Version
+   version: 0.0.1
+ date: 2006-10-31
+ summary: Easily allows custom information to be included in the benchmark log line at the end of each request.
+ require_paths:
+ - lib
+ email: tyler.kovacs@zvents.com
+ homepage: http://blog.zvents.com/2006/10/31/rails-plugin-custom-benchmarks
+ rubyforge_project:
+ description:
+ autorequire: custom_benchmarks
+ default_executable:
+ bindir: bin
+ has_rdoc: true
+ required_ruby_version: !ruby/object:Gem::Version::Requirement
+   requirements:
+   - - ">"
+     - !ruby/object:Gem::Version
+       version: 0.0.0
+   version:
+ platform: ruby
+ authors:
+ - Tyler Kovacs
+ files:
+ - lib/custom_benchmarks.rb
+ - README
+ test_files: []
+
+ rdoc_options: []
+
+ extra_rdoc_files:
+ - README
+ executables: []
+
+ extensions: []
+
+ requirements: []
+
+ dependencies: []
+