fluent-plugin-perf-tools 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (98)
  1. checksums.yaml +7 -0
  2. data/.gitignore +15 -0
  3. data/.rubocop.yml +26 -0
  4. data/.ruby-version +1 -0
  5. data/CHANGELOG.md +5 -0
  6. data/CODE_OF_CONDUCT.md +84 -0
  7. data/Gemfile +5 -0
  8. data/LICENSE.txt +21 -0
  9. data/README.md +43 -0
  10. data/Rakefile +17 -0
  11. data/bin/console +15 -0
  12. data/bin/setup +8 -0
  13. data/fluent-plugin-perf-tools.gemspec +48 -0
  14. data/lib/fluent/plugin/in_perf_tools.rb +42 -0
  15. data/lib/fluent/plugin/perf_tools/cachestat.rb +65 -0
  16. data/lib/fluent/plugin/perf_tools/command.rb +30 -0
  17. data/lib/fluent/plugin/perf_tools/version.rb +9 -0
  18. data/lib/fluent/plugin/perf_tools.rb +11 -0
  19. data/perf-tools/LICENSE +339 -0
  20. data/perf-tools/README.md +205 -0
  21. data/perf-tools/bin/bitesize +1 -0
  22. data/perf-tools/bin/cachestat +1 -0
  23. data/perf-tools/bin/execsnoop +1 -0
  24. data/perf-tools/bin/funccount +1 -0
  25. data/perf-tools/bin/funcgraph +1 -0
  26. data/perf-tools/bin/funcslower +1 -0
  27. data/perf-tools/bin/functrace +1 -0
  28. data/perf-tools/bin/iolatency +1 -0
  29. data/perf-tools/bin/iosnoop +1 -0
  30. data/perf-tools/bin/killsnoop +1 -0
  31. data/perf-tools/bin/kprobe +1 -0
  32. data/perf-tools/bin/opensnoop +1 -0
  33. data/perf-tools/bin/perf-stat-hist +1 -0
  34. data/perf-tools/bin/reset-ftrace +1 -0
  35. data/perf-tools/bin/syscount +1 -0
  36. data/perf-tools/bin/tcpretrans +1 -0
  37. data/perf-tools/bin/tpoint +1 -0
  38. data/perf-tools/bin/uprobe +1 -0
  39. data/perf-tools/deprecated/README.md +1 -0
  40. data/perf-tools/deprecated/execsnoop-proc +150 -0
  41. data/perf-tools/deprecated/execsnoop-proc.8 +80 -0
  42. data/perf-tools/deprecated/execsnoop-proc_example.txt +46 -0
  43. data/perf-tools/disk/bitesize +175 -0
  44. data/perf-tools/examples/bitesize_example.txt +63 -0
  45. data/perf-tools/examples/cachestat_example.txt +58 -0
  46. data/perf-tools/examples/execsnoop_example.txt +153 -0
  47. data/perf-tools/examples/funccount_example.txt +126 -0
  48. data/perf-tools/examples/funcgraph_example.txt +2178 -0
  49. data/perf-tools/examples/funcslower_example.txt +110 -0
  50. data/perf-tools/examples/functrace_example.txt +341 -0
  51. data/perf-tools/examples/iolatency_example.txt +350 -0
  52. data/perf-tools/examples/iosnoop_example.txt +302 -0
  53. data/perf-tools/examples/killsnoop_example.txt +62 -0
  54. data/perf-tools/examples/kprobe_example.txt +379 -0
  55. data/perf-tools/examples/opensnoop_example.txt +47 -0
  56. data/perf-tools/examples/perf-stat-hist_example.txt +149 -0
  57. data/perf-tools/examples/reset-ftrace_example.txt +88 -0
  58. data/perf-tools/examples/syscount_example.txt +297 -0
  59. data/perf-tools/examples/tcpretrans_example.txt +93 -0
  60. data/perf-tools/examples/tpoint_example.txt +210 -0
  61. data/perf-tools/examples/uprobe_example.txt +321 -0
  62. data/perf-tools/execsnoop +292 -0
  63. data/perf-tools/fs/cachestat +167 -0
  64. data/perf-tools/images/perf-tools_2016.png +0 -0
  65. data/perf-tools/iolatency +296 -0
  66. data/perf-tools/iosnoop +296 -0
  67. data/perf-tools/kernel/funccount +146 -0
  68. data/perf-tools/kernel/funcgraph +259 -0
  69. data/perf-tools/kernel/funcslower +248 -0
  70. data/perf-tools/kernel/functrace +192 -0
  71. data/perf-tools/kernel/kprobe +270 -0
  72. data/perf-tools/killsnoop +263 -0
  73. data/perf-tools/man/man8/bitesize.8 +70 -0
  74. data/perf-tools/man/man8/cachestat.8 +111 -0
  75. data/perf-tools/man/man8/execsnoop.8 +104 -0
  76. data/perf-tools/man/man8/funccount.8 +76 -0
  77. data/perf-tools/man/man8/funcgraph.8 +166 -0
  78. data/perf-tools/man/man8/funcslower.8 +129 -0
  79. data/perf-tools/man/man8/functrace.8 +123 -0
  80. data/perf-tools/man/man8/iolatency.8 +116 -0
  81. data/perf-tools/man/man8/iosnoop.8 +169 -0
  82. data/perf-tools/man/man8/killsnoop.8 +100 -0
  83. data/perf-tools/man/man8/kprobe.8 +162 -0
  84. data/perf-tools/man/man8/opensnoop.8 +113 -0
  85. data/perf-tools/man/man8/perf-stat-hist.8 +111 -0
  86. data/perf-tools/man/man8/reset-ftrace.8 +49 -0
  87. data/perf-tools/man/man8/syscount.8 +96 -0
  88. data/perf-tools/man/man8/tcpretrans.8 +93 -0
  89. data/perf-tools/man/man8/tpoint.8 +140 -0
  90. data/perf-tools/man/man8/uprobe.8 +168 -0
  91. data/perf-tools/misc/perf-stat-hist +223 -0
  92. data/perf-tools/net/tcpretrans +311 -0
  93. data/perf-tools/opensnoop +280 -0
  94. data/perf-tools/syscount +192 -0
  95. data/perf-tools/system/tpoint +232 -0
  96. data/perf-tools/tools/reset-ftrace +123 -0
  97. data/perf-tools/user/uprobe +390 -0
  98. metadata +349 -0
@@ -0,0 +1,150 @@
+ #!/usr/bin/perl
+ #
+ # execsnoop - trace process exec() with arguments. /proc version.
+ # Written using Linux ftrace.
+ #
+ # This shows the execution of new processes, especially short-lived ones that
+ # can be missed by sampling tools such as top(1).
+ #
+ # USAGE: ./execsnoop [-h] [-n name]
+ #
+ # REQUIREMENTS: FTRACE CONFIG, sched:sched_process_exec tracepoint (you may
+ # already have these on recent kernels), and Perl.
+ #
+ # This traces exec() from the fork()->exec() sequence, which means it won't
+ # catch new processes that only fork(), and, it will catch processes that
+ # re-exec. This instruments sched:sched_process_exec without buffering, and then
+ # in user-space (this program) reads PPID and process arguments asynchronously
+ # from /proc.
+ #
+ # If the process traced is very short-lived, this program may miss reading
+ # arguments and PPID details. In that case, "<?>" and "?" will be printed
+ # respectively. This program is best-effort, and should be improved in the
+ # future when other kernel capabilities are made available. If you need a
+ # more reliable tool now, then consider other tracing alternatives (eg,
+ # SystemTap). This tool is really a proof of concept to see what ftrace can
+ # currently do.
+ #
+ # From perf-tools: https://github.com/brendangregg/perf-tools
+ #
+ # See the execsnoop(8) man page (in perf-tools) for more info.
+ #
+ # COPYRIGHT: Copyright (c) 2014 Brendan Gregg.
+ #
+ # This program is free software; you can redistribute it and/or
+ # modify it under the terms of the GNU General Public License
+ # as published by the Free Software Foundation; either version 2
+ # of the License, or (at your option) any later version.
+ #
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program; if not, write to the Free Software Foundation,
+ # Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ #
+ # (http://www.gnu.org/copyleft/gpl.html)
+ #
+ # 07-Jul-2014 Brendan Gregg Created this.
+
+ use strict;
+ use warnings;
+ use POSIX qw(strftime);
+ use Getopt::Long;
+ my $tracing = "/sys/kernel/debug/tracing";
+ my $flock = "/var/tmp/.ftrace-lock";
+ my $tpdir = "sched/sched_process_exec";
+ my $tptext = $tpdir; $tptext =~ s/\//:/;
+ local $SIG{INT} = \&cleanup;
+ local $SIG{QUIT} = \&cleanup;
+ local $SIG{TERM} = \&cleanup;
+ local $SIG{PIPE} = \&cleanup;
+ local $SIG{HUP} = \&cleanup;
+ $| = 1;
+
+ ### options
+ my ($name, $help);
+ GetOptions("name=s" => \$name,
+            "help"   => \$help)
+     or usage();
+ usage() if $help;
+ sub usage {
+     print STDERR "USAGE: execsnoop [-h] [-n name]\n";
+     print STDERR " eg,\n";
+     print STDERR " execsnoop -n ls # show \"ls\" cmds only.\n";
+     exit;
+ }
+
+ sub ldie {
+     unlink $flock;
+     die @_;
+ }
+
+ sub writeto {
+     my ($string, $file) = @_;
+     open FILE, ">$file" or return 0;
+     print FILE $string or return 0;
+     close FILE or return 0;
+ }
+
+ ### check permissions
+ chdir "$tracing" or ldie "ERROR: accessing tracing. Root? Kernel has FTRACE?" .
+     "\ndebugfs mounted? (mount -t debugfs debugfs /sys/kernel/debug)";
+
+ ### ftrace lock
+ if (-e $flock) {
+     open FLOCK, $flock; my $fpid = <FLOCK>; chomp $fpid; close FLOCK;
+     die "ERROR: ftrace may be in use by PID $fpid ($flock)";
+ }
+ writeto "$$", $flock or die "ERROR: unable to write $flock.";
+
+ ### setup and begin tracing
+ writeto "nop", "current_tracer" or ldie "ERROR: disabling current_tracer.";
+ writeto "1", "events/$tpdir/enable" or ldie "ERROR: enabling tracepoint " .
+     "\"$tptext\" (tracepoint missing in this kernel version?)";
+ open TPIPE, "trace_pipe" or warn "ERROR: opening trace_pipe.";
+ printf "%-8s %6s %6s %s\n", "TIME", "PID", "PPID", "ARGS";
+
+ while (<TPIPE>) {
+     my ($taskpid, $rest) = split;
+     my ($task, $pid) = $taskpid =~ /(.*)-(\d+)/;
+
+     next if (defined $name and $name ne $task);
+
+     my $args = "$task <?>";
+     if (open CMDLINE, "/proc/$pid/cmdline") {
+         my $arglist = <CMDLINE>;
+         if (defined $arglist) {
+             $arglist =~ s/\000/ /g;
+             $args = $arglist;
+         }
+         close CMDLINE;
+     }
+
+     my $ppid = "?";
+     if (open STAT, "/proc/$pid/stat") {
+         my $fields = <STAT>;
+         if (defined $fields) {
+             $ppid = (split ' ', $fields)[3];
+         }
+         close STAT;
+     }
+
+     my $now = strftime "%H:%M:%S", localtime;
+     printf "%-8s %6s %6s %s\n", $now, $pid, $ppid, $args;
+ }
+
+ ### end tracing
+ cleanup();
+
+ sub cleanup {
+     print "\nEnding tracing...\n";
+     close TPIPE;
+     writeto "0", "events/$tpdir/enable" or
+         ldie "ERROR: disabling \"$tptext\"";
+     writeto "", "trace";
+     unlink $flock;
+     exit;
+ }
@@ -0,0 +1,80 @@
+ .TH execsnoop\-proc 8 "2014-07-07" "USER COMMANDS"
+ .SH NAME
+ execsnoop\-proc \- trace process exec() with arguments. Uses Linux ftrace. /proc version.
+ .SH SYNOPSIS
+ .B execsnoop\-proc
+ [\-h] [\-n name]
+ .SH DESCRIPTION
+ execsnoop\-proc traces process execution, showing PID, PPID, and argument details
+ if possible.
+
+ This traces exec() from the fork()->exec() sequence, which means it won't
+ catch new processes that only fork(), and, it will catch processes that
+ re-exec. This instruments sched:sched_process_exec without buffering, and then
+ in user-space (this program) reads PPID and process arguments asynchronously
+ from /proc.
+
+ If the process traced is very short-lived, this program may miss reading
+ arguments and PPID details. In that case, "<?>" and "?" will be printed
+ respectively.
+
+ This program is best-effort (a hack), and should be improved in the future when
+ other kernel capabilities are made available. It may be useful in the meantime.
+ If you need a more reliable tool now, consider other tracing alternatives (eg,
+ SystemTap). This tool is really a proof of concept to see what ftrace can
+ currently do.
+
+ See execsnoop(8) for another version that reads arguments from registers
+ instead of /proc.
+
+ Since this uses ftrace, only the root user can use this tool.
+ .SH REQUIREMENTS
+ FTRACE CONFIG and the sched:sched_process_exec tracepoint, which you may already
+ have enabled and available on recent kernels, and Perl.
+ .SH OPTIONS
+ \-n name
+ Only show processes that match this name. This is filtered in user space.
+ .TP
+ \-h
+ Print usage message.
+ .SH EXAMPLES
+ .TP
+ Trace all new processes and arguments (if possible):
+ .B execsnoop\-proc
+ .TP
+ Trace all new processes with process name "sed":
+ .B execsnoop\-proc -n sed
+ .SH FIELDS
+ .TP
+ TIME
+ Time of process exec(): HH:MM:SS.
+ .TP
+ PID
+ Process ID.
+ .TP
+ PPID
+ Parent process ID, if this was able to be read (may be missed for short-lived
+ processes). If it is unable to be read, "?" is printed.
+ .TP
+ ARGS
+ Command line arguments, if these were able to be read in time (may be missed
+ for short-lived processes). If they are unable to be read, "<?>" is printed.
+ .SH OVERHEAD
+ This reads and processes exec() events in user space as they occur. Since the
+ rate of exec() is expected to be low (< 500/s), the overhead is expected to
+ be small or negligible.
+ .SH SOURCE
+ This is from the perf-tools collection.
+ .IP
+ https://github.com/brendangregg/perf-tools
+ .PP
+ Also look under the examples directory for a text file containing example
+ usage, output, and commentary for this tool.
+ .SH OS
+ Linux
+ .SH STABILITY
+ Unstable - in development.
+ .SH AUTHOR
+ Brendan Gregg
+ .SH SEE ALSO
+ execsnoop(8), top(1)
@@ -0,0 +1,46 @@
+ Demonstrations of execsnoop-proc, the Linux ftrace version.
+
+ Here's execsnoop showing what's really executed by "man ls":
+
+ # ./execsnoop
+ TIME        PID   PPID ARGS
+ 17:52:37  22406  25781 man ls
+ 17:52:37  22413  22406 preconv -e UTF-8
+ 17:52:37  22416  22406 pager -s
+ 17:52:37  22415  22406 /bin/sh /usr/bin/nroff -mandoc -rLL=162n -rLT=162n -Tutf8
+ 17:52:37  22414  22406 tbl
+ 17:52:37  22419  22418 locale charmap
+ 17:52:37  22420  22415 groff -mtty-char -Tutf8 -mandoc -rLL=162n -rLT=162n
+ 17:52:37  22421  22420 troff -mtty-char -mandoc -rLL=162n -rLT=162n -Tutf8
+ 17:52:37  22422  22420 grotty
+
+
+ These are short-lived processes, where the argument and PPID details are often
+ missed by execsnoop:
+
+ # ./execsnoop
+ TIME        PID   PPID ARGS
+ 18:00:33  26750   1961 multilog <?>
+ 18:00:33  26749   1972 multilog <?>
+ 18:00:33  26749   1972 multilog <?>
+ 18:00:33  26751      ? mkdir <?>
+ 18:00:33  26749   1972 multilog <?>
+ 18:00:33  26752      ? chown <?>
+ 18:00:33  26750   1961 multilog <?>
+ 18:00:33  26750   1961 multilog <?>
+ 18:00:34  26753   1961 multilog <?>
+ 18:00:34  26754   1972 multilog <?>
+ [...]
+
+ This will be fixed in a later version, but likely requires some kernel or
+ tracer changes first (fetching cmdline as the probe fires).
+
+
+ The previous examples were on Linux 3.14 and 3.16 kernels. Here's a 3.2 system
+ I'm running:
+
+ # ./execsnoop
+ ERROR: enabling tracepoint "sched:sched_process_exec" (tracepoint missing in this kernel version?) at ./execsnoop line 78.
+
+ This kernel version is missing the sched_process_exec probe, which is pretty
+ annoying.
@@ -0,0 +1,175 @@
+ #!/bin/bash
+ #
+ # bitesize - show disk I/O size as a histogram.
+ # Written using Linux perf_events (aka "perf").
+ #
+ # This can be used to characterize the distribution of block device I/O
+ # sizes. To study I/O in more detail, see iosnoop(8).
+ #
+ # USAGE: bitesize [-h] [-b buckets] [seconds]
+ # eg,
+ #     ./bitesize 10
+ #
+ # Run "bitesize -h" for full usage.
+ #
+ # REQUIREMENTS: perf_events and block:block_rq_issue tracepoint, which you may
+ # already have on recent kernels.
+ #
+ # This uses multiple counting tracepoints with different filters, one for each
+ # histogram bucket. While this is summarized in-kernel, the use of multiple
+ # tracepoints does add additional overhead, which is more evident if you add
+ # more buckets. In the future this functionality will be available in an
+ # efficient way in the kernel, and this tool can be rewritten.
+ #
+ # From perf-tools: https://github.com/brendangregg/perf-tools
+ #
+ # COPYRIGHT: Copyright (c) 2014 Brendan Gregg.
+ #
+ # This program is free software; you can redistribute it and/or
+ # modify it under the terms of the GNU General Public License
+ # as published by the Free Software Foundation; either version 2
+ # of the License, or (at your option) any later version.
+ #
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program; if not, write to the Free Software Foundation,
+ # Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ #
+ # (http://www.gnu.org/copyleft/gpl.html)
+ #
+ # 22-Jul-2014 Brendan Gregg Created this.
+
+ duration=0
+ buckets=(1 8 64 128)
+ secsz=512
+ trap ':' INT QUIT TERM PIPE HUP
+
+ function usage {
+     cat <<-END >&2
+ USAGE: bitesize [-h] [-b buckets] [seconds]
+     -b buckets      # specify histogram buckets (Kbytes)
+     -h              # this usage message
+   eg,
+     bitesize        # trace I/O size until Ctrl-C
+     bitesize 10     # trace I/O size for 10 seconds
+     bitesize -b "8 16 32"   # specify custom bucket points
+ END
+     exit
+ }
+
+ function die {
+     echo >&2 "$@"
+     exit 1
+ }
+
+ ### process options
+ while getopts b:h opt
+ do
+     case $opt in
+     b)   buckets=($OPTARG) ;;
+     h|?) usage ;;
+     esac
+ done
+ shift $(( $OPTIND - 1 ))
+ tpoint=block:block_rq_issue
+ var=nr_sector
+ duration=$1
+
+ ### convert buckets (Kbytes) to disk sectors
+ i=0
+ sectors=(${buckets[*]})
+ ((max_i = ${#buckets[*]} - 1))
+ while (( i <= max_i )); do
+     (( sectors[$i] = ${sectors[$i]} * 1024 / $secsz ))
+     # avoid negative array index errors on old versions of bash
+     if (( i > 0 )); then
+         if (( ${sectors[$i]} <= ${sectors[$i - 1]} )); then
+             die "ERROR: bucket list must increase in size."
+         fi
+     fi
+     (( i++ ))
+ done
+
+ ### build list of tracepoints and filters for each histogram bucket
+ max_b=${buckets[$max_i]}
+ max_s=${sectors[$max_i]}
+ tpoints="-e $tpoint --filter \"$var < ${sectors[0]}\""
+ awkarray=
+ i=0
+ while (( i < max_i )); do
+     tpoints="$tpoints -e $tpoint --filter \"$var >= ${sectors[$i]} && "
+     tpoints="$tpoints $var < ${sectors[$i + 1]}\""
+     awkarray="$awkarray buckets[$i]=${buckets[$i]};"
+     (( i++ ))
+ done
+ awkarray="$awkarray buckets[$max_i]=${buckets[$max_i]};"
+ tpoints="$tpoints -e $tpoint --filter \"$var >= ${sectors[$max_i]}\""
+
+ ### prepare to run
+ if (( duration )); then
+     etext="for $duration seconds"
+     cmd="sleep $duration"
+ else
+     etext="until Ctrl-C"
+     cmd="sleep 999999"
+ fi
+ echo "Tracing block I/O size (bytes), $etext..."
+
+ ### run perf
+ out="-o /dev/stdout"    # a workaround needed in linux 3.2; not by 3.4.15
+ stat=$(eval perf stat $tpoints -a $out $cmd 2>&1)
+ if (( $? != 0 )); then
+     echo >&2 "ERROR running perf:"
+     echo >&2 "$stat"
+     exit
+ fi
+
+ ### find max value for ASCII histogram
+ most=$(echo "$stat" | awk -v tpoint=$tpoint '
+     $2 == tpoint { gsub(/,/, ""); if ($1 > m) { m = $1 } }
+     END { print m }'
+ )
+
+ ### process output
+ echo
+ echo "$stat" | awk -v tpoint=$tpoint -v max_i=$max_i -v most=$most '
+     function star(sval, smax, swidth) {
+         stars = ""
+         # using int avoids an error on gawk
+         if (int(smax) == 0) return ""
+         for (si = 0; si < (swidth * sval / smax); si++) {
+             stars = stars "#"
+         }
+         return stars
+     }
+     BEGIN {
+         '"$awkarray"'
+         printf(" %-15s: %-8s %s\n", "Kbytes", "I/O",
+             "Distribution")
+     }
+     /Performance counter stats/ { i = -1 }
+     # reverse order of rule set is important
+     { ok = 0 }
+     $2 == tpoint { num = $1; gsub(/,/, "", num); ok = 1 }
+     ok && i >= max_i {
+         printf(" %10.1f -> %-10s: %-8s |%-38s|\n",
+             buckets[i], "", num, star(num, most, 38))
+         next
+     }
+     ok && i >= 0 && i < max_i {
+         printf(" %10.1f -> %-10.1f: %-8s |%-38s|\n",
+             buckets[i], buckets[i+1] - 0.1, num,
+             star(num, most, 38))
+         i++
+         next
+     }
+     ok && i == -1 {
+         printf(" %10s -> %-10.1f: %-8s |%-38s|\n", "",
+             buckets[0] - 0.1, num, star(num, most, 38))
+         i++
+     }
+ '
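The bucket handling above — Kbytes converted to 512-byte sectors, boundaries required to strictly increase, and one counting filter per bucket plus an underflow and an overflow bucket — can be sketched as follows (a hypothetical `bucket_filters` helper for illustration, not code from the package):

```python
SECTOR_BYTES = 512  # disk sector size assumed by bitesize ($secsz)

def bucket_filters(buckets_kb, var="nr_sector"):
    """Build the perf --filter expressions that bitesize generates:
    one per histogram bucket, bounded in units of disk sectors."""
    sectors = [kb * 1024 // SECTOR_BYTES for kb in buckets_kb]
    if any(hi <= lo for lo, hi in zip(sectors, sectors[1:])):
        raise ValueError("bucket list must increase in size")
    filters = [f"{var} < {sectors[0]}"]              # underflow bucket
    filters += [f"{var} >= {lo} && {var} < {hi}"     # interior buckets
                for lo, hi in zip(sectors, sectors[1:])]
    filters.append(f"{var} >= {sectors[-1]}")        # overflow bucket
    return filters
```

With the default buckets (1 8 64 128), this yields five filters, from `nr_sector < 2` up to `nr_sector >= 256`.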
@@ -0,0 +1,63 @@
+ Demonstrations of bitesize, the Linux perf_events version.
+
+
+ bitesize traces block I/O issued, and reports a histogram of I/O size. By
+ default five buckets are used to gather statistics on common I/O sizes:
+
+ # ./bitesize
+ Tracing block I/O size (bytes), until Ctrl-C...
+ ^C
+  Kbytes         : I/O      Distribution
+             -> 0.9       : 0        |                                      |
+         1.0 -> 7.9       : 38       |#                                     |
+         8.0 -> 63.9      : 10108    |######################################|
+        64.0 -> 127.9     : 13       |#                                     |
+       128.0 ->           : 1        |#                                     |
+
+ In this case, most of the I/O was between 8 and 63.9 Kbytes. The "63.9"
+ really means "less than 64".
+
+
+ Specifying custom buckets to examine the I/O size in more detail:
+
+ # ./bitesize -b "8 16 24 32"
+ Tracing block I/O size (bytes), until Ctrl-C...
+ ^C
+  Kbytes         : I/O      Distribution
+             -> 7.9       : 89       |#                                     |
+         8.0 -> 15.9      : 14665    |######################################|
+        16.0 -> 23.9      : 657      |##                                    |
+        24.0 -> 31.9      : 661      |##                                    |
+        32.0 ->           : 376      |#                                     |
+
+ The I/O is mostly between 8 and 15.9 Kbytes.
+
+ It's probably 8 Kbytes. Checking:
+
+ # ./bitesize -b "8 9"
+ Tracing block I/O size (bytes), until Ctrl-C...
+ ^C
+  Kbytes         : I/O      Distribution
+             -> 7.9       : 62       |#                                     |
+         8.0 -> 8.9       : 11719    |######################################|
+         9.0 ->           : 1358     |#####                                 |
+
+ It is.
+
+ The overhead of this tool is relative to the number of buckets used, so use
+ only as many as necessary.
+
+ To study this I/O in more detail, I can use iosnoop(8) and capture it to a file
+ for post-processing.
+
+
+ Use -h to print the USAGE message:
+
+ # ./bitesize -h
+ USAGE: bitesize [-h] [-b buckets] [seconds]
+     -b buckets      # specify histogram buckets (Kbytes)
+     -h              # this usage message
+   eg,
+     bitesize        # trace I/O size until Ctrl-C
+     bitesize 10     # trace I/O size for 10 seconds
+     bitesize -b "8 16 32"   # specify custom bucket points
@@ -0,0 +1,58 @@
+ Demonstrations of cachestat, the Linux ftrace version.
+
+
+ Here is some sample output showing file system cache statistics, followed by
+ the workload that caused it:
+
+ # ./cachestat -t
+ Counting cache functions... Output every 1 seconds.
+ TIME         HITS   MISSES  DIRTIES    RATIO   BUFFERS_MB  CACHE_MB
+ 08:28:57      415        0        0   100.0%            1       191
+ 08:28:58      411        0        0   100.0%            1       191
+ 08:28:59      362       97        0    78.9%            0         8
+ 08:29:00      411        0        0   100.0%            0         9
+ 08:29:01      775    20489        0     3.6%            0        89
+ 08:29:02      411        0        0   100.0%            0        89
+ 08:29:03     6069        0        0   100.0%            0        89
+ 08:29:04    15249        0        0   100.0%            0        89
+ 08:29:05      411        0        0   100.0%            0        89
+ 08:29:06      411        0        0   100.0%            0        89
+ 08:29:07      411        0        3   100.0%            0        89
+ [...]
+
+ I used the -t option to include the TIME column, to make describing the output
+ easier.
+
+ The workload was:
+
+ # echo 1 > /proc/sys/vm/drop_caches; sleep 2; cksum 80m; sleep 2; cksum 80m
+
+ At 8:28:58, the page cache was dropped by the first command, which can be seen
+ by the drop in size for "CACHE_MB" (page cache size) from 191 Mbytes to 8.
+ After a 2 second sleep, a cksum command was issued at 8:29:01, for an 80 Mbyte
+ file (called "80m"), which caused a total of ~20,400 misses ("MISSES" column),
+ and the page cache size to grow by 80 Mbytes. The hit ratio during this dropped
+ to 3.6%. Finally, after another 2 second sleep, at 8:29:03 the cksum command
+ was run a second time, this time hitting entirely from cache.
+
+ Instrumenting all file system cache accesses does cost some overhead, and this
+ tool might slow your target system by 2% or so. Test before use if this is a
+ concern.
+
+ This tool also uses dynamic tracing, and is tied to Linux kernel implementation
+ details. If it doesn't work for you, it probably needs fixing.
+
+
+ Use -h to print the USAGE message:
+
+ # ./cachestat -h
+ USAGE: cachestat [-Dht] [interval]
+     -D              # print debug counters
+     -h              # this usage message
+     -t              # include timestamp
+     interval        # output interval in secs (default 1)
+   eg,
+     cachestat       # show stats every second
+     cachestat 5     # show stats every 5 seconds
+
+ See the man page and example file for more info.
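The RATIO column above is simply hits / (hits + misses), expressed as a percentage. A quick check against the sample output (illustrative snippet, not part of the tool):

```python
def hit_ratio(hits, misses):
    # Cache hit ratio in percent, matching cachestat's RATIO column.
    # An interval with no hits or misses is treated here as 100%.
    total = hits + misses
    return 100.0 * hits / total if total else 100.0

# The 08:29:01 sample: 775 hits, 20489 misses -> about 3.6%
```

This reproduces the sample rows: 362 hits against 97 misses gives 78.9%, and 775 against 20489 gives 3.6%.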