rspec-watchdog 0.1.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: f70de9ae9f5063183759f08a354cd314415f1f361853dd8ce5c2cac863c319b7
+   data.tar.gz: 537a8463433e30ad9fd332d98ed5dfab2e8712ed06dd9ce52fec65f694b20b1e
+ SHA512:
+   metadata.gz: 43b4fb491cd5e7a08812275a254ee61bc114f35d4dc6aee91cd76af828288d3a309ec2f789c8ce4e96eec138d7ed290e086916b4c253c2332978217d6e7397e2
+   data.tar.gz: e7c13ed73adb19d2d6f8c6029e2db4fbe02f675eb48667350af302c371ae74940d3201354a32c1937d9b6248f3c0e3548df18437127944065af03b683ddd2d0e
data/README.md ADDED
@@ -0,0 +1,140 @@
+ # RspecWatchdog
+
+ RspecWatchdog is a gem designed to track the performance and reliability of your RSpec tests. It reports metrics such as execution times, test failures, and flaky tests. With seamless integration into your existing RSpec setup, you can monitor test performance and diagnose flaky tests, improving both the efficiency and the reliability of your suite.
+
+ ## Motivation
+
+ Testing is a crucial part of the development process, but it can be challenging to track and maintain an efficient test suite. RspecWatchdog offers a simple way to monitor test performance, identify slow tests, and spot flaky tests, helping you improve test reliability and speed. By integrating with rspec-rebound, this gem gives you insights into tests that frequently fail, allowing you to address instability in your suite.
+
+ ## Features
+
+ - **Performance tracking**: Measure the execution time of individual tests to identify slow or inefficient tests
+ - **Test statistics**: View summary metrics, such as total runs, failures, and average test times
+ - **Flaky test detection**: Integrates with rspec-rebound to help you spot flaky tests that fail intermittently
+ - **Minimal dependencies**: The gem requires only RSpec and rspec-rebound, making it easy to integrate into any project that uses RSpec for testing
+ - **Optional dashboard integration**: For Rails users, RspecWatchdog can send data to [Watchdog::Dashboard](https://github.com/windmotion-io/watchdog-dashboard) for visualization
+
+ ## Installation
+
+ ### Add the Gem to Your Project
+
+ Add rspec-watchdog to your Gemfile:
+
+ ```ruby
+ gem 'rspec-watchdog'
+ ```
+
+ Then, run:
+
+ ```bash
+ bundle install
+ ```
+
+ ## Configuration
+
+ Configure rspec-watchdog in your spec_helper.rb or rails_helper.rb.
+
+ ### Basic Configuration (RSpec-only Projects)
+
+ In your spec/spec_helper.rb, add the following:
+
+ ```ruby
+ require "rspec/watchdog"
+
+ Rspec::Watchdog.configure do |config|
+   config.show_logs = true
+
+   # Set these only if using the dashboard integration
+   # config.watchdog_api_url = "http://your-app.com/watchdog/analytics"
+   # config.watchdog_api_token = "your_secret_token"
+ end
+
+ RSpec.configure do |config|
+   config.add_formatter(:progress)
+   config.add_formatter(SlowSpecFormatter)
+
+   # To enable flaky test detection, add the following:
+   config.flaky_spec_detection = true
+   config.flaky_test_callback = proc do |example|
+     example.metadata[:flaky] = true
+   end
+ end
+ ```
+
+ ### Configuration Options
+
+ #### `show_logs`
+
+ When set to `true`, this option enables additional logging for RSpec tests:
+
+ - These logs provide insights into test execution, including test runtimes and other relevant debugging information
+ - This can be useful for diagnosing slow tests or identifying issues during test runs
+
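+ Below is a trimmed sample of the summary printed when `show_logs` is enabled (the message formats come from `SlowSpecFormatter`; the numbers are illustrative and ANSI color codes are omitted):
+
+ ```
+ 🕒 Average time per example: 0.1234 seconds
+
+ 🚀 Fastest test: MyClass#fast_method (./spec/models/my_class_spec.rb) - 0.0009 seconds
+
+ 🐢 Slowest test: MyClass#slow_method (./spec/models/my_class_spec.rb) - 2.3100 seconds
+
+ 📊 Time Distribution Analysis:
+ ⚡ Ultra Fast (< 0.01s): 12 tests (40.0%)
+ 🚀 Fast (0.01s - 0.1s): 10 tests (33.33%)
+
+ 🛡️ Test Suite Stability:
+ Total Tests: 30
+ ✅ Passed: 29 (96.67%)
+ ❌ Failed: 1 (3.33%)
+ ⏳ Pending: 0 (0.0%)
+ ```
+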
+ #### `watchdog_api_url` (Optional)
+
+ This is the endpoint where test execution data is sent after the test run finishes:
+
+ - Only needed if you're using the dashboard integration
+ - Can point to your own server or a hosted instance of Watchdog Dashboard
+
+ #### `watchdog_api_token` (Optional)
+
+ This token is used to validate that the requests sent to the API are legitimate:
+
+ - Particularly useful when running tests in a CI/CD environment (e.g., GitHub Actions or CircleCI)
+ - Should match the token configured in your dashboard instance
+
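+ For reference, the formatter sends results in batches of up to 30 examples as a JSON `metrics` array, passing the token in the `Authorization` header. A sketch of a single entry, based on the fields built in `SlowSpecFormatter#send_to_api` (the values are illustrative):
+
+ ```ruby
+ # Serialized as: { metrics: [entry, ...] }.to_json
+ {
+   description: "MyClass#method_1 does something slow",
+   file_path: "./spec/my_class_spec.rb",
+   location: "./spec/my_class_spec.rb:10",
+   run_time: 2.31,
+   status: "passed",
+   error_message: nil,
+   timestamp: 1712534400, # Unix timestamp taken when the batch is built
+   flaky: false           # included when flaky spec detection is enabled
+ }
+ ```
+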
+ ## Usage
+
+ Once configured as shown above, RspecWatchdog hooks into your RSpec test suite through the SlowSpecFormatter, so you can start tracking your tests right away.
+
+ ### Running Your Tests
+
+ Simply run your tests as usual with RSpec:
+
+ ```bash
+ bundle exec rspec
+ ```
+
+ RspecWatchdog will capture metrics about your test runs and display them according to your configuration.
+
+ ### Output Example
+
+ RspecWatchdog will output information about your test runs directly to the console, for example:
+
+ ```
+ All examples sorted by run time (slowest to fastest):
+ MyClass#method_1 (./spec/my_class_spec.rb) - 2.31 seconds - ./spec/my_class_spec.rb:10
+ MyClass#method_2 (./spec/my_class_spec.rb) - 1.45 seconds - ./spec/my_class_spec.rb:25
+ ...
+ ```
+
+ ### Integration with Watchdog Dashboard (Optional)
+
+ ![Watchdog Dashboard](dashboard_1.png "Watchdog Dashboard")
+
+ For a more comprehensive visualization of your test metrics, RspecWatchdog can send data to [Watchdog::Dashboard](https://github.com/windmotion-io/watchdog-dashboard), a separate Rails engine that provides a visual interface.
+
+ To use this integration:
+
+ 1. Set up Watchdog Dashboard in your Rails application (see the [Watchdog Dashboard README](https://github.com/windmotion-io/watchdog-dashboard))
+ 2. Configure RspecWatchdog with the dashboard URL and API token:
+
+ ```ruby
+ Rspec::Watchdog.configure do |config|
+   config.show_logs = true
+   config.watchdog_api_url = "http://localhost:3000/watchdog/analytics"
+   config.watchdog_api_token = "your_secret_token" # Must match the dashboard token
+ end
+ ```
+
+ ## Integration with RSpec-Rebound
+
+ RspecWatchdog integrates with rspec-rebound (see the [rspec-rebound README](https://github.com/windmotion-io/rspec-rebound)) to track flaky tests. By enabling both gems in your project, you can easily spot tests that fail inconsistently, making it easier to identify root causes and improve the stability of your test suite.
+
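+ As a starting point, here is a minimal sketch that combines the two, reusing the `flaky_spec_detection` and `flaky_test_callback` options shown in the configuration section so that flaky examples are tagged before the formatter reports them:
+
+ ```ruby
+ require "rspec/watchdog" # also loads rspec-rebound
+
+ Rspec::Watchdog.configure do |config|
+   config.show_logs = true
+ end
+
+ RSpec.configure do |config|
+   config.add_formatter(SlowSpecFormatter)
+
+   # Tag flaky examples so they are reported with flaky: true
+   config.flaky_spec_detection = true
+   config.flaky_test_callback = proc do |example|
+     example.metadata[:flaky] = true
+   end
+ end
+ ```
+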
+ ## Contributing
+
+ We welcome contributions to RspecWatchdog! If you have ideas, suggestions, or find a bug, please open an issue or submit a pull request on GitHub.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
data/Rakefile ADDED
@@ -0,0 +1,4 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ task default: %i[]
data/dashboard_1.png ADDED
Binary file
data/lib/rspec/watchdog/slow_spec_formatter.rb ADDED
@@ -0,0 +1,245 @@
+ require 'rspec/core/formatters/base_text_formatter'
+ require 'net/http'
+ require 'json'
+
+ class SlowSpecFormatter
+   RSpec::Core::Formatters.register self, :dump_summary
+
+   def initialize(output)
+     @output = output
+     @show_logs = Rspec::Watchdog.config.show_logs
+     @flaky_spec_detection = RSpec.configuration.flaky_spec_detection
+     @watchdog_api_url = Rspec::Watchdog.config.watchdog_api_url
+     @watchdog_api_token = Rspec::Watchdog.config.watchdog_api_token
+   end
+
+   def dump_summary(summary)
+     return unless @show_logs
+
+     puts "\nAll examples sorted by run time (slowest to fastest):"
+
+     all_examples = summary.examples.map do |example|
+       {
+         description: example.full_description,
+         file_path: example.metadata[:file_path],
+         location: example.metadata[:location],
+         run_time: example.execution_result.run_time,
+         status: example.execution_result.status.to_s,
+         error_message: example.execution_result.exception ? example.execution_result.exception.message : nil,
+         # Set by the flaky_test_callback when flaky spec detection is enabled
+         flaky: example.metadata[:flaky] || false
+       }
+     end
+
+     sorted_examples = all_examples.sort_by { |ex| -ex[:run_time] }
+
+     sorted_examples.each do |ex|
+       puts "#{ex[:description]} (#{ex[:file_path]}) - #{ex[:run_time]} seconds - #{ex[:location]}"
+     end
+
+     calculate_average_time(summary)
+     fastest_test(sorted_examples)
+     slowest_test(sorted_examples)
+     percentiles(sorted_examples)
+     failed_tests(summary)
+     tests_grouped_by_file(sorted_examples)
+     tests_that_took_longer_than(sorted_examples, 2.0)
+     time_distribution_analysis(sorted_examples)
+     test_stability_analysis(summary)
+     execution_time_variance(sorted_examples)
+     temporal_complexity_analysis(sorted_examples)
+     test_dependency_analysis(sorted_examples)
+
+     return unless @watchdog_api_url && @watchdog_api_token
+
+     send_to_api(sorted_examples)
+   end
+
+   private
+
+   def calculate_average_time(summary)
+     average_time = summary.duration / summary.example_count
+     puts "\n🕒 \e[34mAverage time per example:\e[0m #{format('%.4f', average_time)} seconds"
+   end
+
+   def fastest_test(sorted_examples)
+     fastest = sorted_examples.last
+     puts "\n🚀 \e[32mFastest test:\e[0m #{fastest[:description]} (#{fastest[:file_path]}) - #{format('%.4f', fastest[:run_time])} seconds"
+   end
+
+   def slowest_test(sorted_examples)
+     slowest = sorted_examples.first
+     puts "\n🐢 \e[31mSlowest test:\e[0m #{slowest[:description]} (#{slowest[:file_path]}) - #{format('%.4f', slowest[:run_time])} seconds"
+   end
+
+   def percentiles(sorted_examples)
+     percentiles = [0.25, 0.5, 0.75].map do |p|
+       index = (sorted_examples.size * p).round - 1
+       example = sorted_examples[index]
+       {
+         percentile: (p * 100).to_i,
+         description: example[:description],
+         file_path: example[:file_path],
+         run_time: example[:run_time]
+       }
+     end
+     percentiles.each do |p|
+       puts "\n📊 \e[35m#{p[:percentile]}th percentile:\e[0m #{p[:description]} (#{p[:file_path]}) - #{format('%.4f', p[:run_time])} seconds"
+     end
+   end
+
+   def failed_tests(summary)
+     failed = summary.examples.select { |example| example.execution_result.status == :failed }
+     puts "\n❌ \e[31mFailed tests:\e[0m"
+     failed.each do |example|
+       puts "\e[31m#{example.full_description} (#{example.metadata[:file_path]}) - #{example.execution_result.run_time} seconds\e[0m"
+       puts " \e[33mLocation:\e[0m #{example.metadata[:location]}"
+       puts " \e[31mFailure message:\e[0m #{example.execution_result.exception.message}"
+     end
+   end
+
+   def tests_grouped_by_file(sorted_examples)
+     grouped_by_file = sorted_examples.group_by { |ex| ex[:file_path] }
+     puts "\n📁 \e[36mTests grouped by file:\e[0m"
+     grouped_by_file.each do |file_path, examples|
+       puts "\n\e[36mFile:\e[0m #{file_path}"
+       examples.each do |ex|
+         puts " 🧪 #{ex[:description]} - #{format('%.4f', ex[:run_time])} seconds"
+         puts " 📍 Location: #{ex[:location]}"
+         puts ' '
+       end
+     end
+   end
+
+   def tests_that_took_longer_than(sorted_examples, threshold)
+     long_tests = sorted_examples.select { |ex| ex[:run_time] > threshold }
+     puts "\n⏳ \e[33mTests that took longer than #{threshold} seconds:\e[0m"
+     long_tests.each do |ex|
+       puts "\e[33m#{ex[:description]} (#{ex[:file_path]}) - #{format('%.4f', ex[:run_time])} seconds\e[0m"
+       puts " 📍 Location: #{ex[:location]}"
+     end
+   end
+
+   def time_distribution_analysis(sorted_examples)
+     total_tests = sorted_examples.size
+
+     categories = {
+       '⚡ Ultra Fast (< 0.01s)' => 0,
+       '🚀 Fast (0.01s - 0.1s)' => 0,
+       '🏃 Normal (0.1s - 0.5s)' => 0,
+       '🚶 Slow (0.5s - 1s)' => 0,
+       '🐢 Very Slow (> 1s)' => 0
+     }
+
+     sorted_examples.each do |ex|
+       case ex[:run_time]
+       when 0...0.01
+         categories['⚡ Ultra Fast (< 0.01s)'] += 1
+       when 0.01...0.1
+         categories['🚀 Fast (0.01s - 0.1s)'] += 1
+       when 0.1...0.5
+         categories['🏃 Normal (0.1s - 0.5s)'] += 1
+       when 0.5...1.0
+         categories['🚶 Slow (0.5s - 1s)'] += 1
+       else
+         categories['🐢 Very Slow (> 1s)'] += 1
+       end
+     end
+
+     puts "\n📊 \e[36mTime Distribution Analysis:\e[0m"
+     categories.each do |category, count|
+       percentage = (count.to_f / total_tests * 100).round(2)
+       puts "#{category}: #{count} tests (#{percentage}%)"
+     end
+   end
+
+   def test_stability_analysis(summary)
+     total_tests = summary.example_count
+     passed = summary.examples.select { |e| e.execution_result.status == :passed }.count
+     failed = summary.examples.select { |e| e.execution_result.status == :failed }.count
+     pending = summary.examples.select { |e| e.execution_result.status == :pending }.count
+
+     puts "\n🛡️ \e[34mTest Suite Stability:\e[0m"
+     puts "Total Tests: #{total_tests}"
+     puts "\e[32m✅ Passed: #{passed} (#{(passed.to_f / total_tests * 100).round(2)}%)\e[0m"
+     puts "\e[31m❌ Failed: #{failed} (#{(failed.to_f / total_tests * 100).round(2)}%)\e[0m"
+     puts "\e[33m⏳ Pending: #{pending} (#{(pending.to_f / total_tests * 100).round(2)}%)\e[0m"
+   end
+
+   def execution_time_variance(sorted_examples)
+     run_times = sorted_examples.map { |ex| ex[:run_time] }
+     mean = run_times.sum / run_times.size
+     variance = run_times.map { |time| (time - mean)**2 }.sum / run_times.size
+     std_dev = Math.sqrt(variance)
+
+     puts "\n📈 \e[35mExecution Time Variance:\e[0m"
+     puts "Mean Execution Time: #{format('%.4f', mean)} seconds"
+     puts "Variance: #{format('%.4f', variance)} seconds²"
+     puts "Standard Deviation: #{format('%.4f', std_dev)} seconds"
+   end
+
+   def temporal_complexity_analysis(sorted_examples)
+     sorted_by_complexity = sorted_examples.sort_by { |ex| -ex[:run_time] } # slowest first, so the top 3 below are the longest-running
+
+     puts "\n🧩 \e[32mTemporal Complexity Analysis:\e[0m"
+     puts 'Top 3 Most Complex Tests:'
+     sorted_by_complexity.first(3).each_with_index do |ex, index|
+       puts "#{index + 1}. #{ex[:description]}"
+       puts " File: #{ex[:file_path]}"
+       puts " Execution Time: #{format('%.4f', ex[:run_time])} seconds"
+     end
+   end
+
+   def test_dependency_analysis(sorted_examples)
+     file_dependencies = sorted_examples.group_by { |ex| ex[:file_path] }
+
+     puts "\n🔗 \e[33mTest Dependency Analysis:\e[0m"
+     file_dependencies.each do |file, tests|
+       next if tests.size < 2
+
+       puts "Potential Dependency Group: #{file}"
+       puts "Number of Tests: #{tests.size}"
+       puts "Average Execution Time: #{format('%.4f', tests.map { |t| t[:run_time] }.sum / tests.size)} seconds"
+     end
+   end
+
+   def send_to_api(sorted_examples)
+     uri = URI.parse(@watchdog_api_url)
+     http = Net::HTTP.new(uri.host, uri.port)
+     http.use_ssl = (uri.scheme == 'https')
+
+     batch_size = 30
+     sorted_examples.each_slice(batch_size) do |batch|
+       puts "🌐 Sending batch of #{batch.size} to Analytics API"
+       timestamp = Time.now.to_i
+       batch_payload = batch.map do |example|
+         {
+           description: example[:description],
+           file_path: example[:file_path],
+           location: example[:location],
+           run_time: example[:run_time],
+           status: example[:status],
+           error_message: example[:error_message],
+           timestamp: timestamp
+           # The flaky flag is appended below only when flaky spec detection is enabled
+         }.merge(@flaky_spec_detection ? { flaky: example[:flaky] } : {})
+       end
+
+       request = Net::HTTP::Post.new(uri.path, {
+         'Content-Type' => 'application/json',
+         'Authorization' => @watchdog_api_token
+       })
+       request.body = { metrics: batch_payload }.to_json
+
+       begin
+         response = http.request(request)
+         puts "✅ Batch sent successfully: #{response.code} #{response.message}"
+       rescue StandardError => e
+         puts "❌ Error sending batch: #{e.message}"
+       end
+     end
+   end
+ end
data/lib/rspec/watchdog/version.rb ADDED
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ module Rspec
+   module Watchdog
+     VERSION = "0.1.0"
+   end
+ end
data/lib/rspec/watchdog.rb ADDED
@@ -0,0 +1,23 @@
+ require_relative 'watchdog/version'
+ require 'rspec/core' # Ensure RSpec is loaded first
+ require_relative 'watchdog/slow_spec_formatter'
+ require 'rspec/rebound'
+
+ module Rspec
+   module Watchdog
+     class Error < StandardError; end
+
+     class << self
+       attr_accessor :config
+
+       def configure
+         self.config ||= Configuration.new
+         yield(config) if block_given?
+       end
+     end
+
+     class Configuration
+       attr_accessor :show_logs, :watchdog_api_url, :watchdog_api_token
+     end
+   end
+ end
data/sig/rspec/watchdog.rbs ADDED
@@ -0,0 +1,6 @@
+ module Rspec
+   module Watchdog
+     VERSION: String
+     # See the writing guide of rbs: https://github.com/ruby/rbs#guides
+   end
+ end
metadata ADDED
@@ -0,0 +1,82 @@
+ --- !ruby/object:Gem::Specification
+ name: rspec-watchdog
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Federico Aldunate
+ - Agustin Fornio
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2025-04-08 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3'
+ - !ruby/object:Gem::Dependency
+   name: rspec-rebound
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.2.1
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.2.1
+ description: Track RSpec test performance, identify slow tests, and generate metrics
+ email:
+ - tech@windmotion.io
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - README.md
+ - Rakefile
+ - dashboard_1.png
+ - lib/rspec/watchdog.rb
+ - lib/rspec/watchdog/slow_spec_formatter.rb
+ - lib/rspec/watchdog/version.rb
+ - sig/rspec/watchdog.rbs
+ homepage: https://github.com/windmotion-io/rspec-watchdog
+ licenses:
+ - MIT
+ metadata:
+   homepage_uri: https://github.com/windmotion-io/rspec-watchdog
+   source_code_uri: https://github.com/windmotion-io/rspec-watchdog
+   changelog_uri: https://github.com/windmotion-io/rspec-watchdog
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.0.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.3.7
+ signing_key:
+ specification_version: 4
+ summary: RSpec performance tracking and metrics
+ test_files: []