oss-stats 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/docs/RepoStats.md ADDED
@@ -0,0 +1,130 @@
+ # Repo Stats
+
+ [repo_stats](../bin/repo_stats.rb) is a tool designed to give a wide range of
+ statistics on everything from PR response time to CI health. It supports both
+ GitHub and Buildkite and is highly configurable.
+
+ It will walk all configured repos in all configured organizations, gather
+ the desired statistics, and print them out in Slack-friendly Markdown.
+
+ The tool has 3 modes, and by default runs all 3 (aka `all` mode):
+
+ * CI
+ * PR
+ * Issue
+
+ For each mode, it pulls statistics for the last N days (default: 30).
+
+ We'll look at each mode in detail in the sections below.
+
+ **NOTE**: These examples specify a single GH Organization and GH Repository on
+ the command line, but most people will want to use the configuration file to
+ configure a list of org/repos to walk.
+
+ ## CI mode
+
+ The CI mode walks all CI related to the repositories in question and reports
+ the number of days each job was broken on each desired branch (default: main).
+
+ It discovers both GitHub Actions workflows and Buildkite pipelines. For
+ Buildkite pipelines, the tool will pull all pipelines in the specified
+ Buildkite organization and map them to GitHub repos. In addition, it'll check
+ the README on every repo it examines, look for any badges that point to a
+ Buildkite pipeline, and attempt to pull the status of that pipeline (even if
+ it is in a different Buildkite organization).
+
+ For a single repository with both GitHub and Buildkite checks, this is
+ sample output:
+
+ ```markdown
+ *_[chef/chef](https://github.com/chef/chef) Stats (Last 7 days)_*
+
+ * CI Stats:
+ * Branch: `main` has the following failures:
+ * Dependabot Updates / Dependabot: 5 days
+ * [BK] chef-oss/chef-chef-main-habitat-test: 3 days
+ * [BK] chef-oss/chef-chef-main-verify: 3 days
+ ```
+
+ If no `--buildkite-org` is passed in, no attempt will be made to find
+ or check any Buildkite pipelines.
+
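The README badge discovery described above can be sketched roughly like this. This is a minimal illustration, not the tool's actual implementation, and it assumes badges link to URLs of the form `https://buildkite.com/<org>/<pipeline>`:

```ruby
# Hypothetical sketch: scan README text for links to Buildkite pipelines,
# as the CI mode does when mapping badges back to pipelines.
# The link format is an assumption made for illustration.
BUILDKITE_LINK_RE = %r{https://buildkite\.com/([\w-]+)/([\w-]+)}

def buildkite_pipelines_in(readme)
  readme.scan(BUILDKITE_LINK_RE).map do |org, pipeline|
    { org: org, pipeline: pipeline }
  end
end
```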
+ ## PR and Issues modes
+
+ These modes work the same way, but report on PRs or Issues, respectively.
+ They gather a variety of statistics, and here's some sample output when
+ both modes are active:
+
+ ```markdown
+ *_[chef/chef](https://github.com/chef/chef) Stats (Last 7 days)_*
+
+ * PR Stats:
+ * Closed PRs: 5
+ * Open PRs: 38 (6 opened this period)
+ * Oldest Open PR: 2025-03-03 (103 days open, last activity 84 days ago)
+ * Stale PR (>30 days without comment): 15
+ * Avg Time to Close PRs: 3.02 days
+
+ * Issue Stats:
+ * Closed Issues: 0
+ * Open Issues: 8 (1 opened this period)
+ * Oldest Open Issue: 2025-03-06 (100 days open, last activity 72 days ago)
+ * Stale Issue (>30 days without comment): 5
+ * Avg Time to Close Issues: 0 hours
+ ```
+
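As a rough illustration of the staleness rule quoted above (>30 days without a comment), here is a minimal sketch; the helper name and inputs are hypothetical, not the tool's actual code:

```ruby
require 'date'

# Hypothetical helper: an item counts as stale when its last activity
# (last comment, or creation if never commented) is over 30 days old.
STALE_AFTER_DAYS = 30

def stale?(last_activity, today: Date.today)
  (today - last_activity).to_i > STALE_AFTER_DAYS
end
```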
+ ## Configuration File
+
+ Unless you only have a single repository you're interested in, you'll want to
+ create a configuration file so you don't have to run `repo_stats`
+ repeatedly. It allows you to specify orgs, repos within those orgs, and
+ per-repo customization of which branches to check and how many days to
+ report on.
+
+ See [examples/repo_stats_config.rb](../examples/repo_stats_config.rb) for details.
+
+ ## Threshold Filtering
+
+ When working with a large number of repositories, the output of `repo_stats.rb`
+ can become quite verbose. Threshold filtering options allow you to narrow down
+ the report to include only the top N or N% of repositories based on specific
+ criteria. This helps in identifying areas that might need the most attention.
+
+ If multiple threshold options are provided, repositories meeting *any* of the
+ specified criteria will be included in the report.
+
+ Percentages (`N%`) are calculated based on the total number of repositories
+ being processed in the current run. For example, if 20 repositories are being
+ processed and `--top-n-stale=10%` is used, the top 2 repositories (10% of 20)
+ will be selected for that criterion.
+
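The percentage math described above works out like the following sketch. The helper is hypothetical, and rounding fractional results down is an assumption; the 10%-of-20 example from the text is exact either way:

```ruby
# Hypothetical: resolve an "N or N%" threshold spec to a repo count.
# Rounding down on fractional results is an assumption for illustration.
def resolve_top_n(spec, total_repos)
  s = spec.to_s
  return s.to_i unless s.end_with?('%')

  ((s.to_f / 100.0) * total_repos).floor
end
```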
+ The following threshold filtering options are available:
+
+ * `--top-n-stale=N` or `N%`: Includes the top N or N% of repositories based
+   on the *maximum* stale count of either its Pull Requests or Issues (stale
+   is defined as >30 days without a comment). For example, if a repo has 5
+   stale PRs and 10 stale Issues, it's ranked by 10.
+ * `--top-n-oldest=N` or `N%`: Includes the top N or N% of repositories with
+   the oldest open item (Pull Request or Issue). The age of the single oldest
+   item (be it a PR or an Issue) in a repository is used for comparison.
+ * `--top-n-time-to-close=N` or `N%`: Includes the top N or N% of repositories
+   with the highest average time-to-close. The ranking uses the *higher* of
+   the average time-to-close for Pull Requests or the average time-to-close
+   for Issues within a repository.
+ * `--top-n-stale-pr=N` or `N%`: Includes the top N or N% of repositories with
+   the most stale Pull Requests.
+ * `--top-n-stale-issue=N` or `N%`: Includes the top N or N% of repositories
+   with the most stale Issues.
+ * `--top-n-oldest-pr=N` or `N%`: Includes the top N or N% of repositories
+   with the oldest open Pull Requests.
+ * `--top-n-oldest-issue=N` or `N%`: Includes the top N or N% of repositories
+   with the oldest open Issues.
+ * `--top-n-time-to-close-pr=N` or `N%`: Includes the top N or N% of
+   repositories with the highest average time-to-close for Pull Requests.
+ * `--top-n-time-to-close-issue=N` or `N%`: Includes the top N or N% of
+   repositories with the highest average time-to-close for Issues.
+ * `--top-n-most-broken-ci-days=N` or `N%`: Includes the top N or N% of
+   repositories with the highest total number of days CI jobs were reported as
+   broken across all checked branches.
+ * `--top-n-most-broken-ci-jobs=N` or `N%`: Includes the top N or N% of
+   repositories with the highest number of distinct CI jobs that were reported
+   as broken across all checked branches.
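The "meeting *any* criterion" behavior described earlier can be pictured as a union of per-criterion top-N sets. This is a hypothetical sketch, not the tool's implementation; repos are modeled as hashes of numeric metrics:

```ruby
# Hypothetical: keep any repo that lands in the top-n set of at least
# one metric (OR semantics across criteria, as the docs describe).
def union_top_n(repos, metrics, n)
  metrics.flat_map { |m| repos.sort_by { |r| -r[m] }.first(n) }.uniq
end
```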
@@ -0,0 +1,22 @@
+ db_file File.expand_path(
+   './data/meeting_data.sqlite3',
+   __dir__,
+ )
+ output File.expand_path('./team_meeting_reports.md', __dir__)
+ image_dir File.expand_path('./images', __dir__)
+ teams [
+   'Client',
+   'Server',
+   'Core Libs',
+ ]
+ header <<~EOF
+   # Slack Meeting tracking
+
+   some stuff here...
+
+   ## Trends
+
+   [![Attendance](images/attendance-small.png)](images/attendance-full.png)
+   [![Build Status
+   Reports](images/build_status-small.png)](images/build_status-full.png)
+ EOF
@@ -0,0 +1,23 @@
+ # Example promise_stats config file.
+ #
+ # You can specify anything in here that you can specify on the command line,
+ # except for --date, --promise, and --reference.
+
+ db_file File.expand_path(
+   './data/promises.sqlite3',
+   __dir__,
+ )
+ header <<~EOF
+   # Promises Report #{Date.today}
+ EOF
+
+ # Uncomment this and set it to a string to have the output
+ # of this script written to a file.
+ #
+ # output nil
+
+ # Uncomment to change the default log level
+ # log_level :info
+
+ # Uncomment to include promises marked 'abandoned' in status output
+ # include_abandoned false
@@ -0,0 +1,49 @@
+ # Example configuration file for repo_stats
+
+ # You can specify branches for specific repos under 'organizations'
+ # below, but for anything not specified it'll use `default_branches`
+ # (which defaults to ['main']).
+ #
+ # Note: do NOT set 'days' or 'branches' in your config, as that overrides
+ # everything and is meant for CLI options.
+ default_branches %w{main v2}
+ default_days 30
+ # You can specify 'days', but it will override everything, including
+ # anything repo-specific below, so don't do that.
+ log_level :info
+ ci_timeout 600
+ include_list false
+
+ # The most interesting part of this config file is the organizations
+ # block. It allows you to specify all of the repos that will be
+ # processed and how they should be processed.
+ organizations({
+   'someorg' => {
+     # if this org uses different branches (can further override under the repo)
+     'branches' => ['trunk'],
+     # if you want a different number of days for repos in this org (can
+     # further override under the repo)
+     'days' => 7,
+     'repositories' => {
+       'repo1' => {},
+       'repo2' => {
+         # crazy repo, only do 2 days
+         'days' => 2,
+         'branches' => ['main'],
+       },
+       'repo3' => {
+         'days' => 30,
+         'branches' => ['main'],
+       },
+     },
+   },
+   'anotherorg' => {
+     'days' => 45,
+     'branches' => %w{main oldstuff},
+     'repositories' => {
+       'repo1' => {},
+       'repo2' => {},
+       'repo3' => {},
+     },
+   },
+ })
@@ -0,0 +1,3 @@
+ source 'https://rubygems.org'
+
+ eval_gemfile('../oss-stats/Gemfile')
@@ -0,0 +1,20 @@
+ # OSS Stats for <Project>
+
+ This repo aims to track stats that affect how <project>'s open source community
+ interacts with the project and repositories.
+
+ It leverages [oss-stats](https://github.com/jaymzh/oss-stats) to track those
+ stats. It assumes oss-stats and this repo are checked out next to each other
+ on the filesystem.
+
+ ## tl;dr
+
+ * See **Issue, PR, and CI stats** in [ci_reports](ci_reports)
+ * See **weekly meeting stats** in [Slack Status Tracking](team_slack_reports.md)
+ * See **pipeline visibility stats** in [pipeline_visibility_reports](pipeline_visibility_reports)
+ * See **promises** in [promises][promises]
+
+ ## Usage
+
+ For updated information on using these scripts see the [oss-stats
+ README](https://github.com/jaymzh/oss-stats/blob/main/README.md).
@@ -0,0 +1,2 @@
+ inherit_from:
+   - ../oss-stats/.rubocop.yml
@@ -0,0 +1,252 @@
+ require 'net/http'
+ require 'json'
+ require 'uri'
+
+ require_relative 'log'
+
+ module OssStats
+   # Client for interacting with the Buildkite GraphQL API.
+   class BuildkiteClient
+     attr_reader :token
+
+     # Initializes a new BuildkiteClient.
+     #
+     # @param token [String] The Buildkite API token.
+     def initialize(token)
+       @token = token
+       @graphql_endpoint = URI('https://graphql.buildkite.com/v1')
+     end
+
+     def get_pipeline(org, pipeline)
+       log.debug("Fetching pipeline: #{org}/#{pipeline}")
+       query = <<~GRAPHQL
+         query {
+           pipeline(slug: "#{org}/#{pipeline}") {
+             visibility
+             url
+           }
+         }
+       GRAPHQL
+
+       response_data = execute_graphql_query(query)
+       pipeline_data = response_data.dig('data', 'pipeline')
+       if pipeline_data.nil? && response_data['data'].key?('pipeline')
+         # The query returned, and the 'pipeline' key exists but is null,
+         # meaning the pipeline was not found by Buildkite.
+         log.debug("Pipeline #{org}/#{pipeline} not found.")
+       elsif pipeline_data.nil? && response_data['errors']
+         # Errors occurred, already logged by execute_graphql_query,
+         # but we might want to note the slug it failed for.
+         log.warn(
+           "Failed to fetch pipeline #{org}/#{pipeline} " \
+           'due to API errors',
+         )
+       end
+       pipeline_data
+     rescue StandardError => e
+       log.error(
+         "Error in get_pipeline for slug #{org}/#{pipeline}: #{e.message}",
+       )
+       nil
+     end
+
+     def all_pipelines(org)
+       log.debug("Fetching all pipelines in #{org}")
+       pipelines = []
+       after_cursor = nil
+       has_next_page = true
+
+       while has_next_page
+         query = <<~GRAPHQL
+           query {
+             organization(slug: "#{org}") {
+               pipelines(
+                 first: 50,
+                 after: #{after_cursor ? "\"#{after_cursor}\"" : 'null'}
+               ) {
+                 edges {
+                   node {
+                     slug
+                     repository {
+                       url
+                     }
+                     visibility
+                     url
+                   }
+                 }
+                 pageInfo {
+                   endCursor
+                   hasNextPage
+                 }
+               }
+             }
+           }
+         GRAPHQL
+
+         response_data = execute_graphql_query(query)
+         current_pipelines = response_data.dig(
+           'data', 'organization', 'pipelines', 'edges'
+         )
+         if current_pipelines
+           pipelines.concat(
+             current_pipelines.map { |edge| edge['node'] },
+           )
+         end
+
+         page_info = response_data.dig(
+           'data', 'organization', 'pipelines', 'pageInfo'
+         )
+         break unless page_info
+         has_next_page = page_info['hasNextPage']
+         after_cursor = page_info['endCursor']
+       end
+
+       pipelines
+     end
+
+     def pipelines_by_repo(org)
+       log.debug("pipelines_by_repo: #{org}")
+       pipelines_by_repo = Hash.new { |h, k| h[k] = [] }
+       all_pipelines(org).each do |pipeline|
+         # the repository URL may be missing, so guard the nil before
+         # stripping a trailing '.git'
+         repo_url = pipeline.dig('repository', 'url')&.sub(/\.git\z/, '')
+         next unless repo_url
+
+         pipelines_by_repo[repo_url] << {
+           slug: pipeline['slug'],
+           url: pipeline['url'],
+           visibility: pipeline['visibility'],
+         }
+       end
+
+       pipelines_by_repo
+     end
+
+     # Fetches builds for a given pipeline within a specified date range.
+     # Handles pagination to retrieve all relevant builds.
+     #
+     # @param org [String] The slug of the Buildkite organization.
+     # @param pipeline [String] The slug of the pipeline
+     #   (without the organization part).
+     # @param from_date [Date] The start date for fetching builds.
+     # @param to_date [Date] The end date for fetching builds.
+     # @param branch [String] The branch to fetch builds for (default: 'main').
+     # @return [Array<Hash>] An array of build edges from the GraphQL response.
+     #   Each edge contains a 'node' with build details including 'state',
+     #   'createdAt', and 'message'.
+     #   Returns an empty array if an error occurs or no builds are found.
+     def get_pipeline_builds(org, pipeline, from_date, to_date, branch = 'main')
+       log.debug("get_pipeline_builds: #{org}, #{pipeline}")
+       all_build_edges = []
+       after_cursor = nil
+       has_next_page = true
+
+       while has_next_page
+         query = <<~GRAPHQL
+           query {
+             pipeline(slug: "#{org}/#{pipeline}") {
+               builds(
+                 first: 50,
+                 after: #{after_cursor ? "\"#{after_cursor}\"" : 'null'},
+                 createdAtFrom: "#{from_date.to_datetime.rfc3339}",
+                 createdAtTo: "#{to_date.to_datetime.rfc3339}",
+                 branch: "#{branch}",
+               ) {
+                 edges {
+                   node {
+                     url
+                     id
+                     state
+                     createdAt
+                     message
+                   }
+                 }
+                 pageInfo { # For build pagination
+                   hasNextPage
+                   endCursor
+                 }
+               }
+             }
+           }
+         GRAPHQL
+
+         response_data = execute_graphql_query(query)
+         builds_data = response_data.dig('data', 'pipeline', 'builds')
+
+         if builds_data && builds_data['edges']
+           all_build_edges.concat(builds_data['edges'])
+           page_info = builds_data['pageInfo']
+           has_next_page = page_info['hasNextPage']
+           after_cursor = page_info['endCursor']
+         else
+           # No builds data or error in structure, stop pagination
+           has_next_page = false
+         end
+       end
+
+       all_build_edges
+     rescue StandardError => e
+       log.error(
+         "Error in get_pipeline_builds for #{org}/#{pipeline}: #{e.message}",
+       )
+       [] # Return empty array on error
+     end
+
+     private
+
+     def execute_graphql_query(query)
+       http = Net::HTTP.new(@graphql_endpoint.host, @graphql_endpoint.port)
+       http.use_ssl = true
+       request = Net::HTTP::Post.new(@graphql_endpoint.request_uri)
+       request['Authorization'] = "Bearer #{@token}"
+       request['Content-Type'] = 'application/json'
+       request.body = { query: }.to_json
+
+       begin
+         response = http.request(request)
+
+         unless response.is_a?(Net::HTTPSuccess)
+           error_message = 'Buildkite API request failed with status ' \
+             "#{response.code}: #{response.message}"
+           log.error(error_message)
+           raise error_message
+         end
+
+         parsed_response = JSON.parse(response.body)
+
+         if parsed_response['errors']
+           # Log each error for better context if multiple errors are returned
+           error_details = parsed_response['errors'].map do |e|
+             msg = e['message']
+             path = e['path'] ? " (path: #{e['path'].join(' -> ')})" : ''
+             "#{msg}#{path}"
+           end.join('; ')
+           error_message = "Buildkite API returned errors: #{error_details}"
+           log.error(error_message)
+           # Also log the full error response for detailed debugging
+           log.debug(
+             "Full Buildkite error response: #{parsed_response['errors']}",
+           )
+           raise error_message
+         end
+         parsed_response
+       rescue Net::HTTPExceptions => e
+         error_message =
+           "Network error connecting to Buildkite API: #{e.message}"
+         log.error(error_message)
+         raise error_message
+       rescue JSON::ParserError => e
+         error_message = "Error parsing JSON from Buildkite API: #{e.message}."
+         log.error(error_message)
+         log.debug("Problematic JSON response body: #{response&.body}")
+         raise error_message
+       rescue StandardError => e
+         error_message =
+           "Unexpected error during Buildkite API call: #{e.message}"
+         log.error(error_message)
+         raise error_message
+       end
+     end
+   end
+ end
@@ -0,0 +1,15 @@
+ def get_buildkite_token(options)
+   return options[:buildkite_token] if options[:buildkite_token]
+   return ENV['BUILDKITE_API_TOKEN'] if ENV['BUILDKITE_API_TOKEN']
+   nil
+ end
+
+ def get_buildkite_token!(config = OssStats::CiStatsConfig)
+   token = get_buildkite_token(config)
+   unless token
+     raise ArgumentError,
+       'Buildkite token not found. Set via the --buildkite-token CLI ' \
+       'option, in the config, or as the BUILDKITE_API_TOKEN environment ' \
+       'variable.'
+   end
+   token
+ end
@@ -0,0 +1,36 @@
+ require 'mixlib/config'
+ require_relative 'shared'
+
+ module OssStats
+   module Config
+     module MeetingStats
+       extend Mixlib::Config
+       extend OssStats::Config::Shared
+
+       db_file File.expand_path('./data/meeting_data.sqlite3', Dir.pwd)
+       dryrun false
+       header <<~EOF
+         # Meeting Stats
+
+         This document tracks meeting attendance and participation.
+         It was generated by slack_meeting_stats.rb, do not edit manually.
+
+         ## Trends
+
+         [![Attendance](images/attendance-small.png)](images/attendance-full.png)
+         [![Build Status
+         Reports](images/build_status-small.png)](images/build_status-full.png)
+
+       EOF
+       output File.expand_path('./meeting_stats.md', Dir.pwd)
+       image_dir File.expand_path('./images', Dir.pwd)
+       log_level :info
+       mode 'record'
+       teams []
+
+       def self.config_file
+         find_config_file('meeting_stats_config.rb')
+       end
+     end
+   end
+ end
@@ -0,0 +1,22 @@
+ require 'mixlib/config'
+ require_relative 'shared'
+
+ module OssStats
+   module Config
+     module Promises
+       extend Mixlib::Config
+       extend OssStats::Config::Shared
+
+       db_file File.expand_path('./data/promises.sqlite3', Dir.pwd)
+       dryrun false
+       output nil
+       log_level :info
+       mode 'status'
+       include_abandoned false
+
+       def self.config_file
+         find_config_file('promises_config.rb')
+       end
+     end
+   end
+ end
@@ -0,0 +1,47 @@
+ require 'mixlib/config'
+ require_relative 'shared'
+
+ module OssStats
+   module Config
+     module RepoStats
+       extend Mixlib::Config
+       extend OssStats::Config::Shared
+
+       # generally these should NOT be set, they override everything
+       days nil
+       branches nil
+       top_n_stale nil
+       top_n_oldest nil
+       top_n_time_to_close nil
+       top_n_most_broken_ci_days nil
+       top_n_most_broken_ci_jobs nil
+       top_n_stale_pr nil
+       top_n_stale_issue nil
+       top_n_oldest_pr nil
+       top_n_oldest_issue nil
+       top_n_time_to_close_pr nil
+       top_n_time_to_close_issue nil
+
+       # set these instead
+       default_branches ['main']
+       default_days 30
+
+       log_level :info
+       ci_timeout 600
+       no_links false
+       github_api_endpoint nil
+       github_token nil
+       github_org nil
+       github_repo nil
+       buildkite_token nil
+       limit_gh_ops_per_minute nil
+       include_list false
+       mode ['all']
+       organizations({})
+
+       def self.config_file
+         find_config_file('repo_stats_config.rb')
+       end
+     end
+   end
+ end
@@ -0,0 +1,43 @@
+ require_relative '../log'
+
+ module OssStats
+   module Config
+     module Shared
+       # a common method among all config parsers to
+       # find the config file
+       def find_config_file(filename)
+         log.debug("#{name}: config_file called")
+
+         # Check to see if `config` has been defined
+         # in the current config, and if so, return that.
+         #
+         # Slight magic here: `name` will be, for example,
+         # OssStats::Config::RepoStats.
+         # So we strip the first part and get just `RepoStats`,
+         # then get the class, and check for config, akin to doing
+         #
+         #   if OssStats::Config::RepoStats.config
+         #     return OssStats::Config::RepoStats.config
+         #   end
+         kls = name.sub('OssStats::Config::', '')
+         config_class = OssStats::Config.const_get(kls)
+         if config_class.config
+           return config_class.config
+         end
+
+         # otherwise, we check CWD, XDG, and /etc.
+         [
+           Dir.pwd,
+           File.join(ENV['HOME'], '.config', 'oss_stats'),
+           '/etc',
+         ].each do |dir|
+           f = File.join(dir, filename)
+           log.debug("[#{name}] Checking if #{f} exists...")
+           return f if ::File.exist?(f)
+         end
+
+         nil
+       end
+     end
+   end
+ end