@wbern/obscene 0.3.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +98 -29
- package/dist/cli.js +309 -95
- package/package.json +3 -2
package/README.md
CHANGED
@@ -60,14 +60,23 @@ obscene report # raw complexity (no churn)
 obscene coupling # temporal coupling analysis
 obscene coupling --min-cochanges 1 --format table
 obscene --exclude "*.generated.*"
-obscene | jq '.
+obscene | jq '.rankings.complexity.entries[0]' # pipe-friendly
 ```
 
 ## Commands
 
 ### `obscene hotspots` (default)
 
-
+Produces **four independent ranking tables**, each scoring files by a different metric multiplied by churn:
+
+| Ranking | Score formula | Metric columns |
+|---------|---------------|----------------|
+| Complexity × Churn | `complexity × churn` | Cmplx, Dens |
+| Nesting × Churn | `maxNesting × churn` | Nest |
+| Defects × Churn | `defects × churn` | Dfcts, DfDns |
+| Authors × Churn | `authors × churn` | Auth |
+
+Each table has its own tier assignment by cumulative score distribution:
 
 | Tier | Range | Meaning |
 |------|-------|---------|
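The cumulative-distribution tier assignment this hunk documents (top 50% of total score = danger, next 30% = watch, remaining 20% = stable) can be sketched in standalone JavaScript. This is an illustrative sketch, not the package's implementation; `assignTiersSketch` and its inputs are hypothetical names:

```javascript
// Minimal sketch of cumulative-score tier assignment (illustrative names).
// Entries are sorted by score descending; tiers are cut by each entry's
// cumulative share of the total score, not by its individual score.
function assignTiersSketch(entries) {
  const total = entries.reduce((sum, e) => sum + e.score, 0);
  let cumulative = 0;
  return entries
    .slice()
    .sort((a, b) => b.score - a.score)
    .map((e) => {
      cumulative += e.score;
      const share = cumulative / total;
      const tier = share <= 0.5 ? "danger" : share <= 0.8 ? "watch" : "stable";
      return { ...e, tier };
    });
}

const tiers = assignTiersSketch([
  { file: "a.ts", score: 50 },
  { file: "b.ts", score: 30 },
  { file: "c.ts", score: 20 },
]);
// a.ts reaches 50% of the total -> danger; b.ts reaches 80% -> watch; c.ts -> stable
```

Because the split is by cumulative share, a handful of top files typically absorbs the whole danger band, which is why the example output shows only 3 danger entries out of 210.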
@@ -75,11 +84,13 @@ Scores each file by `complexity × commits` over a time window, then assigns tiers:
 | **watch** | next 30% (50–80%) | Keep an eye on these |
 | **stable** | bottom 20% | Low risk |
 
+A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Tables with no scored entries are omitted.
+
 ### `obscene coupling`
 
-Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis. Surfaces hidden structural dependencies that aren't visible in the
+Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis from *Your Code as a Crime Scene* (2015). Surfaces hidden structural dependencies that aren't visible in imports or the module graph.
 
-Same-directory pairs are excluded (co-location is expected coupling). Mass commits touching >20 files are skipped (formatting changes, large refactors).
+Same-directory pairs are excluded (co-location is expected coupling). Mass commits touching >20 files are skipped (formatting changes, large refactors). See [Why temporal coupling?](#why-temporal-coupling) for the research backing this approach.
 
 ```bash
 obscene coupling # default: min 2 shared commits
@@ -103,49 +114,55 @@ Per-file complexity without churn. Useful for raw complexity distribution.
 
 ## Metrics
 
-
+### Hotspot metrics
 
-
+#### Score
 
-`
+`metric × churn`. Each ranking table uses a different metric (complexity, nesting, defects, or authors) multiplied by churn. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
 
-
+#### Churn (`Churn`)
 
 Number of commits touching the file within the configured time window (default: 3 months). Measures how actively the file is being modified.
 
-
+#### Cyclomatic complexity (`Cmplx`)
 
 Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide.
 
-
+#### Complexity density (`Dens`)
 
 `complexity / lines of code`. Normalizes complexity by file size so a 50-line file with complexity 25 (density 0.50) stands out against a 500-line file with complexity 25 (density 0.05). Based on Harrison & Magel (1981), who found that complexity relative to code size is a stronger fault predictor than raw complexity alone.
 
-
+#### Defects (`Dfcts`)
 
 Count of `fix:` conventional commits touching the file within the churn window. A proxy for historical defect rate — files that attract repeated fixes are more likely to contain latent bugs. Inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
 
-
+#### Defect density (`DfDns`)
 
-`defects / lines of code`.
+`defects / lines of code`. Shown in the Defects × Churn table. Normalizes defect count by file size.
 
-
+#### Nesting depth (`Nest`)
 
 Maximum indentation level (tab stops) in the file. Deep nesting correlates with high cognitive load and defect likelihood. Harrison & Magel (1981) identified nesting depth as a significant complexity contributor.
 
-
+#### Unique authors (`Auth`)
 
 Number of distinct git authors who committed to the file within the churn window. Files touched by many authors may lack clear ownership and accumulate inconsistent patterns. Kamei et al. (2013) found developer count to be a significant predictor of defect-introducing changes.
 
-###
+### Coupling metrics
+
+#### Shared commits (`Shared`)
+
+Number of commits where both files in a pair were modified together. The core ranking metric for temporal coupling — higher values indicate stronger hidden dependencies between files in different directories. Ball, Kim, Porter & Siy (1997) demonstrated that co-change relationships reveal design dependencies that static analysis misses.
+
+#### Coupling degree (`Degree`)
 
-
+`shared commits / min(churn of file1, churn of file2) × 100`. What percentage of the less-active file's changes also involved the other file. A degree of 100% means every change to the less-active file also touched the other file. This normalization follows D'Ambros, Lanza & Lungu (2009), who showed that relative coupling measures provide more stable results than raw co-change counts across projects of different sizes.
 
-
+#### Combined complexity (`Cmplx`)
 
-
+Sum of cyclomatic complexity of both files in the pair. Highlights coupled pairs where the involved code is also complex — the combination of hidden dependency and high complexity compounds maintenance risk.
 
-
+#### Tier
 
 Cumulative score distribution bucket:
 
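The `Degree` formula documented in this hunk can be checked with a tiny standalone helper. This is an illustrative sketch, not the package's code; `couplingDegree` is a hypothetical name, and the one-decimal rounding mirrors the `%` values in the example output:

```javascript
// degree = shared commits / min(churn of file1, churn of file2) × 100,
// rounded to one decimal place (illustrative sketch, not the package's code).
function couplingDegree(sharedCommits, churn1, churn2) {
  return Math.round((sharedCommits / Math.min(churn1, churn2)) * 1000) / 10;
}

// If a pair shares 6 commits and the less-active file has 13 commits of churn:
couplingDegree(6, 13, 20); // → 46.2
// Every change to the less-active file also touched the other file:
couplingDegree(5, 5, 9); // → 100
```

Using the *minimum* of the two churn counts as the denominator means the degree reflects the less-active file's perspective, so an actively developed hub file paired with a quiet file can still show high coupling.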
@@ -158,19 +175,53 @@ Cumulative score distribution bucket:
 ## Example output
 
 ```
-Hotspots — 3 months churn window
+Hotspots — 3 months churn window
+
+Complexity × Churn — Total score: 35,452
 Tiers: 3 danger, 13 watch, 194 stable
 Showing: 5 of 210
 
-File
-
-src/utils/effect-generator.ts
-src/services/game-engine.ts
-src/components/board-renderer.tsx
-src/hooks/use-game-state.ts
-src/utils/move-validator.ts
+File                                 Score      %   Churn   Cmplx   Dens   Tier
+──────────────────────────────────────────────────────────────────────────────────────────────────
+src/utils/effect-generator.ts        8,296   23.4      68     122   0.12   🔴 DANGER
+src/services/game-engine.ts          4,284   12.1      51      84   0.09   🔴 DANGER
+src/components/board-renderer.tsx    2,940    8.3      42      70   0.11   🔴 DANGER
+src/hooks/use-game-state.ts          1,320    3.7      33      40   0.08   🟡 WATCH
+src/utils/move-validator.ts            945    2.7      27      35   0.06   🟡 WATCH
+
+Nesting × Churn — Total score: 1,284
+Tiers: 2 danger, 5 watch, 203 stable
+Showing: 5 of 210
 
-Score
+File                                 Score      %   Churn   Nest   Tier
+────────────────────────────────────────────────────────────────────────────────────────
+src/utils/effect-generator.ts          408   31.8      68      6   🔴 DANGER
+src/services/game-engine.ts            255   19.8      51      5   🔴 DANGER
+src/components/board-renderer.tsx      210   16.4      42      5   🟡 WATCH
+src/hooks/use-game-state.ts             99    7.7      33      3   🟡 WATCH
+src/utils/move-validator.ts             54    4.2      27      2   🟡 WATCH
+
+Score=metric×churn | Tiers are relative to THIS codebase, not absolute quality grades.
+High scores flag review candidates, not bad code — stable complex files (parsers, engines) score high naturally.
+Docs: https://github.com/wbern/obscene#metrics
+```
+
+### Coupling example
+
+```
+Coupling — 6 months churn window | Min shared: 3 | Total score: 91
+Tiers: 10 danger, 7 watch, 7 stable
+Showing: 5 of 24
 
+File 1                             File 2                           Shared   Degree   Cmplx   Tier
+────────────────────────────────────────────────────────────────────────────────────────────────────
+…ePlayer/hooks/useChessEffects.ts  src/utils/effect-generator.ts         6    46.2%     261   DANGER
+…ePlayer/hooks/useChessEffects.ts  src/utils/pgn-types.ts                6    50.0%     121   DANGER
+src/test/pgn-fixtures.ts           src/utils/pgn-parser.server.ts        5    71.4%       3   DANGER
+src/test/pgn-fixtures.ts           src/utils/effect-generator.ts         4    57.1%     145   DANGER
+src/test/pgn-fixtures.ts           src/utils/pgn-types.ts                4    57.1%       5   DANGER
+
+Shared=co-changed commits | Degree=shared/min(churn)×100 | Cmplx=sum of both files
 Docs: https://github.com/wbern/obscene#metrics
 ```
 
@@ -193,14 +244,32 @@ Files that are both complex and frequently modified are disproportionately likely
 
 The general approach was popularized by Adam Tornhill's *Your Code as a Crime Scene* (2015), which applies forensic analysis techniques to version control history.
 
+## Why temporal coupling?
+
+Files that change together but live in different directories reveal implicit dependencies that the module graph doesn't capture. These hidden couplings are a maintenance hazard: a developer modifying one file doesn't know they also need to update the other, leading to bugs that only surface later.
+
+- **Ball, Kim, Porter & Siy (1997)** pioneered co-change analysis and showed that version control history surfaces design relationships invisible to static analysis. — [ICSE 1997 Workshop](https://www.researchgate.net/publication/2791666_If_Your_Version_Control_System_Could_Talk)
+- **D'Ambros, Lanza & Lungu (2009)** developed the Evolution Radar for visualizing logical coupling at both file and module level, showing how evolutionary coupling reveals architectural decay. The normalized approach (coupling relative to total changes) provides more stable measures across projects of different sizes. — [IEEE TSE](https://doi.org/10.1109/TSE.2009.17)
+- **Tornhill (2015)** popularized temporal coupling analysis in *Your Code as a Crime Scene*, demonstrating how co-change patterns reveal "surprise dependencies" — files that should logically be independent but can't be changed separately in practice. His tooling (Code Maat) uses the same commit co-occurrence approach.
+- **Cataldo, Mockus, Roberts & Herbsleb (2009)** analyzed both syntactic and logical dependencies across two large systems and found that logical (co-change) dependencies have a significant independent effect on failure proneness. When developers are unaware of these hidden couplings, defects increase. — [IEEE TSE](https://doi.org/10.1109/TSE.2009.42)
+
 ## Limitations
 
+### General
+
 - **Churn = commit count**, not lines changed. A one-line typo fix counts the same as a 500-line rewrite.
 - **Per-file granularity only.** A 1000-line file with many small functions scores higher than it probably should. No function-level breakdown.
 - **Must be run inside a git repo.** Churn data comes from `git log`.
 - **Only analyzes files that currently exist.** Deleted files don't appear, even if they churned heavily before removal.
 - **Tier thresholds are fixed** (50/80 cumulative %). Not configurable yet.
 
+### Coupling-specific
+
+- **Same-directory exclusion is a heuristic.** Files in the same directory that are unexpectedly coupled won't be surfaced. The assumption is that co-located files are *expected* to change together.
+- **Mass commit threshold (>20 files) is hardcoded.** Commits touching many files are skipped to avoid noise from formatting changes and large refactors, but legitimate large features that touch many files across directories are also excluded.
+- **Degree uses unfiltered churn.** The denominator (`min(churn)`) counts all commits to a file, including single-file commits. This means degree can understate coupling when a file has high solo churn.
+- **Squash merges collapse coupling signal.** If a branch with 10 separate commits is squash-merged into one, all co-changes within that branch become a single co-occurrence.
+
 ## License
 
 MIT
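The co-change counting that the coupling analysis relies on can be sketched as below. This is a minimal illustration under stated assumptions (commit file lists already extracted from `git log`; `countCoChanges` and the helper `dir` are hypothetical names), not the package's implementation:

```javascript
// Count cross-directory co-changes per file pair (illustrative sketch).
// Each element of `commits` is the list of files touched by one commit.
const dir = (p) => p.slice(0, p.lastIndexOf("/"));

function countCoChanges(commits, maxFilesPerCommit = 20) {
  const cochanges = new Map();
  for (const files of commits) {
    if (files.length > maxFilesPerCommit) continue; // skip mass commits (noise)
    for (let i = 0; i < files.length; i++) {
      for (let j = i + 1; j < files.length; j++) {
        const [a, b] = [files[i], files[j]].sort(); // canonical pair key order
        if (dir(a) === dir(b)) continue; // co-location is expected coupling
        const key = `${a}::${b}`;
        cochanges.set(key, (cochanges.get(key) ?? 0) + 1);
      }
    }
  }
  return cochanges;
}

const counts = countCoChanges([
  ["src/a/x.ts", "src/b/y.ts"],
  ["src/a/x.ts", "src/b/y.ts", "src/a/z.ts"],
]);
// x.ts and y.ts co-changed in 2 commits; the same-directory x.ts/z.ts pair is skipped
```

Pairs are counted per commit, which is why the README notes that squash merges collapse the coupling signal: ten co-changes squashed into one merge commit register as a single co-occurrence.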
package/dist/cli.js
CHANGED
@@ -168,6 +168,108 @@ function getCoChanges(months, excludes = []) {
   }
   return cochanges;
 }
+function assignTiers(items, totalScore) {
+  let cumulative = 0;
+  for (const item of items) {
+    item.percentOfTotal = Math.round(item.score / totalScore * 1e3) / 10;
+    cumulative += item.score;
+    const cumulativeShare = cumulative / totalScore;
+    if (cumulativeShare <= DANGER_CUMULATIVE) {
+      item.tier = "danger";
+    } else if (cumulativeShare <= WATCH_CUMULATIVE) {
+      item.tier = "watch";
+    } else {
+      item.tier = "stable";
+    }
+  }
+}
+var RANKING_DEFS = [
+  {
+    key: "complexity",
+    label: "Complexity \xD7 Churn",
+    scoreFormula: "complexity \xD7 churn"
+  },
+  {
+    key: "nesting",
+    label: "Nesting \xD7 Churn",
+    scoreFormula: "maxNesting \xD7 churn"
+  },
+  {
+    key: "defects",
+    label: "Defects \xD7 Churn",
+    scoreFormula: "defects \xD7 churn"
+  },
+  {
+    key: "authors",
+    label: "Authors \xD7 Churn",
+    scoreFormula: "authors \xD7 churn"
+  }
+];
+function computeRanking(files, churn, metricExtractor, densityExtractor) {
+  const scored = files.map((f) => {
+    const fileChurn = churn.get(f.file) ?? 0;
+    const metricValue = metricExtractor(f);
+    return {
+      file: f.file,
+      score: metricValue * fileChurn,
+      percentOfTotal: 0,
+      tier: "stable",
+      churn: fileChurn,
+      metricValue,
+      metricDensity: densityExtractor ? densityExtractor(f) : void 0
+    };
+  }).filter((e) => e.score > 0).sort((a, b) => b.score - a.score);
+  const totalScore = scored.reduce((sum, e) => sum + e.score, 0);
+  if (totalScore === 0) return [];
+  assignTiers(scored, totalScore);
+  return scored;
+}
+function computeAllRankings(files, churn, defects, nestingDepths, authors, top) {
+  const extractors = {
+    complexity: {
+      extract: (f) => f.complexity,
+      density: (f) => f.complexityDensity
+    },
+    nesting: {
+      extract: (f) => nestingDepths.get(f.file) ?? 0
+    },
+    defects: {
+      extract: (f) => defects.get(f.file) ?? 0,
+      density: (f) => {
+        const d = defects.get(f.file) ?? 0;
+        return f.code > 0 ? Math.round(d / f.code * 1e4) / 1e4 : 0;
+      }
+    },
+    authors: {
+      extract: (f) => authors.get(f.file) ?? 0
+    }
+  };
+  const rankings = {};
+  for (const def of RANKING_DEFS) {
+    const ext = extractors[def.key];
+    const allEntries = computeRanking(files, churn, ext.extract, ext.density);
+    if (allEntries.length === 0) continue;
+    const limited = top > 0 ? allEntries.slice(0, top) : allEntries;
+    const tierCounts = {
+      danger: 0,
+      watch: 0,
+      stable: 0
+    };
+    for (const e of allEntries) {
+      tierCounts[e.tier]++;
+    }
+    rankings[def.key] = {
+      label: def.label,
+      scoreFormula: def.scoreFormula,
+      totalScore: allEntries.reduce((sum, e) => sum + e.score, 0),
+      tierCounts,
+      totalEntries: allEntries.length,
+      showing: limited.length,
+      entries: limited
+    };
+  }
+  return rankings;
+}
 function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
   const entries = [];
   for (const [key, count] of cochanges) {
@@ -190,18 +292,14 @@ function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
   entries.sort((a, b) => b.couplingScore - a.couplingScore);
   const totalScore = entries.reduce((sum, e) => sum + e.couplingScore, 0);
   if (totalScore === 0) return [];
-
-
-
-
-
-
-
-
-      entry.tier = "watch";
-    } else {
-      entry.tier = "stable";
-    }
+  const adapted = entries.map((e) => ({
+    ...e,
+    score: e.couplingScore
+  }));
+  assignTiers(adapted, totalScore);
+  for (let i = 0; i < entries.length; i++) {
+    entries[i].percentOfTotal = adapted[i].percentOfTotal;
+    entries[i].tier = adapted[i].tier;
   }
   return entries;
 }
@@ -246,37 +344,41 @@ function getNestingDepths(filePaths) {
   }
   return depths;
 }
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+// src/color.ts
+import pc from "picocolors";
+var ANSI_RE = /\x1b\[[0-9;]*m/g;
+function visualWidth(s) {
+  return s.replace(ANSI_RE, "").length;
+}
+function padRight(s, n) {
+  const w = visualWidth(s);
+  return w >= n ? s : s + " ".repeat(n - w);
+}
+function padLeft(s, n) {
+  const w = visualWidth(s);
+  return w >= n ? s : " ".repeat(n - w) + s;
+}
+function truncate(s, max) {
+  return s.length <= max ? s : `\u2026${s.slice(s.length - max + 1)}`;
+}
+function tierLabel(tier) {
+  if (tier === "danger") return pc.red("\u{1F534} DANGER");
+  if (tier === "watch") return pc.yellow("\u{1F7E1} WATCH");
+  return pc.green("\u{1F7E2} stable");
+}
+function colorRow(tier, text) {
+  if (tier === "danger") return pc.red(text);
+  if (tier === "watch") return pc.yellow(text);
+  return pc.green(text);
+}
+function tierSummary(tierCounts, showing, total) {
+  const lines = [];
+  lines.push(
+    `Tiers: ${pc.red(`${tierCounts.danger} danger`)}, ${pc.yellow(`${tierCounts.watch} watch`)}, ${pc.green(`${tierCounts.stable} stable`)}`
+  );
+  lines.push(`Showing: ${showing} of ${total}`);
+  return lines;
 }
 
 // src/format.ts
@@ -299,27 +401,142 @@ function formatReportTable(output) {
       padRight(truncate(f.file, 58), 60) + padLeft(String(f.code), 8) + padLeft(String(f.complexity), 12) + padLeft(f.complexityDensity.toFixed(2), 9) + padLeft(String(f.comments), 10)
     );
   }
+  lines.push("");
+  lines.push(
+    "Complexity=cyclomatic branch/loop count | Density=complexity/code | Comments=comment lines"
+  );
+  lines.push(
+    "High complexity is expected for parsers, state machines, and business logic. Compare density across files, not raw values."
+  );
+  lines.push("Docs: https://github.com/wbern/obscene#metrics");
   return lines.join("\n");
 }
-function
+function getRankingColumns(key) {
+  const base = [
+    {
+      header: "File",
+      width: 50,
+      align: "left",
+      value: (e) => truncate(e.file, 48)
+    },
+    {
+      header: "Score",
+      width: 8,
+      align: "right",
+      value: (e) => e.score.toLocaleString()
+    },
+    {
+      header: "%",
+      width: 7,
+      align: "right",
+      value: (e) => e.percentOfTotal.toFixed(1)
+    },
+    {
+      header: "Churn",
+      width: 7,
+      align: "right",
+      value: (e) => String(e.churn)
+    }
+  ];
+  const metricCols = {
+    complexity: [
+      {
+        header: "Cmplx",
+        width: 7,
+        align: "right",
+        value: (e) => String(e.metricValue)
+      },
+      {
+        header: "Dens",
+        width: 7,
+        align: "right",
+        value: (e) => (e.metricDensity ?? 0).toFixed(2)
+      }
+    ],
+    nesting: [
+      {
+        header: "Nest",
+        width: 6,
+        align: "right",
+        value: (e) => String(e.metricValue)
+      }
+    ],
+    defects: [
+      {
+        header: "Dfcts",
+        width: 6,
+        align: "right",
+        value: (e) => String(e.metricValue)
+      },
+      {
+        header: "DfDns",
+        width: 7,
+        align: "right",
+        value: (e) => (e.metricDensity ?? 0).toFixed(4)
+      }
+    ],
+    authors: [
+      {
+        header: "Auth",
+        width: 6,
+        align: "right",
+        value: (e) => String(e.metricValue)
+      }
+    ]
+  };
+  const tierCol = {
+    header: "Tier",
+    width: 12,
+    align: "right",
+    value: (e) => tierLabel(e.tier)
+  };
+  return [...base, ...metricCols[key] ?? [], tierCol];
+}
+function formatRankingTable(key, ranking) {
   const lines = [];
-  const
+  const cols = getRankingColumns(key);
   lines.push(
-
+    `${ranking.label} \u2014 Total score: ${ranking.totalScore.toLocaleString()}`
   );
-  pushTierSummary(lines, tierCounts, output.showing, output.totalHotspots);
   lines.push(
-
+    ...tierSummary(ranking.tierCounts, ranking.showing, ranking.totalEntries)
   );
-  lines.push("
-
-
-
-
+  lines.push("");
+  const headerLine = cols.map(
+    (c) => c.align === "left" ? padRight(c.header, c.width) : padLeft(c.header, c.width)
+  ).join("");
+  lines.push(headerLine);
+  const totalWidth = cols.reduce((sum, c) => sum + c.width, 0);
+  lines.push("\u2500".repeat(totalWidth));
+  for (const entry of ranking.entries) {
+    const rowParts = cols.map((c) => {
+      const val = c.value(entry);
+      return c.align === "left" ? padRight(val, c.width) : padLeft(val, c.width);
+    });
+    const rawRow = rowParts.join("");
+    lines.push(colorRow(entry.tier, rawRow));
   }
+  return lines;
+}
+function formatHotspotsTable(output) {
+  const lines = [];
+  const { churnWindow, rankings } = output;
+  lines.push(`Hotspots \u2014 ${churnWindow} churn window`);
   lines.push("");
+  const keys = Object.keys(rankings);
+  for (let i = 0; i < keys.length; i++) {
+    const key = keys[i];
+    lines.push(...formatRankingTable(key, rankings[key]));
+    if (i < keys.length - 1) {
+      lines.push("");
+    }
+  }
+  lines.push("");
+  lines.push(
+    "Score=metric\xD7churn | Tiers are relative to THIS codebase, not absolute quality grades."
+  );
   lines.push(
-    "
+    "High scores flag review candidates, not bad code \u2014 stable complex files (parsers, engines) score high naturally."
   );
   lines.push("Docs: https://github.com/wbern/obscene#metrics");
   return lines.join("\n");
@@ -330,48 +547,51 @@ function formatCouplingTable(output) {
   lines.push(
     `Coupling \u2014 ${churnWindow} churn window | Min shared: ${output.minCochanges} | Total score: ${totalScore.toLocaleString()}`
   );
-
+  lines.push(...tierSummary(tierCounts, output.showing, output.totalCouplings));
   lines.push(
-    padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier",
+    padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 12)
   );
-  lines.push("\u2500".repeat(
+  lines.push("\u2500".repeat(104));
   for (const c of couplings) {
-
-
-    );
+    const rawRow = padRight(truncate(c.file1, 33), 35) + padRight(truncate(c.file2, 33), 35) + padLeft(String(c.cochanges), 7) + padLeft(`${c.degree.toFixed(1)}%`, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 12);
+    lines.push(colorRow(c.tier, rawRow));
   }
   lines.push("");
   lines.push(
     "Shared=co-changed commits | Degree=shared/min(churn)\xD7100 | Cmplx=sum of both files"
   );
-  lines.push("Docs: https://github.com/wbern/obscene#metrics");
-  return lines.join("\n");
-}
-function pushTierSummary(lines, tierCounts, showing, total) {
   lines.push(
-
+    "Tiers are relative to THIS codebase, not absolute quality grades. High coupling may be intentional and fine."
   );
-  lines.push(
-
-
-
-
-  if (tier === "watch") return "WATCH";
-  return "stable";
-}
-function padRight(s, n) {
-  return s.length >= n ? s : s + " ".repeat(n - s.length);
-}
-function padLeft(s, n) {
-  return s.length >= n ? s : " ".repeat(n - s.length) + s;
-}
-function truncate(s, max) {
-  return s.length <= max ? s : `\u2026${s.slice(s.length - max + 1)}`;
+  lines.push(
+    "Same-directory pairs excluded. Commits touching >20 files skipped. Only cross-directory dependencies shown."
+  );
+  lines.push("Docs: https://github.com/wbern/obscene#metrics");
+  return lines.join("\n");
 }
 
 // src/cli.ts
 var program = new Command();
-program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("0.
+program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("1.0.0");
+var REPORT_GUIDE = {
+  complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
+  complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
+  comments: "Comment line count. Low comments in high-density files may indicate under-documented logic. High comments alone is not a problem."
+};
+var HOTSPOTS_GUIDE = {
+  rankings: "Four independent ranking tables, each scoring files by a different metric \xD7 churn. A file may rank high in one dimension but not others.",
+  complexity: "complexity \xD7 churn. Ranks files by combined risk: complex code that changes often.",
+  nesting: "maxNesting \xD7 churn. Deeply nested code that changes often is harder to reason about.",
+  defects: "defects \xD7 churn. Files with fix: commits that also churn heavily may contain latent bugs.",
+  authors: "authors \xD7 churn. Files touched by many authors and changing often may lack clear ownership.",
+  tier: "Relative ranking within THIS codebase (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade."
+};
+var COUPLING_GUIDE = {
+  cochanges: "Times both files appeared in the same commit. Higher values suggest a dependency between the files. Same-directory pairs are excluded \u2014 only cross-directory pairs are shown.",
+  degree: "Percentage: shared commits / min(churn of file1, file2) \xD7 100. Shows how tightly coupled the pair is relative to their individual change rates. 100% means every change to the less-active file also touched the other.",
+  totalComplexity: "Sum of both files' cyclomatic complexity. Highlights coupled pairs where the involved code is also complex \u2014 hidden dependency + high complexity compounds maintenance risk.",
+  tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade. 'danger' means this pair co-changes more than most \u2014 it may be intentional and fine."
+};
 function addSharedOptions(cmd) {
   return cmd.option("--top <n>", "limit to top N entries (0 = all)", "20").option("--format <type>", "output format: json | table", "json").option(
     "--exclude <patterns...>",
@@ -421,6 +641,7 @@ function runReport(opts) {
   const limited = top > 0 ? files.slice(0, top) : files;
   const output = {
     generated: (/* @__PURE__ */ new Date()).toISOString(),
+    guide: REPORT_GUIDE,
     summary: {
       ...totals,
       fileCount: files.length,
@@ -445,27 +666,19 @@ function runHotspots(opts) {
   const defects = getDefects(months);
   const authors = getAuthors(months);
   const nestingDepths = getNestingDepths(files.map((f) => f.file));
-  const
+  const rankings = computeAllRankings(
     files,
     churn,
     defects,
     nestingDepths,
-    authors
+    authors,
+    top
   );
-  const limited = top > 0 ? hotspots.slice(0, top) : hotspots;
-  const tierCounts = { danger: 0, watch: 0, stable: 0 };
-  for (const h of hotspots) {
-    tierCounts[h.tier]++;
-  }
-  const totalScore = hotspots.reduce((sum, h) => sum + h.hotspotScore, 0);
   const output = {
     generated: (/* @__PURE__ */ new Date()).toISOString(),
+    guide: HOTSPOTS_GUIDE,
     churnWindow: `${months} months`,
-
-    tierCounts,
-    totalHotspots: hotspots.length,
-    showing: limited.length,
-    hotspots: limited
+    rankings
   };
   if (opts.format === "table") {
     process.stdout.write(`${formatHotspotsTable(output)}
@@ -500,6 +713,7 @@ function runCoupling(opts) {
   const totalScore = couplings.reduce((sum, c) => sum + c.couplingScore, 0);
   const output = {
     generated: (/* @__PURE__ */ new Date()).toISOString(),
+    guide: COUPLING_GUIDE,
    churnWindow: `${months} months`,
    minCochanges,
    totalScore,
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@wbern/obscene",
-  "version": "0.
+  "version": "1.0.0",
   "description": "Identify hotspot files — complex code that changes frequently. Churn × complexity analysis for any git repo.",
   "type": "module",
   "bin": {
@@ -48,7 +48,8 @@
     "node": ">=18"
   },
   "dependencies": {
-    "commander": "^13.1.0"
+    "commander": "^13.1.0",
+    "picocolors": "^1.1.1"
   },
   "devDependencies": {
     "@biomejs/biome": "^2.0.0",