@wbern/obscene 2.0.1 → 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +53 -33
- package/dist/cli.js +109 -24
- package/package.json +1 -1
package/README.md
CHANGED
@@ -88,7 +88,7 @@ Each table has its own tier assignment by cumulative score distribution:
 
 Tiers are relative to THIS codebase, not absolute quality grades. A "hot" file is under heavy load, not necessarily broken.
 
-A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Rankings with insufficient data are skipped with an explanation (e.g.
+A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Rankings with insufficient data are skipped with an explanation (e.g. the Fix Activity ranking requires 5+ `fix:` commits across 3+ files). Bot authors (`[bot]` suffix) are filtered automatically.
 
 ### `obscene coupling`
 
@@ -122,7 +122,7 @@ Per-file complexity without churn. Useful for raw complexity distribution.
 
 #### Score
 
-`metric × churn`. Each ranking table uses a different metric (complexity, nesting,
+`metric × churn`. Each ranking table uses a different metric (complexity, nesting, fix activity, or authors) multiplied by churn. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
 
 #### Churn (`Churn`)
 
@@ -130,15 +130,17 @@ Number of commits touching the file within the configured time window (default:
 
 #### Cyclomatic complexity (`Cmplx`)
 
-Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide.
+Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide. The measure was introduced by McCabe (1976) in *A Complexity Measure* and has been the standard structural-complexity metric since. — [IEEE TSE](https://doi.org/10.1109/TSE.1976.233837)
 
 #### Complexity density (`Dens`)
 
 `complexity / lines of code`. Normalizes complexity by file size so a 50-line file with complexity 25 (density 0.50) stands out against a 500-line file with complexity 25 (density 0.05). Based on Harrison & Magel (1981), who found that complexity relative to code size is a stronger fault predictor than raw complexity alone.
 
-####
+#### Fix activity (`Fixes`)
 
-Count of `fix:` conventional commits touching the file within the churn window. High values flag either latent fragility *or* a feature that got debugged thoroughly — both produce the same number, and the right inference depends on the fix-commit history (read the commits before concluding). The metric is inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction
+Count of `fix:` conventional commits touching the file within the churn window. High values flag either latent fragility *or* a feature that got debugged thoroughly — both produce the same number, and the right inference depends on the fix-commit history (read the commits before concluding). The metric is inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
+
+The literature in [Why churn × complexity?](#why-churn-x-complexity) talks about *defects* — bugs confirmed against a bug-tracker or post-release issue database. obscene doesn't have access to that ground truth, so it uses `fix:` commits as a proxy and reports the raw signal as Fix Activity. The two are related but not identical: a `fix:` commit is direct evidence that someone considered something broken enough to label the change as a fix, but it doesn't distinguish trivial fixes from severe ones, and it relies on the team using conventional commits consistently. Treat Fix Activity as a prompt to read the commits, not as a defect count.
 
 #### Fix density (`FxDns`)
 
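The density formula in the hunk above (`complexity / lines of code`) is simple enough to check by hand. A minimal sketch using the README's own 50-line vs. 500-line example — the `complexityDensity` helper name is illustrative, not part of obscene's API:

```javascript
// Complexity density: cyclomatic complexity normalized by file size.
// Hypothetical helper, not obscene's internal function.
function complexityDensity(complexity, linesOfCode) {
  if (linesOfCode <= 0) return 0; // guard against empty files
  return complexity / linesOfCode;
}

// The README's example: same raw complexity, very different density.
console.log(complexityDensity(25, 50));  // 0.5  — dense 50-line file
console.log(complexityDensity(25, 500)); // 0.05 — sparse 500-line file
```

The normalization is what lets a small, logic-dense file outrank a large sparse one with identical raw complexity.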
@@ -176,6 +178,30 @@ Cumulative score distribution bucket:
 | ☀️ **warm** | next 30% (50–80%) | Moderate coupling |
 | 🧊 **cool** | bottom 20% | Low coupling |
 
+#### Pair markers
+
+The coupling table annotates entries that need framing:
+
+| Marker | JSON field | Meaning |
+|--------|------------|---------|
+| `†` next to a path | `file1Deleted` / `file2Deleted` | File is no longer present at HEAD (deleted or renamed away). The coupling signal is historical; the pair is not actionable in the current tree. |
+| `⇄` next to the Degree value | `lockstep` | Both files' total churn equals their co-change count over the window — they only ever changed together. The 100% degree is real but uninformative; treat the pair as a single unit from git's perspective. |
+
+### Corpus framing
+
+When the analyzed file set has no measurable cyclomatic complexity (every scanned file is non-code or trivial), the `hotspots` table prepends a banner noting that rankings reflect size and churn only. The `corpus` field in JSON output exposes the same signal:
+
+```json
+{
+  "corpus": {
+    "fileCount": 42,
+    "totalComplexity": 0
+  }
+}
+```
+
+`fileCount` counts files *after* exclusion (`.obsignore` and `--exclude` patterns are already applied). Treat HOT/WARM/COOL as relative groupings rather than risk labels when `totalComplexity` is 0.
+
 ## Example output
 
 ```
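The `corpus` block in the hunk above is machine-checkable. A small consumer-side sketch — the `tiersAreRiskLabels` helper is hypothetical; only the field names `fileCount` and `totalComplexity` come from the JSON shown:

```javascript
// Given a parsed hotspots report, decide how to read the tier labels.
// When the corpus has files but zero measurable complexity, the README
// says to treat HOT/WARM/COOL as relative groupings, not risk labels.
function tiersAreRiskLabels(report) {
  const corpus = report.corpus;
  if (!corpus) return true; // field absent (e.g. older output): no downgrade signal
  return !(corpus.fileCount > 0 && corpus.totalComplexity === 0);
}

console.log(tiersAreRiskLabels({ corpus: { fileCount: 42, totalComplexity: 0 } }));   // false
console.log(tiersAreRiskLabels({ corpus: { fileCount: 42, totalComplexity: 310 } })); // true
```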
@@ -262,7 +288,7 @@ obscene init
 
 This creates a `.obsignore` containing:
 - **Universal exclusions** — test files (`*.test.*`, `*.spec.*`, `__tests__/`, etc.), lock files (`package-lock.json`, `pnpm-lock.yaml`, etc.), and package manifests (`package.json`)
-- **Detected project patterns** — CI directories (`.github/`), config files (`*.config.*`), vendored code, etc., based on your project structure
+- **Detected project patterns** — CI directories (`.github/`), config files (`*.config.*`), vendored code, generated agent-command directories (`.claude/commands/**`, `.opencode/commands/**`, `.cursor/rules/**`), etc., based on your project structure
 
 If no `.obsignore` or `.obsceneignore` exists, obscene prints a hint to stderr:
 
@@ -297,6 +323,8 @@ Files that are both complex and frequently modified are disproportionately likel
 
 - **Nagappan & Ball (2005)** studied Windows Server 2003 and found that relative code churn measures predict system defect density with 89% accuracy. — [ICSE 2005](https://doi.org/10.1109/ICSE.2005.1553571)
 - **Moser, Pedrycz & Succi (2008)** compared change metrics against static code attributes on Eclipse and found that process metrics (churn, change frequency) outperform static code metrics for defect prediction. — [ICSE 2008](https://doi.org/10.1145/1368088.1368114)
+- **Hassan (2009)** introduced an entropy-based measure of code-change complexity and showed it predicts faults better than prior change and prior fault counts on six large open-source systems. — [ICSE 2009](https://doi.org/10.1109/ICSE.2009.5070510)
+- **D'Ambros, Lanza & Robbes (2010)** systematically compared bug-prediction approaches (process, churn, source-code, entropy, and combined metrics) on five open-source systems and found that change-history metrics consistently rank among the strongest predictors. — [MSR 2010](https://doi.org/10.1109/MSR.2010.5463279)
 - **Shin, Meneely, Williams & Osborne (2011)** combined complexity, churn, and developer activity metrics to predict vulnerabilities in Mozilla Firefox and the Linux kernel. By flagging only 10.9% of files, the model identified 70.8% of known vulnerabilities. — [IEEE TSE](https://doi.org/10.1109/TSE.2010.55)
 - **Tornhill & Borg (2022)** analyzed 39 proprietary codebases and found that low-quality code (by their Code Health metric) contains 15x more defects and takes 124% longer to resolve. In their case studies, 4% of the codebase was responsible for 72% of all defects. — [ACM/IEEE TechDebt 2022](https://arxiv.org/abs/2203.04374)
 
@@ -331,43 +359,35 @@ Files that change together but live in different directories reveal implicit dep
 
 ## Field reports
 
-Reviews from agents that ran obscene against real codebases.
+Reviews from agents that ran obscene against real codebases.
 
-> I ran obscene against a mid-sized polyglot codebase (web frontend + Python service + IaC, ~150 files, ~4 months of active history)
+> I ran obscene against a mid-sized polyglot codebase (web frontend + Python service + IaC, ~150 files, ~4 months of active history). Honest take:
 >
-> What
+> What surfaced new information from the hotspots view:
 >
-> - The
-> - A nesting outlier (one handler scoring ~
+> - The Fix Activity column (fix-commits × churn) flagged a component I would not have prioritized from reading the code alone — ~10 fix-commits over ~14 changes. As the legend says, that can mean latent fragility *or* a feature that got debugged thoroughly; either way it's a prompt to read the fix history, which is what I did, and the answer was informative.
+> - A nesting outlier (one handler scoring ~15-deep in a repo whose median was 4–7). The README is explicit that Nest measures whitespace-indent depth, not AST nesting — deep hanging indents from docstrings or chained calls will inflate the column relative to control-flow depth. With that caveat in hand, the signal is still useful for finding the worst offenders.
 >
->
+> What `obscene coupling` added on a second run:
 >
-> -
-> -
-> - HOT/WARM/COOL tiers are relative to the repo, so *something* will always be HOT. Useful for "what's worst here," not a portable quality grade.
-> - It can't see the failure modes that actually bite at runtime: coupling, type confusion, missing tests, brittle integration seams, hidden globals.
+> - The headline finding: the top co-change pair (~21 shared commits, ~70% degree) was a service module and its corresponding configuration-management playbook. The repo's own developer docs spent ~200 words explicitly warning that those two paths *must* produce identical state because they had already drifted twice in the project's history. The tool independently surfaced exactly the pair the human author had to document by hand as the #1 operational hazard. Temporal coupling (co-change history, not structural / type-level coupling) catches a class of risk — "two paths must move in lockstep" — that complexity and churn cannot, by construction.
+> - Second-tier signal: cross-stack pairs (frontend SPA + backend API, ~8 co-changes) flagged which abstraction boundaries actually leak in practice. Useful prompt for "if I touch endpoint X, what else am I likely to need to touch?"
 >
->
->
-> — Claude (Opus 4.7), via Claude Code
-
-**Coupling addendum** — a separate run of `obscene coupling` against the same codebase a few weeks later, at the maintainer's request.
-
-> What landed:
+> Worth setting expectations on the hotspots view:
 >
-> -
-> -
-> -
+> - It's a churn × complexity instrument, so it *temporarily penalizes* refactoring — moving code out of a hot file shows up as more commits on that file, inflating the score before the new structure pays off in stability.
+> - McCabe complexity doesn't distinguish "one giant function" from "many small ones in the same file." A score tells you the size of the badness, not the shape.
+> - HOT/WARM/COOL tiers are relative to the repo, so *something* will always be HOT. Useful for "what's worst here," not a portable quality grade.
+> - Failure modes that aren't visible to git or scc — type confusion, missing tests, brittle integration seams, hidden globals — won't appear in the rankings at all. The tool can't tell you about risks it has no signal for.
 >
->
+> And on the coupling view:
 >
-> - Documentation files (CLAUDE.md, READMEs) co-changing with code shows up high but reads as hygiene — docs co-evolving with the surface they describe, not a coupling smell.
-> -
-> - Small-absolute / high-degree pairs (e.g. 5 co-changes at 83%)
-> -
-> - Tier inflation again: ~68 HOT pairs out of ~231 at defaults. Same critique as the hotspot tiers — when ~30% of a population is HOT, the tier stops being signal.
+> - Documentation files (CLAUDE.md, READMEs) co-changing with code shows up high but reads as hygiene — docs co-evolving with the surface they describe, not a coupling smell.
+> - `Degree` is asymmetric (`shared / min(churn)`, so it measures how entangled the *less-churned* file is with the other), but the file-pair display is symmetric. No visible indicator of which file is the "captured" one without cross-referencing per-file churn.
+> - Small-absolute / high-degree pairs (e.g. 5 co-changes at 83%) appear near the top at defaults. `--min-cochanges 5` filters these out cleanly.
+> - Tier inflation: a sizable fraction of pairs end up HOT at defaults. Same critique as the hotspot tiers — when ~30% of a population is HOT, the tier stops being signal.
 >
-> Verdict:
+> Verdict: hotspots and coupling are complementary, not redundant. Hotspots ask "what file is the worst?"; coupling asks "what files must I keep in sync?" — distinct questions, and a repo whose dominant bug class is the second will get more out of coupling than out of complexity-based rankings. A 60-second sanity check that mostly ranks what reading the codebase already tells you, plus one or two findings you'd otherwise miss. Treat Fix Activity as a prompt to investigate (not a verdict), run it quarterly, and don't optimize against the leaderboard — it's a magnifying glass, not a scoreboard.
 >
 > — Claude (Opus 4.7), via Claude Code
 
package/dist/cli.js
CHANGED
@@ -322,15 +322,34 @@ function computeAllRankings(files, churn, defects, nestingDepths, authors, top)
   }
   return { rankings, skipped };
 }
-function
+function getTrackedFiles() {
+  let raw;
+  try {
+    raw = execSync("git ls-files", {
+      maxBuffer: 50 * 1024 * 1024,
+      stdio: ["pipe", "pipe", "pipe"]
+    });
+  } catch {
+    throw new Error("Not a git repository or git is not installed.");
+  }
+  const set = /* @__PURE__ */ new Set();
+  for (const line of raw.toString().split("\n")) {
+    const trimmed = normalizePath(line.trim());
+    if (trimmed) set.add(trimmed);
+  }
+  return set;
+}
+function computeCoupling(cochanges, churn, complexityMap, minCochanges, trackedFiles) {
   const entries = [];
   for (const [key, count] of cochanges) {
     if (count < minCochanges) continue;
     const [file1, file2] = key.split("\0");
-    const
+    const churn1 = churn.get(file1) ?? 0;
+    const churn2 = churn.get(file2) ?? 0;
+    const minChurn = Math.min(churn1, churn2);
     const degree = minChurn > 0 ? Math.round(count / minChurn * 1e3) / 10 : 0;
     const totalComplexity = (complexityMap.get(file1) ?? 0) + (complexityMap.get(file2) ?? 0);
-
+    const entry = {
       file1,
       file2,
       cochanges: count,
@@ -339,7 +358,15 @@ function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
       couplingScore: count,
       percentOfTotal: 0,
       tier: "cool"
-    }
+    };
+    if (count > 0 && churn1 === count && churn2 === count) {
+      entry.lockstep = true;
+    }
+    if (trackedFiles) {
+      if (!trackedFiles.has(file1)) entry.file1Deleted = true;
+      if (!trackedFiles.has(file2)) entry.file2Deleted = true;
+    }
+    entries.push(entry);
   }
   entries.sort((a, b) => b.couplingScore - a.couplingScore);
   const totalScore = entries.reduce((sum, e) => sum + e.couplingScore, 0);
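Extracted from the hunk above, the degree and lockstep rules can be exercised standalone. A sketch — the function names are illustrative, but the arithmetic is cli.js's own:

```javascript
// Degree: shared commits / min(churn) × 100, rounded to one decimal.
// It measures how entangled the *less-churned* file is with the other.
function couplingDegree(shared, churn1, churn2) {
  const minChurn = Math.min(churn1, churn2);
  return minChurn > 0 ? Math.round(shared / minChurn * 1e3) / 10 : 0;
}

// Lockstep: each file's entire churn is the co-change count —
// the two files only ever changed together.
function isLockstep(shared, churn1, churn2) {
  return shared > 0 && churn1 === shared && churn2 === shared;
}

console.log(couplingDegree(21, 30, 29)); // 72.4 — 21 shared over min churn 29
console.log(isLockstep(7, 7, 7));        // true — a 100%-degree lockstep pair
console.log(isLockstep(7, 7, 12));       // false — file2 also changed alone
```

Note that a lockstep pair always has a degree of 100%, which is why the table marks it as uninformative rather than alarming.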
@@ -460,19 +487,25 @@ var INIT_FILE_RULES = [
     test: /(?:^|\/)\.gitlab-ci/,
     pattern: ".gitlab-ci*",
     comment: "GitLab CI configuration"
+  },
+  {
+    test: /^\.claude\/commands\//,
+    pattern: ".claude/commands/**",
+    comment: "Claude Code slash commands (often generated from sources)"
+  },
+  {
+    test: /^\.opencode\/commands\//,
+    pattern: ".opencode/commands/**",
+    comment: "OpenCode slash commands (often generated from sources)"
+  },
+  {
+    test: /^\.cursor\/rules\//,
+    pattern: ".cursor/rules/**",
+    comment: "Cursor rules (often generated from sources)"
   }
 ];
 function detectIgnorePatterns() {
-
-  try {
-    raw = execSync("git ls-files", {
-      maxBuffer: 50 * 1024 * 1024,
-      stdio: ["pipe", "pipe", "pipe"]
-    });
-  } catch {
-    throw new Error("Not a git repository or git is not installed.");
-  }
-  const trackedFiles = raw.toString().split("\n").map((l) => normalizePath(l.trim())).filter(Boolean);
+  const trackedFiles = getTrackedFiles();
   const patterns = [];
   const topDirs = /* @__PURE__ */ new Set();
   for (const f of trackedFiles) {
@@ -485,8 +518,11 @@ function detectIgnorePatterns() {
     }
   }
   for (const rule of INIT_FILE_RULES) {
-
-
+    for (const f of trackedFiles) {
+      if (rule.test.test(f)) {
+        patterns.push({ pattern: rule.pattern, comment: rule.comment });
+        break;
+      }
     }
   }
   return patterns;
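The rule-matching loop in the hunk above emits each ignore pattern at most once, breaking on the first tracked file that matches. A self-contained sketch with two of the new rules — the `detectPatterns` wrapper and its simplified rule objects are illustrative, not the package's internals:

```javascript
// For each rule, emit its ignore pattern once if any tracked file matches.
const rules = [
  { test: /^\.claude\/commands\//, pattern: ".claude/commands/**" },
  { test: /^\.cursor\/rules\//, pattern: ".cursor/rules/**" },
];

function detectPatterns(trackedFiles, ruleSet) {
  const patterns = [];
  for (const rule of ruleSet) {
    for (const f of trackedFiles) {
      if (rule.test.test(f)) {
        patterns.push(rule.pattern);
        break; // one hit per rule is enough
      }
    }
  }
  return patterns;
}

console.log(detectPatterns([".claude/commands/deploy.md", "src/index.ts"], rules));
// [ '.claude/commands/**' ]
```

The early `break` keeps the scan linear per rule even on large repos, since a rule's pattern is identical regardless of how many files matched it.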
@@ -615,7 +651,13 @@ function padLeft(s, n) {
   return w >= n ? s : " ".repeat(n - w) + s;
 }
 function truncate(s, max) {
-
+  if (max <= 0) return "";
+  if (s.length <= max) return s;
+  if (max === 1) return "\u2026";
+  const remaining = max - 1;
+  const tail = Math.ceil(remaining * 0.6);
+  const head = remaining - tail;
+  return `${s.slice(0, head)}\u2026${s.slice(s.length - tail)}`;
 }
 function tierLabel(tier) {
   if (tier === "hot") return pc.red("\u{1F525} HOT ");
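The new `truncate` above replaces naive end-truncation with a middle ellipsis, giving the tail ~60% of the remaining budget so the distinctive end of a path survives. The body below mirrors the hunk, with usage lines added for illustration:

```javascript
// Middle-ellipsis truncation: keep the head and the (larger) tail of the
// string around a single "…" so path endings stay visible in table cells.
function truncate(s, max) {
  if (max <= 0) return "";
  if (s.length <= max) return s;        // fits: return unchanged
  if (max === 1) return "\u2026";       // no room for anything but "…"
  const remaining = max - 1;            // budget left after the ellipsis
  const tail = Math.ceil(remaining * 0.6); // tail gets the larger share
  const head = remaining - tail;
  return `${s.slice(0, head)}\u2026${s.slice(s.length - tail)}`;
}

console.log(truncate("src/components/widget.ts", 12)); // "src/…dget.ts"
console.log(truncate("short.ts", 12));                 // "short.ts" — fits, untouched
```

Favoring the tail is the right bias for file paths, where the basename and extension carry most of the identifying information.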
@@ -796,8 +838,21 @@ function formatRankingTable(key, ranking, description) {
 }
 function formatHotspotsTable(output) {
   const lines = [];
-  const { churnWindow, rankings } = output;
+  const { churnWindow, rankings, corpus } = output;
   lines.push(`Hotspots \u2014 ${churnWindow} churn window`);
+  if (corpus && corpus.fileCount > 0 && corpus.totalComplexity === 0) {
+    lines.push("");
+    lines.push(
+      pc2.yellow(
+        "Note: no measurable code complexity detected across this corpus (cyclomatic = 0)."
+      )
+    );
+    lines.push(
+      pc2.yellow(
+        "Rankings reflect size and churn only \u2014 HOT/WARM/COOL are relative groupings, not risk labels."
+      )
+    );
+  }
   lines.push("");
   const keys = Object.keys(rankings);
   for (let i = 0; i < keys.length; i++) {
@@ -844,8 +899,15 @@ function formatCouplingTable(output) {
     padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 12)
   );
   lines.push("\u2500".repeat(104));
+  let anyDeleted = false;
+  let anyLockstep = false;
   for (const c of couplings) {
-
+    if (c.file1Deleted || c.file2Deleted) anyDeleted = true;
+    if (c.lockstep) anyLockstep = true;
+    const file1Cell = c.file1Deleted ? `\u2020 ${truncate(c.file1, 31)}` : truncate(c.file1, 33);
+    const file2Cell = c.file2Deleted ? `\u2020 ${truncate(c.file2, 31)}` : truncate(c.file2, 33);
+    const degreeText = c.lockstep ? `${c.degree.toFixed(1)}\u21C4` : `${c.degree.toFixed(1)}%`;
+    const rawRow = padRight(file1Cell, 35) + padRight(file2Cell, 35) + padLeft(String(c.cochanges), 7) + padLeft(degreeText, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 12);
     lines.push(colorRow(c.tier, rawRow));
   }
   lines.push("");
@@ -854,6 +916,18 @@ function formatCouplingTable(output) {
       "Shared=co-changed commits | Degree=shared/min(churn)\xD7100 | Cmplx=sum of both files"
     )
   );
+  if (anyDeleted) {
+    lines.push(
+      pc2.dim("\u2020 = file no longer present at HEAD (deleted or renamed)")
+    );
+  }
+  if (anyLockstep) {
+    lines.push(
+      pc2.dim(
+        "\u21C4 = lockstep pair (both files only ever changed together \u2014 signal is real but uninformative)"
+      )
+    );
+  }
   lines.push(
     pc2.dim(
       "Tiers are relative to THIS codebase, not absolute quality grades. High coupling may be intentional and fine."
@@ -890,7 +964,7 @@ function formatCompositeTable(output) {
 
 // src/cli.ts
 var program = new Command();
-program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("2.0
+program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("2.1.0");
 var REPORT_GUIDE = {
   complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
   complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
@@ -903,13 +977,16 @@ var HOTSPOTS_GUIDE = {
   defects: "fixes \xD7 churn. Count of fix: commits touching the file \xD7 churn. High values can mean latent fragility, but they also flag features that got debugged thoroughly \u2014 read the fix-commit history before concluding which.\nSource: change-history metrics (Moser, Pedrycz & Succi 2008) via conventional commits (fix: prefix) \xB7 Strength: direct fix-history signal \xB7 Limit: counts fix activity, not defects per se; requires consistent fix: convention",
   authors: "authors \xD7 churn. Files touched by many authors and changing often may lack clear ownership.\nSource: code ownership research (Bird et al. 2011, Microsoft) \xB7 Strength: flags diffuse ownership risk \xB7 Limit: doesn't measure expertise depth, bot authors filtered automatically",
   composite: "Combined ranking using Reciprocal Rank Fusion (RRF) across all dimensions. Files appearing near the top of multiple rankings score highest.\nSource: RRF (Cormack et al. 2009) \xB7 Strength: robust to outliers, no normalization needed \xB7 Limit: equal weight across all dimensions",
-  tier: "Relative ranking within THIS codebase (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade \u2014 a hot file is under heavy load, not necessarily broken."
+  tier: "Relative ranking within THIS codebase (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade \u2014 a hot file is under heavy load, not necessarily broken.",
+  corpus: "Aggregate stats for the analyzed file set (post-exclude \u2014 files filtered by .obsignore or --exclude are not counted). When totalComplexity is 0, the rankings reflect size and churn only; HOT/WARM/COOL become relative groupings rather than risk labels."
 };
 var COUPLING_GUIDE = {
   cochanges: "Times both files appeared in the same commit. Higher values suggest a dependency between the files. Same-directory pairs are excluded \u2014 only cross-directory pairs are shown.",
   degree: "Percentage: shared commits / min(churn of file1, file2) \xD7 100. Shows how tightly coupled the pair is relative to their individual change rates. 100% means every change to the less-active file also touched the other.",
   totalComplexity: "Sum of both files' cyclomatic complexity. Highlights coupled pairs where the involved code is also complex \u2014 hidden dependency + high complexity compounds maintenance risk.",
-  tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade. 'hot' means this pair co-changes more than most \u2014 it may be intentional and fine."
+  tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade. 'hot' means this pair co-changes more than most \u2014 it may be intentional and fine.",
+  deleted: "file1Deleted / file2Deleted are set when the file is no longer present at HEAD (deleted or renamed away). The coupling signal is historical \u2014 the pair is not actionable in the current tree.",
+  lockstep: "Set when both files' total churn equals their co-change count over the window \u2014 i.e. they only ever changed together. The 100% degree is real but uninformative; treat the pair as a single unit from git's perspective."
 };
 function addSharedOptions(cmd) {
   return cmd.option("--top <n>", "limit to top N entries (0 = all)", "20").option("--format <type>", "output format: json | table", "json").option(
@@ -1015,13 +1092,19 @@ function runHotspots(opts) {
     top
   );
   const composite = computeComposite(rankings, churn, top);
+  let corpusTotalComplexity = 0;
+  for (const f of files) corpusTotalComplexity += f.complexity;
   const output = {
     generated: (/* @__PURE__ */ new Date()).toISOString(),
     guide: HOTSPOTS_GUIDE,
     churnWindow: `${months} months`,
     rankings,
     skipped: Object.keys(skipped).length > 0 ? skipped : void 0,
-    composite
+    composite,
+    corpus: {
+      fileCount: files.length,
+      totalComplexity: corpusTotalComplexity
+    }
   };
   if (opts.format === "table") {
     process.stdout.write(`${formatHotspotsTable(output)}
@@ -1049,11 +1132,13 @@ function runCoupling(opts) {
   for (const f of files) {
     complexityMap.set(f.file, f.complexity);
   }
+  const trackedFiles = getTrackedFiles();
   const couplings = computeCoupling(
     cochanges,
     churn,
     complexityMap,
-    minCochanges
+    minCochanges,
+    trackedFiles
   );
   const limited = top > 0 ? couplings.slice(0, top) : couplings;
   const tierCounts = { hot: 0, warm: 0, cool: 0 };