@wbern/obscene 2.0.1 → 2.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +53 -33
  2. package/dist/cli.js +112 -25
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -88,7 +88,7 @@ Each table has its own tier assignment by cumulative score distribution:
 
 Tiers are relative to THIS codebase, not absolute quality grades. A "hot" file is under heavy load, not necessarily broken.
 
-A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Rankings with insufficient data are skipped with an explanation (e.g. defects ranking requires 5+ `fix:` commits across 3+ files). Bot authors (`[bot]` suffix) are filtered automatically.
+A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Rankings with insufficient data are skipped with an explanation (e.g. the Fix Activity ranking requires 5+ `fix:` commits across 3+ files). Bot authors (`[bot]` suffix) are filtered automatically.
 
 ### `obscene coupling`
 
@@ -122,7 +122,7 @@ Per-file complexity without churn. Useful for raw complexity distribution.
 
 #### Score
 
-`metric × churn`. Each ranking table uses a different metric (complexity, nesting, defects, or authors) multiplied by churn. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
+`metric × churn`. Each ranking table uses a different metric (complexity, nesting, fix activity, or authors) multiplied by churn. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
 
 #### Churn (`Churn`)
 
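The `metric × churn` score in the hunk above is simple enough to restate outside the README. A minimal sketch, assuming per-file metric and churn values arrive as Maps (`rankByScore` and the sample data are hypothetical illustrations, not obscene's internals):

```javascript
// Hypothetical sketch of score = metric x churn (illustration only, not
// obscene's actual internals). `metric` and `churn` map file path -> number.
function rankByScore(files, metric, churn) {
  return files
    .map((file) => ({
      file,
      score: (metric.get(file) ?? 0) * (churn.get(file) ?? 0),
    }))
    .sort((a, b) => b.score - a.score);
}

const churn = new Map([["parser.js", 12], ["util.js", 3]]);
const complexity = new Map([["parser.js", 40], ["util.js", 5]]);
const ranked = rankByScore(["parser.js", "util.js"], complexity, churn);
console.log(ranked[0]); // parser.js first, score 480 (40 x 12)
```

Each ranking table in the hotspots report is this same shape with a different `metric` input (complexity, nesting, fix activity, or authors).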
@@ -130,15 +130,17 @@ Number of commits touching the file within the configured time window (default:
 
 #### Cyclomatic complexity (`Cmplx`)
 
-Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide.
+Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide. The measure was introduced by McCabe (1976) in *A Complexity Measure* and has been the standard structural-complexity metric since. — [IEEE TSE](https://doi.org/10.1109/TSE.1976.233837)
 
 #### Complexity density (`Dens`)
 
 `complexity / lines of code`. Normalizes complexity by file size so a 50-line file with complexity 25 (density 0.50) stands out against a 500-line file with complexity 25 (density 0.05). Based on Harrison & Magel (1981), who found that complexity relative to code size is a stronger fault predictor than raw complexity alone.
 
-#### Fixes (`Fixes`)
+#### Fix activity (`Fixes`)
 
-Count of `fix:` conventional commits touching the file within the churn window. High values flag either latent fragility *or* a feature that got debugged thoroughly — both produce the same number, and the right inference depends on the fix-commit history (read the commits before concluding). The metric is inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction; obscene reports the raw fix-activity signal and leaves the interpretation to you.
+Count of `fix:` conventional commits touching the file within the churn window. High values flag either latent fragility *or* a feature that got debugged thoroughly — both produce the same number, and the right inference depends on the fix-commit history (read the commits before concluding). The metric is inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
+
+The literature in [Why churn × complexity?](#why-churn-x-complexity) talks about *defects* — bugs confirmed against a bug-tracker or post-release issue database. obscene doesn't have access to that ground truth, so it uses `fix:` commits as a proxy and reports the raw signal as Fix Activity. The two are related but not identical: a `fix:` commit is direct evidence that someone considered something broken enough to label the change as a fix, but it doesn't distinguish trivial fixes from severe ones, and it relies on the team using conventional commits consistently. Treat Fix Activity as a prompt to read the commits, not as a defect count.
 
 #### Fix density (`FxDns`)
 
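The `fix:` proxy described above can be sketched as a pure function. The `{ subject, files }` commit shape and the `countFixActivity` name are illustrative assumptions; obscene derives the same signal from git history:

```javascript
// Hypothetical sketch of the fix-activity proxy: count conventional `fix:`
// commits per file. The { subject, files } commit shape is illustrative only.
function countFixActivity(commits) {
  const fixes = new Map();
  // Matches fix:, fix(scope): and fix!: per the conventional-commits grammar.
  const isFix = /^fix(\([^)]*\))?!?:/;
  for (const { subject, files } of commits) {
    if (!isFix.test(subject)) continue;
    for (const file of files) {
      fixes.set(file, (fixes.get(file) ?? 0) + 1);
    }
  }
  return fixes;
}

const fixes = countFixActivity([
  { subject: "fix: handle empty input", files: ["parser.js"] },
  { subject: "feat: add flag", files: ["cli.js"] },
  { subject: "fix(parser): off-by-one", files: ["parser.js", "util.js"] },
]);
console.log(fixes.get("parser.js")); // 2
```

As the section above stresses, the count only measures labeled fix activity; a team that never writes `fix:` subjects produces an empty Map regardless of how buggy the code is.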
@@ -176,6 +178,30 @@ Cumulative score distribution bucket:
 | ☀️ **warm** | next 30% (50–80%) | Moderate coupling |
 | 🧊 **cool** | bottom 20% | Low coupling |
 
+#### Pair markers
+
+The coupling table annotates entries that need framing:
+
+| Marker | JSON field | Meaning |
+|--------|------------|---------|
+| `†` next to a path | `file1Deleted` / `file2Deleted` | File is no longer present at HEAD (deleted or renamed away). The coupling signal is historical; the pair is not actionable in the current tree. |
+| `⇄` next to the Degree value | `lockstep` | Shared commits / max(churn) ≥ 0.9 — both files almost always change together over the window. Typical of generator/mirror pairs (`README.md` ↔ `src/README.md`, `*.pb.go` ↔ `*.proto`). Treat the pair as a single unit from git's perspective. |
+
+### Corpus framing
+
+When the analyzed file set has no measurable cyclomatic complexity (every scanned file is non-code or trivial), the `hotspots` table prepends a banner noting that rankings reflect size and churn only. The `corpus` field in JSON output exposes the same signal:
+
+```json
+{
+  "corpus": {
+    "fileCount": 42,
+    "totalComplexity": 0
+  }
+}
+```
+
+`fileCount` counts files *after* exclusion (`.obsignore` and `--exclude` patterns are already applied). Treat HOT/WARM/COOL as relative groupings rather than risk labels when `totalComplexity` is 0.
+
 ## Example output
 
 ```
@@ -262,7 +288,7 @@ obscene init
 
 This creates a `.obsignore` containing:
 - **Universal exclusions** — test files (`*.test.*`, `*.spec.*`, `__tests__/`, etc.), lock files (`package-lock.json`, `pnpm-lock.yaml`, etc.), and package manifests (`package.json`)
-- **Detected project patterns** — CI directories (`.github/`), config files (`*.config.*`), vendored code, etc., based on your project structure
+- **Detected project patterns** — CI directories (`.github/`), config files (`*.config.*`), vendored code, generated agent-command directories (`.claude/commands/**`, `.opencode/commands/**`, `.cursor/rules/**`), etc., based on your project structure
 
 If no `.obsignore` or `.obsceneignore` exists, obscene prints a hint to stderr:
 
@@ -297,6 +323,8 @@ Files that are both complex and frequently modified are disproportionately likel
 
 - **Nagappan & Ball (2005)** studied Windows Server 2003 and found that relative code churn measures predict system defect density with 89% accuracy. — [ICSE 2005](https://doi.org/10.1109/ICSE.2005.1553571)
 - **Moser, Pedrycz & Succi (2008)** compared change metrics against static code attributes on Eclipse and found that process metrics (churn, change frequency) outperform static code metrics for defect prediction. — [ICSE 2008](https://doi.org/10.1145/1368088.1368114)
+- **Hassan (2009)** introduced an entropy-based measure of code-change complexity and showed it predicts faults better than prior change and prior fault counts on six large open-source systems. — [ICSE 2009](https://doi.org/10.1109/ICSE.2009.5070510)
+- **D'Ambros, Lanza & Robbes (2010)** systematically compared bug-prediction approaches (process, churn, source-code, entropy, and combined metrics) on five open-source systems and found that change-history metrics consistently rank among the strongest predictors. — [MSR 2010](https://doi.org/10.1109/MSR.2010.5463279)
 - **Shin, Meneely, Williams & Osborne (2011)** combined complexity, churn, and developer activity metrics to predict vulnerabilities in Mozilla Firefox and the Linux kernel. By flagging only 10.9% of files, the model identified 70.8% of known vulnerabilities. — [IEEE TSE](https://doi.org/10.1109/TSE.2010.55)
 - **Tornhill & Borg (2022)** analyzed 39 proprietary codebases and found that low-quality code (by their Code Health metric) contains 15x more defects and takes 124% longer to resolve. In their case studies, 4% of the codebase was responsible for 72% of all defects. — [ACM/IEEE TechDebt 2022](https://arxiv.org/abs/2203.04374)
 
@@ -331,43 +359,35 @@ Files that change together but live in different directories reveal implicit dep
 
 ## Field reports
 
-Reviews from agents that ran obscene against real codebases. Unedited.
+Reviews from agents that ran obscene against real codebases.
 
-> I ran obscene against a mid-sized polyglot codebase (web frontend + Python service + IaC, ~150 files, ~4 months of active history) right after a session of structural refactors. Honest take:
+> I ran obscene against a mid-sized polyglot codebase (web frontend + Python service + IaC, ~150 files, ~4 months of active history). Honest take:
 >
-> What actually surfaced new information:
+> What surfaced new information from the hotspots view:
 >
-> - The defect-density column (fix-commits per change) flagged a fragile component I would not have prioritized from reading the code alone — ~10 fix-commits over ~14 changes is a "this feature keeps breaking" signal you don't get from raw line counts or complexity.
-> - A nesting outlier (one handler scoring ~30-deep in a repo whose median was 4–7). Extreme enough that no amount of churn-weighting could hide it.
+> - The Fix Activity column (fix-commits × churn) flagged a component I would not have prioritized from reading the code alone — ~10 fix-commits over ~14 changes. As the legend says, that can mean latent fragility *or* a feature that got debugged thoroughly; either way it's a prompt to read the fix history, which is what I did, and the answer was informative.
+> - A nesting outlier (one handler scoring ~15-deep in a repo whose median was 4–7). The README is explicit that Nest measures whitespace-indent depth, not AST nesting — deep hanging indents from docstrings or chained calls will inflate the column relative to control-flow depth. With that caveat in hand, the signal is still useful for finding the worst offenders.
 >
-> Worth setting expectations on:
+> What `obscene coupling` added on a second run:
 >
-> - It's a churn × complexity instrument, so it *temporarily penalizes* refactoring — moving code out of a hot file shows up as more commits on that file, inflating the score before the new structure pays off in stability. The hottest file in our run was hottest partly because we'd touched it that day.
-> - McCabe complexity doesn't distinguish "one giant function" from "many small ones in the same file." A score tells you the size of the badness, not the shape.
-> - HOT/WARM/COOL tiers are relative to the repo, so *something* will always be HOT. Useful for "what's worst here," not a portable quality grade.
-> - It can't see the failure modes that actually bite at runtime: coupling, type confusion, missing tests, brittle integration seams, hidden globals.
+> - The headline finding: the top co-change pair (~21 shared commits, ~70% degree) was a service module and its corresponding configuration-management playbook. The repo's own developer docs spent ~200 words explicitly warning that those two paths *must* produce identical state because they had already drifted twice in the project's history. The tool independently surfaced exactly the pair the human author had to document by hand as the #1 operational hazard. Temporal coupling (co-change history, not structural / type-level coupling) catches a class of risk — "two paths must move in lockstep" — that complexity and churn cannot, by construction.
+> - Second-tier signal: cross-stack pairs (frontend SPA + backend API, ~8 co-changes) flagged which abstraction boundaries actually leak in practice. Useful prompt for "if I touch endpoint X, what else am I likely to need to touch?"
 >
-> Verdict: a 60-second sanity check that mostly ranks what reading the codebase already tells you, plus one or two findings you'd otherwise miss. Treat the defect-density column as the most signal-dense, run it quarterly, and don't optimize against the leaderboard — it's a magnifying glass, not a scoreboard.
->
-> — Claude (Opus 4.7), via Claude Code
-
-**Coupling addendum** — a separate run of `obscene coupling` against the same codebase a few weeks later, at the maintainer's request.
-
-> What landed:
+> Worth setting expectations on the hotspots view:
 >
-> - The headline finding: the top co-change pair (~21 shared commits, ~70% degree) was a service module and its corresponding configuration-management playbook. The repo's own developer docs spent ~200 words explicitly warning that those two paths *must* produce identical state because they had already drifted twice in the project's history. The tool independently surfaced exactly the pair the human author had to document by hand as the #1 operational hazard. That's a real find — temporal coupling catches a class of risk ("two paths must move in lockstep") that complexity and churn cannot, by construction.
-> - Second-tier signal that earned its keep: cross-stack pairs (frontend SPA + backend API, ~8 co-changes) flagged which abstraction boundaries actually leak in practice. Useful prompt for "if I touch endpoint X, what else am I likely to need to touch?"
-> - Worth saying explicitly: the original testament's line "can't see coupling" was unfair as written. I meant *structural* coupling — the static-analysis question of "if I rename this field, what breaks?". `obscene coupling` measures *temporal* coupling (co-change history). Different sense of the word, and for the failure mode I was implicitly thinking of ("two things must stay in sync") the temporal lens is arguably more diagnostic than the structural one would have been.
+> - It's a churn × complexity instrument, so it *temporarily penalizes* refactoring — moving code out of a hot file shows up as more commits on that file, inflating the score before the new structure pays off in stability.
+> - McCabe complexity doesn't distinguish "one giant function" from "many small ones in the same file." A score tells you the size of the badness, not the shape.
+> - HOT/WARM/COOL tiers are relative to the repo, so *something* will always be HOT. Useful for "what's worst here," not a portable quality grade.
+> - Failure modes that aren't visible to git or scc — type confusion, missing tests, brittle integration seams, hidden globals — won't appear in the rankings at all. The tool can't tell you about risks it has no signal for.
 >
-> Where the friction was:
+> And on the coupling view:
 >
-> - Documentation files (CLAUDE.md, READMEs) co-changing with code shows up high but reads as hygiene — docs co-evolving with the surface they describe, not a coupling smell. Worth either a default exclusion for markdown or an explicit callout in the legend.
-> - The `Degree` metric is asymmetric (`shared / min(churn)`, so it measures how entangled the *less-churned* file is with the other), but the file-pair display is symmetric. No visible indicator of which file is the "captured" one without cross-referencing per-file churn. Adding directionality to the printout would read more clearly.
-> - Small-absolute / high-degree pairs (e.g. 5 co-changes at 83%) appeared near the top at defaults. `--min-cochanges 5` filtered these out cleanly, but the defaults need either a sane minimum or a confidence-shaped column.
-> - The combined-complexity column on each row didn't add much — a sum of two unrelated complexities has no clean interpretation, and the hotspots report already covers per-file complexity well.
-> - Tier inflation again: ~68 HOT pairs out of ~231 at defaults. Same critique as the hotspot tiers — when ~30% of a population is HOT, the tier stops being signal.
+> - Documentation files (CLAUDE.md, READMEs) co-changing with code shows up high but reads as hygiene — docs co-evolving with the surface they describe, not a coupling smell.
+> - `Degree` is asymmetric (`shared / min(churn)`, so it measures how entangled the *less-churned* file is with the other), but the file-pair display is symmetric. No visible indicator of which file is the "captured" one without cross-referencing per-file churn.
+> - Small-absolute / high-degree pairs (e.g. 5 co-changes at 83%) appear near the top at defaults. `--min-cochanges 5` filters these out cleanly.
+> - Tier inflation: a sizable fraction of pairs end up HOT at defaults. Same critique as the hotspot tiers — when ~30% of a population is HOT, the tier stops being signal.
 >
-> Verdict: `obscene coupling` complements the hotspot view rather than overlapping with it. Hotspots ask "what file is the worst?"; coupling asks "what files must I keep in sync?" — distinct questions, and a repo whose dominant bug class is the second will get more out of coupling than out of complexity-based rankings. For this codebase, coupling rediscovered an institutional hazard the human author had felt compelled to document in prose. Worth running alongside hotspots, not in place of either lens. Same quarterly cadence applies; treat the cross-stack and cross-path pairs as the most action-shaped output.
+> Verdict: hotspots and coupling are complementary, not redundant. Hotspots ask "what file is the worst?"; coupling asks "what files must I keep in sync?" — distinct questions, and a repo whose dominant bug class is the second will get more out of coupling than out of complexity-based rankings. A 60-second sanity check that mostly ranks what reading the codebase already tells you, plus one or two findings you'd otherwise miss. Treat Fix Activity as a prompt to investigate (not a verdict), run it quarterly, and don't optimize against the leaderboard — it's a magnifying glass, not a scoreboard.
 >
 > — Claude (Opus 4.7), via Claude Code
 
package/dist/cli.js CHANGED
@@ -322,15 +322,34 @@ function computeAllRankings(files, churn, defects, nestingDepths, authors, top)
   }
   return { rankings, skipped };
 }
-function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
+function getTrackedFiles() {
+  let raw;
+  try {
+    raw = execSync("git ls-files", {
+      maxBuffer: 50 * 1024 * 1024,
+      stdio: ["pipe", "pipe", "pipe"]
+    });
+  } catch {
+    throw new Error("Not a git repository or git is not installed.");
+  }
+  const set = /* @__PURE__ */ new Set();
+  for (const line of raw.toString().split("\n")) {
+    const trimmed = normalizePath(line.trim());
+    if (trimmed) set.add(trimmed);
+  }
+  return set;
+}
+function computeCoupling(cochanges, churn, complexityMap, minCochanges, trackedFiles) {
   const entries = [];
   for (const [key, count] of cochanges) {
     if (count < minCochanges) continue;
     const [file1, file2] = key.split("\0");
-    const minChurn = Math.min(churn.get(file1) ?? 0, churn.get(file2) ?? 0);
+    const churn1 = churn.get(file1) ?? 0;
+    const churn2 = churn.get(file2) ?? 0;
+    const minChurn = Math.min(churn1, churn2);
     const degree = minChurn > 0 ? Math.round(count / minChurn * 1e3) / 10 : 0;
     const totalComplexity = (complexityMap.get(file1) ?? 0) + (complexityMap.get(file2) ?? 0);
-    entries.push({
+    const entry = {
       file1,
       file2,
       cochanges: count,
@@ -339,7 +358,16 @@ function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
       couplingScore: count,
       percentOfTotal: 0,
       tier: "cool"
-    });
+    };
+    const maxChurn = Math.max(churn1, churn2);
+    if (count > 0 && maxChurn > 0 && count / maxChurn >= 0.9) {
+      entry.lockstep = true;
+    }
+    if (trackedFiles) {
+      if (!trackedFiles.has(file1)) entry.file1Deleted = true;
+      if (!trackedFiles.has(file2)) entry.file2Deleted = true;
+    }
+    entries.push(entry);
   }
   entries.sort((a, b) => b.couplingScore - a.couplingScore);
   const totalScore = entries.reduce((sum, e) => sum + e.couplingScore, 0);
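The two per-pair signals in this hunk, Degree and the new lockstep flag, reduce to a few arithmetic lines. A standalone restatement (the `pairSignals` helper name is hypothetical; the formulas are the ones in the code above):

```javascript
// Sketch of the two per-pair coupling signals (hypothetical helper name).
// shared = co-change commits; churn1/churn2 = per-file commit counts.
function pairSignals(shared, churn1, churn2) {
  const minChurn = Math.min(churn1, churn2);
  const maxChurn = Math.max(churn1, churn2);
  return {
    // Degree: shared / min(churn) as a percentage with one decimal.
    degree: minChurn > 0 ? Math.round((shared / minChurn) * 1e3) / 10 : 0,
    // Lockstep: shared commits cover >= 90% of even the busier file's churn.
    lockstep: shared > 0 && maxChurn > 0 && shared / maxChurn >= 0.9,
  };
}

console.log(pairSignals(21, 30, 70)); // degree 70, lockstep false
console.log(pairSignals(9, 9, 10));   // degree 100, lockstep true
```

Note the asymmetry: Degree divides by the smaller churn, so it measures how entangled the less-churned file is, while lockstep divides by the larger churn and therefore only fires when both files nearly always move together.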
@@ -460,19 +488,25 @@ var INIT_FILE_RULES = [
     test: /(?:^|\/)\.gitlab-ci/,
     pattern: ".gitlab-ci*",
     comment: "GitLab CI configuration"
+  },
+  {
+    test: /^\.claude\/commands\//,
+    pattern: ".claude/commands/**",
+    comment: "Claude Code slash commands (often generated from sources)"
+  },
+  {
+    test: /^\.opencode\/commands\//,
+    pattern: ".opencode/commands/**",
+    comment: "OpenCode slash commands (often generated from sources)"
+  },
+  {
+    test: /^\.cursor\/rules\//,
+    pattern: ".cursor/rules/**",
+    comment: "Cursor rules (often generated from sources)"
   }
 ];
 function detectIgnorePatterns() {
-  let raw;
-  try {
-    raw = execSync("git ls-files", {
-      maxBuffer: 50 * 1024 * 1024,
-      stdio: ["pipe", "pipe", "pipe"]
-    });
-  } catch {
-    throw new Error("Not a git repository or git is not installed.");
-  }
-  const trackedFiles = raw.toString().split("\n").map((l) => normalizePath(l.trim())).filter(Boolean);
+  const trackedFiles = getTrackedFiles();
   const patterns = [];
   const topDirs = /* @__PURE__ */ new Set();
   for (const f of trackedFiles) {
@@ -485,8 +519,11 @@ function detectIgnorePatterns() {
     }
   }
   for (const rule of INIT_FILE_RULES) {
-    if (trackedFiles.some((f) => rule.test.test(f))) {
-      patterns.push({ pattern: rule.pattern, comment: rule.comment });
+    for (const f of trackedFiles) {
+      if (rule.test.test(f)) {
+        patterns.push({ pattern: rule.pattern, comment: rule.comment });
+        break;
+      }
     }
   }
   return patterns;
@@ -615,7 +652,13 @@ function padLeft(s, n) {
   return w >= n ? s : " ".repeat(n - w) + s;
 }
 function truncate(s, max) {
-  return s.length <= max ? s : `\u2026${s.slice(s.length - max + 1)}`;
+  if (max <= 0) return "";
+  if (s.length <= max) return s;
+  if (max === 1) return "\u2026";
+  const remaining = max - 1;
+  const tail = Math.ceil(remaining * 0.6);
+  const head = remaining - tail;
+  return `${s.slice(0, head)}\u2026${s.slice(s.length - tail)}`;
}
 function tierLabel(tier) {
   if (tier === "hot") return pc.red("\u{1F525} HOT ");
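The new `truncate` keeps both ends of a path instead of only the tail. A standalone copy of the function from the hunk above, with sample behavior:

```javascript
// Standalone copy of the new truncate: keeps a head and a tail around "…",
// biasing ~60% of the remaining budget to the tail (the filename end of a path).
function truncate(s, max) {
  if (max <= 0) return "";
  if (s.length <= max) return s;
  if (max === 1) return "\u2026";
  const remaining = max - 1;
  const tail = Math.ceil(remaining * 0.6);
  const head = remaining - tail;
  return `${s.slice(0, head)}\u2026${s.slice(s.length - tail)}`;
}

console.log(truncate("packages/app/src/components/Button.tsx", 20));
// "package…s/Button.tsx" (exactly 20 chars, both ends of the path survive)
```

The old behavior would have produced only the tail, so every long path in a monorepo started with the same "…components/" suffix; the middle cut preserves the distinguishing top-level directory.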
@@ -796,8 +839,21 @@ function formatRankingTable(key, ranking, description) {
 }
 function formatHotspotsTable(output) {
   const lines = [];
-  const { churnWindow, rankings } = output;
+  const { churnWindow, rankings, corpus } = output;
   lines.push(`Hotspots \u2014 ${churnWindow} churn window`);
+  if (corpus && corpus.fileCount > 0 && corpus.totalComplexity === 0) {
+    lines.push("");
+    lines.push(
+      pc2.yellow(
+        "Note: no measurable code complexity detected across this corpus (cyclomatic = 0)."
+      )
+    );
+    lines.push(
+      pc2.yellow(
+        "Rankings reflect size and churn only \u2014 HOT/WARM/COOL are relative groupings, not risk labels."
+      )
+    );
+  }
   lines.push("");
   const keys = Object.keys(rankings);
   for (let i = 0; i < keys.length; i++) {
@@ -825,9 +881,10 @@ function formatHotspotsTable(output) {
       "Score=metric\xD7churn | Tiers are relative to THIS codebase, not absolute quality grades."
     )
   );
+  const zeroComplexityCorpus = corpus !== void 0 && corpus.fileCount > 0 && corpus.totalComplexity === 0;
   lines.push(
     pc2.dim(
-      "High scores flag review candidates, not bad code \u2014 stable complex files (parsers, engines) score high naturally."
+      zeroComplexityCorpus ? "High scores flag files that change often and are sizable \u2014 neither is bad in itself." : "High scores flag review candidates, not bad code \u2014 stable complex files (parsers, engines) score high naturally."
     )
   );
   lines.push(pc2.dim("Docs: https://github.com/wbern/obscene#metrics"));
@@ -844,8 +901,15 @@ function formatCouplingTable(output) {
     padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 12)
   );
   lines.push("\u2500".repeat(104));
+  let anyDeleted = false;
+  let anyLockstep = false;
   for (const c of couplings) {
-    const rawRow = padRight(truncate(c.file1, 33), 35) + padRight(truncate(c.file2, 33), 35) + padLeft(String(c.cochanges), 7) + padLeft(`${c.degree.toFixed(1)}%`, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 12);
+    if (c.file1Deleted || c.file2Deleted) anyDeleted = true;
+    if (c.lockstep) anyLockstep = true;
+    const file1Cell = c.file1Deleted ? `\u2020 ${truncate(c.file1, 31)}` : truncate(c.file1, 33);
+    const file2Cell = c.file2Deleted ? `\u2020 ${truncate(c.file2, 31)}` : truncate(c.file2, 33);
+    const degreeText = c.lockstep ? `${c.degree.toFixed(1)}\u21C4` : `${c.degree.toFixed(1)}%`;
+    const rawRow = padRight(file1Cell, 35) + padRight(file2Cell, 35) + padLeft(String(c.cochanges), 7) + padLeft(degreeText, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 12);
     lines.push(colorRow(c.tier, rawRow));
   }
   lines.push("");
@@ -854,6 +918,18 @@ function formatCouplingTable(output) {
       "Shared=co-changed commits | Degree=shared/min(churn)\xD7100 | Cmplx=sum of both files"
     )
   );
+  if (anyDeleted) {
+    lines.push(
+      pc2.dim("\u2020 = file no longer present at HEAD (deleted or renamed)")
+    );
+  }
+  if (anyLockstep) {
+    lines.push(
+      pc2.dim(
+        "\u21C4 = lockstep pair (both files only ever changed together \u2014 signal is real but uninformative)"
+      )
+    );
+  }
   lines.push(
     pc2.dim(
       "Tiers are relative to THIS codebase, not absolute quality grades. High coupling may be intentional and fine."
@@ -890,7 +966,7 @@ function formatCompositeTable(output) {
 
 // src/cli.ts
 var program = new Command();
-program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("2.0.1");
+program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("2.1.1");
 var REPORT_GUIDE = {
   complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
   complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
@@ -903,13 +979,16 @@ var HOTSPOTS_GUIDE = {
   defects: "fixes \xD7 churn. Count of fix: commits touching the file \xD7 churn. High values can mean latent fragility, but they also flag features that got debugged thoroughly \u2014 read the fix-commit history before concluding which.\nSource: change-history metrics (Moser, Pedrycz & Succi 2008) via conventional commits (fix: prefix) \xB7 Strength: direct fix-history signal \xB7 Limit: counts fix activity, not defects per se; requires consistent fix: convention",
   authors: "authors \xD7 churn. Files touched by many authors and changing often may lack clear ownership.\nSource: code ownership research (Bird et al. 2011, Microsoft) \xB7 Strength: flags diffuse ownership risk \xB7 Limit: doesn't measure expertise depth, bot authors filtered automatically",
   composite: "Combined ranking using Reciprocal Rank Fusion (RRF) across all dimensions. Files appearing near the top of multiple rankings score highest.\nSource: RRF (Cormack et al. 2009) \xB7 Strength: robust to outliers, no normalization needed \xB7 Limit: equal weight across all dimensions",
-  tier: "Relative ranking within THIS codebase (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade \u2014 a hot file is under heavy load, not necessarily broken."
+  tier: "Relative ranking within THIS codebase (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade \u2014 a hot file is under heavy load, not necessarily broken.",
+  corpus: "Aggregate stats for the analyzed file set (post-exclude \u2014 files filtered by .obsignore or --exclude are not counted). When totalComplexity is 0, the rankings reflect size and churn only; HOT/WARM/COOL become relative groupings rather than risk labels."
 };
 var COUPLING_GUIDE = {
   cochanges: "Times both files appeared in the same commit. Higher values suggest a dependency between the files. Same-directory pairs are excluded \u2014 only cross-directory pairs are shown.",
   degree: "Percentage: shared commits / min(churn of file1, file2) \xD7 100. Shows how tightly coupled the pair is relative to their individual change rates. 100% means every change to the less-active file also touched the other.",
   totalComplexity: "Sum of both files' cyclomatic complexity. Highlights coupled pairs where the involved code is also complex \u2014 hidden dependency + high complexity compounds maintenance risk.",
-  tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade. 'hot' means this pair co-changes more than most \u2014 it may be intentional and fine."
+  tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade. 'hot' means this pair co-changes more than most \u2014 it may be intentional and fine.",
+  deleted: "file1Deleted / file2Deleted are set when the file is no longer present at HEAD (deleted or renamed away). The coupling signal is historical \u2014 the pair is not actionable in the current tree.",
+  lockstep: "Set when shared commits / max(churn) \u2265 0.9 \u2014 both files almost always change together over the window. Typical of generator/mirror pairs (README \u2194 src/README, *.pb.go \u2194 *.proto). The coupling signal is real but uninformative; treat the pair as a single unit from git's perspective."
 };
 function addSharedOptions(cmd) {
   return cmd.option("--top <n>", "limit to top N entries (0 = all)", "20").option("--format <type>", "output format: json | table", "json").option(
@@ -1015,13 +1094,19 @@ function runHotspots(opts) {
     top
   );
   const composite = computeComposite(rankings, churn, top);
+  let corpusTotalComplexity = 0;
+  for (const f of files) corpusTotalComplexity += f.complexity;
   const output = {
     generated: (/* @__PURE__ */ new Date()).toISOString(),
     guide: HOTSPOTS_GUIDE,
     churnWindow: `${months} months`,
     rankings,
     skipped: Object.keys(skipped).length > 0 ? skipped : void 0,
-    composite
+    composite,
+    corpus: {
+      fileCount: files.length,
+      totalComplexity: corpusTotalComplexity
+    }
   };
   if (opts.format === "table") {
     process.stdout.write(`${formatHotspotsTable(output)}
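The `corpus` object built in this hunk feeds the zero-complexity banner in `formatHotspotsTable`. The gating condition can be restated as a tiny predicate (`isZeroComplexityCorpus` is a hypothetical helper name, not part of the package):

```javascript
// Hypothetical restatement of the zero-complexity banner condition:
// some files were analyzed, but none of them had measurable complexity.
function isZeroComplexityCorpus(corpus) {
  return corpus !== undefined && corpus.fileCount > 0 && corpus.totalComplexity === 0;
}

console.log(isZeroComplexityCorpus({ fileCount: 42, totalComplexity: 0 }));  // true
console.log(isZeroComplexityCorpus({ fileCount: 42, totalComplexity: 17 })); // false
console.log(isZeroComplexityCorpus(undefined));                              // false
```

In other words, the banner only appears when a non-empty, post-exclusion file set sums to zero cyclomatic complexity; a missing `corpus` field (older JSON output) never triggers it.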
@@ -1049,11 +1134,13 @@ function runCoupling(opts) {
   for (const f of files) {
     complexityMap.set(f.file, f.complexity);
   }
+  const trackedFiles = getTrackedFiles();
   const couplings = computeCoupling(
     cochanges,
     churn,
     complexityMap,
-    minCochanges
+    minCochanges,
+    trackedFiles
   );
   const limited = top > 0 ? couplings.slice(0, top) : couplings;
   const tierCounts = { hot: 0, warm: 0, cool: 0 };
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@wbern/obscene",
-  "version": "2.0.1",
+  "version": "2.1.1",
   "description": "Identify hotspot files — complex code that changes frequently. Churn × complexity analysis for any git repo.",
   "type": "module",
   "bin": {