@wbern/obscene 0.3.1 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +41 -15
  2. package/dist/cli.js +276 -105
  3. package/package.json +3 -2
package/README.md CHANGED
@@ -60,14 +60,23 @@ obscene report # raw complexity (no churn)
  obscene coupling # temporal coupling analysis
  obscene coupling --min-cochanges 1 --format table
  obscene --exclude "*.generated.*"
- obscene | jq '.hotspots[0]' # pipe-friendly
+ obscene | jq '.rankings.complexity.entries[0]' # pipe-friendly
  ```
 
  ## Commands
 
  ### `obscene hotspots` (default)
 
- Scores each file by `complexity × commits` over a time window, then assigns tiers by cumulative score distribution:
+ Produces **four independent ranking tables**, each scoring files by a different metric multiplied by churn:
+
+ | Ranking | Score formula | Metric columns |
+ |---------|---------------|----------------|
+ | Complexity × Churn | `complexity × churn` | Cmplx, Dens |
+ | Nesting × Churn | `maxNesting × churn` | Nest |
+ | Defects × Churn | `defects × churn` | Dfcts, DfDns |
+ | Authors × Churn | `authors × churn` | Auth |
+
+ Each table has its own tier assignment by cumulative score distribution:
 
  | Tier | Range | Meaning |
  |------|-------|---------|
@@ -75,6 +84,8 @@ Scores each file by `complexity × commits` over a time window, then assigns tie
  | **watch** | next 30% (50–80%) | Keep an eye on these |
  | **stable** | bottom 20% | Low risk |
 
+ A file may rank high in one dimension (e.g. complexity) but low in another (e.g. authors). Tables with no scored entries are omitted.
+
  ### `obscene coupling`
 
  Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis from *Your Code as a Crime Scene* (2015). Surfaces hidden structural dependencies that aren't visible in imports or the module graph.
@@ -105,9 +116,9 @@ Per-file complexity without churn. Useful for raw complexity distribution.
 
  ### Hotspot metrics
 
- #### Hotspot score (`Score`)
+ #### Score
 
- `complexity × churn`. The core ranking metric files that are both complex and frequently modified bubble to the top. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
+ `metric × churn`. Each ranking table uses a different metric (complexity, nesting, defects, or authors) multiplied by churn. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
 
  #### Churn (`Churn`)
 
@@ -125,9 +136,9 @@ Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc).
 
  Count of `fix:` conventional commits touching the file within the churn window. A proxy for historical defect rate — files that attract repeated fixes are more likely to contain latent bugs. Inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
 
- #### Defect density (`defectDensity`, JSON only)
+ #### Defect density (`DfDns`)
 
- `defects / lines of code`. Not shown in table output due to column width, but available in JSON. Normalizes defect count by file size.
+ `defects / lines of code`. Shown in the Defects × Churn table. Normalizes defect count by file size.
 
  #### Nesting depth (`Nest`)
 
@@ -164,19 +175,34 @@ Cumulative score distribution bucket:
  ## Example output
 
  ```
- Hotspots — 3 months churn window | Total score: 35,452
+ Hotspots — 3 months churn window
+
+ Complexity × Churn — Total score: 35,452
  Tiers: 3 danger, 13 watch, 194 stable
  Showing: 5 of 210
 
- File Score % Churn Cmplx Dens Dfcts Nest Auth Tier
- ────────────────────────────────────────────────────────────────────────────────────────────────────────────────
- src/utils/effect-generator.ts 8,296 23.4 68 122 0.12 5 6 4 DANGER
- src/services/game-engine.ts 4,284 12.1 51 84 0.09 3 4 3 DANGER
- src/components/board-renderer.tsx 2,940 8.3 42 70 0.11 2 5 3 DANGER
- src/hooks/use-game-state.ts 1,320 3.7 33 40 0.08 1 3 2 WATCH
- src/utils/move-validator.ts 945 2.7 27 35 0.06 0 2 1 WATCH
+ File Score % Churn Cmplx Dens Tier
+ ──────────────────────────────────────────────────────────────────────────────────────────────────
+ src/utils/effect-generator.ts 8,296 23.4 68 122 0.12 🔴 DANGER
+ src/services/game-engine.ts 4,284 12.1 51 84 0.09 🔴 DANGER
+ src/components/board-renderer.tsx 2,940 8.3 42 70 0.11 🔴 DANGER
+ src/hooks/use-game-state.ts 1,320 3.7 33 40 0.08 🟡 WATCH
+ src/utils/move-validator.ts 945 2.7 27 35 0.06 🟡 WATCH
+
+ Nesting × Churn — Total score: 1,284
+ Tiers: 2 danger, 5 watch, 203 stable
+ Showing: 5 of 210
+
+ File Score % Churn Nest Tier
+ ────────────────────────────────────────────────────────────────────────────────────────
+ src/utils/effect-generator.ts 408 31.8 68 6 🔴 DANGER
+ src/services/game-engine.ts 255 19.8 51 5 🔴 DANGER
+ src/components/board-renderer.tsx 210 16.4 42 5 🟡 WATCH
+ src/hooks/use-game-state.ts 99 7.7 33 3 🟡 WATCH
+ src/utils/move-validator.ts 54 4.2 27 2 🟡 WATCH
 
- Score=complexity×churn | Dens=complexity/code | Dfcts=fix commits | Nest=max indent depth | Auth=unique authors
+ Score=metric×churn | Tiers are relative to THIS codebase, not absolute quality grades.
+ High scores flag review candidates, not bad code — stable complex files (parsers, engines) score high naturally.
  Docs: https://github.com/wbern/obscene#metrics
  ```
 
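The cumulative-distribution tiering the README describes (top 50% of total score = danger, next 30% = watch, rest = stable) is implemented by the `assignTiers` function added in this release (see the cli.js diff). A minimal self-contained sketch; the threshold values `0.5` and `0.8` are assumptions inferred from the documented cut points, since the diff references `DANGER_CUMULATIVE`/`WATCH_CUMULATIVE` without showing their definitions:

```javascript
// Tier assignment by cumulative score distribution, as in 1.0.0's
// assignTiers. Threshold values are ASSUMED from the documented
// 50% / 80% cut points; the shipped constants may differ.
const DANGER_CUMULATIVE = 0.5;
const WATCH_CUMULATIVE = 0.8;

function assignTiers(items, totalScore) {
  let cumulative = 0;
  for (const item of items) {
    // each file's share of the total score, as a percentage (1 decimal)
    item.percentOfTotal = Math.round((item.score / totalScore) * 1e3) / 10;
    cumulative += item.score;
    const share = cumulative / totalScore;
    item.tier =
      share <= DANGER_CUMULATIVE ? "danger"
      : share <= WATCH_CUMULATIVE ? "watch"
      : "stable";
  }
}

// items must already be sorted by score, descending
const items = [
  { file: "a.ts", score: 40 },
  { file: "b.ts", score: 30 },
  { file: "c.ts", score: 20 },
  { file: "d.ts", score: 10 },
];
assignTiers(items, 100);
console.log(items.map((i) => `${i.file}:${i.tier}`).join(" "));
// → a.ts:danger b.ts:watch c.ts:stable d.ts:stable
```

Because tiers are cut on the cumulative distribution, they are always relative to the scanned codebase, which is exactly the caveat the table footer makes.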
package/dist/cli.js CHANGED
@@ -168,6 +168,108 @@ function getCoChanges(months, excludes = []) {
    }
    return cochanges;
  }
+ function assignTiers(items, totalScore) {
+   let cumulative = 0;
+   for (const item of items) {
+     item.percentOfTotal = Math.round(item.score / totalScore * 1e3) / 10;
+     cumulative += item.score;
+     const cumulativeShare = cumulative / totalScore;
+     if (cumulativeShare <= DANGER_CUMULATIVE) {
+       item.tier = "danger";
+     } else if (cumulativeShare <= WATCH_CUMULATIVE) {
+       item.tier = "watch";
+     } else {
+       item.tier = "stable";
+     }
+   }
+ }
+ var RANKING_DEFS = [
+   {
+     key: "complexity",
+     label: "Complexity \xD7 Churn",
+     scoreFormula: "complexity \xD7 churn"
+   },
+   {
+     key: "nesting",
+     label: "Nesting \xD7 Churn",
+     scoreFormula: "maxNesting \xD7 churn"
+   },
+   {
+     key: "defects",
+     label: "Defects \xD7 Churn",
+     scoreFormula: "defects \xD7 churn"
+   },
+   {
+     key: "authors",
+     label: "Authors \xD7 Churn",
+     scoreFormula: "authors \xD7 churn"
+   }
+ ];
+ function computeRanking(files, churn, metricExtractor, densityExtractor) {
+   const scored = files.map((f) => {
+     const fileChurn = churn.get(f.file) ?? 0;
+     const metricValue = metricExtractor(f);
+     return {
+       file: f.file,
+       score: metricValue * fileChurn,
+       percentOfTotal: 0,
+       tier: "stable",
+       churn: fileChurn,
+       metricValue,
+       metricDensity: densityExtractor ? densityExtractor(f) : void 0
+     };
+   }).filter((e) => e.score > 0).sort((a, b) => b.score - a.score);
+   const totalScore = scored.reduce((sum, e) => sum + e.score, 0);
+   if (totalScore === 0) return [];
+   assignTiers(scored, totalScore);
+   return scored;
+ }
+ function computeAllRankings(files, churn, defects, nestingDepths, authors, top) {
+   const extractors = {
+     complexity: {
+       extract: (f) => f.complexity,
+       density: (f) => f.complexityDensity
+     },
+     nesting: {
+       extract: (f) => nestingDepths.get(f.file) ?? 0
+     },
+     defects: {
+       extract: (f) => defects.get(f.file) ?? 0,
+       density: (f) => {
+         const d = defects.get(f.file) ?? 0;
+         return f.code > 0 ? Math.round(d / f.code * 1e4) / 1e4 : 0;
+       }
+     },
+     authors: {
+       extract: (f) => authors.get(f.file) ?? 0
+     }
+   };
+   const rankings = {};
+   for (const def of RANKING_DEFS) {
+     const ext = extractors[def.key];
+     const allEntries = computeRanking(files, churn, ext.extract, ext.density);
+     if (allEntries.length === 0) continue;
+     const limited = top > 0 ? allEntries.slice(0, top) : allEntries;
+     const tierCounts = {
+       danger: 0,
+       watch: 0,
+       stable: 0
+     };
+     for (const e of allEntries) {
+       tierCounts[e.tier]++;
+     }
+     rankings[def.key] = {
+       label: def.label,
+       scoreFormula: def.scoreFormula,
+       totalScore: allEntries.reduce((sum, e) => sum + e.score, 0),
+       tierCounts,
+       totalEntries: allEntries.length,
+       showing: limited.length,
+       entries: limited
+     };
+   }
+   return rankings;
+ }
  function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
    const entries = [];
    for (const [key, count] of cochanges) {
@@ -190,18 +292,14 @@ function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
    entries.sort((a, b) => b.couplingScore - a.couplingScore);
    const totalScore = entries.reduce((sum, e) => sum + e.couplingScore, 0);
    if (totalScore === 0) return [];
-   let cumulative = 0;
-   for (const entry of entries) {
-     entry.percentOfTotal = Math.round(entry.couplingScore / totalScore * 1e3) / 10;
-     cumulative += entry.couplingScore;
-     const cumulativeShare = cumulative / totalScore;
-     if (cumulativeShare <= DANGER_CUMULATIVE) {
-       entry.tier = "danger";
-     } else if (cumulativeShare <= WATCH_CUMULATIVE) {
-       entry.tier = "watch";
-     } else {
-       entry.tier = "stable";
-     }
+   const adapted = entries.map((e) => ({
+     ...e,
+     score: e.couplingScore
+   }));
+   assignTiers(adapted, totalScore);
+   for (let i = 0; i < entries.length; i++) {
+     entries[i].percentOfTotal = adapted[i].percentOfTotal;
+     entries[i].tier = adapted[i].tier;
    }
    return entries;
  }
@@ -246,37 +344,41 @@ function getNestingDepths(filePaths) {
    }
    return depths;
  }
- function computeHotspots(files, churn, defects = /* @__PURE__ */ new Map(), nestingDepths = /* @__PURE__ */ new Map(), authors = /* @__PURE__ */ new Map()) {
-   const scored = files.map((f) => {
-     const fileChurn = churn.get(f.file) ?? 0;
-     const fileDefects = defects.get(f.file) ?? 0;
-     return {
-       ...f,
-       churn: fileChurn,
-       hotspotScore: f.complexity * fileChurn,
-       defects: fileDefects,
-       defectDensity: f.code > 0 ? Math.round(fileDefects / f.code * 1e4) / 1e4 : 0,
-       maxNesting: nestingDepths.get(f.file) ?? 0,
-       authors: authors.get(f.file) ?? 0
-     };
-   }).filter((h) => h.hotspotScore > 0).sort((a, b) => b.hotspotScore - a.hotspotScore);
-   const totalScore = scored.reduce((sum, h) => sum + h.hotspotScore, 0);
-   if (totalScore === 0) return [];
-   let cumulative = 0;
-   return scored.map((h) => {
-     const percentOfTotal = Math.round(h.hotspotScore / totalScore * 1e3) / 10;
-     cumulative += h.hotspotScore;
-     const cumulativeShare = cumulative / totalScore;
-     let tier;
-     if (cumulativeShare <= DANGER_CUMULATIVE) {
-       tier = "danger";
-     } else if (cumulativeShare <= WATCH_CUMULATIVE) {
-       tier = "watch";
-     } else {
-       tier = "stable";
-     }
-     return { ...h, percentOfTotal, tier };
-   });
+
+ // src/color.ts
+ import pc from "picocolors";
+ var ANSI_RE = /\x1b\[[0-9;]*m/g;
+ function visualWidth(s) {
+   return s.replace(ANSI_RE, "").length;
+ }
+ function padRight(s, n) {
+   const w = visualWidth(s);
+   return w >= n ? s : s + " ".repeat(n - w);
+ }
+ function padLeft(s, n) {
+   const w = visualWidth(s);
+   return w >= n ? s : " ".repeat(n - w) + s;
+ }
+ function truncate(s, max) {
+   return s.length <= max ? s : `\u2026${s.slice(s.length - max + 1)}`;
+ }
+ function tierLabel(tier) {
+   if (tier === "danger") return pc.red("\u{1F534} DANGER");
+   if (tier === "watch") return pc.yellow("\u{1F7E1} WATCH");
+   return pc.green("\u{1F7E2} stable");
+ }
+ function colorRow(tier, text) {
+   if (tier === "danger") return pc.red(text);
+   if (tier === "watch") return pc.yellow(text);
+   return pc.green(text);
+ }
+ function tierSummary(tierCounts, showing, total) {
+   const lines = [];
+   lines.push(
+     `Tiers: ${pc.red(`${tierCounts.danger} danger`)}, ${pc.yellow(`${tierCounts.watch} watch`)}, ${pc.green(`${tierCounts.stable} stable`)}`
+   );
+   lines.push(`Showing: ${showing} of ${total}`);
+   return lines;
  }
 
  // src/format.ts
@@ -309,28 +411,129 @@ function formatReportTable(output) {
    lines.push("Docs: https://github.com/wbern/obscene#metrics");
    return lines.join("\n");
  }
- function formatHotspotsTable(output) {
+ function getRankingColumns(key) {
+   const base = [
+     {
+       header: "File",
+       width: 50,
+       align: "left",
+       value: (e) => truncate(e.file, 48)
+     },
+     {
+       header: "Score",
+       width: 8,
+       align: "right",
+       value: (e) => e.score.toLocaleString()
+     },
+     {
+       header: "%",
+       width: 7,
+       align: "right",
+       value: (e) => e.percentOfTotal.toFixed(1)
+     },
+     {
+       header: "Churn",
+       width: 7,
+       align: "right",
+       value: (e) => String(e.churn)
+     }
+   ];
+   const metricCols = {
+     complexity: [
+       {
+         header: "Cmplx",
+         width: 7,
+         align: "right",
+         value: (e) => String(e.metricValue)
+       },
+       {
+         header: "Dens",
+         width: 7,
+         align: "right",
+         value: (e) => (e.metricDensity ?? 0).toFixed(2)
+       }
+     ],
+     nesting: [
+       {
+         header: "Nest",
+         width: 6,
+         align: "right",
+         value: (e) => String(e.metricValue)
+       }
+     ],
+     defects: [
+       {
+         header: "Dfcts",
+         width: 6,
+         align: "right",
+         value: (e) => String(e.metricValue)
+       },
+       {
+         header: "DfDns",
+         width: 7,
+         align: "right",
+         value: (e) => (e.metricDensity ?? 0).toFixed(4)
+       }
+     ],
+     authors: [
+       {
+         header: "Auth",
+         width: 6,
+         align: "right",
+         value: (e) => String(e.metricValue)
+       }
+     ]
+   };
+   const tierCol = {
+     header: "Tier",
+     width: 12,
+     align: "right",
+     value: (e) => tierLabel(e.tier)
+   };
+   return [...base, ...metricCols[key] ?? [], tierCol];
+ }
+ function formatRankingTable(key, ranking) {
    const lines = [];
-   const { tierCounts, totalScore, churnWindow, hotspots } = output;
+   const cols = getRankingColumns(key);
    lines.push(
-     `Hotspots \u2014 ${churnWindow} churn window | Total score: ${totalScore.toLocaleString()}`
+     `${ranking.label} \u2014 Total score: ${ranking.totalScore.toLocaleString()}`
    );
-   pushTierSummary(lines, tierCounts, output.showing, output.totalHotspots);
    lines.push(
-     padRight("File", 50) + padLeft("Score", 8) + padLeft("%", 7) + padLeft("Churn", 7) + padLeft("Cmplx", 7) + padLeft("Dens", 7) + padLeft("Dfcts", 6) + padLeft("Nest", 6) + padLeft("Auth", 6) + padLeft("Tier", 8)
+     ...tierSummary(ranking.tierCounts, ranking.showing, ranking.totalEntries)
    );
-   lines.push("\u2500".repeat(112));
-   for (const h of hotspots) {
-     lines.push(
-       padRight(truncate(h.file, 48), 50) + padLeft(h.hotspotScore.toLocaleString(), 8) + padLeft(h.percentOfTotal.toFixed(1), 7) + padLeft(String(h.churn), 7) + padLeft(String(h.complexity), 7) + padLeft(h.complexityDensity.toFixed(2), 7) + padLeft(String(h.defects), 6) + padLeft(String(h.maxNesting), 6) + padLeft(String(h.authors), 6) + padLeft(tierLabel(h.tier), 8)
-     );
+   lines.push("");
+   const headerLine = cols.map(
+     (c) => c.align === "left" ? padRight(c.header, c.width) : padLeft(c.header, c.width)
+   ).join("");
+   lines.push(headerLine);
+   const totalWidth = cols.reduce((sum, c) => sum + c.width, 0);
+   lines.push("\u2500".repeat(totalWidth));
+   for (const entry of ranking.entries) {
+     const rowParts = cols.map((c) => {
+       const val = c.value(entry);
+       return c.align === "left" ? padRight(val, c.width) : padLeft(val, c.width);
+     });
+     const rawRow = rowParts.join("");
+     lines.push(colorRow(entry.tier, rawRow));
+   }
+   return lines;
+ }
+ function formatHotspotsTable(output) {
+   const lines = [];
+   const { churnWindow, rankings } = output;
+   lines.push(`Hotspots \u2014 ${churnWindow} churn window`);
+   lines.push("");
+   const keys = Object.keys(rankings);
+   for (let i = 0; i < keys.length; i++) {
+     const key = keys[i];
+     lines.push(...formatRankingTable(key, rankings[key]));
+     if (i < keys.length - 1) {
+       lines.push("");
+     }
    }
    lines.push("");
    lines.push(
-     "Score=complexity\xD7churn | Dens=complexity/code | Dfcts=fix commits | Nest=max indent depth | Auth=unique authors"
-   );
-   lines.push(
-     "Tiers are relative to THIS codebase, not absolute quality grades. A 'danger' file in a clean codebase may be fine."
+     "Score=metric\xD7churn | Tiers are relative to THIS codebase, not absolute quality grades."
    );
    lines.push(
      "High scores flag review candidates, not bad code \u2014 stable complex files (parsers, engines) score high naturally."
@@ -344,15 +547,14 @@ function formatCouplingTable(output) {
    lines.push(
      `Coupling \u2014 ${churnWindow} churn window | Min shared: ${output.minCochanges} | Total score: ${totalScore.toLocaleString()}`
    );
-   pushTierSummary(lines, tierCounts, output.showing, output.totalCouplings);
+   lines.push(...tierSummary(tierCounts, output.showing, output.totalCouplings));
    lines.push(
-     padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 8)
+     padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 12)
    );
-   lines.push("\u2500".repeat(100));
+   lines.push("\u2500".repeat(104));
    for (const c of couplings) {
-     lines.push(
-       padRight(truncate(c.file1, 33), 35) + padRight(truncate(c.file2, 33), 35) + padLeft(String(c.cochanges), 7) + padLeft(`${c.degree.toFixed(1)}%`, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 8)
-     );
+     const rawRow = padRight(truncate(c.file1, 33), 35) + padRight(truncate(c.file2, 33), 35) + padLeft(String(c.cochanges), 7) + padLeft(`${c.degree.toFixed(1)}%`, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 12);
+     lines.push(colorRow(c.tier, rawRow));
    }
    lines.push("");
    lines.push(
@@ -367,44 +569,22 @@ function formatCouplingTable(output) {
    lines.push("Docs: https://github.com/wbern/obscene#metrics");
    return lines.join("\n");
  }
- function pushTierSummary(lines, tierCounts, showing, total) {
-   lines.push(
-     `Tiers: ${tierCounts.danger} danger, ${tierCounts.watch} watch, ${tierCounts.stable} stable`
-   );
-   lines.push(`Showing: ${showing} of ${total}`);
-   lines.push("");
- }
- function tierLabel(tier) {
-   if (tier === "danger") return "DANGER";
-   if (tier === "watch") return "WATCH";
-   return "stable";
- }
- function padRight(s, n) {
-   return s.length >= n ? s : s + " ".repeat(n - s.length);
- }
- function padLeft(s, n) {
-   return s.length >= n ? s : " ".repeat(n - s.length) + s;
- }
- function truncate(s, max) {
-   return s.length <= max ? s : `\u2026${s.slice(s.length - max + 1)}`;
- }
 
  // src/cli.ts
  var program = new Command();
- program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("0.3.1");
+ program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("1.0.0");
  var REPORT_GUIDE = {
    complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
    complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
    comments: "Comment line count. Low comments in high-density files may indicate under-documented logic. High comments alone is not a problem."
  };
  var HOTSPOTS_GUIDE = {
-   hotspotScore: "complexity \xD7 churn. Ranks files by combined risk: complex code that changes often. High score does NOT mean bad code \u2014 stable high-complexity files (parsers, engines) are fine. Focus on files where score is rising over time.",
-   churn: "Commit count in the time window. High churn alone is neutral \u2014 active development is normal. It becomes a signal when combined with high complexity.",
-   tier: "Relative ranking within THIS codebase (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade. A 'danger' file in a clean codebase may be perfectly fine. Compare across runs to spot trends.",
-   defects: "Count of fix: conventional commits. A proxy for bug frequency \u2014 0 does not mean bug-free, and >0 does not mean bad code. Useful for spotting files that attract repeated fixes.",
-   defectDensity: "Fix commits per line of code. Normalizes defect count by file size. Only meaningful with conventional commits (fix: prefix).",
-   maxNesting: "Deepest indentation level. >6 suggests complex control flow worth simplifying. Language-dependent \u2014 Python files naturally nest less than C++.",
-   authors: "Unique committers in the time window. High author count may indicate unclear ownership. Low count is normal for specialized code. Neither value is inherently good or bad."
+   rankings: "Four independent ranking tables, each scoring files by a different metric \xD7 churn. A file may rank high in one dimension but not others.",
+   complexity: "complexity \xD7 churn. Ranks files by combined risk: complex code that changes often.",
+   nesting: "maxNesting \xD7 churn. Deeply nested code that changes often is harder to reason about.",
+   defects: "defects \xD7 churn. Files with fix: commits that also churn heavily may contain latent bugs.",
+   authors: "authors \xD7 churn. Files touched by many authors and changing often may lack clear ownership.",
+   tier: "Relative ranking within THIS codebase (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade."
  };
  var COUPLING_GUIDE = {
    cochanges: "Times both files appeared in the same commit. Higher values suggest a dependency between the files. Same-directory pairs are excluded \u2014 only cross-directory pairs are shown.",
@@ -486,28 +666,19 @@ function runHotspots(opts) {
    const defects = getDefects(months);
    const authors = getAuthors(months);
    const nestingDepths = getNestingDepths(files.map((f) => f.file));
-   const hotspots = computeHotspots(
+   const rankings = computeAllRankings(
      files,
      churn,
      defects,
      nestingDepths,
-     authors
+     authors,
+     top
    );
-   const limited = top > 0 ? hotspots.slice(0, top) : hotspots;
-   const tierCounts = { danger: 0, watch: 0, stable: 0 };
-   for (const h of hotspots) {
-     tierCounts[h.tier]++;
-   }
-   const totalScore = hotspots.reduce((sum, h) => sum + h.hotspotScore, 0);
    const output = {
      generated: (/* @__PURE__ */ new Date()).toISOString(),
      guide: HOTSPOTS_GUIDE,
      churnWindow: `${months} months`,
-     totalScore,
-     tierCounts,
-     totalHotspots: hotspots.length,
-     showing: limited.length,
-     hotspots: limited
+     rankings
    };
    if (opts.format === "table") {
      process.stdout.write(`${formatHotspotsTable(output)}
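The new `src/color.ts` module measures column widths on text with ANSI escape codes stripped, so colored cells still line up. A small sketch of why that matters, mirroring `visualWidth`/`padLeft` from the diff; the `red` helper below stands in for picocolors' `pc.red` and is an assumption of this example, not part of the package:

```javascript
// ANSI-aware padding, mirroring visualWidth/padLeft from src/color.ts.
// Widths are computed after stripping color escape codes, so a colored
// cell pads to the same visible width as a plain one.
const ANSI_RE = /\x1b\[[0-9;]*m/g;

// stand-in for picocolors' pc.red (an assumption for this sketch)
const red = (s) => `\x1b[31m${s}\x1b[39m`;

function visualWidth(s) {
  return s.replace(ANSI_RE, "").length;
}

function padLeft(s, n) {
  const w = visualWidth(s);
  return w >= n ? s : " ".repeat(n - w) + s;
}

const plain = padLeft("DANGER", 10);
const colored = padLeft(red("DANGER"), 10);

// Both occupy 10 terminal columns; a naive s.padStart(10) would not
// pad the colored string at all, because its raw length (16, counting
// invisible escape codes) already exceeds 10.
console.log(visualWidth(plain), visualWidth(colored)); // → 10 10
```

This is why the old `padLeft`/`padRight` based on `s.length` had to move into the new module once row coloring was introduced.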
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@wbern/obscene",
-   "version": "0.3.1",
+   "version": "1.0.0",
    "description": "Identify hotspot files — complex code that changes frequently. Churn × complexity analysis for any git repo.",
    "type": "module",
    "bin": {
@@ -48,7 +48,8 @@
      "node": ">=18"
    },
    "dependencies": {
-     "commander": "^13.1.0"
+     "commander": "^13.1.0",
+     "picocolors": "^1.1.1"
    },
    "devDependencies": {
      "@biomejs/biome": "^2.0.0",