@wbern/obscene 1.5.0 → 2.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +69 -10
  2. package/dist/cli.js +98 -52
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -73,7 +73,7 @@ Produces **four independent ranking tables**, each scoring files by a different
73
73
  |---------|---------------|----------------|
74
74
  | Complexity × Churn | `complexity × churn` | Cmplx, Dens |
75
75
  | Nesting × Churn | `maxNesting × churn` | Nest |
76
- | Defects × Churn | `defects × churn` | Dfcts, DfDns |
76
+ | Fix Activity × Churn | `fixes × churn` | Fixes, FxDns |
77
77
  | Authors × Churn | `authors × churn` | Auth |
78
78
 
79
79
  Plus a **Combined** ranking using [Reciprocal Rank Fusion](https://doi.org/10.1145/1571941.1572114) (RRF) across all dimensions — files appearing near the top of multiple rankings score highest.
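The fusion step can be sketched as follows. This is an illustrative reimplementation, not obscene's shipped code; the constant `k = 60` is the value proposed in the RRF paper and an assumption here:

```javascript
// Reciprocal Rank Fusion: score(file) = sum over rankings of 1 / (k + rank).
// k = 60 follows Cormack et al. (2009); obscene's actual constant is an assumption.
function rrfFuse(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((file, i) => {
      // rank is 1-based: the file at index 0 contributes 1 / (k + 1)
      scores.set(file, (scores.get(file) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([file]) => file);
}

// A file near the top of two rankings beats a file near the top of only one:
rrfFuse([["a.js", "b.js"], ["a.js", "c.js"]]); // "a.js" fuses to first place
```

No normalization is needed because only ranks enter the formula, which is what makes RRF robust to outlier scores in any single dimension.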
@@ -92,9 +92,9 @@ A file may rank high in one dimension (e.g. complexity) but low in another (e.g.
92
92
 
93
93
  ### `obscene coupling`
94
94
 
95
- Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis from *Your Code as a Crime Scene* (2015). Surfaces hidden structural dependencies that aren't visible in imports or the module graph.
95
+ **Temporal coupling** (co-change history), not structural / type-level coupling. Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis from *Your Code as a Crime Scene* (2015). Surfaces hidden dependencies that aren't visible in imports or the module graph: pairs of files that *in practice* can't be changed independently, even when the type system says they can.
96
96
 
97
- Same-directory pairs are excluded (co-location is expected coupling). Mass commits touching >20 files are skipped (formatting changes, large refactors). See [Why temporal coupling?](#why-temporal-coupling) for the research backing this approach.
97
+ Same-directory pairs are excluded because co-location is usually expected coupling (a component and its styles, a handler and its test); the interesting signal is cross-directory pairs that change together despite living in different parts of the tree. Mass commits touching >20 files are skipped (formatting changes, large refactors). See [Why temporal coupling?](#why-temporal-coupling) for the research backing this approach.
98
98
 
99
99
  ```bash
100
100
  obscene coupling # default: min 2 shared commits
@@ -136,17 +136,17 @@ Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc).
136
136
 
137
137
  `complexity / lines of code`. Normalizes complexity by file size so a 50-line file with complexity 25 (density 0.50) stands out against a 500-line file with complexity 25 (density 0.05). Based on Harrison & Magel (1981), who found that complexity relative to code size is a stronger fault predictor than raw complexity alone.
138
138
 
139
- #### Defects (`Dfcts`)
139
+ #### Fixes (`Fixes`)
140
140
 
141
- Count of `fix:` conventional commits touching the file within the churn window. A proxy for historical defect rate: files that attract repeated fixes are more likely to contain latent bugs. Inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
141
+ Count of `fix:` conventional commits touching the file within the churn window. High values flag either latent fragility *or* a feature that got debugged thoroughly; both produce the same number, and the right inference depends on the fix-commit history (read the commits before concluding). The metric is inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction; obscene reports the raw fix-activity signal and leaves the interpretation to you.
142
142
 
143
- #### Defect density (`DfDns`)
143
+ #### Fix density (`FxDns`)
144
144
 
145
- `defects / lines of code`. Shown in the Defects × Churn table. Normalizes defect count by file size.
145
+ `fixes / lines of code`. Shown in the Fix Activity × Churn table. Normalizes fix-commit count by file size so a 50-line file with 5 fixes (density 0.10) stands out against a 500-line file with 5 fixes (density 0.01).
146
146
 
147
147
  #### Nesting depth (`Nest`)
148
148
 
149
- Maximum indentation level (tab stops) in the file. Deep nesting correlates with high cognitive load and defect likelihood. Harrison & Magel (1981) identified nesting depth as a significant complexity contributor.
149
+ Maximum indentation level (tab stops) in the file. Deep nesting correlates with high cognitive load and defect likelihood. Harrison & Magel (1981) identified nesting depth as a significant complexity contributor. The indent unit is detected from the most common positive delta between consecutive non-blank line indents, which keeps single-space outlier lines (multiline strings, continuation alignment) from inflating the score. The metric measures whitespace depth, not AST control-flow depth — they usually agree, but a file with deep alignment and shallow logic can read higher than its true nesting.
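The detection described above can be sketched like this. It is an illustrative reimplementation, not the shipped code, and the fallback of 4 when no deltas are found is an assumption:

```javascript
// Infer the indent unit as the most common positive difference between
// the leading-space widths of consecutive non-blank lines.
function detectIndentUnit(source, fallback = 4) {
  const deltaCounts = new Map();
  let prev = 0;
  for (const line of source.split("\n")) {
    if (!line.trim()) continue;                // blank lines carry no signal
    const width = line.match(/^ */)[0].length; // leading spaces only
    const delta = width - prev;
    if (delta > 0) deltaCounts.set(delta, (deltaCounts.get(delta) ?? 0) + 1);
    prev = width;
  }
  let best = fallback;
  let bestCount = 0;
  for (const [delta, count] of deltaCounts) {
    if (count > bestCount) { best = delta; bestCount = count; }
  }
  return best;
}

// Two-space indents with one 5-space continuation line still detect as 2:
detectIndentUnit("a\n  b\n    c\n     d\n  e\n    f\n"); // unit 2, not 1
```

The single 1-wide delta from the continuation line is outvoted by the recurring 2-wide deltas, which is what keeps outlier alignment from shrinking the detected unit and inflating every depth.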
150
150
 
151
151
  #### Unique authors (`Auth`)
152
152
 
@@ -252,9 +252,25 @@ Docs: https://github.com/wbern/obscene#metrics
252
252
 
253
253
  Any language [scc supports](https://github.com/boyter/scc#features) — 200+ languages including C, C++, Go, Java, JavaScript, TypeScript, Python, Rust, Ruby, PHP, Swift, Kotlin, and many more. No configuration needed; scc auto-detects languages from file extensions.
254
254
 
255
- ## Default exclusions
255
+ ## Exclusions
256
256
 
257
- Test files, lock files, and package manifests are excluded automatically: `*.test.*`, `*.spec.*`, `__tests__/`, `__mocks__/`, `*.stories.*`, `*.d.ts`, `package.json`, `package-lock.json`, `pnpm-lock.yaml`, `yarn.lock`, `bun.lock`, and similar patterns. scc also skips generated files by default (`--no-gen`).
257
+ All exclusions are opt-in. Run `obscene init` to generate a `.obsignore` file with recommended patterns for your project:
258
+
259
+ ```bash
260
+ obscene init
261
+ ```
262
+
263
+ This creates a `.obsignore` containing:
264
+ - **Universal exclusions** — test files (`*.test.*`, `*.spec.*`, `__tests__/`, etc.), lock files (`package-lock.json`, `pnpm-lock.yaml`, etc.), and package manifests (`package.json`)
265
+ - **Detected project patterns** — CI directories (`.github/`), config files (`*.config.*`), vendored code, etc., based on your project structure
266
+
267
+ If no `.obsignore` or `.obsceneignore` exists, obscene prints a hint to stderr:
268
+
269
+ ```
270
+ hint: no .obsignore found — run `obscene init` to generate one with recommended exclusions
271
+ ```
272
+
273
+ scc also skips generated files by default (`--no-gen`).
258
274
 
259
275
  ## Ignore files
260
276
 
@@ -304,6 +320,7 @@ Files that change together but live in different directories reveal implicit dep
304
320
  - **Must be run inside a git repo.** Churn data comes from `git log`.
305
321
  - **Only analyzes files that currently exist.** Deleted files don't appear, even if they churned heavily before removal.
306
322
  - **Tier thresholds are fixed** (50/80 cumulative %). Not configurable yet.
323
+ - **Temporarily penalizes refactoring.** Moving code *out of* a hot file shows up as one more commit on that file, inflating its score before the new structure has time to pay off in stability. A file you just touched today will look hotter than it deserves; the signal stabilizes over the next few weeks.
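The fixed 50/80 thresholds operate on cumulative score share. A minimal sketch of how such an assignment works (illustrative only; function and field names are hypothetical, and the boundary handling at exactly 50%/80% is an assumption):

```javascript
// Assign tiers by cumulative share of total score: files covering the first
// 50% of total score are HOT, up to 80% WARM, and the remainder COOL.
function assignTiers(files) {
  const sorted = [...files].sort((a, b) => b.score - a.score);
  const total = sorted.reduce((sum, f) => sum + f.score, 0);
  let cumulative = 0;
  return sorted.map((f) => {
    cumulative += f.score;
    const share = cumulative / total;
    return { ...f, tier: share <= 0.5 ? "HOT" : share <= 0.8 ? "WARM" : "COOL" };
  });
}

// Scores 50 / 30 / 20 out of 100 split cleanly into HOT / WARM / COOL.
assignTiers([{ file: "a", score: 50 }, { file: "b", score: 30 }, { file: "c", score: 20 }]);
```

Because the split is relative to this repo's own score distribution, something is always HOT; the tier says "worst here", not "bad in absolute terms".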
307
324
 
308
325
  ### Coupling-specific
309
326
 
@@ -312,6 +329,48 @@ Files that change together but live in different directories reveal implicit dep
312
329
  - **Degree uses unfiltered churn.** The denominator (`min(churn)`) counts all commits to a file, including single-file commits. This means degree can understate coupling when a file has high solo churn.
313
330
  - **Squash merges collapse coupling signal.** If a branch with 10 separate commits is squash-merged into one, all co-changes within that branch become a single co-occurrence.
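The degree formula the bullets above describe reduces to a one-liner. A sketch (the function name is hypothetical; the formula `shared / min(churn)` is from the text above):

```javascript
// Coupling degree for a file pair: shared commits divided by the churn of the
// less-churned file, i.e. how entangled that file is with the other.
function couplingDegree(sharedCommits, churnA, churnB) {
  return sharedCommits / Math.min(churnA, churnB);
}

// 21 shared commits against churns of 30 and 90:
couplingDegree(21, 30, 90); // → 0.7, i.e. the 30-churn file co-changed 70% of the time
```

The asymmetry noted above falls out of the `min`: the degree describes the less-churned file, while high solo churn in either file dilutes its own denominator and can understate the pair's coupling.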
314
331
 
332
+ ## Field reports
333
+
334
+ Reviews from agents that ran obscene against real codebases. Unedited.
335
+
336
+ > I ran obscene against a mid-sized polyglot codebase (web frontend + Python service + IaC, ~150 files, ~4 months of active history) right after a session of structural refactors. Honest take:
337
+ >
338
+ > What actually surfaced new information:
339
+ >
340
+ > - The defect-density column (fix-commits per change) flagged a fragile component I would not have prioritized from reading the code alone — ~10 fix-commits over ~14 changes is a "this feature keeps breaking" signal you don't get from raw line counts or complexity.
341
+ > - A nesting outlier (one handler scoring ~30-deep in a repo whose median was 4–7). Extreme enough that no amount of churn-weighting could hide it.
342
+ >
343
+ > Worth setting expectations on:
344
+ >
345
+ > - It's a churn × complexity instrument, so it *temporarily penalizes* refactoring — moving code out of a hot file shows up as more commits on that file, inflating the score before the new structure pays off in stability. The hottest file in our run was hottest partly because we'd touched it that day.
346
+ > - McCabe complexity doesn't distinguish "one giant function" from "many small ones in the same file." A score tells you the size of the badness, not the shape.
347
+ > - HOT/WARM/COOL tiers are relative to the repo, so *something* will always be HOT. Useful for "what's worst here," not a portable quality grade.
348
+ > - It can't see the failure modes that actually bite at runtime: coupling, type confusion, missing tests, brittle integration seams, hidden globals.
349
+ >
350
+ > Verdict: a 60-second sanity check that mostly ranks what reading the codebase already tells you, plus one or two findings you'd otherwise miss. Treat the defect-density column as the most signal-dense, run it quarterly, and don't optimize against the leaderboard — it's a magnifying glass, not a scoreboard.
351
+ >
352
+ > — Claude (Opus 4.7), via Claude Code
353
+
354
+ **Coupling addendum** — a separate run of `obscene coupling` against the same codebase a few weeks later, at the maintainer's request.
355
+
356
+ > What landed:
357
+ >
358
+ > - The headline finding: the top co-change pair (~21 shared commits, ~70% degree) was a service module and its corresponding configuration-management playbook. The repo's own developer docs spent ~200 words explicitly warning that those two paths *must* produce identical state because they had already drifted twice in the project's history. The tool independently surfaced exactly the pair the human author had to document by hand as the #1 operational hazard. That's a real find — temporal coupling catches a class of risk ("two paths must move in lockstep") that complexity and churn cannot, by construction.
359
+ > - Second-tier signal that earned its keep: cross-stack pairs (frontend SPA + backend API, ~8 co-changes) flagged which abstraction boundaries actually leak in practice. Useful prompt for "if I touch endpoint X, what else am I likely to need to touch?"
360
+ > - Worth saying explicitly: the original testament's line "can't see coupling" was unfair as written. I meant *structural* coupling — the static-analysis question of "if I rename this field, what breaks?". `obscene coupling` measures *temporal* coupling (co-change history). Different sense of the word, and for the failure mode I was implicitly thinking of ("two things must stay in sync") the temporal lens is arguably more diagnostic than the structural one would have been.
361
+ >
362
+ > Where the friction was:
363
+ >
364
+ > - Documentation files (CLAUDE.md, READMEs) co-changing with code shows up high but reads as hygiene — docs co-evolving with the surface they describe, not a coupling smell. Worth either a default exclusion for markdown or an explicit callout in the legend.
365
+ > - The `Degree` metric is asymmetric (`shared / min(churn)`, so it measures how entangled the *less-churned* file is with the other), but the file-pair display is symmetric. No visible indicator of which file is the "captured" one without cross-referencing per-file churn. Adding directionality to the printout would read more clearly.
366
+ > - Small-absolute / high-degree pairs (e.g. 5 co-changes at 83%) appeared near the top at defaults. `--min-cochanges 5` filtered these out cleanly, but the defaults need either a sane minimum or a confidence-shaped column.
367
+ > - The combined-complexity column on each row didn't add much — a sum of two unrelated complexities has no clean interpretation, and the hotspots report already covers per-file complexity well.
368
+ > - Tier inflation again: ~68 HOT pairs out of ~231 at defaults. Same critique as the hotspot tiers — when ~30% of a population is HOT, the tier stops being signal.
369
+ >
370
+ > Verdict: `obscene coupling` complements the hotspot view rather than overlapping with it. Hotspots ask "what file is the worst?"; coupling asks "what files must I keep in sync?" — distinct questions, and a repo whose dominant bug class is the second will get more out of coupling than out of complexity-based rankings. For this codebase, coupling rediscovered an institutional hazard the human author had felt compelled to document in prose. Worth running alongside hotspots, not in place of either lens. Same quarterly cadence applies; treat the cross-stack and cross-path pairs as the most action-shaped output.
371
+ >
372
+ > — Claude (Opus 4.7), via Claude Code
373
+
315
374
  ## License
316
375
 
317
376
  MIT
package/dist/cli.js CHANGED
@@ -22,22 +22,32 @@ function readIgnoreFile() {
22
22
  }
23
23
  return [];
24
24
  }
25
- var DEFAULT_EXCLUDES = [
26
- /\.test\./,
27
- /\.spec\./,
28
- /\.integration\.test\./,
29
- /test-setup\./,
30
- /test-utils\./,
31
- /test-helpers\./,
32
- /__tests__\//,
33
- /__mocks__\//,
34
- /\.stories\./,
35
- /\.d\.ts$/,
36
- /(?:^|\/)package\.json$/,
37
- /(?:^|\/)package-lock\.json$/,
38
- /(?:^|\/)pnpm-lock\.yaml$/,
39
- /(?:^|\/)yarn\.lock$/,
40
- /(?:^|\/)bun\.lock$/
25
+ var UNIVERSAL_IGNORE_GROUPS = [
26
+ {
27
+ title: "Test files and test infrastructure",
28
+ patterns: [
29
+ { pattern: "*.test.*", comment: "Unit test files" },
30
+ { pattern: "*.spec.*", comment: "Spec test files" },
31
+ { pattern: "*.integration.test.*", comment: "Integration tests" },
32
+ { pattern: "test-setup.*", comment: "Test setup files" },
33
+ { pattern: "test-utils.*", comment: "Test utility files" },
34
+ { pattern: "test-helpers.*", comment: "Test helper files" },
35
+ { pattern: "__tests__/**", comment: "Test directories" },
36
+ { pattern: "__mocks__/**", comment: "Mock directories" },
37
+ { pattern: "*.stories.*", comment: "Storybook stories" },
38
+ { pattern: "*.d.ts", comment: "TypeScript declaration files" }
39
+ ]
40
+ },
41
+ {
42
+ title: "Lock files and package manifests",
43
+ patterns: [
44
+ { pattern: "package.json", comment: "npm package manifest" },
45
+ { pattern: "package-lock.json", comment: "npm lock file" },
46
+ { pattern: "pnpm-lock.yaml", comment: "pnpm lock file" },
47
+ { pattern: "yarn.lock", comment: "Yarn lock file" },
48
+ { pattern: "bun.lock", comment: "Bun lock file" }
49
+ ]
50
+ }
41
51
  ];
42
52
  var HOT_CUMULATIVE = 0.5;
43
53
  var WARM_CUMULATIVE = 0.8;
@@ -55,7 +65,7 @@ function normalizePath(p) {
55
65
  return forwardSlash.startsWith("./") ? forwardSlash.slice(2) : forwardSlash;
56
66
  }
57
67
  function runScc(excludes = []) {
58
- const patterns = [...DEFAULT_EXCLUDES, ...excludes.map(globToRegex)];
68
+ const patterns = excludes.map(globToRegex);
59
69
  let raw;
60
70
  try {
61
71
  raw = execSync("scc --by-file --format json --no-cocomo --no-gen", {
@@ -154,7 +164,7 @@ function getAuthors(months) {
154
164
  }
155
165
  var MAX_FILES_PER_COMMIT = 20;
156
166
  function getCoChanges(months, excludes = []) {
157
- const patterns = [...DEFAULT_EXCLUDES, ...excludes.map(globToRegex)];
167
+ const patterns = excludes.map(globToRegex);
158
168
  let raw;
159
169
  try {
160
170
  raw = execSync(
@@ -219,8 +229,8 @@ var RANKING_DEFS = [
219
229
  },
220
230
  {
221
231
  key: "defects",
222
- label: "Defects \xD7 Churn",
223
- scoreFormula: "defects \xD7 churn"
232
+ label: "Fix Activity \xD7 Churn",
233
+ scoreFormula: "fixes \xD7 churn"
224
234
  },
225
235
  {
226
236
  key: "authors",
@@ -355,20 +365,36 @@ function getNestingDepths(filePaths) {
355
365
  depths.set(filePath, 0);
356
366
  continue;
357
367
  }
358
- let minSpaces = Number.POSITIVE_INFINITY;
359
368
  const leadings = [];
369
+ const deltaCounts = /* @__PURE__ */ new Map();
370
+ let prevSpaceWidth = 0;
360
371
  for (const line of content.split("\n")) {
361
372
  if (!line.trim()) continue;
362
373
  const match = line.match(/^(\s+)/);
363
- if (!match) continue;
374
+ if (!match) {
375
+ prevSpaceWidth = 0;
376
+ continue;
377
+ }
364
378
  const leading = match[1];
365
379
  leadings.push(leading);
366
- const spaceCount = (leading.match(/ /g) ?? []).length;
367
- if (spaceCount > 0 && !leading.includes("\t") && spaceCount < minSpaces) {
368
- minSpaces = spaceCount;
380
+ if (leading.includes("\t")) {
381
+ continue;
382
+ }
383
+ const width = leading.length;
384
+ const delta = width - prevSpaceWidth;
385
+ if (delta > 0) {
386
+ deltaCounts.set(delta, (deltaCounts.get(delta) ?? 0) + 1);
387
+ }
388
+ prevSpaceWidth = width;
389
+ }
390
+ let indentUnit = 4;
391
+ let bestCount = 0;
392
+ for (const [delta, count] of deltaCounts) {
393
+ if (count > bestCount || count === bestCount && delta < indentUnit) {
394
+ bestCount = count;
395
+ indentUnit = delta;
369
396
  }
370
397
  }
371
- const indentUnit = minSpaces === Number.POSITIVE_INFINITY ? 4 : minSpaces;
372
398
  let maxDepth = 0;
373
399
  for (const leading of leadings) {
374
400
  let depth = 0;
@@ -465,7 +491,7 @@ function detectIgnorePatterns() {
465
491
  }
466
492
  return patterns;
467
493
  }
468
- function formatIgnoreFile(patterns) {
494
+ function formatIgnoreFile(detectedPatterns, universalGroups = UNIVERSAL_IGNORE_GROUPS) {
469
495
  const lines = [
470
496
  "# Generated by obscene init",
471
497
  "# Edit this file to customize which files are excluded from analysis.",
@@ -473,16 +499,20 @@ function formatIgnoreFile(patterns) {
473
499
  "# See: https://github.com/wbern/obscene#ignore-files",
474
500
  ""
475
501
  ];
476
- if (patterns.length === 0) {
477
- lines.push("# No project-specific patterns detected.");
478
- lines.push("# Add glob patterns here, one per line.");
502
+ for (const group of universalGroups) {
503
+ lines.push(`# ${group.title}`);
504
+ for (const p of group.patterns) {
505
+ lines.push(p.pattern);
506
+ }
479
507
  lines.push("");
480
- } else {
481
- for (const p of patterns) {
508
+ }
509
+ if (detectedPatterns.length > 0) {
510
+ lines.push("# Project-specific patterns");
511
+ for (const p of detectedPatterns) {
482
512
  lines.push(`# ${p.comment}`);
483
513
  lines.push(p.pattern);
484
- lines.push("");
485
514
  }
515
+ lines.push("");
486
516
  }
487
517
  return lines.join("\n");
488
518
  }
@@ -607,6 +637,9 @@ function tierSummary(tierCounts, showing, total) {
607
637
  }
608
638
 
609
639
  // src/format.ts
640
+ var RANKING_LABELS_BY_KEY = Object.fromEntries(
641
+ RANKING_DEFS.map((d) => [d.key, d.label])
642
+ );
610
643
  function formatReportTable(output) {
611
644
  const lines = [];
612
645
  const { summary, files } = output;
@@ -692,13 +725,13 @@ function getRankingColumns(key) {
692
725
  ],
693
726
  defects: [
694
727
  {
695
- header: "Dfcts",
728
+ header: "Fixes",
696
729
  width: 6,
697
730
  align: "right",
698
731
  value: (e) => String(e.metricValue)
699
732
  },
700
733
  {
701
- header: "DfDns",
734
+ header: "FxDns",
702
735
  width: 7,
703
736
  align: "right",
704
737
  value: (e) => (e.metricDensity ?? 0).toFixed(4)
@@ -724,7 +757,7 @@ function getRankingColumns(key) {
724
757
  var METRIC_EMOJI = {
725
758
  complexity: "\u{1F9EC}",
726
759
  nesting: "\u{1F4CF}",
727
- defects: "\u{1F41B}",
760
+ defects: "\u{1F527}",
728
761
  authors: "\u{1F465}"
729
762
  };
730
763
  function formatRankingTable(key, ranking, description) {
@@ -779,8 +812,8 @@ function formatHotspotsTable(output) {
779
812
  if (output.skipped) {
780
813
  for (const [key, info] of Object.entries(output.skipped)) {
781
814
  lines.push("");
782
- const label = key.charAt(0).toUpperCase() + key.slice(1);
783
- lines.push(`${label} \xD7 Churn \u2014 skipped (${info.reason})`);
815
+ const label = RANKING_LABELS_BY_KEY[key] ?? `${key.charAt(0).toUpperCase() + key.slice(1)} \xD7 Churn`;
816
+ lines.push(`${label} \u2014 skipped (${info.reason})`);
784
817
  if (info.suggestion) {
785
818
  lines.push(` ${info.suggestion}`);
786
819
  }
@@ -857,7 +890,7 @@ function formatCompositeTable(output) {
857
890
 
858
891
  // src/cli.ts
859
892
  var program = new Command();
860
- program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("1.5.0");
893
+ program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("2.0.1");
861
894
  var REPORT_GUIDE = {
862
895
  complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
863
896
  complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
@@ -867,7 +900,7 @@ var HOTSPOTS_GUIDE = {
867
900
  rankings: "Four independent ranking tables, each scoring files by a different metric \xD7 churn. A file may rank high in one dimension but not others.",
868
901
  complexity: "complexity \xD7 churn. Complex code that changes often poses maintenance risk.\nSource: McCabe cyclomatic complexity (1976) via scc \xB7 Strength: objective, language-agnostic \xB7 Limit: parsers and state machines score high naturally",
869
902
  nesting: "maxNesting \xD7 churn. Deeply nested code that changes often is harder to reason about.\nSource: cognitive complexity research (SonarSource, G. Ann Campbell 2018) \xB7 Strength: catches hard-to-follow control flow \xB7 Limit: some patterns (error chains, config) legitimately nest deep",
870
- defects: "defects \xD7 churn. Files with fix: commits that also churn heavily may harbor latent bugs.\nSource: defect prediction via conventional commits (fix: prefix) \xB7 Strength: direct bug-history signal \xB7 Limit: requires consistent fix: convention to be accurate",
903
+ defects: "fixes \xD7 churn. Count of fix: commits touching the file \xD7 churn. High values can mean latent fragility, but they also flag features that got debugged thoroughly \u2014 read the fix-commit history before concluding which.\nSource: change-history metrics (Moser, Pedrycz & Succi 2008) via conventional commits (fix: prefix) \xB7 Strength: direct fix-history signal \xB7 Limit: counts fix activity, not defects per se; requires consistent fix: convention",
871
904
  authors: "authors \xD7 churn. Files touched by many authors and changing often may lack clear ownership.\nSource: code ownership research (Bird et al. 2011, Microsoft) \xB7 Strength: flags diffuse ownership risk \xB7 Limit: doesn't measure expertise depth, bot authors filtered automatically",
872
905
  composite: "Combined ranking using Reciprocal Rank Fusion (RRF) across all dimensions. Files appearing near the top of multiple rankings score highest.\nSource: RRF (Cormack et al. 2009) \xB7 Strength: robust to outliers, no normalization needed \xB7 Limit: equal weight across all dimensions",
873
906
  tier: "Relative ranking within THIS codebase (top 50% = hot, next 30% = warm, bottom 20% = cool). NOT an absolute quality grade \u2014 a hot file is under heavy load, not necessarily broken."
@@ -923,7 +956,15 @@ program.command("init").description("generate a starter .obsignore based on proj
923
956
  function resolveExcludes(cliExcludes) {
924
957
  return [...readIgnoreFile(), ...cliExcludes ?? []];
925
958
  }
959
+ function warnIfNoIgnoreFile() {
960
+ if (!existsSync(".obsignore") && !existsSync(".obsceneignore")) {
961
+ process.stderr.write(
962
+ "hint: no .obsignore found \u2014 run `obscene init` to generate one with recommended exclusions\n"
963
+ );
964
+ }
965
+ }
926
966
  function runReport(opts) {
967
+ warnIfNoIgnoreFile();
927
968
  const top = parseInt(opts.top, 10);
928
969
  const allExcludes = resolveExcludes(opts.exclude);
929
970
  const files = runScc(allExcludes);
@@ -956,6 +997,7 @@ function runReport(opts) {
956
997
  }
957
998
  }
958
999
  function runHotspots(opts) {
1000
+ warnIfNoIgnoreFile();
959
1001
  const top = parseInt(opts.top, 10);
960
1002
  const months = parseInt(opts.months, 10);
961
1003
  const allExcludes = resolveExcludes(opts.exclude);
@@ -995,6 +1037,7 @@ ${formatCompositeTable(composite)}
995
1037
  }
996
1038
  }
997
1039
  function runCoupling(opts) {
1040
+ warnIfNoIgnoreFile();
998
1041
  const top = parseInt(opts.top, 10);
999
1042
  const months = parseInt(opts.months, 10);
1000
1043
  const minCochanges = parseInt(opts.minCochanges, 10);
@@ -1048,22 +1091,25 @@ function runInit() {
1048
1091
  ".obsceneignore already exists. Remove it first to regenerate."
1049
1092
  );
1050
1093
  }
1051
- const patterns = detectIgnorePatterns();
1052
- const content = formatIgnoreFile(patterns);
1094
+ const detected = detectIgnorePatterns();
1095
+ const content = formatIgnoreFile(detected);
1053
1096
  writeFileSync(".obsignore", content);
1054
- if (patterns.length === 0) {
1055
- process.stderr.write(
1056
- "Created .obsignore (no project-specific patterns detected)\n"
1057
- );
1058
- } else {
1059
- process.stderr.write(
1060
- `Created .obsignore with ${patterns.length} patterns:
1061
- `
1062
- );
1063
- for (const p of patterns) {
1097
+ const universalCount = UNIVERSAL_IGNORE_GROUPS.reduce(
1098
+ (sum, g) => sum + g.patterns.length,
1099
+ 0
1100
+ );
1101
+ process.stderr.write(
1102
+ `Created .obsignore with ${universalCount} universal exclusions`
1103
+ );
1104
+ if (detected.length > 0) {
1105
+ process.stderr.write(` + ${detected.length} detected patterns:
1106
+ `);
1107
+ for (const p of detected) {
1064
1108
  process.stderr.write(` ${p.pattern.padEnd(20)} ${p.comment}
1065
1109
  `);
1066
1110
  }
1111
+ } else {
1112
+ process.stderr.write(" (no project-specific patterns detected)\n");
1067
1113
  }
1068
1114
  }
1069
1115
  function exitWithError(err) {
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@wbern/obscene",
3
- "version": "1.5.0",
3
+ "version": "2.0.1",
4
4
  "description": "Identify hotspot files — complex code that changes frequently. Churn × complexity analysis for any git repo.",
5
5
  "type": "module",
6
6
  "bin": {