@wbern/obscene 0.2.1 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +76 -10
  2. package/dist/cli.js +209 -8
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -57,6 +57,8 @@ obscene --format table # human-readable table
  obscene --top 50 --months 6 # more results, longer window
  obscene --top 0 # all files
  obscene report # raw complexity (no churn)
+ obscene coupling # temporal coupling analysis
+ obscene coupling --min-cochanges 1 --format table
  obscene --exclude "*.generated.*"
  obscene | jq '.hotspots[0]' # pipe-friendly
  ```
@@ -73,6 +75,18 @@ Scores each file by `complexity × commits` over a time window, then assigns tie
  | **watch** | next 30% (50–80%) | Keep an eye on these |
  | **stable** | bottom 20% | Low risk |
 
+ ### `obscene coupling`
+
+ Detects files that frequently change together in the same commit but live in different directories — Tornhill's "temporal coupling" analysis from *Your Code as a Crime Scene* (2015). Surfaces hidden structural dependencies that aren't visible in imports or the module graph.
+
+ Same-directory pairs are excluded (co-location is expected coupling). Mass commits touching >20 files are skipped (formatting changes, large refactors). See [Why temporal coupling?](#why-temporal-coupling) for the research backing this approach.
+
+ ```bash
+ obscene coupling # default: min 2 shared commits
+ obscene coupling --min-cochanges 1 # include single co-occurrences
+ obscene coupling --format table --top 10 # human-readable, top 10
+ ```
+
  ### `obscene report`
 
  Per-file complexity without churn. Useful for raw complexity distribution.
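The cross-directory rule described for `obscene coupling` can be sketched in isolation. This is an illustrative helper mirroring the documented behavior (paths are examples, not from any real repo), not the shipped implementation:

```javascript
// Illustration of the documented rule: a file pair only counts toward
// coupling when the two files live in different directories.
function dirOf(path) {
  return path.includes("/") ? path.slice(0, path.lastIndexOf("/")) : "";
}

function isCrossDirectory(a, b) {
  return dirOf(a) !== dirOf(b);
}

console.log(isCrossDirectory("src/utils/a.ts", "src/utils/b.ts")); // false — co-located
console.log(isCrossDirectory("src/utils/a.ts", "src/hooks/b.ts")); // true — cross-directory
```

Files at the repository root are treated as living in their own (empty-named) directory, so a root file paired with any nested file is cross-directory.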
@@ -84,45 +98,60 @@ Per-file complexity without churn. Useful for raw complexity distribution.
  | `--top <n>` | `20` | Limit results (0 = all) |
  | `--months <n>` | `3` | Churn window in months |
  | `--format <type>` | `json` | `json` or `table` |
+ | `--min-cochanges <n>` | `2` | Minimum shared commits (coupling only) |
  | `--exclude <patterns...>` | — | Additional exclusion patterns |
 
  ## Metrics
 
- Each hotspot row includes the following metrics:
+ ### Hotspot metrics
 
- ### Hotspot score (`Score`)
+ #### Hotspot score (`Score`)
 
  `complexity × churn`. The core ranking metric — files that are both complex and frequently modified bubble to the top. See [Why churn × complexity?](#why-churn-x-complexity) for the research backing this approach.
 
- ### Churn (`Churn`)
+ #### Churn (`Churn`)
 
  Number of commits touching the file within the configured time window (default: 3 months). Measures how actively the file is being modified.
 
- ### Cyclomatic complexity (`Cmplx`)
+ #### Cyclomatic complexity (`Cmplx`)
 
  Total cyclomatic complexity as reported by [scc](https://github.com/boyter/scc). Counts independent execution paths (branches, loops, conditions). Higher values mean more paths to test and more places for bugs to hide.
 
- ### Complexity density (`Dens`)
+ #### Complexity density (`Dens`)
 
  `complexity / lines of code`. Normalizes complexity by file size so a 50-line file with complexity 25 (density 0.50) stands out against a 500-line file with complexity 25 (density 0.05). Based on Harrison & Magel (1981), who found that complexity relative to code size is a stronger fault predictor than raw complexity alone.
 
- ### Defects (`Dfcts`)
+ #### Defects (`Dfcts`)
 
  Count of `fix:` conventional commits touching the file within the churn window. A proxy for historical defect rate — files that attract repeated fixes are more likely to contain latent bugs. Inspired by Moser, Pedrycz & Succi (2008), who showed that change-history metrics outperform static code metrics for defect prediction.
 
- ### Defect density (`defectDensity`, JSON only)
+ #### Defect density (`defectDensity`, JSON only)
 
  `defects / lines of code`. Not shown in table output due to column width, but available in JSON. Normalizes defect count by file size.
 
- ### Nesting depth (`Nest`)
+ #### Nesting depth (`Nest`)
 
  Maximum indentation level (tab stops) in the file. Deep nesting correlates with high cognitive load and defect likelihood. Harrison & Magel (1981) identified nesting depth as a significant complexity contributor.
 
- ### Unique authors (`Auth`)
+ #### Unique authors (`Auth`)
 
  Number of distinct git authors who committed to the file within the churn window. Files touched by many authors may lack clear ownership and accumulate inconsistent patterns. Kamei et al. (2013) found developer count to be a significant predictor of defect-introducing changes.
 
- ### Tier
+ ### Coupling metrics
+
+ #### Shared commits (`Shared`)
+
+ Number of commits where both files in a pair were modified together. The core ranking metric for temporal coupling — higher values indicate stronger hidden dependencies between files in different directories. Ball, Kim, Porter & Siy (1997) demonstrated that co-change relationships reveal design dependencies that static analysis misses.
+
+ #### Coupling degree (`Degree`)
+
+ `shared commits / min(churn of file1, churn of file2) × 100`. What percentage of the less-active file's changes also involved the other file. A degree of 100% means every change to the less-active file also touched the other file. This normalization follows D'Ambros, Lanza & Lungu (2009), who showed that relative coupling measures provide more stable results than raw co-change counts across projects of different sizes.
+
+ #### Combined complexity (`Cmplx`)
+
+ Sum of cyclomatic complexity of both files in the pair. Highlights coupled pairs where the involved code is also complex — the combination of hidden dependency and high complexity compounds maintenance risk.
+
+ #### Tier
 
  Cumulative score distribution bucket:
 
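As a worked example of the `Degree` formula above (a sketch of the documented arithmetic, assuming the one-decimal rounding used by the table output):

```javascript
// Degree = shared commits / min(churn of file1, churn of file2) × 100,
// rounded to one decimal place. Guarded against zero churn.
function couplingDegree(shared, churnA, churnB) {
  const minChurn = Math.min(churnA, churnB);
  return minChurn > 0 ? Math.round((shared / minChurn) * 1000) / 10 : 0;
}

// 5 shared commits against churns of 7 and 12: 5/7 ≈ 71.4%
console.log(couplingDegree(5, 7, 12)); // 71.4
```

So a pair at 100% means the less-active file never changed without the other file changing in the same commit.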
@@ -151,6 +180,25 @@ Score=complexity×churn | Dens=complexity/code | Dfcts=fix commits | Nest=max in
  Docs: https://github.com/wbern/obscene#metrics
  ```
 
+ ### Coupling example
+
+ ```
+ Coupling — 6 months churn window | Min shared: 3 | Total score: 91
+ Tiers: 10 danger, 7 watch, 7 stable
+ Showing: 5 of 24
+
+ File 1                             File 2                           Shared  Degree  Cmplx  Tier
+ ────────────────────────────────────────────────────────────────────────────────────────────────────
+ …ePlayer/hooks/useChessEffects.ts  src/utils/effect-generator.ts         6   46.2%    261  DANGER
+ …ePlayer/hooks/useChessEffects.ts  src/utils/pgn-types.ts                6   50.0%    121  DANGER
+ src/test/pgn-fixtures.ts           src/utils/pgn-parser.server.ts        5   71.4%      3  DANGER
+ src/test/pgn-fixtures.ts           src/utils/effect-generator.ts         4   57.1%    145  DANGER
+ src/test/pgn-fixtures.ts           src/utils/pgn-types.ts                4   57.1%      5  DANGER
+
+ Shared=co-changed commits | Degree=shared/min(churn)×100 | Cmplx=sum of both files
+ Docs: https://github.com/wbern/obscene#metrics
+ ```
+
  ## Supported languages
 
  Any language [scc supports](https://github.com/boyter/scc#features) — 200+ languages including C, C++, Go, Java, JavaScript, TypeScript, Python, Rust, Ruby, PHP, Swift, Kotlin, and many more. No configuration needed; scc auto-detects languages from file extensions.
@@ -170,14 +218,32 @@ Files that are both complex and frequently modified are disproportionately likel
 
  The general approach was popularized by Adam Tornhill's *Your Code as a Crime Scene* (2015), which applies forensic analysis techniques to version control history.
 
+ ## Why temporal coupling?
+
+ Files that change together but live in different directories reveal implicit dependencies that the module graph doesn't capture. These hidden couplings are a maintenance hazard: a developer modifying one file doesn't know they also need to update the other, leading to bugs that only surface later.
+
+ - **Ball, Kim, Porter & Siy (1997)** pioneered co-change analysis and showed that version control history surfaces design relationships invisible to static analysis. — [ICSE 1997 Workshop](https://www.researchgate.net/publication/2791666_If_Your_Version_Control_System_Could_Talk)
+ - **D'Ambros, Lanza & Lungu (2009)** developed the Evolution Radar for visualizing logical coupling at both file and module level, showing how evolutionary coupling reveals architectural decay. The normalized approach (coupling relative to total changes) provides more stable measures across projects of different sizes. — [IEEE TSE](https://doi.org/10.1109/TSE.2009.17)
+ - **Tornhill (2015)** popularized temporal coupling analysis in *Your Code as a Crime Scene*, demonstrating how co-change patterns reveal "surprise dependencies" — files that should logically be independent but can't be changed separately in practice. His tooling (Code Maat) uses the same commit co-occurrence approach.
+ - **Cataldo, Mockus, Roberts & Herbsleb (2009)** analyzed both syntactic and logical dependencies across two large systems and found that logical (co-change) dependencies have a significant independent effect on failure proneness. When developers are unaware of these hidden couplings, defects increase. — [IEEE TSE](https://doi.org/10.1109/TSE.2009.42)
+
  ## Limitations
 
+ ### General
+
  - **Churn = commit count**, not lines changed. A one-line typo fix counts the same as a 500-line rewrite.
  - **Per-file granularity only.** A 1000-line file with many small functions scores higher than it probably should. No function-level breakdown.
  - **Must be run inside a git repo.** Churn data comes from `git log`.
  - **Only analyzes files that currently exist.** Deleted files don't appear, even if they churned heavily before removal.
  - **Tier thresholds are fixed** (50/80 cumulative %). Not configurable yet.
 
+ ### Coupling-specific
+
+ - **Same-directory exclusion is a heuristic.** Files in the same directory that are unexpectedly coupled won't be surfaced. The assumption is that co-located files are *expected* to change together.
+ - **Mass commit threshold (>20 files) is hardcoded.** Commits touching many files are skipped to avoid noise from formatting changes and large refactors, but legitimate large features that touch many files across directories are also excluded.
+ - **Degree uses unfiltered churn.** The denominator (`min(churn)`) counts all commits to a file, including single-file commits. This means degree can understate coupling when a file has high solo churn.
+ - **Squash merges collapse coupling signal.** If a branch with 10 separate commits is squash-merged into one, all co-changes within that branch become a single co-occurrence.
+
  ## License
 
  MIT
package/dist/cli.js CHANGED
@@ -129,6 +129,82 @@ function getAuthors(months) {
  }
  return counts;
  }
+ var MAX_FILES_PER_COMMIT = 20;
+ function getCoChanges(months, excludes = []) {
+ const patterns = [...DEFAULT_EXCLUDES, ...excludes.map(globToRegex)];
+ let raw;
+ try {
+ raw = execSync(
+ `git log --since="${months} months ago" --format="COMMIT_SEP%n" --name-only`,
+ { maxBuffer: 50 * 1024 * 1024, stdio: ["pipe", "pipe", "pipe"] }
+ );
+ } catch {
+ throw new Error("Not a git repository or git is not installed.");
+ }
+ const cochanges = /* @__PURE__ */ new Map();
+ const commits = raw.toString().split("COMMIT_SEP\n");
+ for (const commit of commits) {
+ if (!commit.trim()) continue;
+ const seen = /* @__PURE__ */ new Set();
+ for (const line of commit.split("\n")) {
+ const trimmed = normalizePath(line.trim());
+ if (!trimmed) continue;
+ if (!isExcluded(trimmed, patterns)) {
+ seen.add(trimmed);
+ }
+ }
+ const files = [...seen];
+ if (files.length < 2 || files.length > MAX_FILES_PER_COMMIT) continue;
+ for (let i = 0; i < files.length; i++) {
+ for (let j = i + 1; j < files.length; j++) {
+ const [a, b] = files[i] < files[j] ? [files[i], files[j]] : [files[j], files[i]];
+ const dirA = a.includes("/") ? a.slice(0, a.lastIndexOf("/")) : "";
+ const dirB = b.includes("/") ? b.slice(0, b.lastIndexOf("/")) : "";
+ if (dirA === dirB) continue;
+ const key = `${a}\0${b}`;
+ cochanges.set(key, (cochanges.get(key) ?? 0) + 1);
+ }
+ }
+ }
+ return cochanges;
+ }
+ function computeCoupling(cochanges, churn, complexityMap, minCochanges) {
+ const entries = [];
+ for (const [key, count] of cochanges) {
+ if (count < minCochanges) continue;
+ const [file1, file2] = key.split("\0");
+ const minChurn = Math.min(churn.get(file1) ?? 0, churn.get(file2) ?? 0);
+ const degree = minChurn > 0 ? Math.round(count / minChurn * 1e3) / 10 : 0;
+ const totalComplexity = (complexityMap.get(file1) ?? 0) + (complexityMap.get(file2) ?? 0);
+ entries.push({
+ file1,
+ file2,
+ cochanges: count,
+ degree,
+ totalComplexity,
+ couplingScore: count,
+ percentOfTotal: 0,
+ tier: "stable"
+ });
+ }
+ entries.sort((a, b) => b.couplingScore - a.couplingScore);
+ const totalScore = entries.reduce((sum, e) => sum + e.couplingScore, 0);
+ if (totalScore === 0) return [];
+ let cumulative = 0;
+ for (const entry of entries) {
+ entry.percentOfTotal = Math.round(entry.couplingScore / totalScore * 1e3) / 10;
+ cumulative += entry.couplingScore;
+ const cumulativeShare = cumulative / totalScore;
+ if (cumulativeShare <= DANGER_CUMULATIVE) {
+ entry.tier = "danger";
+ } else if (cumulativeShare <= WATCH_CUMULATIVE) {
+ entry.tier = "watch";
+ } else {
+ entry.tier = "stable";
+ }
+ }
+ return entries;
+ }
  function getNestingDepths(filePaths) {
  const depths = /* @__PURE__ */ new Map();
  for (const filePath of filePaths) {
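The pair-key scheme in `getCoChanges` above can be shown in a standalone sketch (not an excerpt of the bundle): sorting the two paths before joining them with `\0` (a character that cannot appear in a file path) makes the key order-independent, so both orderings of a pair accumulate into one counter.

```javascript
// Standalone sketch of the pair-key idea: (a, b) and (b, a) map to the
// same Map key, so co-occurrence counts accumulate correctly.
function pairKey(fileA, fileB) {
  const [a, b] = fileA < fileB ? [fileA, fileB] : [fileB, fileA];
  return `${a}\0${b}`;
}

const counts = new Map();
for (const [x, y] of [["src/b.ts", "src/a.ts"], ["src/a.ts", "src/b.ts"]]) {
  const key = pairKey(x, y);
  counts.set(key, (counts.get(key) ?? 0) + 1);
}
console.log(counts.get("src/a.ts\0src/b.ts")); // 2
```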
@@ -223,6 +299,14 @@ function formatReportTable(output) {
  padRight(truncate(f.file, 58), 60) + padLeft(String(f.code), 8) + padLeft(String(f.complexity), 12) + padLeft(f.complexityDensity.toFixed(2), 9) + padLeft(String(f.comments), 10)
  );
  }
+ lines.push("");
+ lines.push(
+ "Complexity=cyclomatic branch/loop count | Density=complexity/code | Comments=comment lines"
+ );
+ lines.push(
+ "High complexity is expected for parsers, state machines, and business logic. Compare density across files, not raw values."
+ );
+ lines.push("Docs: https://github.com/wbern/obscene#metrics");
  return lines.join("\n");
  }
  function formatHotspotsTable(output) {
@@ -231,28 +315,70 @@ function formatHotspotsTable(output) {
  lines.push(
  `Hotspots \u2014 ${churnWindow} churn window | Total score: ${totalScore.toLocaleString()}`
  );
- lines.push(
- `Tiers: ${tierCounts.danger} danger, ${tierCounts.watch} watch, ${tierCounts.stable} stable`
- );
- lines.push(`Showing: ${output.showing} of ${output.totalHotspots}`);
- lines.push("");
+ pushTierSummary(lines, tierCounts, output.showing, output.totalHotspots);
  lines.push(
  padRight("File", 50) + padLeft("Score", 8) + padLeft("%", 7) + padLeft("Churn", 7) + padLeft("Cmplx", 7) + padLeft("Dens", 7) + padLeft("Dfcts", 6) + padLeft("Nest", 6) + padLeft("Auth", 6) + padLeft("Tier", 8)
  );
  lines.push("\u2500".repeat(112));
  for (const h of hotspots) {
- const tierLabel = h.tier === "danger" ? "DANGER" : h.tier === "watch" ? "WATCH" : "stable";
  lines.push(
- padRight(truncate(h.file, 48), 50) + padLeft(h.hotspotScore.toLocaleString(), 8) + padLeft(h.percentOfTotal.toFixed(1), 7) + padLeft(String(h.churn), 7) + padLeft(String(h.complexity), 7) + padLeft(h.complexityDensity.toFixed(2), 7) + padLeft(String(h.defects), 6) + padLeft(String(h.maxNesting), 6) + padLeft(String(h.authors), 6) + padLeft(tierLabel, 8)
+ padRight(truncate(h.file, 48), 50) + padLeft(h.hotspotScore.toLocaleString(), 8) + padLeft(h.percentOfTotal.toFixed(1), 7) + padLeft(String(h.churn), 7) + padLeft(String(h.complexity), 7) + padLeft(h.complexityDensity.toFixed(2), 7) + padLeft(String(h.defects), 6) + padLeft(String(h.maxNesting), 6) + padLeft(String(h.authors), 6) + padLeft(tierLabel(h.tier), 8)
  );
  }
  lines.push("");
  lines.push(
  "Score=complexity\xD7churn | Dens=complexity/code | Dfcts=fix commits | Nest=max indent depth | Auth=unique authors"
  );
+ lines.push(
+ "Tiers are relative to THIS codebase, not absolute quality grades. A 'danger' file in a clean codebase may be fine."
+ );
+ lines.push(
+ "High scores flag review candidates, not bad code \u2014 stable complex files (parsers, engines) score high naturally."
+ );
+ lines.push("Docs: https://github.com/wbern/obscene#metrics");
+ return lines.join("\n");
+ }
+ function formatCouplingTable(output) {
+ const lines = [];
+ const { tierCounts, totalScore, churnWindow, couplings } = output;
+ lines.push(
+ `Coupling \u2014 ${churnWindow} churn window | Min shared: ${output.minCochanges} | Total score: ${totalScore.toLocaleString()}`
+ );
+ pushTierSummary(lines, tierCounts, output.showing, output.totalCouplings);
+ lines.push(
+ padRight("File 1", 35) + padRight("File 2", 35) + padLeft("Shared", 7) + padLeft("Degree", 8) + padLeft("Cmplx", 7) + padLeft("Tier", 8)
+ );
+ lines.push("\u2500".repeat(100));
+ for (const c of couplings) {
+ lines.push(
+ padRight(truncate(c.file1, 33), 35) + padRight(truncate(c.file2, 33), 35) + padLeft(String(c.cochanges), 7) + padLeft(`${c.degree.toFixed(1)}%`, 8) + padLeft(String(c.totalComplexity), 7) + padLeft(tierLabel(c.tier), 8)
+ );
+ }
+ lines.push("");
+ lines.push(
+ "Shared=co-changed commits | Degree=shared/min(churn)\xD7100 | Cmplx=sum of both files"
+ );
+ lines.push(
+ "Tiers are relative to THIS codebase, not absolute quality grades. High coupling may be intentional and fine."
+ );
+ lines.push(
+ "Same-directory pairs excluded. Commits touching >20 files skipped. Only cross-directory dependencies shown."
+ );
  lines.push("Docs: https://github.com/wbern/obscene#metrics");
  return lines.join("\n");
  }
+ function pushTierSummary(lines, tierCounts, showing, total) {
+ lines.push(
+ `Tiers: ${tierCounts.danger} danger, ${tierCounts.watch} watch, ${tierCounts.stable} stable`
+ );
+ lines.push(`Showing: ${showing} of ${total}`);
+ lines.push("");
+ }
+ function tierLabel(tier) {
+ if (tier === "danger") return "DANGER";
+ if (tier === "watch") return "WATCH";
+ return "stable";
+ }
  function padRight(s, n) {
  return s.length >= n ? s : s + " ".repeat(n - s.length);
  }
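The tier buckets these labels render follow the cumulative-share scheme documented in the README (top 50% of total score = danger, next 30% = watch, rest = stable). A minimal standalone sketch, with threshold constants assumed from the documented 50/80 split and input assumed already sorted descending by score:

```javascript
// Cumulative-share tier bucketing: walk scores in descending order and
// bucket by the running share of the total score.
const DANGER_CUMULATIVE = 0.5;
const WATCH_CUMULATIVE = 0.8;

function assignTiers(scoresDescending) {
  const total = scoresDescending.reduce((s, x) => s + x, 0);
  let cumulative = 0;
  return scoresDescending.map((score) => {
    cumulative += score;
    const share = cumulative / total;
    if (share <= DANGER_CUMULATIVE) return "danger";
    if (share <= WATCH_CUMULATIVE) return "watch";
    return "stable";
  });
}

// Total 100: cumulative shares 0.50, 0.80, 0.95, 1.00
console.log(assignTiers([50, 30, 15, 5])); // [ 'danger', 'watch', 'stable', 'stable' ]
```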
@@ -265,7 +391,27 @@ function truncate(s, max) {
 
  // src/cli.ts
  var program = new Command();
- program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("0.2.1");
+ program.name("obscene").description("Identify hotspot files \u2014 complex code that changes frequently").version("0.3.1");
+ var REPORT_GUIDE = {
+ complexity: "Cyclomatic complexity (branch/loop count). NOT a quality judgment \u2014 a 500-line parser will naturally score high. Compare density, not raw values.",
+ complexityDensity: "Complexity per line of code. Normalizes for file size. >0.25 suggests dense logic worth reviewing; <0.10 is typical for straightforward code.",
+ comments: "Comment line count. Low comments in high-density files may indicate under-documented logic. High comments alone is not a problem."
+ };
+ var HOTSPOTS_GUIDE = {
+ hotspotScore: "complexity \xD7 churn. Ranks files by combined risk: complex code that changes often. High score does NOT mean bad code \u2014 stable high-complexity files (parsers, engines) are fine. Focus on files where score is rising over time.",
+ churn: "Commit count in the time window. High churn alone is neutral \u2014 active development is normal. It becomes a signal when combined with high complexity.",
+ tier: "Relative ranking within THIS codebase (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade. A 'danger' file in a clean codebase may be perfectly fine. Compare across runs to spot trends.",
+ defects: "Count of fix: conventional commits. A proxy for bug frequency \u2014 0 does not mean bug-free, and >0 does not mean bad code. Useful for spotting files that attract repeated fixes.",
+ defectDensity: "Fix commits per line of code. Normalizes defect count by file size. Only meaningful with conventional commits (fix: prefix).",
+ maxNesting: "Deepest indentation level. >6 suggests complex control flow worth simplifying. Language-dependent \u2014 Python files naturally nest less than C++.",
+ authors: "Unique committers in the time window. High author count may indicate unclear ownership. Low count is normal for specialized code. Neither value is inherently good or bad."
+ };
+ var COUPLING_GUIDE = {
+ cochanges: "Times both files appeared in the same commit. Higher values suggest a dependency between the files. Same-directory pairs are excluded \u2014 only cross-directory pairs are shown.",
+ degree: "Percentage: shared commits / min(churn of file1, file2) \xD7 100. Shows how tightly coupled the pair is relative to their individual change rates. 100% means every change to the less-active file also touched the other.",
+ totalComplexity: "Sum of both files' cyclomatic complexity. Highlights coupled pairs where the involved code is also complex \u2014 hidden dependency + high complexity compounds maintenance risk.",
+ tier: "Relative ranking within THIS codebase's coupling pairs (top 50% = danger, next 30% = watch, bottom 20% = stable). NOT an absolute quality grade. 'danger' means this pair co-changes more than most \u2014 it may be intentional and fine."
+ };
  function addSharedOptions(cmd) {
  return cmd.option("--top <n>", "limit to top N entries (0 = all)", "20").option("--format <type>", "output format: json | table", "json").option(
  "--exclude <patterns...>",
@@ -290,6 +436,17 @@ addSharedOptions(
  exitWithError(err);
  }
  });
+ addSharedOptions(
+ program.command("coupling").description(
+ "temporal coupling \u2014 files that change together across directories"
+ )
+ ).option("--months <n>", "churn window in months", "3").option("--min-cochanges <n>", "minimum shared commits to include", "2").action((opts) => {
+ try {
+ runCoupling(opts);
+ } catch (err) {
+ exitWithError(err);
+ }
+ });
  function runReport(opts) {
  const top = parseInt(opts.top, 10);
  const files = runScc(opts.exclude);
@@ -304,6 +461,7 @@ function runReport(opts) {
  const limited = top > 0 ? files.slice(0, top) : files;
  const output = {
  generated: (/* @__PURE__ */ new Date()).toISOString(),
+ guide: REPORT_GUIDE,
  summary: {
  ...totals,
  fileCount: files.length,
@@ -343,6 +501,7 @@ function runHotspots(opts) {
  const totalScore = hotspots.reduce((sum, h) => sum + h.hotspotScore, 0);
  const output = {
  generated: (/* @__PURE__ */ new Date()).toISOString(),
+ guide: HOTSPOTS_GUIDE,
  churnWindow: `${months} months`,
  totalScore,
  tierCounts,
@@ -358,6 +517,48 @@ function runHotspots(opts) {
  `);
  }
  }
+ function runCoupling(opts) {
+ const top = parseInt(opts.top, 10);
+ const months = parseInt(opts.months, 10);
+ const minCochanges = parseInt(opts.minCochanges, 10);
+ const files = runScc(opts.exclude);
+ const churn = getChurn(months);
+ const cochanges = getCoChanges(months, opts.exclude);
+ const complexityMap = /* @__PURE__ */ new Map();
+ for (const f of files) {
+ complexityMap.set(f.file, f.complexity);
+ }
+ const couplings = computeCoupling(
+ cochanges,
+ churn,
+ complexityMap,
+ minCochanges
+ );
+ const limited = top > 0 ? couplings.slice(0, top) : couplings;
+ const tierCounts = { danger: 0, watch: 0, stable: 0 };
+ for (const c of couplings) {
+ tierCounts[c.tier]++;
+ }
+ const totalScore = couplings.reduce((sum, c) => sum + c.couplingScore, 0);
+ const output = {
+ generated: (/* @__PURE__ */ new Date()).toISOString(),
+ guide: COUPLING_GUIDE,
+ churnWindow: `${months} months`,
+ minCochanges,
+ totalScore,
+ tierCounts,
+ totalCouplings: couplings.length,
+ showing: limited.length,
+ couplings: limited
+ };
+ if (opts.format === "table") {
+ process.stdout.write(`${formatCouplingTable(output)}
+ `);
+ } else {
+ process.stdout.write(`${JSON.stringify(output, null, 2)}
+ `);
+ }
+ }
  function exitWithError(err) {
  const message = err instanceof Error ? err.message : String(err);
  process.stderr.write(`Error: ${message}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@wbern/obscene",
- "version": "0.2.1",
+ "version": "0.3.1",
  "description": "Identify hotspot files — complex code that changes frequently. Churn × complexity analysis for any git repo.",
  "type": "module",
  "bin": {