github-portfolio-analyzer 1.1.0 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -4,7 +4,13 @@ All notable changes to this project will be documented in this file.
4
4
 
5
5
  ## [Unreleased]
6
6
 
7
+ ## [1.3.0] — 2026-04-03
8
+
7
9
  ### Added
10
+ - `forkType` classification for forks via the GitHub compare API, distinguishing `active` forks from `passive` clones
11
+ - `publicAlias` best-effort generation for private repositories with OpenAI → Gemini → Anthropic fallback
12
+ - Global CLI credential flags: `--github-token`, `--github-username`, `--openai-key`, `--gemini-key`, `--anthropic-key`
13
+ - Interactive prompting for missing GitHub and optional LLM keys when `analyze` runs on a TTY
8
14
  - Colored terminal output — progress, success, warning, and error states with ANSI colors
9
15
  - Terminal header with ASCII art, version info, user, token status, and policy status
10
16
  - Per-repository progress logging during `analyze` (Analyzing N/total: repo-name)
@@ -12,6 +18,19 @@ All notable changes to this project will be documented in this file.
12
18
  - Fallback count in analyze summary when structural inspection fails
13
19
  - Fatal error messages for missing token, auth failure, and rate limit
14
20
 
21
+ ## [1.2.0] — 2026-04-02
22
+
23
+ ### Added
24
+ - `inferRepoCategory()` in `taxonomy.js`: category inference via heuristics on name, description, and topics — detects content, learning, template, library, infra, experiment, product; falls back to tooling
25
+ - `CATEGORY_WEIGHTS` in `scoring.js`: distinct scoring weights per category — `hasLicense` and `hasTests` zeroed out for content/learning/experiment, high baselines for experiment (45), learning (35), template (30), and content (25)
26
+ - `category` exposed in the output of `buildReportModel` in `report.js` — report consumers (worker, frontend) now receive this information
27
+ - `docs/SCORING_MODEL.md`: complete documentation with the weight table, end-to-end examples per category, and a section for agents/LLMs
28
+ - Scoring v2 section in `AGENT_GUIDE.md`
29
+
30
+ ### Changed
31
+ - `scoreRepository` now reads `repository.category` to select the correct weights; falls back to `tooling` when category is missing or invalid
32
+ - `sources.category` in `buildRepoTaxonomy` returns `'inferred'` instead of `'default'`
33
+
15
34
  ## [1.0.0] — 2026-03-31
16
35
 
17
36
  ### Added
package/README.md CHANGED
@@ -98,14 +98,17 @@ Design goals:
98
98
 
99
99
  ## Installation
100
100
 
101
- Install dependencies and run the CLI locally:
101
+ ```bash
102
+ npm install -g github-portfolio-analyzer
103
+ ```
104
+
105
+ Verify the installation:
102
106
 
103
107
  ```bash
104
- npm install
105
108
  github-portfolio-analyzer --version
106
109
  ```
107
110
 
108
- If the global binary is not available yet, use:
111
+ If the global binary is not available, run directly:
109
112
 
110
113
  ```bash
111
114
  node bin/github-portfolio-analyzer.js --version
@@ -497,7 +500,7 @@ Each `portfolio.json.items[]` entry includes:
497
500
  - `effort`: `xs | s | m | l | xl`
498
501
  - `value`: `low | medium | high | very-high`
499
502
  - `nextAction`: `"<Verb> <target> — Done when: <measurable condition>"`
500
- - `taxonomyMeta`: per-field provenance (`default | user | inferred`)
503
+ - `taxonomyMeta`: per-field provenance (`default | user | inferred`). For repositories, `sources.category` is always `user` (when set manually) or `inferred` (heuristic) — never `default`.
501
504
 
502
505
  `inventory.json.items[]` includes the same taxonomy fields and `taxonomyMeta` for repositories.
503
506
 
@@ -508,50 +511,133 @@ Each `portfolio.json.items[]` entry includes:
508
511
  - `meta` (generatedAt, asOfDate, owner, counts)
509
512
  - `summary` (state counts, top10 by score, now/next/later/park)
510
513
  - `matrix.completionByEffort` (`CL0..CL5` by `xs..xl`)
511
- - `items[]` with decision fields (`completionLevel`, `effortEstimate`, `priorityBand`, `priorityWhy`)
514
+ - `items[]` with decision fields (`completionLevel`, `effortEstimate`, `priorityBand`, `priorityWhy`, `category`)
512
515
 
513
516
  ## Decision Model (Report)
514
517
 
518
+ Every repository passes through a deterministic scoring pipeline:
519
+
520
+ ```mermaid
521
+ flowchart LR
522
+ subgraph top [ ]
523
+ direction LR
524
+ A([repo metadata]) --> B(inferRepoCategory) --> C([category]) --> D(scoreRepository) --> E([score 0–100])
525
+ end
526
+ subgraph mid [ ]
527
+ direction RL
528
+ J(computePriorityBand) <-- I([effort xs–xl]) <-- H(computeEffortEstimate) <-- G([CL 0–5]) <-- F(computeCompletionLevel)
529
+ end
530
+ E --> F
531
+ E -. feeds .-> J
532
+ G -. feeds .-> J
533
+ J --> park([park]) & later([later]) & next([next]) & now([now])
534
+ ```
535
+
536
+ ### Score
537
+
538
+ Each repository receives a score from 0 to 100 based on observable signals.
539
+ Signal weights depend on the project's **category**, inferred automatically
540
+ from its name, description, and GitHub topics.
541
+
542
+ | Signal | product | tooling | library | content | learning | infra | experiment | template |
543
+ |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
544
+ | baseline | — | — | — | **25** | **35** | — | **45** | **30** |
545
+ | pushed (90d) | 25 | 25 | 20 | 25 | 20 | 25 | 20 | 10 |
546
+ | README | 15 | 15 | 20 | 15 | 15 | 20 | 15 | **25** |
547
+ | license | 10 | 10 | **20** | ✗ | ✗ | 10 | ✗ | 10 |
548
+ | tests | 25 | 20 | **25** | ✗ | ✗ | 10 | ✗ | 5 |
549
+ | stars > 1 | 5 | 5 | 10 | 5 | 5 | 5 | 5 | 10 |
550
+ | updated (180d) | 20 | 25 | 5 | **30** | **25** | **30** | 15 | 10 |
551
+
552
+ `✗` = irrelevant for this category (weight 0). `library` penalizes missing
553
+ license most heavily. `experiment` and `learning` skip tests and license entirely.
554
+
555
+ Example — a `content` repo with no license and no tests still scores 95:
556
+
557
+ ```
558
+ "prompt-library" category: content
559
+ ────────────────────────────────────
560
+ baseline +25
561
+ pushed 10d ago +25
562
+ has README +15
563
+ has license +0 (irrelevant for content)
564
+ has tests +0 (irrelevant for content)
565
+ updated this month +30
566
+ ────────────────────────────────────
567
+ score 95
568
+ ```
569
+
570
+ See [docs/SCORING_MODEL.md](docs/SCORING_MODEL.md) for the full weight table
571
+ and numeric examples for every category.
572
+
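The weight table above can be sketched in plain JavaScript. This is an illustrative reconstruction, not the package's actual `scoring.js`; the shape of `CATEGORY_WEIGHTS` and the `repo` field names (`daysSincePush`, `hasReadme`, and so on) are assumptions, and only two categories are shown for brevity.

```javascript
// Illustrative sketch of category-aware scoring, reconstructed from the
// weight table above. Field names on `repo` are assumptions, not the
// package's real data model.
const CATEGORY_WEIGHTS = {
  content: { baseline: 25, pushed90d: 25, readme: 15, license: 0,  tests: 0,  stars: 5, updated180d: 30 },
  tooling: { baseline: 0,  pushed90d: 25, readme: 15, license: 10, tests: 20, stars: 5, updated180d: 25 }
};

function scoreRepository(repo) {
  // Fall back to tooling weights when category is missing or unknown.
  const w = CATEGORY_WEIGHTS[repo.category] ?? CATEGORY_WEIGHTS.tooling;
  let score = w.baseline;
  if (repo.daysSincePush <= 90) score += w.pushed90d;
  if (repo.hasReadme) score += w.readme;
  if (repo.hasLicense) score += w.license;   // weight 0 for content/learning/experiment
  if (repo.hasTests) score += w.tests;       // weight 0 for content/learning/experiment
  if (repo.stars > 1) score += w.stars;
  if (repo.daysSinceUpdate <= 180) score += w.updated180d;
  return Math.min(score, 100);
}
```

With these weights, the "prompt-library" example above reproduces exactly: 25 + 25 + 15 + 30 = 95.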
515
573
  ### Completion Level
516
574
 
517
- - `CL0`: no README
518
- - `CL1`: has README
519
- - `CL2`: has package.json, or non-JS repo with size >= 500 KB
520
- - `CL3`: CL2 + CI
521
- - `CL4`: CL3 + tests
522
- - `CL5`: CL4 + score >= 70
523
- - Ideas default to `CL0`
575
+ Reflects structural maturity, regardless of category. Ideas always default to CL 0.
524
576
 
525
- ### Effort Estimate
577
+ | CL | Label | Condition |
578
+ |---|---|---|
579
+ | 0 | Concept only | no README, or `type: idea` |
580
+ | 1 | Documented | has README |
581
+ | 2 | Structured baseline | has `package.json` (or non-JS repo ≥ 500 KB) |
582
+ | 3 | Automated workflow | CL 2 + CI |
583
+ | 4 | Tested workflow | CL 3 + tests |
584
+ | 5 | Production-ready candidate | CL 4 + score ≥ 70 |
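The ladder above can be read as a chain of gates; a minimal sketch, assuming hypothetical signal names (`hasReadme`, `hasPackageJson`, `hasCi`, `hasTests`) rather than the package's real data model:

```javascript
// Illustrative sketch of the completion ladder above; each level
// requires every gate below it.
function computeCompletionLevel(repo, score) {
  if (repo.type === 'idea' || !repo.hasReadme) return 0;            // concept only
  const structured = repo.hasPackageJson || (repo.nonJs && repo.sizeKb >= 500);
  if (!structured) return 1;                                        // documented
  if (!repo.hasCi) return 2;                                        // structured baseline
  if (!repo.hasTests) return 3;                                     // automated workflow
  return score >= 70 ? 5 : 4;                                       // tested vs. production-ready
}
```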
526
585
 
527
- Uses taxonomy `effort` unless `effort` source is `default`.
528
- If defaulted, infer by size and completion:
586
+ ### Effort Estimate
529
587
 
530
- - `xs`: size < 100 KB and CL <= 2
531
- - `s`: size < 500 KB and CL <= 3
532
- - `m`: size < 5000 KB
533
- - `l`: size < 20000 KB
534
- - `xl`: size >= 20000 KB
588
+ How much work remains to bring a project to its next meaningful state.
589
+ Inferred automatically from repository size and completion level when not set manually.
590
+ `effortEstimate` is a report-only field; it never overwrites the taxonomy `effort`.
535
591
 
536
- `effortEstimate` is a report field only; it does not overwrite taxonomy `effort`.
592
+ | Estimate | Size | CL | What it means |
593
+ |---|---|---|---|
594
+ | `xs` | < 100 KB | ≤ 2 | A few hours. Easy to restart from scratch. |
595
+ | `s` | < 500 KB | ≤ 3 | A day or two. Focused sprint. |
596
+ | `m` | < 5 MB | any | About a week. Needs planning. |
597
+ | `l` | < 20 MB | any | Multiple weeks. Real commitment required. |
598
+ | `xl` | ≥ 20 MB | any | A long-term project. Strategic investment. |
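The fallback rules in the table translate to a short cascade; a sketch of the inference only, assuming the taxonomy `effort` was not set manually:

```javascript
// Illustrative sketch of the size/CL fallback table above. Size is in KB,
// matching the repository `sizeKb` signal used elsewhere in the README.
function computeEffortEstimate(sizeKb, completionLevel) {
  if (sizeKb < 100 && completionLevel <= 2) return 'xs';
  if (sizeKb < 500 && completionLevel <= 3) return 's';
  if (sizeKb < 5000) return 'm';
  if (sizeKb < 20000) return 'l';
  return 'xl';
}
```

Note the cascade means a tiny repo at a high completion level falls through to `m`: size alone is not enough for `xs` or `s`.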
537
599
 
538
600
  ### Priority Band
539
601
 
540
- Internal score calculation:
602
+ The base score is adjusted by state, completion, and effort to produce a
603
+ final `priorityScore`, which determines the band.
541
604
 
542
- - base: `score`
543
- - `+10` if state `active`
544
- - `+5` if state `stale`
545
- - `-20` if state `abandoned` or `archived`
546
- - `+10` if completion is CL1..CL3
547
- - `-10` if effortEstimate is `l` or `xl`
605
+ | Modifier | Condition | Effect |
606
+ |---|---|---|
607
+ | State boost | `active` | +10 |
608
+ | State boost | `stale` | +5 |
609
+ | State penalty | `abandoned` or `archived` | −20 |
610
+ | Quick-win boost | CL 1, 2, or 3 | +10 |
611
+ | Effort penalty | `l` or `xl` | −10 |
612
+
613
+ `priorityScore` has no lower bound — it can go negative.
614
+
615
+ | Band | Range | Meaning |
616
+ |---|---|---|
617
+ | `park` | < 45 | Needs a decision before any investment. Abandoned, low signal, or intentionally paused. |
618
+ | `later` | 45–64 | Viable but not urgent. Can return when backlog has room. |
619
+ | `next` | 65–79 | Strong candidate. High score but large effort, or active with average score. |
620
+ | `now` | ≥ 80 | High confidence. Active project, good score, low effort — or manually pinned. |
548
621
 
549
- Band mapping:
622
+ Example — modifiers can push a `park`-bound project below zero:
550
623
 
551
- - `now`: >= 80
552
- - `next`: 65..79
553
- - `later`: 45..64
554
- - `park`: < 45
624
+ ```
625
+ "old-monolith" category: product
626
+ ──────────────────────────────────
627
+ baseline 0
628
+ pushed 400d ago +0 (> 90 days)
629
+ has README +15
630
+ has license +10
631
+ no tests +0
632
+ updated 200d ago +0 (> 180 days)
633
+ ──────────────────────────────────
634
+ score 25
635
+
636
+ state=abandoned −20
637
+ effort=xl −10
638
+ ──────────────────────────────────
639
+ priorityScore −5 → park
640
+ ```
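The modifier and band tables combine into two small functions; an approximation sketched from the tables above, not the package's actual report code:

```javascript
// Illustrative sketch of priority scoring: apply the state, quick-win,
// and effort modifiers, then map the result onto a band.
function computePriorityScore({ score, state, completionLevel, effortEstimate }) {
  let p = score;
  if (state === 'active') p += 10;
  else if (state === 'stale') p += 5;
  else if (state === 'abandoned' || state === 'archived') p -= 20;
  if (completionLevel >= 1 && completionLevel <= 3) p += 10; // quick-win boost
  if (effortEstimate === 'l' || effortEstimate === 'xl') p -= 10;
  return p; // unbounded below: can go negative
}

function computePriorityBand(priorityScore) {
  if (priorityScore >= 80) return 'now';
  if (priorityScore >= 65) return 'next';
  if (priorityScore >= 45) return 'later';
  return 'park';
}
```

Feeding in the "old-monolith" figures (score 25, abandoned, effort `xl`) yields 25 − 20 − 10 = −5, which lands in `park` as shown.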
555
641
 
556
642
  ## Determinism and Time Rules
557
643
 
@@ -654,10 +740,13 @@ npm test
654
740
  Coverage includes:
655
741
 
656
742
  - activity/maturity/scoring boundaries
743
+ - category inference from repository name, description, and topics
744
+ - category-aware scoring weights and category preservation for user-specified values
657
745
  - taxonomy presence and provenance behavior
658
746
  - `nextAction` validation and normalization
659
747
  - portfolio merge determinism
660
748
  - report completion logic, priority mapping, and deterministic model generation
749
+ - `category` propagation to report items and all summary bands
661
750
 
662
751
  ## Troubleshooting
663
752
 
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "github-portfolio-analyzer",
3
- "version": "1.0.0",
3
+ "version": "1.2.0",
4
4
  "commands": [
5
5
  { "id": "analyze" },
6
6
  { "id": "ingest-ideas" },
File without changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "github-portfolio-analyzer",
3
- "version": "1.1.0",
3
+ "version": "1.3.0",
4
4
  "description": "CLI tool to analyze GitHub repos and portfolio ideas",
5
5
  "type": "module",
6
6
  "bin": {
@@ -17,7 +17,7 @@
17
17
  },
18
18
  "repository": {
19
19
  "type": "git",
20
- "url": "https://github.com/paulo-raoni/github-portfolio-analyzer.git"
20
+ "url": "git+https://github.com/paulo-raoni/github-portfolio-analyzer.git"
21
21
  },
22
22
  "files": [
23
23
  "bin/",
@@ -176,7 +176,11 @@
176
176
  "type": "array",
177
177
  "items": { "type": "string" }
178
178
  },
179
- "nextAction": { "type": "string" }
179
+ "nextAction": { "type": "string" },
180
+ "category": {
181
+ "type": "string",
182
+ "enum": ["product", "tooling", "library", "learning", "content", "infra", "experiment", "template"]
183
+ }
180
184
  }
181
185
  },
182
186
  "item": {
@@ -244,6 +248,17 @@
244
248
  },
245
249
  "htmlUrl": { "type": "string" },
246
250
  "homepage": { "type": "string" },
251
+ "category": {
252
+ "type": "string",
253
+ "enum": ["product", "tooling", "library", "learning", "content", "infra", "experiment", "template"]
254
+ },
255
+ "fork": { "type": "boolean" },
256
+ "forkType": {
257
+ "type": "string",
258
+ "enum": ["active", "passive"]
259
+ },
260
+ "private": { "type": "boolean" },
261
+ "publicAlias": { "type": "string" },
247
262
  "presentationState": {
248
263
  "type": "string",
249
264
  "enum": ["featured", "complete", "in-progress", "salvageable", "learning", "archived", "hidden"]
package/src/cli.js CHANGED
@@ -6,7 +6,16 @@ import { parseArgs } from './utils/args.js';
6
6
  import packageJson from '../package.json' with { type: 'json' };
7
7
  import { UsageError } from './errors.js';
8
8
 
9
- const GLOBAL_OPTIONS = new Set(['help', 'strict', 'version']);
9
+ const GLOBAL_OPTIONS = new Set([
10
+ 'help',
11
+ 'strict',
12
+ 'version',
13
+ 'github-token',
14
+ 'github-username',
15
+ 'openai-key',
16
+ 'gemini-key',
17
+ 'anthropic-key'
18
+ ]);
10
19
  const COMMAND_OPTIONS = {
11
20
  analyze: new Set(['as-of', 'output-dir']),
12
21
  'ingest-ideas': new Set(['input', 'prompt', 'output-dir']),
@@ -15,14 +24,16 @@ const COMMAND_OPTIONS = {
15
24
  };
16
25
 
17
26
  export async function runCli(argv) {
18
- const { positional, options } = parseArgs(argv);
27
+ const { positional, options: rawOptions } = parseArgs(argv);
19
28
  const [command] = positional;
20
- const strictMode = options.strict === true || options.strict === 'true';
29
+ const strictMode = rawOptions.strict === true || rawOptions.strict === 'true';
21
30
 
22
31
  if (strictMode) {
23
- validateStrictOptions(command, options);
32
+ validateStrictOptions(command, rawOptions);
24
33
  }
25
34
 
35
+ const options = mapCredentialOptions(rawOptions);
36
+
26
37
  if ((options.version === true && !command) || (command === '-v' && positional.length === 1)) {
27
38
  console.log(packageJson.version);
28
39
  return;
@@ -58,6 +69,11 @@ function printHelp() {
58
69
  console.log('github-portfolio-analyzer');
59
70
  console.log('Usage: github-portfolio-analyzer <command> [options]');
60
71
  console.log(' --strict Global: fail on unknown flags (exit code 2)');
72
+ console.log(' --github-token TOKEN Global: override GITHUB_TOKEN');
73
+ console.log(' --github-username USER Global: override GITHUB_USERNAME');
74
+ console.log(' --openai-key KEY Global: override OPENAI_API_KEY');
75
+ console.log(' --gemini-key KEY Global: override GEMINI_API_KEY');
76
+ console.log(' --anthropic-key KEY Global: override ANTHROPIC_API_KEY');
61
77
  console.log('Commands:');
62
78
  console.log(' analyze Analyze GitHub repositories and build inventory outputs');
63
79
  console.log(' ingest-ideas Add or update manual project ideas');
@@ -101,3 +117,14 @@ function validateStrictOptions(command, options) {
101
117
  throw new UsageError(`Unknown option(s): ${unknownFlags}`);
102
118
  }
103
119
  }
120
+
121
+ function mapCredentialOptions(options) {
122
+ return {
123
+ ...options,
124
+ ...(options['github-token'] !== undefined ? { githubToken: options['github-token'] } : {}),
125
+ ...(options['github-username'] !== undefined ? { githubUsername: options['github-username'] } : {}),
126
+ ...(options['openai-key'] !== undefined ? { openaiKey: options['openai-key'] } : {}),
127
+ ...(options['gemini-key'] !== undefined ? { geminiKey: options['gemini-key'] } : {}),
128
+ ...(options['anthropic-key'] !== undefined ? { anthropicKey: options['anthropic-key'] } : {})
129
+ };
130
+ }
@@ -1,5 +1,5 @@
1
1
  import path from 'node:path';
2
- import { getEnv, requireGithubToken } from '../config.js';
2
+ import { promptMissingKeys, requireGithubToken } from '../config.js';
3
3
  import { GithubClient } from '../github/client.js';
4
4
  import { fetchAllRepositories, normalizeRepository } from '../github/repos.js';
5
5
  import { inspectRepositoryStructure } from '../github/repo-inspection.js';
@@ -15,27 +15,32 @@ import { progress, success, error, warn, fatal } from '../utils/output.js';
15
15
 
16
16
  export async function runAnalyzeCommand(options = {}) {
17
17
  const startTime = Date.now();
18
- const env = getEnv();
18
+ let args = { ...options };
19
+
20
+ args = await promptMissingKeys(args, {
21
+ quiet: args.quiet,
22
+ required: [
23
+ { key: 'githubToken', label: 'GitHub Personal Access Token' }
24
+ ],
25
+ optional: [
26
+ { key: 'githubUsername', label: 'GitHub Username' },
27
+ { key: 'openaiKey', label: 'OpenAI API Key' },
28
+ { key: 'geminiKey', label: 'Gemini API Key' },
29
+ { key: 'anthropicKey', label: 'Anthropic API Key' }
30
+ ]
31
+ });
19
32
 
20
33
  let token;
21
34
  try {
22
- token = requireGithubToken(env);
35
+ token = requireGithubToken(args);
23
36
  } catch (err) {
24
- fatal('GITHUB_TOKEN missing — set it in .env: GITHUB_TOKEN=your_token');
37
+ fatal('GITHUB_TOKEN missing — set it in .env or pass --github-token');
25
38
  throw err;
26
39
  }
27
40
 
28
41
  const github = new GithubClient(token);
29
- const asOfDate = resolveAsOfDate(typeof options['as-of'] === 'string' ? options['as-of'] : undefined);
30
- const outputDir = typeof options['output-dir'] === 'string' ? options['output-dir'] : 'output';
31
-
32
- printHeader({
33
- command: 'analyze',
34
- asOfDate,
35
- outputDir,
36
- hasToken: Boolean(token),
37
- hasPolicy: false,
38
- });
42
+ const asOfDate = resolveAsOfDate(typeof args['as-of'] === 'string' ? args['as-of'] : undefined);
43
+ const outputDir = typeof args['output-dir'] === 'string' ? args['output-dir'] : 'output';
39
44
 
40
45
  let user;
41
46
  try {
@@ -49,6 +54,15 @@ export async function runAnalyzeCommand(options = {}) {
49
54
  throw err;
50
55
  }
51
56
 
57
+ printHeader({
58
+ command: 'analyze',
59
+ asOfDate,
60
+ outputDir,
61
+ hasToken: Boolean(token),
62
+ hasPolicy: false,
63
+ username: args.githubUsername || user.login
64
+ });
65
+
52
66
  let repositories;
53
67
  try {
54
68
  repositories = await fetchAllRepositories(github);
@@ -73,17 +87,22 @@ export async function runAnalyzeCommand(options = {}) {
73
87
  const structuralHealth = await inspectRepositoryStructure(github, normalized);
74
88
  const activity = classifyActivity(normalized._pushedAt, asOfDate);
75
89
  const maturity = classifyMaturity(normalized.sizeKb);
76
- const { score, scoreBreakdown } = scoreRepository(
77
- { ...normalized, structuralHealth, pushedAt: normalized._pushedAt, updatedAt: normalized._updatedAt },
78
- asOfDate
79
- );
80
90
  const taxonomy = buildRepoTaxonomy({
81
91
  ...normalized,
82
92
  structuralHealth,
83
93
  activity,
84
- maturity,
85
- score
94
+ maturity
86
95
  });
96
+ const { score, scoreBreakdown } = scoreRepository(
97
+ {
98
+ ...normalized,
99
+ structuralHealth,
100
+ pushedAt: normalized._pushedAt,
101
+ updatedAt: normalized._updatedAt,
102
+ category: taxonomy.category
103
+ },
104
+ asOfDate
105
+ );
87
106
 
88
107
  return stripInternalFields({
89
108
  ...normalized,
@@ -106,22 +125,22 @@ export async function runAnalyzeCommand(options = {}) {
106
125
  hasTests: false,
107
126
  hasCi: false
108
127
  };
128
+ const taxonomy = buildRepoTaxonomy({
129
+ ...normalized,
130
+ structuralHealth: fallbackStructuralHealth,
131
+ activity,
132
+ maturity
133
+ });
109
134
  const { score, scoreBreakdown } = scoreRepository(
110
135
  {
111
136
  ...normalized,
112
137
  structuralHealth: fallbackStructuralHealth,
113
138
  pushedAt: normalized._pushedAt,
114
- updatedAt: normalized._updatedAt
139
+ updatedAt: normalized._updatedAt,
140
+ category: taxonomy.category
115
141
  },
116
142
  asOfDate
117
143
  );
118
- const taxonomy = buildRepoTaxonomy({
119
- ...normalized,
120
- structuralHealth: fallbackStructuralHealth,
121
- activity,
122
- maturity,
123
- score
124
- });
125
144
 
126
145
  return stripInternalFields({
127
146
  ...normalized,
File without changes
File without changes
@@ -1,6 +1,8 @@
1
1
  import path from 'node:path';
2
2
  import { buildReportModel } from '../core/report.js';
3
+ import { createPublicAliasLLMCaller, generatePublicAlias } from '../core/publicAliasGenerator.js';
3
4
  import { loadPresentationOverrides, applyPresentationOverrides } from '../core/presentationOverrides.js';
5
+ import { getEnv } from '../config.js';
4
6
  import { readJsonFile, readJsonFileIfExists } from '../io/files.js';
5
7
  import { writeReportAscii, writeReportJson, writeReportMarkdown } from '../io/report.js';
6
8
  import { UsageError } from '../errors.js';
@@ -46,11 +48,38 @@ export async function runReportCommand(options = {}) {
46
48
  const presentationOverridesPath = resolvePresentationOverridesPath(options);
47
49
  const presentationOverrides = await loadPresentationOverrides(presentationOverridesPath);
48
50
  const reportModel = buildReportModel(portfolio, inventory, { policyOverlay });
51
+ const portfolioDescriptionBySlug = new Map(
52
+ (Array.isArray(portfolio?.items) ? portfolio.items : []).map((item) => [
53
+ String(item?.slug ?? '').trim(),
54
+ item?.description ?? ''
55
+ ])
56
+ );
57
+ const callLLM = createPublicAliasLLMCaller(getEnv(options));
49
58
 
50
59
  if (presentationOverrides.size > 0) {
51
60
  reportModel.items = applyPresentationOverrides(reportModel.items, presentationOverrides);
52
61
  }
53
62
 
63
+ if (typeof callLLM === 'function') {
64
+ const privateItems = reportModel.items.filter((item) => item.private && !item.publicAlias);
65
+ const aliasBySlug = new Map();
66
+
67
+ for (const item of privateItems) {
68
+ const itemForAlias = {
69
+ ...item,
70
+ description: portfolioDescriptionBySlug.get(String(item.slug ?? '').trim()) ?? ''
71
+ };
72
+ item.publicAlias = await generatePublicAlias(itemForAlias, callLLM);
73
+ if (item.publicAlias) {
74
+ aliasBySlug.set(item.slug, item.publicAlias);
75
+ }
76
+ }
77
+
78
+ if (aliasBySlug.size > 0) {
79
+ applyAliasesToReportModel(reportModel, aliasBySlug);
80
+ }
81
+ }
82
+
54
83
  const writtenPaths = [];
55
84
 
56
85
  if (formatOption === 'json' || formatOption === 'all') {
@@ -127,6 +156,27 @@ function validatePolicyOverlay(policy, policyPath) {
127
156
  }
128
157
  }
129
158
 
159
+ function applyAliasesToReportModel(reportModel, aliasBySlug) {
160
+ for (const item of reportModel.items) {
161
+ if (!item.private || !item.publicAlias) {
162
+ continue;
163
+ }
164
+
165
+ item.slug = item.publicAlias;
166
+ item.title = item.publicAlias;
167
+ }
168
+
169
+ const sections = ['top10ByScore', 'now', 'next', 'later', 'park'];
170
+ for (const section of sections) {
171
+ for (const item of reportModel.summary[section] ?? []) {
172
+ const alias = aliasBySlug.get(item.slug);
173
+ if (alias) {
174
+ item.slug = alias;
175
+ }
176
+ }
177
+ }
178
+ }
179
+
130
180
  function printNowExplain(reportModel) {
131
181
  const nowItems = Array.isArray(reportModel?.items)
132
182
  ? reportModel.items.filter((item) => item.priorityBand === 'now')
package/src/config.js CHANGED
@@ -1,18 +1,105 @@
1
1
  import dotenv from 'dotenv';
2
+ import { createInterface } from 'node:readline';
2
3
 
3
4
  dotenv.config({ quiet: true });
4
5
 
5
- export function getEnv() {
6
+ /**
7
+ * Returns env vars with optional CLI overrides.
8
+ * args uses camelCase keys, for example { githubToken: '...' }.
9
+ */
10
+ export function getEnv(args = {}) {
6
11
  return {
7
- githubToken: process.env.GITHUB_TOKEN ?? '',
8
- githubUsername: process.env.GITHUB_USERNAME ?? ''
12
+ githubToken: args.githubToken ?? process.env.GITHUB_TOKEN ?? '',
13
+ githubUsername: args.githubUsername ?? process.env.GITHUB_USERNAME ?? '',
14
+ openaiKey: args.openaiKey ?? process.env.OPENAI_API_KEY ?? '',
15
+ geminiKey: args.geminiKey ?? process.env.GEMINI_API_KEY ?? '',
16
+ anthropicKey: args.anthropicKey ?? process.env.ANTHROPIC_API_KEY ?? ''
9
17
  };
10
18
  }
11
19
 
12
- export function requireGithubToken(env = getEnv()) {
20
+ export function requireGithubToken(args = {}) {
21
+ const env = getEnv(args);
22
+
13
23
  if (!env.githubToken) {
14
- throw new Error('Missing GITHUB_TOKEN. Add it to your .env file before running analyze.');
24
+ throw new Error(
25
+ 'Missing GITHUB_TOKEN. Add it to your .env file or pass --github-token <token>'
26
+ );
15
27
  }
16
28
 
17
29
  return env.githubToken;
18
30
  }
31
+
32
+ /**
33
+ * Interactive terminal prompt for missing keys.
34
+ * Only runs on TTY and when quiet !== true.
35
+ */
36
+ export async function promptMissingKeys(
37
+ args = {},
38
+ { required = [], optional = [], quiet = false, input = process.stdin, output = process.stderr } = {}
39
+ ) {
40
+ if (quiet || !input?.isTTY) {
41
+ return args;
42
+ }
43
+
44
+ const env = getEnv(args);
45
+ const result = { ...args };
46
+ const rl = createInterface({ input, output });
47
+ rl.stdoutMuted = false;
48
+
49
+ const askVisible = (label, hint) =>
50
+ new Promise((resolve) => {
51
+ rl.question(` ${label}${hint ? ` (${hint})` : ''}: `, resolve);
52
+ });
53
+
54
+ const askSilent = (label, hint) =>
55
+ new Promise((resolve) => {
56
+ const originalWriteToOutput = rl._writeToOutput;
57
+ const hintText = hint ? ` (${hint})` : '';
58
+ rl.output.write(` ${label}${hintText}: `);
59
+ rl.stdoutMuted = true;
60
+ rl._writeToOutput = (str) => {
61
+ if (rl.stdoutMuted) {
62
+ rl.output.write('');
63
+ return;
64
+ }
65
+
66
+ rl.output.write(str);
67
+ };
68
+
69
+ rl.question('', (value) => {
70
+ rl.stdoutMuted = false;
71
+ rl._writeToOutput = originalWriteToOutput;
72
+ rl.output.write('\n');
73
+ resolve(value);
74
+ });
75
+ });
76
+
77
+ for (const { key, label } of required) {
78
+ if (env[key]) {
79
+ continue;
80
+ }
81
+
82
+ const value = await askSilent(label, 'required');
83
+ if (!value.trim()) {
84
+ rl.close();
85
+ throw new Error(`${label} is required.`);
86
+ }
87
+
88
+ result[key] = value.trim();
89
+ }
90
+
91
+ for (const { key, label } of optional) {
92
+ if (env[key]) {
93
+ continue;
94
+ }
95
+
96
+ const prompt = key === 'githubUsername' ? askVisible : askSilent;
97
+ const value = await prompt(label, 'optional, Enter to skip');
98
+ if (value.trim()) {
99
+ result[key] = value.trim();
100
+ }
101
+ }
102
+
103
+ rl.close();
104
+ return result;
105
+ }
File without changes
package/src/core/ideas.js CHANGED
File without changes
File without changes
File without changes
@@ -0,0 +1,202 @@
1
+ const OPENAI_URL = 'https://api.openai.com/v1/responses';
2
+ const GEMINI_URL =
3
+ 'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent';
4
+ const ANTHROPIC_URL = 'https://api.anthropic.com/v1/messages';
5
+
6
+ /**
7
+ * Generates a plausible, non-identifying alias for private repositories.
8
+ * Preserves manually curated aliases when present.
9
+ */
10
+ export async function generatePublicAlias(item, callLLM) {
11
+ if (!item?.private) {
12
+ return null;
13
+ }
14
+
15
+ if (item.publicAlias) {
16
+ return item.publicAlias;
17
+ }
18
+
19
+ if (typeof callLLM !== 'function') {
20
+ return null;
21
+ }
22
+
23
+ const prompt = [
24
+ 'Given a private software project with:',
25
+ `- category: ${item.category ?? 'unknown'}`,
26
+ `- language: ${item.language ?? 'unknown'}`,
27
+ `- topics: ${Array.isArray(item.topics) ? item.topics.join(', ') : 'none'}`,
28
+ `- description: "${String(item.description ?? '').substring(0, 200)}"`,
29
+ '',
30
+ 'Generate a plausible but fictional project slug (2-3 words, kebab-case).',
31
+ 'Must reflect the technical domain. Must NOT contain: original repo name,',
32
+ 'company names, client names, person names, or any identifying information.',
33
+ 'Return ONLY the slug, nothing else. Example: "relay-task-engine"'
34
+ ].join('\n');
35
+
36
+ try {
37
+ const raw = await callLLM(prompt);
38
+ return (
39
+ raw
40
+ ?.trim()
41
+ .toLowerCase()
42
+ .replace(/[^a-z0-9-]/g, '-')
43
+ .replace(/-+/g, '-')
44
+ .replace(/^-|-$/g, '') || null
45
+ );
46
+ } catch {
47
+ return null;
48
+ }
49
+ }
50
+
51
+ export function createPublicAliasLLMCaller(env = {}) {
52
+ const providers = [];
53
+
54
+ if (env.openaiKey) {
55
+ providers.push((prompt) => callOpenAI(env.openaiKey, prompt));
56
+ }
57
+
58
+ if (env.geminiKey) {
59
+ providers.push((prompt) => callGemini(env.geminiKey, prompt));
60
+ }
61
+
62
+ if (env.anthropicKey) {
63
+ providers.push((prompt) => callAnthropic(env.anthropicKey, prompt));
64
+ }
65
+
66
+ if (providers.length === 0) {
67
+ return null;
68
+ }
69
+
70
+ return async function callLLM(prompt) {
71
+ let lastError = null;
72
+
73
+ for (const provider of providers) {
74
+ try {
75
+ const value = await provider(prompt);
76
+ if (typeof value === 'string' && value.trim()) {
77
+ return value;
78
+ }
79
+ } catch (error) {
80
+ lastError = error;
81
+ }
82
+ }
83
+
84
+ if (lastError) {
85
+ throw lastError;
86
+ }
87
+
88
+ return null;
89
+ };
90
+ }
91
+
92
+ async function callOpenAI(apiKey, prompt) {
93
+ const data = await postJson(
94
+ OPENAI_URL,
95
+ {
96
+ model: 'gpt-4.1-mini',
97
+ input: prompt,
98
+ max_output_tokens: 40
99
+ },
100
+ {
101
+ Authorization: `Bearer ${apiKey}`
102
+ }
103
+ );
104
+
105
+ return extractOpenAIText(data);
106
+ }
107
+
108
+ async function callGemini(apiKey, prompt) {
109
+ const data = await postJson(
110
+ `${GEMINI_URL}?key=${encodeURIComponent(apiKey)}`,
111
+ {
112
+ contents: [
113
+ {
114
+ role: 'user',
115
+ parts: [{ text: prompt }]
116
+ }
117
+ ],
118
+ generationConfig: {
119
+ temperature: 0.2,
120
+ maxOutputTokens: 40
121
+ }
122
+ }
123
+ );
124
+
125
+ return data?.candidates?.[0]?.content?.parts
126
+ ?.map((part) => part?.text ?? '')
127
+ .join('')
128
+ .trim();
129
+ }
130
+
131
+ async function callAnthropic(apiKey, prompt) {
132
+ const data = await postJson(
133
+ ANTHROPIC_URL,
134
+ {
135
+ model: 'claude-3-5-haiku-latest',
136
+ max_tokens: 40,
137
+ messages: [
138
+ {
139
+ role: 'user',
140
+ content: prompt
141
+ }
142
+ ]
143
+ },
144
+ {
145
+ 'x-api-key': apiKey,
146
+ 'anthropic-version': '2023-06-01'
147
+ }
148
+ );
149
+
150
+ return data?.content
151
+ ?.map((part) => (part?.type === 'text' ? part.text : ''))
152
+ .join('')
153
+ .trim();
154
+ }
155
+
156
+ async function postJson(url, body, headers = {}) {
157
+ const response = await fetch(url, {
158
+ method: 'POST',
159
+ headers: {
160
+ 'content-type': 'application/json',
161
+ ...headers
162
+ },
163
+ body: JSON.stringify(body)
164
+ });
165
+
166
+ const data = await safeJson(response);
167
+ if (!response.ok) {
168
+ const details = data?.error?.message ?? data?.message ?? response.statusText;
169
+ throw new Error(`LLM request failed: ${response.status} ${details}`.trim());
170
+ }
171
+
172
+ return data;
173
+ }
174
+
175
+ async function safeJson(response) {
176
+ const contentType = response.headers.get('content-type') ?? '';
177
+ if (!contentType.includes('application/json')) {
178
+ return null;
179
+ }
180
+
181
+ try {
182
+ return await response.json();
183
+ } catch {
184
+ return null;
185
+ }
186
+ }
187
+
188
+ function extractOpenAIText(data) {
189
+ if (typeof data?.output_text === 'string' && data.output_text.trim()) {
190
+ return data.output_text.trim();
191
+ }
192
+
193
+ for (const output of data?.output ?? []) {
194
+ for (const content of output?.content ?? []) {
195
+ if (typeof content?.text === 'string' && content.text.trim()) {
196
+ return content.text.trim();
197
+ }
198
+ }
199
+ }
200
+
201
+ return null;
202
+ }
@@ -142,8 +142,15 @@ export function buildReportModel(portfolioData, inventoryData = null, options =
   const inventoryLookup = buildInventoryLookup(inventoryItems);
 
   const reportItems = portfolioItems.map((item) => {
-    const slug = String(item.slug ?? '').trim();
-    const inventorySignals = inventoryLookup.get(slug) ?? null;
+    const rawSlug = String(item.slug ?? '').trim();
+    const inventorySignals = inventoryLookup.get(rawSlug) ?? null;
+    const isPrivate = Boolean(item.private);
+    const alias = typeof item.publicAlias === 'string' && item.publicAlias.trim()
+      ? item.publicAlias.trim()
+      : null;
+    const slug = isPrivate && alias ? alias : rawSlug;
+    const rawTitle = resolveTitle(item);
+    const title = isPrivate && alias ? alias : rawTitle;
 
     const completionLevel = computeCompletionLevel(item, inventorySignals);
     const effortEstimate = computeEffortEstimate(item, completionLevel, inventorySignals);
@@ -163,9 +170,9 @@ export function buildReportModel(portfolioData, inventoryData = null, options =
     } = applyPolicyOverlayToItem(
       {
         ...item,
-        slug,
+        slug: rawSlug,
         type: resolveItemType(item),
-        title: resolveTitle(item),
+        title: rawTitle,
         tags: collectItemTags(item)
       },
       {
@@ -180,7 +187,7 @@ export function buildReportModel(portfolioData, inventoryData = null, options =
     return {
       slug,
       type: resolveItemType(item),
-      title: resolveTitle(item),
+      title,
       score: Number(item.score ?? 0),
       state: String(item.state ?? 'idea'),
       effort: normalizeEffort(item.effort) ?? 'm',
@@ -199,8 +206,15 @@ export function buildReportModel(portfolioData, inventoryData = null, options =
       // presentation fields — passed directly from portfolio item
       ...(item.language != null ? { language: item.language } : {}),
       ...(Array.isArray(item.topics) && item.topics.length > 0 ? { topics: item.topics } : {}),
-      ...(item.htmlUrl != null ? { htmlUrl: item.htmlUrl } : {}),
-      ...(item.homepage != null ? { homepage: item.homepage } : {})
+      ...(!isPrivate && item.htmlUrl != null ? { htmlUrl: item.htmlUrl } : {}),
+      ...(!isPrivate && item.homepage != null ? { homepage: item.homepage } : {}),
+      ...(item.category != null ? { category: item.category } : {}),
+      ...(item.fork != null ? { fork: Boolean(item.fork) } : {}),
+      ...(item.forkType != null ? { forkType: item.forkType } : {}),
+      ...(item.private != null ? { private: Boolean(item.private) } : {}),
+      ...(item.publicAlias != null ? { publicAlias: item.publicAlias } : {}),
+      ...(!isPrivate && item.description != null ? { description: item.description } : {}),
+      ...(isPrivate && item.description != null ? { _description: item.description } : {})
     };
   });
 
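Net effect of the conditional spreads above: a private item with an alias is published under that alias, its URLs are dropped, and its description is stashed under the underscore-prefixed `_description` key so it can be stripped before output. A self-contained sketch of that masking shape (`maskItem` is an illustrative name, not a function in the package):

```javascript
// Minimal restatement of the per-item privacy masking applied above.
function maskItem(item) {
  const isPrivate = Boolean(item.private);
  const alias = typeof item.publicAlias === 'string' && item.publicAlias.trim()
    ? item.publicAlias.trim()
    : null;
  return {
    slug: isPrivate && alias ? alias : item.slug,
    ...(!isPrivate && item.htmlUrl != null ? { htmlUrl: item.htmlUrl } : {}),
    ...(!isPrivate && item.description != null ? { description: item.description } : {}),
    ...(isPrivate && item.description != null ? { _description: item.description } : {})
  };
}

const masked = maskItem({
  slug: 'secret-repo',
  private: true,
  publicAlias: 'project-atlas',
  htmlUrl: 'https://github.com/me/secret-repo',
  description: 'internal billing tool'
});
console.log(masked); // { slug: 'project-atlas', _description: 'internal billing tool' }
```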
@@ -259,7 +273,12 @@ export function buildReportModel(portfolioData, inventoryData = null, options =
     matrix: {
       completionByEffort: matrix
     },
-    items: sortedByPriority.map(({ priorityScore: _priorityScore, ...item }) => item)
+    items: sortedByPriority.map((item) => {
+      const publicItem = { ...item };
+      delete publicItem.priorityScore;
+      delete publicItem._description;
+      return publicItem;
+    })
   };
 }
 
@@ -362,7 +381,8 @@ function toSummaryItem(item) {
     ...(item.priorityTag ? { priorityTag: item.priorityTag } : {}),
     priorityOverrides: item.priorityOverrides,
     priorityWhy: item.priorityWhy,
-    nextAction: item.nextAction
+    nextAction: item.nextAction,
+    ...(item.category != null ? { category: item.category } : {})
   };
 }
 
@@ -1,8 +1,49 @@
 import { daysSince } from './classification.js';
 
+const CATEGORY_WEIGHTS = {
+  product: {
+    pushedWithin90Days: 25, hasReadme: 15, hasLicense: 10,
+    hasTests: 25, starsOverOne: 5, updatedWithin180Days: 20, baseline: 0
+  },
+  tooling: {
+    pushedWithin90Days: 25, hasReadme: 15, hasLicense: 10,
+    hasTests: 20, starsOverOne: 5, updatedWithin180Days: 25, baseline: 0
+  },
+  library: {
+    pushedWithin90Days: 20, hasReadme: 20, hasLicense: 20,
+    hasTests: 25, starsOverOne: 10, updatedWithin180Days: 5, baseline: 0
+  },
+  content: {
+    pushedWithin90Days: 25, hasReadme: 15, hasLicense: 0,
+    hasTests: 0, starsOverOne: 5, updatedWithin180Days: 30, baseline: 25
+  },
+  learning: {
+    pushedWithin90Days: 20, hasReadme: 15, hasLicense: 0,
+    hasTests: 0, starsOverOne: 5, updatedWithin180Days: 25, baseline: 35
+  },
+  infra: {
+    pushedWithin90Days: 25, hasReadme: 20, hasLicense: 10,
+    hasTests: 10, starsOverOne: 5, updatedWithin180Days: 30, baseline: 0
+  },
+  experiment: {
+    pushedWithin90Days: 20, hasReadme: 15, hasLicense: 0,
+    hasTests: 0, starsOverOne: 5, updatedWithin180Days: 15, baseline: 45
+  },
+  template: {
+    pushedWithin90Days: 10, hasReadme: 25, hasLicense: 10,
+    hasTests: 5, starsOverOne: 10, updatedWithin180Days: 10, baseline: 30
+  }
+};
+
+const DEFAULT_WEIGHTS = CATEGORY_WEIGHTS.tooling;
+
 export function scoreRepository(repository, asOfDate) {
-  let score = 0;
+  const category = repository.category ?? 'tooling';
+  const weights = CATEGORY_WEIGHTS[category] ?? DEFAULT_WEIGHTS;
+
+  let score = weights.baseline ?? 0;
   const breakdown = {
+    baseline: weights.baseline ?? 0,
     pushedWithin90Days: 0,
     hasReadme: 0,
     hasLicense: 0,
@@ -11,34 +52,34 @@ export function scoreRepository(repository, asOfDate) {
     updatedWithin180Days: 0
   };
 
-  if (daysSince(repository.pushedAt, asOfDate) <= 90) {
-    score += 30;
-    breakdown.pushedWithin90Days = 30;
+  if (weights.pushedWithin90Days > 0 && daysSince(repository.pushedAt, asOfDate) <= 90) {
+    score += weights.pushedWithin90Days;
+    breakdown.pushedWithin90Days = weights.pushedWithin90Days;
   }
 
-  if (repository.structuralHealth?.hasReadme) {
-    score += 15;
-    breakdown.hasReadme = 15;
+  if (weights.hasReadme > 0 && repository.structuralHealth?.hasReadme) {
+    score += weights.hasReadme;
+    breakdown.hasReadme = weights.hasReadme;
   }
 
-  if (repository.structuralHealth?.hasLicense) {
-    score += 10;
-    breakdown.hasLicense = 10;
+  if (weights.hasLicense > 0 && repository.structuralHealth?.hasLicense) {
+    score += weights.hasLicense;
+    breakdown.hasLicense = weights.hasLicense;
   }
 
-  if (repository.structuralHealth?.hasTests) {
-    score += 20;
-    breakdown.hasTests = 20;
+  if (weights.hasTests > 0 && repository.structuralHealth?.hasTests) {
+    score += weights.hasTests;
+    breakdown.hasTests = weights.hasTests;
   }
 
-  if ((repository.stargazersCount ?? 0) > 1) {
-    score += 5;
-    breakdown.starsOverOne = 5;
+  if (weights.starsOverOne > 0 && (repository.stargazersCount ?? 0) > 1) {
+    score += weights.starsOverOne;
+    breakdown.starsOverOne = weights.starsOverOne;
   }
 
-  if (daysSince(repository.updatedAt, asOfDate) <= 180) {
-    score += 20;
-    breakdown.updatedWithin180Days = 20;
+  if (weights.updatedWithin180Days > 0 && daysSince(repository.updatedAt, asOfDate) <= 180) {
+    score += weights.updatedWithin180Days;
+    breakdown.updatedWithin180Days = weights.updatedWithin180Days;
   }
 
   return {
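The weight table changes the arithmetic per category. A self-contained worked example of the new rule (weights copied from `CATEGORY_WEIGHTS` above; `daysSince` approximated with raw `Date` math, and repo field names simplified relative to the real `structuralHealth` shape): a `content` repo pushed last week, with a README but no tests, no license, and one star, scores baseline 25 + pushed 25 + readme 15 + updated 30 = 95, while the same signals under `tooling` weights yield 25 + 15 + 25 = 65.

```javascript
// Sketch of the per-category scoring shown above, trimmed to two categories.
const weights = {
  content: { pushedWithin90Days: 25, hasReadme: 15, hasLicense: 0, hasTests: 0, starsOverOne: 5, updatedWithin180Days: 30, baseline: 25 },
  tooling: { pushedWithin90Days: 25, hasReadme: 15, hasLicense: 10, hasTests: 20, starsOverOne: 5, updatedWithin180Days: 25, baseline: 0 }
};

// Approximation of daysSince: whole days between an ISO date and asOf.
const daysSince = (iso, asOf) => (asOf - new Date(iso)) / 86_400_000;

function score(repo, asOf) {
  const w = weights[repo.category] ?? weights.tooling;
  let total = w.baseline;
  if (daysSince(repo.pushedAt, asOf) <= 90) total += w.pushedWithin90Days;
  if (repo.hasReadme) total += w.hasReadme;
  if (repo.hasLicense) total += w.hasLicense;
  if (repo.hasTests) total += w.hasTests;
  if ((repo.stars ?? 0) > 1) total += w.starsOverOne;
  if (daysSince(repo.updatedAt, asOf) <= 180) total += w.updatedWithin180Days;
  return total;
}

const asOf = new Date('2026-04-01');
const notes = {
  category: 'content', pushedAt: '2026-03-20', updatedAt: '2026-03-20',
  hasReadme: true, hasLicense: false, hasTests: false, stars: 1
};
console.log(score(notes, asOf));                             // 95
console.log(score({ ...notes, category: 'tooling' }, asOf)); // 65
```

The baseline keeps note dumps and experiments from being punished for signals (tests, license) that never apply to them, while active, well-documented products can still out-score them.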
@@ -1,17 +1,57 @@
 import { formatNextAction, normalizeNextAction } from '../utils/nextAction.js';
 
+function inferRepoCategory(repository) {
+  const name = String(repository.name ?? '').toLowerCase();
+  const desc = String(repository.description ?? '').toLowerCase();
+  const topics = Array.isArray(repository.topics)
+    ? repository.topics.map((topic) => String(topic).toLowerCase())
+    : [];
+  const all = [name, desc, ...topics].join(' ');
+
+  if (/\b(prompt|note|notes|snippet|snippets|cheatsheet|doc|docs|documentation|knowledge|wiki|resource|resources|writing|content|guide|guides|cookbook)\b/.test(all)) {
+    return 'content';
+  }
+
+  if (/\b(learn|learning|study|exercise|exercises|course|tutorial|tutorials|practice|training|bootcamp|challenge|challenges|kata)\b/.test(all)) {
+    return 'learning';
+  }
+
+  if (/\b(template|templates|boilerplate|starter|scaffold|skeleton|seed|base|init)\b/.test(all)) {
+    return 'template';
+  }
+
+  if (/\b(lib|library|sdk|package|npm|module|plugin|extension|addon|util|utils|helper|helpers)\b/.test(all)) {
+    return 'library';
+  }
+
+  if (/\b(infra|infrastructure|docker|kubernetes|k8s|ci|cd|pipeline|deploy|devops|ansible|terraform|nginx|proxy)\b/.test(all)) {
+    return 'infra';
+  }
+
+  if (/\b(poc|proof|experiment|spike|demo|prototype|sandbox|playground|try|trying)\b/.test(all)) {
+    return 'experiment';
+  }
+
+  if (/\b(app|application|system|platform|service|api|backend|frontend|web|mobile|dashboard|portal|saas)\b/.test(all)) {
+    return 'product';
+  }
+
+  return 'tooling';
+}
+
 export function buildRepoTaxonomy(repository) {
   const activityState = repository.activity;
   const state = repository.archived ? 'archived' : normalizeState(activityState, 'active');
 
-  const category = 'tooling';
+  const userCategory = normalizeCategory(repository.category);
+  const category = userCategory ?? inferRepoCategory(repository);
   const strategy = 'maintenance';
   const effort = 'm';
   const value = 'medium';
   const nextAction = defaultRepoNextAction(state);
 
   const sources = {
-    category: 'default',
+    category: userCategory ? 'user' : 'inferred',
     state: repository.archived ? 'inferred' : 'inferred',
     strategy: 'default',
     effort: 'default',
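Because the checks in `inferRepoCategory` run in a fixed order, the first matching bucket wins: a repo whose name matches both the content and the product patterns lands in `content`. A trimmed, self-contained sketch of that first-match-wins behavior — only two of the eight rules are reproduced, with word lists taken from the regexes above:

```javascript
// Trimmed restatement of inferRepoCategory's precedence: content is
// checked before product, so 'docs-platform' is content, not product.
function inferCategory({ name = '', description = '', topics = [] }) {
  const all = [name, description, ...topics].join(' ').toLowerCase();
  if (/\b(doc|docs|notes|wiki|guide|guides)\b/.test(all)) return 'content';
  if (/\b(app|platform|service|api|saas)\b/.test(all)) return 'product';
  return 'tooling';
}

console.log(inferCategory({ name: 'docs-platform' })); // 'content'
console.log(inferCategory({ name: 'billing-api' }));   // 'product'
console.log(inferCategory({ name: 'misc-scripts' }));  // 'tooling'
```

Note that `\b` treats hyphens as word boundaries, so keywords match inside slug-style names like `docs-platform`; a user-supplied `category` in the portfolio file bypasses the heuristic entirely.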
package/src/errors.js CHANGED
File without changes
File without changes
File without changes
@@ -1,5 +1,39 @@
 const PAGE_SIZE = 100;
 
+/**
+ * Classifies a fork as active or passive.
+ * Active forks have commits ahead of the upstream default branch.
+ */
+export async function classifyFork(client, repo) {
+  if (!repo?.fork) {
+    return null;
+  }
+
+  const parent = repo.parent;
+  if (!parent) {
+    return 'passive';
+  }
+
+  const ownerLogin = repo.owner?.login ?? repo.ownerLogin;
+  const parentOwner = parent.owner?.login;
+  const parentBranch = parent.default_branch ?? parent.defaultBranch ?? 'main';
+  const branch = repo.default_branch ?? repo.defaultBranch ?? 'main';
+
+  if (!ownerLogin || !parentOwner || !repo.name) {
+    return 'passive';
+  }
+
+  try {
+    const comparison = await client.request(
+      `/repos/${encodeURIComponent(ownerLogin)}/${encodeURIComponent(repo.name)}/compare/${encodeURIComponent(parentOwner)}:${encodeURIComponent(parentBranch)}...${encodeURIComponent(ownerLogin)}:${encodeURIComponent(branch)}`
+    );
+
+    return (comparison?.ahead_by ?? 0) > 0 ? 'active' : 'passive';
+  } catch {
+    return 'passive';
+  }
+}
+
 export async function fetchAllRepositories(client) {
   const repositories = [];
 
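`classifyFork` only needs an object exposing `request(path)`, so the decision logic can be exercised with a stub in place of the real GitHub client. A sketch under that assumption — the inlined `classify` below is a condensed restatement of the function above, and `ahead_by` is the field the GitHub compare API returns for commits the head has over the base:

```javascript
// Stub client with the request(path) shape classifyFork expects; a real
// client would issue GET /repos/{owner}/{repo}/compare/{base}...{head}.
const stubClient = (aheadBy) => ({
  request: async () => ({ ahead_by: aheadBy })
});

// Condensed restatement of the rule above: a fork with commits ahead of
// upstream is 'active'; missing parent data or API errors mean 'passive'.
async function classify(client, repo) {
  if (!repo?.fork) return null;
  if (!repo.parent?.owner?.login) return 'passive';
  try {
    const comparison = await client.request('/compare'); // real path elided in the stub
    return (comparison?.ahead_by ?? 0) > 0 ? 'active' : 'passive';
  } catch {
    return 'passive';
  }
}

const fork = {
  fork: true, name: 'my-fork',
  owner: { login: 'me' }, parent: { owner: { login: 'upstream' } }
};
console.log(await classify(stubClient(3), fork)); // 'active'
console.log(await classify(stubClient(0), fork)); // 'passive'
```

Defaulting to `'passive'` on any failure is deliberate: a misclassified passive fork costs nothing, whereas a thrown error here would abort the whole `analyze` run.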
@@ -26,6 +60,17 @@ export async function fetchAllRepositories(client) {
   }
 
   repositories.sort((left, right) => left.full_name.localeCompare(right.full_name));
+
+  const forks = repositories.filter((repository) => repository.fork);
+  for (let index = 0; index < forks.length; index += 5) {
+    const batch = forks.slice(index, index + 5);
+    await Promise.all(
+      batch.map(async (repository) => {
+        repository.forkType = await classifyFork(client, repository);
+      })
+    );
+  }
+
   return repositories;
 }
 
@@ -39,6 +84,8 @@ export function normalizeRepository(repo) {
     private: repo.private,
     archived: repo.archived,
     fork: repo.fork,
+    forkType: repo.forkType ?? null,
+    parent: repo.parent ?? null,
     htmlUrl: repo.html_url,
     description: repo.description,
     language: repo.language,
package/src/io/csv.js CHANGED
File without changes
package/src/io/files.js CHANGED
File without changes
File without changes
package/src/io/report.js CHANGED
@@ -108,7 +108,8 @@ function renderAsciiBandSection(title, items) {
   }
 
   items.slice(0, 5).forEach((item, index) => {
-    lines.push(`${index + 1}) ${item.slug} Score ${item.score} CL${item.completionLevel} — Effort ${item.effortEstimate} — State ${item.state}`);
+    const categoryPrefix = item.category == null ? '' : `[${item.category}] `;
+    lines.push(`${index + 1}) ${categoryPrefix}${item.slug} — Score ${item.score} — CL${item.completionLevel} — Effort ${item.effortEstimate} — State ${item.state}`);
     lines.push(`   Why: ${item.priorityWhy?.join('; ') ?? ''}`);
     lines.push(`   Next: ${item.nextAction}`);
   });
@@ -129,7 +130,8 @@ function renderMarkdownBandSection(title, items) {
   }
 
   items.slice(0, 5).forEach((item, index) => {
-    lines.push(`${index + 1}. **${item.slug}** Score ${item.score} CL${item.completionLevel} — Effort ${item.effortEstimate} — State ${item.state}`);
+    const categoryPrefix = item.category == null ? '' : `\`${item.category}\` `;
+    lines.push(`${index + 1}. ${categoryPrefix}**${item.slug}** — Score ${item.score} — CL${item.completionLevel} — Effort ${item.effortEstimate} — State ${item.state}`);
     lines.push(`   - Why: ${item.priorityWhy?.join('; ') ?? ''}`);
     lines.push(`   - Next: ${item.nextAction}`);
   });
package/src/utils/args.js CHANGED
File without changes
File without changes
@@ -13,9 +13,9 @@ ${AMBER} later█░░░ ↑${RESET}
 ${DIM} ↓${RESET}
 ${GREEN} ✓ report.json${RESET}`;
 
-export function printHeader({ command: _command, asOfDate, outputDir, hasToken, hasPolicy, version }) {
+export function printHeader({ command: _command, asOfDate, outputDir, hasToken, hasPolicy, version, username }) {
   const node = process.version;
-  const user = process.env.GITHUB_USERNAME ?? '—';
+  const user = username ?? process.env.GITHUB_USERNAME ?? '—';
   const token = hasToken ? `${GREEN}✓ set${RESET}` : `${AMBER}not set${RESET}`;
   const policy = hasPolicy ? `${GREEN}✓ set${RESET}` : `${GRAY}not set${RESET}`;
   const ver = version ?? packageJson.version;
File without changes
File without changes
File without changes
package/src/utils/slug.js CHANGED
File without changes
package/src/utils/time.js CHANGED
File without changes