explainthisrepo 0.6.1 → 0.9.0

package/README.md CHANGED
@@ -1,6 +1,8 @@
  # ExplainThisRepo
 
- ExplainThisRepo is a CLI that generates plain-English explanations of any codebase (GitHub repositories and local directories) by analyzing project structure, README content, and high signal files.
+ ExplainThisRepo is a CLI that generates plain-English explanations of any codebase (GitHub repositories and local directories) by analyzing project structure, READMEs, and high-signal files.
+
+ ExplainThisRepo is a command-line tool that analyzes GitHub repositories and local directories to generate plain-English explanations of the codebase architecture.
 
  It helps developers quickly understand unfamiliar codebases by deriving architectural explanations from real project structure and code signals, producing a clear, structured `EXPLAIN.md`.
 
@@ -26,9 +28,9 @@ It helps developers quickly understand unfamiliar codebases by deriving architec
  - Builds a file tree summary to understand project architecture
  - Detects programming languages with the GitHub API
  - Analyzes local project directories using the same pipeline as GitHub repositories
- - Generates a structured plain English explanation grounded in actual project files
- - Outputs the explanation to an `EXPLAIN.md` file in your current directory or print it directly in the terminal
- - Multi mode command-line interface
+ - Generates a structured plain-English explanation grounded in actual project files
+ - Outputs the explanation to an `EXPLAIN.md` file in your current directory or prints it directly in the terminal
+ - Multi-mode command-line interface
 
  ---
 
@@ -48,29 +50,27 @@ It helps developers quickly understand unfamiliar codebases by deriving architec
 
  - `--help` → Show usage guide
 
- - `--doctor` → Check environmental health and API connectivity
+ - `--doctor` → Check system health and active model diagnostics
 
  ---
 
  ## Configuration
 
- ExplainThisRepo uses Gemini models for code analysis.
-
- Set your Google Gemini API key as an environment variable.
+ ExplainThisRepo supports multiple LLM models:
 
- Linux / macOS
+ - Gemini
+ - OpenAI
+ - Ollama (local or cloud-routed)
 
- ```bash
- export GEMINI_API_KEY="your_api_key_here"
- ```
+ ### Quick setup (recommended)
 
- Windows (PowerShell)
+ Use the built-in `init` command to configure your preferred model:
 
  ```bash
- setx GEMINI_API_KEY "your_api_key_here"
+ explainthisrepo init
+ # or npx explainthisrepo init
  ```
-
- Restart your terminal after setting the key.
+ > For details about how initialization works, see [INIT.md](INIT.md).
 
  ## Installation
 
@@ -90,6 +90,13 @@ pipx install explainthisrepo
  explainthisrepo owner/repo
  ```
 
+ To install support for specific models:
+
+ ```bash
+ pip install explainthisrepo[gemini]
+ pip install explainthisrepo[openai]
+ ```
+
  ### Option 2: Install with npm
 
  Install globally and use forever:
@@ -121,6 +128,18 @@ All inputs are normalized internally to `owner/repo`.
 
  ---
 
+ ## Model selection
+
+ The `--llm` flag selects which configured model backend to use for the current command.
+
+ ```bash
+ explainthisrepo owner/repo --llm gemini
+ explainthisrepo owner/repo --llm openai
+ explainthisrepo owner/repo --llm ollama
+ ```
+
+ `--llm` works with all modes (`--quick`, `--simple`, `--detailed`).
+
  ## Usage
 
  ### Basic
@@ -183,7 +202,7 @@ explainthisrepo owner/repo --stack
  ```
  ![Stack detector Output](assets/stack-command-output.png)
 
- ### Local Directory Analysis
+ ## Local Directory Analysis
 
  ExplainThisRepo can analyze local directories directly in the terminal, using the same modes and output formats as GitHub repositories.
 
@@ -203,7 +222,7 @@ explainthisrepo . --stack
 
  When analyzing a local directory:
  - Repository structure is derived from the filesystem
- - Key files (README, configs, entrypoints) are extracted locally
+ - High-signal files (configs, README, entrypoints) are extracted locally
  - No GitHub API calls are made
  - All prompts and outputs remain identical
 
@@ -219,15 +238,22 @@ explainthisrepo --version
 
  ---
 
- ### Doctor
-
- Check environment and connectivity (useful for debugging):
+ ### Diagnostics
+ Use the `--doctor` flag to verify the environment, network connectivity, and API key configuration:
 
  ```bash
  explainthisrepo --doctor
  ```
 
- ### Termux (Android) install notes
+ ### Set GitHub Token
+
+ Setting a `GITHUB_TOKEN` environment variable is recommended to avoid rate limits when analyzing public repositories.
+
+ ```bash
+ export GITHUB_TOKEN=yourActualTokenHere
+ ```
+
+ ## Termux (Android) install notes
 
  Termux has some environment limitations that can make `pip install explainthisrepo` fail to create the `explainthisrepo` command in `$PREFIX/bin`.
 
@@ -238,12 +264,14 @@ pip install --user -U explainthisrepo
  ```
 
  Make sure your user bin directory is on your PATH:
+
  ```bash
  export PATH="$HOME/.local/bin:$PATH"
  ```
+
  > Tip: Add the PATH export to your ~/.bashrc or ~/.zshrc so it persists.
 
- Alternative (No PATH changes)
+ ### Alternative (No PATH changes)
 
  If you do not want to modify PATH, you can run ExplainThisRepo as a module:
 
@@ -251,7 +279,7 @@ If you do not want to modify PATH, you can run ExplainThisRepo as a module:
  python -m explain_this_repo owner/repo
  ```
 
- ### Gemini support on Termux (Optional)
+ ### Gemini support on Termux
 
  Installing Gemini support may require building Rust-based dependencies on Android, which can take time on first install:
 
@@ -281,5 +309,5 @@ This project is licensed under the MIT License. See the [LICENSE](LICENSE) file
  Caleb Wodi
 
  - Email: caleb@explainthisrepo.com
- - Twitter: [@calchiwo](https://x.com/calchiwo)
- - LinkedIn: [@calchiwo](https://linkedin.com/in/calchiwo)
+ - LinkedIn: [@calchiwo](https://linkedin.com/in/calchiwo)
+ - Twitter: [@calchiwo](https://x.com/calchiwo)
package/dist/cli.js CHANGED
@@ -64,9 +64,6 @@ function getPkgVersion() {
          return "unknown";
      }
  }
- function printVersion() {
-     console.log(getPkgVersion());
- }
  function hasEnv(key) {
      const v = process.env[key];
      return Boolean(v && v.trim());
@@ -90,21 +87,63 @@ async function checkUrl(url, timeoutMs = 6000) {
          return { ok: false, msg: `failed (${name}: ${message})` };
      }
  }
- async function runDoctor() {
+ async function runDoctor(llmOverride) {
      console.log("explainthisrepo doctor report\n");
      console.log(`node: ${process.version}`);
      console.log(`os: ${os.type()} ${os.release()}`);
      console.log(`platform: ${process.platform} ${process.arch}`);
      console.log(`version: ${getPkgVersion()}`);
      console.log("\nenvironment:");
-     console.log(`- GEMINI_API_KEY set: ${hasEnv("GEMINI_API_KEY")}`);
      console.log(`- GITHUB_TOKEN set: ${hasEnv("GITHUB_TOKEN")}`);
      console.log("\nnetwork checks:");
      const gh = await checkUrl("https://api.github.com");
      console.log(`- github api: ${gh.msg}`);
-     const gem = await checkUrl("https://generativelanguage.googleapis.com");
-     console.log(`- gemini endpoint: ${gem.msg}`);
-     return gh.ok && gem.ok ? 0 : 1;
+     console.log("\nprovider diagnostics:");
+     let providerOk = true;
+     try {
+         const { getActiveProvider } = await import("./providers/registry.js");
+         const provider = await getActiveProvider(llmOverride);
+         const providerName = provider.name ?? llmOverride ?? "unknown";
+         console.log(`- active provider: ${providerName}`);
+         const doctorFn = provider.doctor;
+         if (typeof doctorFn === "function") {
+             const result = await doctorFn.call(provider);
+             if (typeof result === "boolean") {
+                 console.log(`- ${providerName}: ${result ? "ok" : "checks did not pass"}`);
+                 providerOk = result;
+             }
+             else if (Array.isArray(result)) {
+                 if (result.length === 0) {
+                     console.log(`- ${providerName}: ok`);
+                 }
+                 else {
+                     for (const line of result) {
+                         console.log(`- ${providerName}: ${line}`);
+                     }
+                     providerOk = false;
+                 }
+             }
+             else {
+                 console.log(`- ${providerName}: ok`);
+             }
+         }
+         else {
+             console.log(`- ${providerName}: no diagnostics implemented`);
+         }
+     }
+     catch (e) {
+         const message = e instanceof Error ? e.message : String(e);
+         if (llmOverride) {
+             console.log(`- provider '${llmOverride}' could not be resolved`);
+             console.log("- check that the provider name is correct and properly configured");
+         }
+         else {
+             console.log(`- provider registry error: ${message}`);
+             console.log("- run `explainthisrepo init` to configure a provider");
+         }
+         providerOk = false;
+     }
+     return gh.ok && providerOk ? 0 : 1;
  }
  async function safeReadRepoFiles(owner, repo) {
      try {
@@ -116,60 +155,24 @@ async function safeReadRepoFiles(owner, repo) {
          return null;
      }
  }
- async function generateWithExit(prompt) {
+ async function generateWithExit(prompt, llm) {
      try {
-         return await generateExplanation(prompt);
+         return await generateExplanation(prompt, llm);
      }
      catch (e) {
          const message = e instanceof Error ? e.message : String(e);
          console.error("Failed to generate explanation.");
          console.error(`error: ${message}`);
          console.error("\nfix:");
-         console.error("- Ensure GEMINI_API_KEY is set");
+         console.error("- Check that the provider name is correct (e.g. gemini, openai, ollama)");
+         console.error("- Ensure your API key is set for the selected provider");
          console.error("- Or run: explainthisrepo --doctor");
          process.exit(1);
      }
  }
- async function main() {
-     const program = new Command();
-     program
-         .name("explainthisrepo")
-         .description("CLI that generates plain English explanations of any codebase")
-         .version(getPkgVersion(), "-v, --version", "Show version")
-         .argument("[repository]", "GitHub repository (owner/repo or URL) or local directories")
-         .option("--doctor", "Run diagnostics")
-         .option("--quick", "Quick summary mode")
-         .option("--simple", "Simple summary mode")
-         .option("--detailed", "Detailed explanation mode")
-         .option("--stack", "Stack detection mode")
-         .addHelpText("after", `
- Examples:
- $ explainthisrepo owner/repo
- $ explainthisrepo https://github.com/owner/repo
- $ explainthisrepo github.com/owner/repo
- $ explainthisrepo git@github.com:owner/repo.git
- $ explainthisrepo owner/repo --detailed
- $ explainthisrepo owner/repo --quick
- $ explainthisrepo owner/repo --simple
- $ explainthisrepo owner/repo --stack
- $ explainthisrepo .
- $ explainthisrepo ./path/to/directory
- $ explainthisrepo . --stack
- $ explainthisrepo --doctor`);
-     program
-         .command("init")
-         .description("Initialize configuration with Gemini API key")
-         .action(async () => {
-             await runInit();
-         });
-     program.parse(process.argv);
-     if (process.argv[2] === "init") {
-         return;
-     }
-     const options = program.opts();
-     const repository = program.args[0];
+ async function runAnalysis(repository, options) {
      if (options.doctor) {
-         const code = await runDoctor();
+         const code = await runDoctor(options.llm);
          process.exit(code);
      }
      const modeFlags = [
@@ -182,9 +185,6 @@ Examples:
          console.error("error: only one mode flag can be used at a time");
          process.exit(1);
      }
-     if (!repository) {
-         program.error("repository argument required (or use `init` to set up API key)");
-     }
      const local = fs.existsSync(repository);
      let owner = "";
      let repo = "";
@@ -266,7 +266,9 @@ Examples:
      if (options.quick) {
          let quickReadme = readme;
          const repoName = local ? localPath : (repoData?.full_name ?? "");
-         const description = local ? null : (repoData?.description ?? null);
+         const description = local
+             ? null
+             : (repoData?.description ?? null);
          if (local) {
              const spinner = ora("Reading repository files…").start();
              try {
@@ -282,7 +284,7 @@ Examples:
          }
          const prompt = buildQuickPrompt(repoName, description, quickReadme);
          const spinner = ora("Generating explanation…").start();
-         const output = await generateWithExit(prompt).finally(() => spinner.stop());
+         const output = await generateWithExit(prompt, options.llm).finally(() => spinner.stop());
          console.log("Quick summary 🎉");
          console.log(output.trim());
          return;
@@ -302,7 +304,7 @@ Examples:
          }
          const prompt = buildSimplePrompt(local ? localPath : (repoData?.full_name ?? ""), local ? null : (repoData?.description ?? null), local ? null : readme, readResult?.treeText ?? null);
          const genSpinner = ora("Generating explanation…").start();
-         const output = await generateWithExit(prompt).finally(() => genSpinner.stop());
+         const output = await generateWithExit(prompt, options.llm).finally(() => genSpinner.stop());
          console.log("Simple summary 🎉");
          console.log(output.trim());
          return;
@@ -321,7 +323,7 @@ Examples:
      }
      const prompt = buildPrompt(local ? localPath : (repoData?.full_name ?? ""), local ? null : (repoData?.description ?? null), local ? null : readme, options.detailed || false, readResult?.treeText ?? null, readResult?.filesText ?? null);
      const genSpinner = ora("Generating explanation…").start();
-     const output = await generateWithExit(prompt).finally(() => genSpinner.stop());
+     const output = await generateWithExit(prompt, options.llm).finally(() => genSpinner.stop());
      console.log("Writing EXPLAIN.md...");
      writeOutput(output);
      const wordCount = output.split(/\s+/).filter(Boolean).length;
@@ -329,4 +331,53 @@ Examples:
      console.log(`Words: ${wordCount}`);
      console.log("Open EXPLAIN.md to read it.");
  }
- main();
+ const program = new Command();
+ program
+     .name("explainthisrepo")
+     .description("CLI that generates plain English explanations of any codebase")
+     .version(getPkgVersion(), "-v, --version", "Show version")
+     .argument("[repository]", "GitHub repository (owner/repo or URL) or local directories")
+     .option("--doctor", "Run diagnostics")
+     .option("--quick", "Quick summary mode")
+     .option("--simple", "Simple summary mode")
+     .option("--detailed", "Detailed explanation mode")
+     .option("--stack", "Stack detection mode")
+     .option("--llm <provider>", "LLM provider to use (e.g. gemini, openai, ollama). Overrides config default.")
+     .addHelpText("after", `
+ Examples:
+ $ explainthisrepo owner/repo
+ $ explainthisrepo https://github.com/owner/repo
+ $ explainthisrepo github.com/owner/repo
+ $ explainthisrepo git@github.com:owner/repo.git
+ $ explainthisrepo owner/repo --detailed
+ $ explainthisrepo owner/repo --quick
+ $ explainthisrepo owner/repo --simple
+ $ explainthisrepo owner/repo --stack
+ $ explainthisrepo owner/repo --llm gemini
+ $ explainthisrepo owner/repo --llm openai
+ $ explainthisrepo owner/repo --llm ollama
+ $ explainthisrepo .
+ $ explainthisrepo ./path/to/directory
+ $ explainthisrepo . --stack
+ $ explainthisrepo --doctor
+ $ explainthisrepo --doctor --llm gemini
+ $ explainthisrepo --doctor --llm openai
+ $ explainthisrepo --doctor --llm ollama`)
+     .action(async (repository, options) => {
+         if (options.doctor) {
+             const code = await runDoctor(options.llm);
+             process.exit(code);
+         }
+         if (!repository) {
+             program.error("repository argument required (or use `explainthisrepo init` to configure a provider)");
+             return;
+         }
+         await runAnalysis(repository, options);
+     });
+ program
+     .command("init")
+     .description("Configure your LLM provider (Gemini, OpenAI, or Ollama)")
+     .action(async () => {
+         await runInit();
+     });
+ program.parseAsync(process.argv);
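The provider diagnostics added to `runDoctor` above duck-type an optional `doctor()` method on the active provider: a boolean result means pass/fail, an array result is treated as a list of problem lines (an empty array reads as ok, any entries mark the check as failed), and anything else falls through to ok. A minimal sketch of the two return shapes, using hypothetical `doctor()` implementations rather than the package's own providers:

```typescript
// Hypothetical doctor() implementations illustrating the shapes the
// runDoctor dispatch above handles; they are not part of the package.

// Boolean form: true prints "<provider>: ok",
// false prints "<provider>: checks did not pass".
async function booleanDoctor(): Promise<boolean> {
    return true;
}

// Array form: an empty array prints "<provider>: ok"; any entries are
// printed one per line and set providerOk = false.
async function arrayDoctor(): Promise<string[]> {
    return []; // e.g. ["API key missing"] would be reported as a failure
}
```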
package/dist/config.d.ts CHANGED
@@ -2,3 +2,4 @@ export declare function getConfigPath(): string;
  export declare function ensureConfigDir(): string;
  export declare function writeConfig(contents: string): void;
  export declare function readConfig(): string | null;
+ export declare function loadConfig(): any;
package/dist/config.js CHANGED
@@ -1,6 +1,7 @@
  import fs from "node:fs";
  import os from "node:os";
  import path from "node:path";
+ import toml from "@iarna/toml";
  const CONFIG_DIR_NAME = "ExplainThisRepo";
  const CONFIG_FILE_NAME = "config.toml";
  export function getConfigPath() {
@@ -31,3 +32,15 @@ export function readConfig() {
          return null;
      return fs.readFileSync(path, "utf-8");
  }
+ export function loadConfig() {
+     const raw = readConfig();
+     if (!raw) {
+         return {};
+     }
+     try {
+         return toml.parse(raw);
+     }
+     catch (err) {
+         throw new Error("Invalid config.toml format");
+     }
+ }
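For reference, a sketch of the `config.toml` shape `loadConfig` is expected to parse, matching the template the reworked `init` flow (shown later in this diff) writes. Each `init` run rewrites the file with a single `[providers.*]` table; the Gemini and OpenAI tables use `api_key = "..."` in place of the `model`/`host` keys shown here:

```toml
[llm]
provider = "ollama"

[providers.ollama]
model = "llama3"
host = "http://localhost:11434"
```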
package/dist/generate.d.ts CHANGED
@@ -1 +1 @@
- export declare function generateExplanation(prompt: string): Promise<string>;
+ export declare function generateExplanation(prompt: string, providerOverride?: string): Promise<string>;
package/dist/generate.js CHANGED
@@ -1,36 +1,15 @@
- import { GoogleGenerativeAI } from "@google/generative-ai";
- const DEFAULT_MODEL = "gemini-2.5-flash-lite";
- function getApiKey() {
-     const key = process.env.GEMINI_API_KEY;
-     if (!key || !key.trim()) {
-         throw new Error([
-             "GEMINI_API_KEY is not set.",
-             "",
-             "Fix:",
-             ' export GEMINI_API_KEY="your_key_here"',
-         ].join("\n"));
-     }
-     return key.trim();
- }
- export async function generateExplanation(prompt) {
-     const apiKey = getApiKey();
-     const genAI = new GoogleGenerativeAI(apiKey);
-     const modelName = (process.env.GEMINI_MODEL || DEFAULT_MODEL).trim();
-     const model = genAI.getGenerativeModel({ model: modelName });
+ import { getActiveProvider } from "./providers/registry.js";
+ export async function generateExplanation(prompt, providerOverride) {
+     const provider = getActiveProvider(providerOverride);
      try {
-         const result = await model.generateContent(prompt);
-         const text = result?.response?.text?.() ?? "";
-         if (!text.trim()) {
-             throw new Error("Gemini returned no text");
+         const output = await provider.generate(prompt);
+         if (!output || !output.trim()) {
+             throw new Error(`${provider.name} returned no output`);
          }
-         return text.trim();
+         return output.trim();
      }
      catch (err) {
-         const msg = err?.message ? String(err.message) : String(err);
-         throw new Error([
-             "Failed to generate explanation (Gemini).",
-             `Model: ${modelName}`,
-             `Error: ${msg}`,
-         ].join("\n"));
+         const message = err?.message ? String(err.message) : String(err);
+         throw new Error(`${provider.name} generation failed: ${message}`);
      }
  }
package/dist/init.js CHANGED
@@ -2,29 +2,94 @@ import readline from "node:readline";
  import process from "node:process";
  import chalk from "chalk";
  import { writeConfig } from "./config.js";
- const CONFIG_TEMPLATE = `[llm]
- provider = "gemini"
- api_key = "{api_key}"
- `;
+ const PROVIDERS = {
+     "1": "gemini",
+     "2": "openai",
+     "3": "ollama"
+ };
  export async function runInit() {
      const err = process.stderr;
-     err.write(chalk.yellow("WARNING: input is hidden. Paste your GEMINI_API_KEY and press Enter.\n\n"));
+     err.write(chalk.yellow("WARNING: input is hidden where applicable. Configuration will be written once.\n\n"));
      try {
-         const apiKey = (await promptHidden("Gemini API key: ")).trim();
-         if (!apiKey) {
-             err.write(chalk.red("error: API key cannot be empty\n"));
-             process.exit(1);
+         const provider = await promptProvider();
+         const providerConfig = await promptProviderConfig(provider);
+         const lines = [
+             "[llm]",
+             `provider = "${provider}"`,
+             "",
+             `[providers.${provider}]`
+         ];
+         for (const [k, v] of Object.entries(providerConfig)) {
+             lines.push(`${k} = "${v}"`);
          }
-         writeConfig(CONFIG_TEMPLATE.replace("{api_key}", apiKey));
-         err.write("\r");
-         err.write("\x1b[2K");
+         const contents = lines.join("\n") + "\n";
+         writeConfig(contents);
          err.write(chalk.green("Configuration written.\n"));
          process.exit(0);
      }
-     catch {
-         err.write(chalk.red("\nInterrupted.\n"));
-         process.exit(130);
+     catch (err) {
+         if (err?.name === "AbortError") {
+             process.stderr.write(chalk.red("\nInterrupted.\n"));
+             process.exit(130);
+         }
+         process.stderr.write(chalk.red(`error: ${err?.message ?? err}\n`));
+         process.exit(1);
+     }
+ }
+ async function promptProvider() {
+     const err = process.stderr;
+     err.write(chalk.bold("Select LLM provider:\n"));
+     err.write(" 1) Gemini\n");
+     err.write(" 2) OpenAI\n");
+     err.write(" 3) Ollama (local)\n");
+     const choice = (await prompt("> ")).trim();
+     const provider = PROVIDERS[choice];
+     if (!provider) {
+         throw new Error("invalid provider selection");
+     }
+     return provider;
+ }
+ async function promptProviderConfig(provider) {
+     if (provider === "gemini") {
+         const key = (await promptHidden("Gemini API key: ")).trim();
+         if (!key) {
+             throw new Error("API key cannot be empty");
+         }
+         return { api_key: key };
+     }
+     if (provider === "openai") {
+         const key = (await promptHidden("OpenAI API key: ")).trim();
+         if (!key) {
+             throw new Error("API key cannot be empty");
+         }
+         return { api_key: key };
      }
+     if (provider === "ollama") {
+         const model = (await prompt("Ollama model (e.g. llama3, glm-5:cloud): ")).trim();
+         if (!model) {
+             throw new Error("Model cannot be empty");
+         }
+         const host = (await prompt("Ollama host [http://localhost:11434]: ")).trim()
+             || "http://localhost:11434";
+         return {
+             model,
+             host
+         };
+     }
+     throw new Error(`Unsupported provider: ${provider}`);
+ }
+ function prompt(label) {
+     const rl = readline.createInterface({
+         input: process.stdin,
+         output: process.stderr,
+         terminal: true
+     });
+     return new Promise((resolve) => {
+         rl.question(label, (answer) => {
+             rl.close();
+             resolve(answer);
+         });
+     });
  }
  function promptHidden(label) {
      const err = process.stderr;
@@ -33,7 +98,7 @@ function promptHidden(label) {
      const rl = readline.createInterface({
          input: process.stdin,
          output: undefined,
-         terminal: true,
+         terminal: true
      });
      rl._writeToOutput = () => { };
      rl.question("", (answer) => {
package/dist/providers/base.d.ts ADDED
@@ -0,0 +1,8 @@
+ export declare class LLMProviderError extends Error {
+     constructor(message: string);
+ }
+ export interface LLMProvider {
+     name: string;
+     validateConfig(): void;
+     generate(prompt: string): Promise<string>;
+ }
package/dist/providers/base.js ADDED
@@ -0,0 +1,6 @@
+ export class LLMProviderError extends Error {
+     constructor(message) {
+         super(message);
+         this.name = "LLMProviderError";
+     }
+ }
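The `LLMProvider` interface declared in `base.d.ts` above is the contract the provider registry dispatches on. A minimal sketch of a conforming provider, using a hypothetical `EchoProvider` that is not part of the package:

```typescript
import { LLMProvider, LLMProviderError } from "./base.js";

// Hypothetical provider showing the minimum the contract requires.
class EchoProvider implements LLMProvider {
    name = "echo";

    validateConfig(): void {
        // A real provider checks keys/hosts here and throws
        // LLMProviderError on misconfiguration.
    }

    async generate(prompt: string): Promise<string> {
        if (!prompt.trim()) {
            throw new LLMProviderError("empty prompt");
        }
        return `echo: ${prompt}`;
    }
}
```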
package/dist/providers/gemini.d.ts ADDED
@@ -0,0 +1,16 @@
+ import { LLMProvider } from "./base.js";
+ type GeminiConfig = {
+     api_key?: string;
+     model?: string;
+ };
+ export declare class GeminiProvider implements LLMProvider {
+     name: string;
+     private apiKey?;
+     private model;
+     private client?;
+     constructor(config?: GeminiConfig);
+     validateConfig(): void;
+     private getClient;
+     generate(prompt: string): Promise<string>;
+ }
+ export {};
package/dist/providers/gemini.js ADDED
@@ -0,0 +1,47 @@
+ import { GoogleGenerativeAI } from "@google/generative-ai";
+ import { LLMProviderError } from "./base.js";
+ const DEFAULT_MODEL = "gemini-2.5-flash-lite";
+ export class GeminiProvider {
+     name = "gemini";
+     apiKey;
+     model;
+     client;
+     constructor(config = {}) {
+         this.apiKey = config.api_key;
+         this.model = config.model ?? DEFAULT_MODEL;
+         this.validateConfig();
+     }
+     validateConfig() {
+         if (!this.apiKey || !this.apiKey.trim()) {
+             throw new LLMProviderError([
+                 "Gemini provider requires an API key.",
+                 "Run `explainthisrepo init` to configure it."
+             ].join("\n"));
+         }
+     }
+     getClient() {
+         if (this.client) {
+             return this.client;
+         }
+         this.client = new GoogleGenerativeAI(this.apiKey);
+         return this.client;
+     }
+     async generate(prompt) {
+         const genAI = this.getClient();
+         const model = genAI.getGenerativeModel({
+             model: this.model
+         });
+         try {
+             const result = await model.generateContent(prompt);
+             const text = result?.response?.text?.() ?? "";
+             if (!text.trim()) {
+                 throw new LLMProviderError("Gemini returned no text");
+             }
+             return text.trim();
+         }
+         catch (err) {
+             const message = err?.message ? String(err.message) : String(err);
+             throw new LLMProviderError(`Gemini request failed: ${message}`);
+         }
+     }
+ }
package/dist/providers/ollama.d.ts ADDED
@@ -0,0 +1,15 @@
+ import { LLMProvider } from "./base.js";
+ type OllamaConfig = {
+     model?: string;
+     host?: string;
+ };
+ export declare class OllamaProvider implements LLMProvider {
+     name: string;
+     private model;
+     private host;
+     constructor(config?: OllamaConfig);
+     validateConfig(): void;
+     doctor(): Promise<string[]>;
+     generate(prompt: string): Promise<string>;
+ }
+ export {};
package/dist/providers/ollama.js ADDED
@@ -0,0 +1,73 @@
+ import { LLMProviderError } from "./base.js";
+ const DEFAULT_MODEL = "llama3";
+ const DEFAULT_HOST = "http://localhost:11434";
+ export class OllamaProvider {
+     name = "ollama";
+     model;
+     host;
+     constructor(config = {}) {
+         this.model = config.model ?? DEFAULT_MODEL;
+         this.host = (config.host ?? DEFAULT_HOST).replace(/\/$/, "");
+         this.validateConfig();
+     }
+     validateConfig() {
+         if (!this.host.startsWith("http")) {
+             throw new LLMProviderError("Ollama host must be a valid URL (e.g. http://localhost:11434)");
+         }
+     }
+     async doctor() {
+         const results = [];
+         try {
+             const res = await fetch(`${this.host}/api/tags`, {
+                 method: "GET"
+             });
+             if (res.ok) {
+                 results.push("Ollama server reachable");
+             }
+             else {
+                 results.push(`Ollama server responded with ${res.status}`);
+             }
+         }
+         catch {
+             results.push("Ollama server not reachable");
+         }
+         results.push(`model: ${this.model}`);
+         results.push(`host: ${this.host}`);
+         return results;
+     }
+     async generate(prompt) {
+         const url = `${this.host}/api/generate`;
+         const payload = {
+             model: this.model,
+             prompt,
+             stream: false
+         };
+         try {
+             const res = await fetch(url, {
+                 method: "POST",
+                 headers: {
+                     "Content-Type": "application/json"
+                 },
+                 body: JSON.stringify(payload)
+             });
+             if (!res.ok) {
+                 throw new LLMProviderError(`Ollama server responded with ${res.status}`);
+             }
+             const data = await res.json();
+             const text = data?.response ?? "";
+             if (!text.trim()) {
+                 throw new LLMProviderError("Ollama returned no text");
+             }
+             return text.trim();
+         }
+         catch (err) {
+             const message = err?.message ? String(err.message) : String(err);
+             throw new LLMProviderError([
+                 "Failed to connect to Ollama.",
+                 "Ensure Ollama is running locally.",
+                 "Start it with: ollama serve",
+                 `Error: ${message}`
+             ].join("\n"));
+         }
+     }
+ }
package/dist/providers/openai.d.ts ADDED
@@ -0,0 +1,17 @@
+ import { LLMProvider } from "./base.js";
+ type OpenAIConfig = {
+     api_key?: string;
+     model?: string;
+ };
+ export declare class OpenAIProvider implements LLMProvider {
+     name: string;
+     private apiKey?;
+     private model;
+     private client?;
+     constructor(config?: OpenAIConfig);
+     validateConfig(): void;
+     private getClient;
+     generate(prompt: string): Promise<string>;
+     doctor(): string[];
+ }
+ export {};
package/dist/providers/openai.js ADDED
@@ -0,0 +1,57 @@
+ import OpenAI from "openai";
+ import { LLMProviderError } from "./base.js";
+ const DEFAULT_MODEL = "gpt-4o-mini";
+ export class OpenAIProvider {
+     name = "openai";
+     apiKey;
+     model;
+     client;
+     constructor(config = {}) {
+         this.apiKey = config.api_key;
+         this.model = config.model ?? DEFAULT_MODEL;
+         this.validateConfig();
+     }
+     validateConfig() {
+         if (!this.apiKey || !this.apiKey.trim()) {
+             throw new LLMProviderError([
+                 "OpenAI provider requires an API key.",
+                 "Run `explainthisrepo init` to configure it."
+             ].join("\n"));
+         }
+     }
+     getClient() {
+         if (this.client) {
+             return this.client;
+         }
+         this.client = new OpenAI({
+             apiKey: this.apiKey
+         });
+         return this.client;
+     }
+     async generate(prompt) {
+         const client = this.getClient();
+         try {
+             const response = await client.chat.completions.create({
+                 model: this.model,
+                 messages: [
+                     { role: "user", content: prompt }
+                 ]
+             });
+             const text = response?.choices?.[0]?.message?.content ?? "";
+             if (!text.trim()) {
+                 throw new LLMProviderError("OpenAI returned no text");
+             }
+             return text.trim();
+         }
+         catch (err) {
+             const message = err?.message ? String(err.message) : String(err);
+             throw new LLMProviderError(`OpenAI request failed: ${message}`);
+         }
+     }
+     doctor() {
+         return [
+             `OPENAI_API_KEY set: ${Boolean(this.apiKey)}`,
+             `model: ${this.model}`
+         ];
+     }
+ }
package/dist/providers/registry.d.ts ADDED
@@ -0,0 +1,4 @@
+ import { LLMProvider } from "./base.js";
+ export declare function listProviders(): string[];
+ export declare function getProvider(name: string): LLMProvider;
+ export declare function getActiveProvider(override?: string): LLMProvider;
package/dist/providers/registry.js ADDED
@@ -0,0 +1,34 @@
+ import { loadConfig } from "../config.js";
+ import { LLMProviderError } from "./base.js";
+ import { GeminiProvider } from "./gemini.js";
+ import { OpenAIProvider } from "./openai.js";
+ import { OllamaProvider } from "./ollama.js";
+ const PROVIDER_REGISTRY = {
+     gemini: GeminiProvider,
+     openai: OpenAIProvider,
+     ollama: OllamaProvider,
+ };
+ export function listProviders() {
+     return Object.keys(PROVIDER_REGISTRY);
+ }
+ export function getProvider(name) {
+     const providerName = name.toLowerCase();
+     const Provider = PROVIDER_REGISTRY[providerName];
+     if (!Provider) {
+         throw new LLMProviderError(`Unknown LLM provider '${providerName}'`);
+     }
+     const config = loadConfig();
+     const providerConfig = config?.providers?.[providerName] ?? {};
+     return new Provider(providerConfig);
+ }
+ export function getActiveProvider(override) {
+     if (override) {
+         return getProvider(override);
+     }
+     const config = loadConfig();
+     const defaultProvider = config?.llm?.provider;
+     if (!defaultProvider) {
+         throw new LLMProviderError("No LLM provider configured. Run 'explainthisrepo init'.");
+     }
+     return getProvider(defaultProvider);
+ }
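A short usage sketch of the registry above: an explicit argument wins, otherwise the `[llm].provider` value from `config.toml` is used, and with neither set `getActiveProvider` throws `LLMProviderError`. The `"ollama"` literal below is only an example value; in the CLI it arrives via `--llm`:

```typescript
import { getActiveProvider, listProviders } from "./registry.js";

console.log(`registered providers: ${listProviders().join(", ")}`);

// Explicit override; call getActiveProvider() with no argument to fall
// back to the [llm].provider default from config.toml.
const provider = getActiveProvider("ollama");
console.log(`active provider: ${provider.name}`);

// generate() returns the raw model output as a string.
const text = await provider.generate("Summarize this repository in one sentence.");
console.log(text.trim());
```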
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "explainthisrepo",
-   "version": "0.6.1",
-   "description": "CLI that generates plain English explanations of any codebase",
+   "version": "0.9.0",
+   "description": "CLI that generates plain-English explanations of any codebase",
    "license": "MIT",
    "type": "module",
    "author": "Caleb Wodi <calebwodi33@gmail.com>",
@@ -45,9 +45,11 @@
    },
    "dependencies": {
      "@google/generative-ai": "^0.24.1",
+     "@iarna/toml": "^2.2.5",
      "axios": "^1.13.2",
      "commander": "^14.0.3",
      "dotenv": "^17.2.3",
+     "openai": "^4.0.0",
      "ora": "^9.3.0"
    },
    "devDependencies": {