diff-hound 1.0.2 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -8,12 +8,13 @@ Supports GitHub today. GitLab and Bitbucket support are planned.
 
  ## ✨ Features
 
- - 🧠 Automated code review using OpenAI (Upcoming: Claude, DeepSeek, CodeLlama)
+ - 🧠 Automated code review using OpenAI or Ollama (Upcoming: Claude, DeepSeek, Gemini)
  - 💬 Posts inline or summary comments on pull requests
  - 🔌 Plug-and-play architecture for models and platforms
  - ⚙️ Configurable with JSON/YAML config files and CLI overrides
  - 🛠️ Designed for CI/CD pipelines and local runs
  - 🧐 Tracks last reviewed commit to avoid duplicate reviews
+ - 🖥️ Local diff mode — review local changes without a remote PR
 
  ---
 
@@ -53,12 +54,13 @@ Then modify with your keys / tokens:
  # Platform tokens
  GITHUB_TOKEN=your_github_token # Requires 'repo' scope
 
- # AI Model API keys
+ # AI Model API keys (set one depending on your provider)
  OPENAI_API_KEY=your_openai_key
  ```
 
- > 🔐 `GITHUB_TOKEN` is used to fetch PRs and post comments – [get it here](https://github.com/settings/personal-access-tokens)
+ > 🔐 `GITHUB_TOKEN` is used to fetch PRs and post comments – [get it here](https://github.com/settings/personal-access-tokens)
  > 🔐 `OPENAI_API_KEY` is used to generate code reviews via GPT – [get it here](https://platform.openai.com/api-keys)
+ > 💡 **Using Ollama?** No API key needed — just have Ollama running locally. See [Ollama (Local Models)](#ollama-local-models) below.
 
  ---
 
@@ -140,6 +142,70 @@ diff-hound --repo=owner/repo --provider=openai --model=gpt-4o --dry-run
 
  ---
 
+ ### Local Diff Mode
+
+ Review local git changes without a remote PR or GitHub token. Only an LLM API key is needed.
+
+ ```bash
+ # Review changes between current branch and main
+ diff-hound --local --base main
+
+ # Review last commit
+ diff-hound --local --base HEAD~1
+
+ # Review changes between two specific refs
+ diff-hound --local --base main --head feature-branch
+
+ # Review a patch file directly
+ diff-hound --patch changes.patch
+ ```
+
+ Local mode always runs in dry-run — output goes to your terminal. If `--base` is omitted, it defaults to the upstream tracking branch or `HEAD~1`.
+
+ ---
+
+ ### Ollama (Local Models)
+
+ Run fully offline code reviews using [Ollama](https://ollama.com) — no API key, no cloud, zero cost.
+
+ **Prerequisites:** Install and start Ollama, then pull a model:
+
+ ```bash
+ # Install Ollama (see https://ollama.com/download)
+ ollama serve # Start the Ollama server
+ ollama pull llama3 # Pull a model (one-time)
+ ```
+
+ **Run a review with Ollama:**
+
+ ```bash
+ # Review local changes using Ollama
+ diff-hound --provider ollama --model llama3 --local --base main
+
+ # Use a code-specialized model
+ diff-hound --provider ollama --model codellama --local --base main
+
+ # Point to a remote Ollama instance
+ diff-hound --provider ollama --model llama3 --model-endpoint http://my-server:11434 --local --base main
+
+ # Increase timeout for large diffs on slower models (default: 120000ms)
+ diff-hound --provider ollama --model llama3 --request-timeout 300000 --local --base main
+ ```
+
+ **Or set it in your config file (`.aicodeconfig.json`):**
+
+ ```json
+ {
+   "provider": "ollama",
+   "model": "llama3",
+   "endpoint": "http://localhost:11434"
+ }
+ ```
+
+ > 💡 Ollama's default endpoint is `http://localhost:11434`. You only need `--model-endpoint` / `endpoint` if running Ollama on a different host or port.
+
+ ---
+
  ### Output Example (Dry Run)
 
  ```bash
@@ -156,17 +222,22 @@ Consider refactoring to reduce nesting.
 
  ### Optional CLI Flags
 
- | Flag | Short | Description |
- | ------------------ | ----- | --------------------------------------- |
- | `--provider` | `-p` | AI model provider (e.g. `openai`) |
- | `--model` | `-m` | AI model (e.g. `gpt-4o`, `gpt-4`, etc.) |
- | `--model-endpoint` | `-e` | Custom API endpoint for the model |
- | `--git-provider` | `-g` | Repo platform (default: `github`) |
- | `--repo` | `-r` | GitHub repo in format `owner/repo` |
- | `--comment-style` | `-s` | `inline` or `summary` |
- | `--dry-run` | `-d` | Dont post comments, only print |
- | `--verbose` | `-v` | Enable debug logs |
- | `--config-path` | `-c` | Custom config file path |
+ | Flag | Short | Description |
+ | ------------------- | ----- | -------------------------------------------------- |
+ | `--provider` | `-p` | AI model provider (`openai`, `ollama`) |
+ | `--model` | `-m` | AI model (e.g. `gpt-4o`, `llama3`) |
+ | `--model-endpoint` | `-e` | Custom API endpoint for the model |
+ | `--git-provider` | `-g` | Repo platform (default: `github`) |
+ | `--repo` | `-r` | GitHub repo in format `owner/repo` |
+ | `--comment-style` | `-s` | `inline` or `summary` |
+ | `--dry-run` | `-d` | Don't post comments, only print |
+ | `--verbose` | `-v` | Enable debug logs |
+ | `--config-path` | `-c` | Custom config file path |
+ | `--local` | `-l` | Review local git diff (always dry-run) |
+ | `--base` | | Base ref for local diff (branch/commit) |
+ | `--head` | | Head ref for local diff (default: HEAD) |
+ | `--patch` | | Path to a patch file (implies `--local`) |
+ | `--request-timeout` | | Request timeout in ms (default: 120000) |
 
  ---
 
@@ -181,8 +252,9 @@ diff-hound/
  │ ├── cli/ # CLI argument parsing
  │ ├── config/ # JSON/YAML config handling
  │ ├── core/ # Diff parsing, formatting
- │ ├── models/ # AI model adapters
- │ ├── platforms/ # GitHub, GitLab, etc.
+ │ ├── models/ # AI model adapters (OpenAI, Ollama)
+ │ ├── platforms/ # GitHub, local git, etc.
+ │ ├── schemas/ # Structured output types and validation
  │ └── types/ # TypeScript types
  ├── .env
  ├── README.md
@@ -204,15 +276,23 @@ Create a new class in `src/platforms/` that implements the `CodeReviewPlatform`
 
  ## ✅ Next Steps
 
- 🔧 Add Winston for production-grade logging
- 🌐 Implement GitLab and Bitbucket platform adapters
- 🌍 Add support for other AI model providers (e.g. Anthropic, DeepSeek...)
- 💻 Add support for running local models (e.g. Ollama, Llama.cpp, Hugging Face transformers)
- 📤 Add support for webhook triggers (e.g., GitHub Actions, GitLab CI)
- 🧪 Add unit and integration test suites (Jest or Vitest)
- 📦 Publish Docker image for CI/CD use
- 🧩 Enable plugin hooks for custom rule logic
- 🗂 Add support for reviewing diffs from local branches or patch files
+ 🔧 Structured logging (pino)
+ 🌐 GitLab and Bitbucket platform adapters
+ 🌍 Anthropic and Gemini model adapters
+ 📤 Webhook server mode and GitHub Action
+ 📦 Docker image for self-hosting
+ 🧩 Plugin system with pipeline hooks
+ 🧠 Repo indexing and context-aware reviews
+
+ ---
+
+ ## 🤝 Contributing
+
+ We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for:
+
+ - Branching and commit conventions (Angular style)
+ - PR workflow (squash-merge)
+ - How to add new platform and model adapters
 
  ---
 
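The README's extensibility note in the hunk above says new platforms are added by implementing `CodeReviewPlatform` under `src/platforms/`, and this release adds a `local` platform alongside the Ollama model adapter. The interface itself is not shown in this diff, so the following TypeScript sketch is purely illustrative: every name in it (`ReviewComment`, `fetchDiff`, `postComments`, `LocalGitPlatform`) is an assumption, not the package's actual API.

```typescript
// Hypothetical sketch only: the real CodeReviewPlatform interface is not
// shown in this diff; method names and shapes here are illustrative.
import { execSync } from "node:child_process";

interface ReviewComment {
  file: string;
  line: number;
  body: string;
}

interface CodeReviewPlatform {
  fetchDiff(): Promise<string>;                          // unified diff to review
  postComments(comments: ReviewComment[]): Promise<void>;
}

// A local-git adapter in this style would shell out to git and, since local
// mode is always dry-run, print findings instead of posting them.
class LocalGitPlatform implements CodeReviewPlatform {
  constructor(private base: string, private head: string = "HEAD") {}

  async fetchDiff(): Promise<string> {
    return execSync(`git diff ${this.base}...${this.head}`, { encoding: "utf8" });
  }

  async postComments(comments: ReviewComment[]): Promise<void> {
    for (const c of comments) {
      console.log(`${c.file}:${c.line} ${c.body}`);
    }
  }
}
```
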
package/dist/cli/index.js CHANGED
@@ -20,7 +20,7 @@ function parseCli() {
          .name("diff-hound")
          .description("AI-powered code review for GitHub, GitLab, and Bitbucket")
          .version(package_json_1.version)
-         .option("-p, --provider <provider>", "The provider of the AI model (openai, anthropic, deepseek, groq, gemini)", config_1.DEFAULT_CONFIG.provider)
+         .option("-p, --provider <provider>", "The provider of the AI model (openai, ollama, anthropic, deepseek, groq, gemini)", config_1.DEFAULT_CONFIG.provider)
          .option("-m, --model <model>", "The AI model (gpt-4o, claude-3-5-sonnet, deepseek, llama3, gemini-2.0-flash)", config_1.DEFAULT_CONFIG.model)
          .option("-e, --model-endpoint <endpoint>", "The endpoint for the AI model")
          .option("-g, --git-platform <platform>", "Platform to use (github, gitlab, bitbucket)", config_1.DEFAULT_CONFIG.gitPlatform)
@@ -29,18 +29,29 @@ function parseCli() {
          .option("-d, --dry-run", "Do not post comments, just print them", config_1.DEFAULT_CONFIG.dryRun)
          .option("-v, --verbose", "Enable verbose logging", config_1.DEFAULT_CONFIG.verbose)
          .option("-c, --config-path <path>", "Path to config file (default: .aicodeconfig.json or .aicode.yml)")
+         .option("-l, --local", "Review local git diff instead of remote PRs (always dry-run)")
+         .option("--base <ref>", "Base branch/commit for local diff (default: HEAD~1 or upstream)")
+         .option("--head <ref>", "Head branch/commit for local diff (default: HEAD)")
+         .option("--patch <path>", "Review a patch file directly (implies --local)")
+         .option("--request-timeout <ms>", "Request timeout in milliseconds (default: 120000)")
          .parse(process.argv);
      const options = program.opts();
+     const isLocal = options.local || options.patch;
      return sanitizeCliOptions({
          provider: options.provider,
          model: options.model,
-         gitPlatform: options.gitPlatform,
+         gitPlatform: isLocal ? "local" : options.gitPlatform,
          repo: options.repo,
          commentStyle: options.commentStyle,
-         dryRun: options.dryRun,
+         dryRun: isLocal ? true : options.dryRun,
          verbose: options.verbose,
          endpoint: options.modelEndpoint,
          configPath: options.configPath,
+         local: isLocal || undefined,
+         base: options.base,
+         head: options.head,
+         patch: options.patch,
+         requestTimeout: options.requestTimeout ? parseInt(options.requestTimeout, 10) : undefined,
      });
  }
  /**
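The compiled `parseCli` above encodes three rules worth noting: `--patch` implies `--local`, local mode pins the git platform to `local`, and local mode forces dry-run; `--request-timeout` also arrives from Commander as a string and must be parsed. A minimal TypeScript restatement of that coercion follows; the `CliOptions` shape is assumed for illustration and is not the package's actual type.

```typescript
// Restates the coercion rules visible in parseCli above; CliOptions is an
// assumed shape for illustration, not the package's actual type.
interface CliOptions {
  local?: boolean;
  patch?: string;
  gitPlatform?: string;
  dryRun?: boolean;
  requestTimeout?: string; // Commander delivers option values as strings
}

function coerceLocalMode(options: CliOptions) {
  const isLocal = Boolean(options.local || options.patch); // --patch implies --local
  return {
    gitPlatform: isLocal ? "local" : options.gitPlatform,  // local mode pins the platform
    dryRun: isLocal ? true : options.dryRun,               // local mode never posts comments
    requestTimeout: options.requestTimeout
      ? parseInt(options.requestTimeout, 10)               // string flag parsed to ms
      : undefined,
  };
}
```
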
@@ -77,17 +77,17 @@ function loadConfigFromFile(filePath) {
   * @returns Updated configuration
   */
  function validateConfig(cliOptions, config) {
-     let finalConfig = { ...config, ...cliOptions };
+     const finalConfig = { ...config, ...cliOptions };
      // Validate provider
      // Todo: Add more providers as needed ("anthropic", "deepseek", "groq", "gemini")
-     const validProviders = ["openai"];
+     const validProviders = ["openai", "ollama"];
      if (!validProviders.includes(finalConfig.provider)) {
          console.error(`Error: Invalid provider '${finalConfig.provider}'. Using default: ${exports.DEFAULT_CONFIG.provider}`);
          finalConfig.provider = exports.DEFAULT_CONFIG.provider;
      }
      // Validate platform
      // Todo: Add more platforms as needed ("gitlab", "bitbucket")
-     const validPlatforms = ["github"];
+     const validPlatforms = ["github", "local"];
      if (!validPlatforms.includes(finalConfig.gitPlatform)) {
          console.error(`Error: Invalid platform '${finalConfig.gitPlatform}'. Using default: ${exports.DEFAULT_CONFIG.gitPlatform}`);
          finalConfig.gitPlatform = exports.DEFAULT_CONFIG.gitPlatform;
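This hunk also makes the precedence model visible: the file config is spread first and CLI options second, then each enumerated field is checked against a whitelist and reset to the default with a logged error rather than an exception. A trimmed TypeScript sketch of that merge-then-validate pattern, with the `Config` shape assumed and the default values taken from the test expectations further below:

```typescript
// Trimmed restatement of the merge-then-validate pattern above; the Config
// shape is an assumption, defaults are taken from the test file in this diff.
interface Config {
  provider: string;
  gitPlatform: string;
}

const DEFAULT_CONFIG: Config = { provider: "openai", gitPlatform: "github" };

function validateConfig(cliOptions: Partial<Config>, fileConfig: Config): Config {
  // CLI flags win over file config; file config already won over defaults.
  const finalConfig = { ...fileConfig, ...cliOptions };
  // Invalid values are logged and replaced with defaults, never thrown.
  if (!["openai", "ollama"].includes(finalConfig.provider)) {
    console.error(`Invalid provider '${finalConfig.provider}'`);
    finalConfig.provider = DEFAULT_CONFIG.provider;
  }
  if (!["github", "local"].includes(finalConfig.gitPlatform)) {
    console.error(`Invalid platform '${finalConfig.gitPlatform}'`);
    finalConfig.gitPlatform = DEFAULT_CONFIG.gitPlatform;
  }
  return finalConfig;
}
```
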
@@ -0,0 +1 @@
+ export {};
@@ -0,0 +1,330 @@
+ "use strict";
+ var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
+     if (k2 === undefined) k2 = k;
+     var desc = Object.getOwnPropertyDescriptor(m, k);
+     if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
+       desc = { enumerable: true, get: function() { return m[k]; } };
+     }
+     Object.defineProperty(o, k2, desc);
+ }) : (function(o, m, k, k2) {
+     if (k2 === undefined) k2 = k;
+     o[k2] = m[k];
+ }));
+ var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
+     Object.defineProperty(o, "default", { enumerable: true, value: v });
+ }) : function(o, v) {
+     o["default"] = v;
+ });
+ var __importStar = (this && this.__importStar) || (function () {
+     var ownKeys = function(o) {
+         ownKeys = Object.getOwnPropertyNames || function (o) {
+             var ar = [];
+             for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
+             return ar;
+         };
+         return ownKeys(o);
+     };
+     return function (mod) {
+         if (mod && mod.__esModule) return mod;
+         var result = {};
+         if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
+         __setModuleDefault(result, mod);
+         return result;
+     };
+ })();
+ Object.defineProperty(exports, "__esModule", { value: true });
+ const vitest_1 = require("vitest");
+ const fs = __importStar(require("fs"));
+ const index_1 = require("./index");
+ // Mock fs module
+ vitest_1.vi.mock("fs");
+ (0, vitest_1.describe)("loadConfig", () => {
+     const mockCwd = "/test/project";
+     (0, vitest_1.beforeEach)(() => {
+         vitest_1.vi.resetAllMocks();
+         vitest_1.vi.spyOn(process, "cwd").mockReturnValue(mockCwd);
+         vitest_1.vi.spyOn(console, "error").mockImplementation(() => { });
+     });
+     (0, vitest_1.afterEach)(() => {
+         vitest_1.vi.restoreAllMocks();
+     });
+     (0, vitest_1.describe)("with no config files", () => {
+         (0, vitest_1.it)("should return default config when no config files exist", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(false);
+             const config = await (0, index_1.loadConfig)();
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+         });
+     });
+     (0, vitest_1.describe)("with explicit config path", () => {
+         (0, vitest_1.it)("should load config from specified JSON path", async () => {
+             const customConfig = {
+                 provider: "openai",
+                 model: "gpt-4-turbo",
+                 repo: "custom/repo",
+             };
+             vitest_1.vi.mocked(fs.existsSync).mockImplementation((p) => p === "/custom/path.json");
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(customConfig));
+             const config = await (0, index_1.loadConfig)("/custom/path.json");
+             (0, vitest_1.expect)(config.model).toBe("gpt-4-turbo");
+             (0, vitest_1.expect)(config.repo).toBe("custom/repo");
+             (0, vitest_1.expect)(config.provider).toBe("openai");
+         });
+         (0, vitest_1.it)("should load config from specified YAML path", async () => {
+             const yamlContent = `
+ provider: openai
+ model: gpt-3.5-turbo
+ verbose: true
+ `;
+             vitest_1.vi.mocked(fs.existsSync).mockImplementation((p) => p === "/custom/path.yml");
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue(yamlContent);
+             const config = await (0, index_1.loadConfig)("/custom/path.yml");
+             (0, vitest_1.expect)(config.model).toBe("gpt-3.5-turbo");
+             (0, vitest_1.expect)(config.verbose).toBe(true);
+         });
+         (0, vitest_1.it)("should return default config if explicit path does not exist", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(false);
+             const config = await (0, index_1.loadConfig)("/nonexistent/config.json");
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+         });
+         (0, vitest_1.it)("should handle file read errors gracefully", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockImplementation(() => {
+                 throw new Error("Permission denied");
+             });
+             const config = await (0, index_1.loadConfig)("/error/config.json");
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+     });
+     (0, vitest_1.describe)("with default config file discovery", () => {
+         (0, vitest_1.it)("should prefer .aicodeconfig.json over .aicode.yml", async () => {
+             const jsonConfig = { model: "gpt-4o" };
+             // YAML config would be: { model: "gpt-3.5-turbo" }
+             vitest_1.vi.mocked(fs.existsSync).mockImplementation((p) => {
+                 const pathStr = p;
+                 return (pathStr.endsWith(".aicodeconfig.json") ||
+                     pathStr.endsWith(".aicode.yml"));
+             });
+             vitest_1.vi.mocked(fs.readFileSync).mockImplementation((p) => {
+                 if (p.endsWith(".json")) {
+                     return JSON.stringify(jsonConfig);
+                 }
+                 return "model: gpt-3.5-turbo";
+             });
+             const config = await (0, index_1.loadConfig)();
+             // Should use JSON config, not YAML
+             (0, vitest_1.expect)(config.model).toBe("gpt-4o");
+         });
+         (0, vitest_1.it)("should fall back to .aicode.yml if .aicodeconfig.json does not exist", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockImplementation((p) => {
+                 return p.endsWith(".aicode.yml");
+             });
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue("model: gpt-4o-mini");
+             const config = await (0, index_1.loadConfig)();
+             (0, vitest_1.expect)(config.model).toBe("gpt-4o-mini");
+         });
+     });
+     (0, vitest_1.describe)("config merging", () => {
+         (0, vitest_1.it)("should merge loaded config with defaults", async () => {
+             const partialConfig = {
+                 model: "custom-model",
+                 repo: "owner/repo",
+             };
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(partialConfig));
+             const config = await (0, index_1.loadConfig)("/test/config.json");
+             // Should have loaded values
+             (0, vitest_1.expect)(config.model).toBe("custom-model");
+             (0, vitest_1.expect)(config.repo).toBe("owner/repo");
+             // Should have default values for unspecified fields
+             (0, vitest_1.expect)(config.provider).toBe(index_1.DEFAULT_CONFIG.provider);
+             (0, vitest_1.expect)(config.commentStyle).toBe(index_1.DEFAULT_CONFIG.commentStyle);
+             (0, vitest_1.expect)(config.severity).toBe(index_1.DEFAULT_CONFIG.severity);
+         });
+         (0, vitest_1.it)("should handle all config fields", async () => {
+             const fullConfig = {
+                 provider: "openai",
+                 model: "gpt-4o",
+                 gitPlatform: "github",
+                 repo: "test/repo",
+                 commentStyle: "summary",
+                 dryRun: true,
+                 verbose: true,
+                 endpoint: "https://custom.api.com",
+                 configPath: "/custom/path",
+                 severity: "error",
+                 ignoreFiles: ["*.test.ts"],
+                 rules: ["Check types"],
+                 customPrompt: "Custom prompt",
+             };
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(fullConfig));
+             const config = await (0, index_1.loadConfig)("/test/config.json");
+             (0, vitest_1.expect)(config).toMatchObject(fullConfig);
+         });
+     });
+     (0, vitest_1.describe)("file format support", () => {
+         (0, vitest_1.it)("should throw error for unsupported file format", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue("content");
+             const config = await (0, index_1.loadConfig)("/test/config.txt");
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+         (0, vitest_1.it)("should support .yaml extension", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockImplementation((p) => p.endsWith(".yaml"));
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue("model: gpt-4");
+             const config = await (0, index_1.loadConfig)("/test/config.yaml");
+             (0, vitest_1.expect)(config.model).toBe("gpt-4");
+         });
+         (0, vitest_1.it)("should handle invalid JSON gracefully", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue("{ invalid json");
+             const config = await (0, index_1.loadConfig)("/test/config.json");
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+         (0, vitest_1.it)("should handle invalid YAML gracefully", async () => {
+             vitest_1.vi.mocked(fs.existsSync).mockReturnValue(true);
+             vitest_1.vi.mocked(fs.readFileSync).mockReturnValue("{ invalid yaml: [ }");
+             const config = await (0, index_1.loadConfig)("/test/config.yml");
+             (0, vitest_1.expect)(config).toEqual(index_1.DEFAULT_CONFIG);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+     });
+ });
+ (0, vitest_1.describe)("validateConfig", () => {
+     (0, vitest_1.beforeEach)(() => {
+         vitest_1.vi.spyOn(console, "error").mockImplementation(() => { });
+         vitest_1.vi.spyOn(console, "warn").mockImplementation(() => { });
+     });
+     (0, vitest_1.afterEach)(() => {
+         vitest_1.vi.restoreAllMocks();
+     });
+     (0, vitest_1.describe)("provider validation", () => {
+         (0, vitest_1.it)("should accept valid provider 'openai'", () => {
+             const cliOptions = { provider: "openai" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.provider).toBe("openai");
+         });
+         (0, vitest_1.it)("should reject invalid provider and fall back to default", () => {
+             // eslint-disable-next-line @typescript-eslint/no-explicit-any
+             const cliOptions = { provider: "invalid-provider" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.provider).toBe(index_1.DEFAULT_CONFIG.provider);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+     });
+     (0, vitest_1.describe)("platform validation", () => {
+         (0, vitest_1.it)("should accept valid platform 'github'", () => {
+             const cliOptions = { gitPlatform: "github" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.gitPlatform).toBe("github");
+         });
+         (0, vitest_1.it)("should accept valid platform 'local'", () => {
+             const cliOptions = { gitPlatform: "local" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.gitPlatform).toBe("local");
+         });
+         (0, vitest_1.it)("should reject invalid platform and fall back to default", () => {
+             // eslint-disable-next-line @typescript-eslint/no-explicit-any
+             const cliOptions = { gitPlatform: "bitbucket" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.gitPlatform).toBe(index_1.DEFAULT_CONFIG.gitPlatform);
+             (0, vitest_1.expect)(console.error).toHaveBeenCalled();
+         });
+     });
+     (0, vitest_1.describe)("severity validation", () => {
+         (0, vitest_1.it)("should accept valid severity 'suggestion'", () => {
+             const cliOptions = { severity: "suggestion" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.severity).toBe("suggestion");
+         });
+         (0, vitest_1.it)("should accept valid severity 'warning'", () => {
+             const cliOptions = { severity: "warning" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.severity).toBe("warning");
+         });
+         (0, vitest_1.it)("should accept valid severity 'error'", () => {
+             const cliOptions = { severity: "error" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.severity).toBe("error");
+         });
+         (0, vitest_1.it)("should reject invalid severity and fall back to default", () => {
+             // eslint-disable-next-line @typescript-eslint/no-explicit-any
+             const cliOptions = { severity: "critical" };
+             const config = { ...index_1.DEFAULT_CONFIG };
+             const result = (0, index_1.validateConfig)(cliOptions, config);
+             (0, vitest_1.expect)(result.severity).toBe(index_1.DEFAULT_CONFIG.severity);
+             (0, vitest_1.expect)(console.warn).toHaveBeenCalled();
+         });
+     });
+     (0, vitest_1.describe)("CLI options merging", () => {
+         (0, vitest_1.it)("should CLI options override file config", () => {
+             const cliOptions = {
+                 model: "cli-model",
+                 verbose: true,
+             };
+             const fileConfig = {
+                 ...index_1.DEFAULT_CONFIG,
+                 model: "file-model",
+                 verbose: false,
+             };
+             const result = (0, index_1.validateConfig)(cliOptions, fileConfig);
+             (0, vitest_1.expect)(result.model).toBe("cli-model");
+             (0, vitest_1.expect)(result.verbose).toBe(true);
+         });
+         (0, vitest_1.it)("should preserve file config values not in CLI options", () => {
+             const cliOptions = {
+                 model: "new-model",
+             };
+             const fileConfig = {
+                 ...index_1.DEFAULT_CONFIG,
+                 repo: "owner/repo",
+                 severity: "error",
+             };
+             const result = (0, index_1.validateConfig)(cliOptions, fileConfig);
+             (0, vitest_1.expect)(result.model).toBe("new-model");
+             (0, vitest_1.expect)(result.repo).toBe("owner/repo");
+             (0, vitest_1.expect)(result.severity).toBe("error");
+         });
+         (0, vitest_1.it)("should handle all CLI option overrides", () => {
+             const cliOptions = {
+                 provider: "openai",
+                 model: "gpt-4o",
+                 gitPlatform: "github",
+                 repo: "cli/repo",
+                 commentStyle: "summary",
+                 dryRun: true,
+                 verbose: true,
+                 endpoint: "https://cli.api.com",
+                 severity: "error",
+                 ignoreFiles: ["*.spec.ts"],
+                 rules: ["CLI rule"],
+                 customPrompt: "CLI prompt",
+             };
+             const result = (0, index_1.validateConfig)(cliOptions, index_1.DEFAULT_CONFIG);
+             (0, vitest_1.expect)(result).toMatchObject(cliOptions);
+         });
+     });
+     (0, vitest_1.describe)("default config values", () => {
+         (0, vitest_1.it)("should have correct default values", () => {
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.provider).toBe("openai");
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.model).toBe("gpt-4o");
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.gitPlatform).toBe("github");
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.commentStyle).toBe("inline");
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.dryRun).toBe(false);
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.verbose).toBe(false);
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.severity).toBe("suggestion");
+             (0, vitest_1.expect)(index_1.DEFAULT_CONFIG.ignoreFiles).toEqual([]);
+         });
+     });
+ });
@@ -0,0 +1 @@
+ export {};