@johnowennixon/diffdash 1.10.0 → 1.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,3 +1,5 @@
+ ![Demonstration](asciinema/diffdash-demo.gif)
+
  # DiffDash

  ![npm version](https://img.shields.io/npm/v/@johnowennixon/diffdash.svg)
@@ -7,21 +9,17 @@

  A command-line tool to generate Git commit messages using AI.

- ## Demonstration
-
- ![Demonstration](asciinema/diffdash-demo.gif)
-
  ## Features

  * Generate Git commit messages in **natural English**
- * Add a footer to the generated commit messages
  * Add a prefix or suffix to the summary line
+ * Add a footer to the generated commit messages
  * Select from a choice of LLM models
  * Compare messages generated from all configured models
  * Disable or auto-approve various stages
  * Option to output just the commit message for use in scripts
  * Configuration using standard API provider environment variables
- * Uses the Vercel AI SDK (version 5)
+ * Uses the Vercel AI SDK (version 6)
  * Uses structured JSON with compatible models
  * Substantially written using AI coding (Claude Code, Roo Code, and Amp)

@@ -37,8 +35,8 @@ Currently, for this application, the best LLM model is **gpt-4.1-mini** from Ope
  It is set as the default model.
  I can only presume they have done a ton of training on diffs.

- I am now testing the GPT-5 models and **gpt-5-mini-minimal** (GPT-5 Mini with reasoning disabled) is behaving much the same.
- It will probably become the default model soon.
+ I have tested the GPT-5 models and **gpt-5-mini-minimal** (GPT-5 Mini with reasoning disabled) is behaving much the same.
+ It will become the default model if gpt-4.1-mini is deprecated.

  ## API Keys

@@ -98,7 +96,7 @@ diffdash --no-verify
  diffdash --add-prefix "[FIX]"

  # Add a suffix to the commit message summary line
- diffdash --add-suffix "(closes #123)"
+ diffdash --add-suffix "(closes #DEV-1234)"

  # Display commit messages generated by all models
  diffdash --llm-compare
@@ -118,7 +116,7 @@ diffdash --debug-llm-prompts
  All command-line arguments are optional.

  | Argument | Description |
- |--------|-------------|
+ | -------- | ----------- |
  | `--help` | show a help message and exit |
  | `--version` | show program version information and exit |
  | `--auto-add` | automatically stage all changes without confirmation |
@@ -126,7 +124,7 @@ All command-line arguments are optional.
  | `--auto-push` | automatically push changes after commit without confirmation |
  | `--disable-add` | disable adding unstaged changes - exit if no changes staged |
  | `--disable-status` | disable listing the staged files before generating a message |
- | `--disable-preview` | disable previewing the generated message|
+ | `--disable-preview` | disable previewing the generated message |
  | `--disable-commit` | disable committing changes - exit after generating the message |
  | `--disable-push` | disable pushing changes - exit after making the commit |
  | `--add-prefix PREFIX` | add a prefix to the commit message summary line |
@@ -158,6 +156,8 @@ There is a rudimentary check for secrets in diffs before submitting to the LLM.

  ## Development

+ You will need to install [bun](https://bun.com/docs/pm/cli/install).
+
  To install on your laptop:

  ```bash
@@ -166,10 +166,10 @@ git clone https://github.com/johnowennixon/diffdash.git
  cd diffdash

  # Install dependencies
- pnpm install
+ bun install

  # Build the project
- pnpm run build
+ bun run build

  # Make binaries executable
  npm link
@@ -179,13 +179,13 @@ To rebuild after editing:

  ```bash
  # Lint the code
- pnpm run lint
+ bun run lint

  # Fix formatting issues (if required)
- pnpm run fix
+ bun run fix

  # Build the project
- pnpm run build
+ bun run build
  ```

  ## License
package/dist/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@johnowennixon/diffdash",
- "version": "1.10.0",
+ "version": "1.12.0",
  "description": "A command-line tool to generate Git commit messages using AI",
  "license": "0BSD",
  "author": "John Owen Nixon",
@@ -23,12 +23,12 @@
  "build:chmod": "echo 'Changing bin files to be executable' && chmodx --package",
  "build:clean": "echo 'Removing dist' && rimraf dist",
  "build:shebang": "echo 'Fixing the shebangs' && add-shebangs --node --exclude 'dist/**/lib_*.js' 'dist/**/*.js'",
- "build:tsc": "echo 'Transpiling TypeScript to dist (using tsc)' && tsc --erasableSyntaxOnly --libReplacement false",
+ "build:tsc": "echo 'Transpiling TypeScript to dist (using tsc)' && tsc",
  "build:tsgo": "echo 'Transpiling TypeScript to dist (using tsgo)' && tsgo || (rimraf dist && false)",
  "fix": "run-s -ls fix:biome fix:markdownlint",
  "fix:biome": "echo 'Fixing with Biome' && biome check --write",
  "fix:docbot": "echo 'Fixing with DocBot' && docbot --prune --generate",
- "fix:markdownlint": "echo 'Fixing with markdownlint' && markdownlint-cli2 '**/*.md' --fix",
+ "fix:markdownlint": "echo 'Fixing with Markdownlint' && markdownlint-cli2 '**/*.md' --fix",
  "fix:oxlint": "echo 'Fixing with Oxlint' && oxlint --fix",
  "lint": "run-s -ls lint:biome lint:oxlint lint:tsgolint lint:knip lint:markdownlint",
  "lint:biome": "echo 'Linting with Biome' && biome check",
@@ -36,40 +36,40 @@
  "lint:knip": "echo 'Linting with Knip' && knip",
  "lint:markdownlint": "echo 'Linting with Markdownlint' && markdownlint-cli2 '**/*.md'",
  "lint:oxlint": "echo 'Linting with Oxlint' && oxlint",
- "lint:tsc": "echo 'Linting with tsc' && tsc --noEmit --erasableSyntaxOnly --libReplacement false",
+ "lint:tsc": "echo 'Linting with tsc' && tsc --noEmit",
  "lint:tsgo": "echo 'Linting with tsgo' && tsgo --noEmit",
  "lint:tsgolint": "echo 'Linting with tsgolint' && candide-tsgolint",
  "test": "run-s -ls lint build"
  },
  "dependencies": {
- "@ai-sdk/anthropic": "2.0.23",
- "@ai-sdk/deepseek": "1.0.20",
- "@ai-sdk/google": "2.0.17",
- "@ai-sdk/openai": "2.0.42",
- "@inquirer/prompts": "7.8.6",
- "@openrouter/ai-sdk-provider": "1.2.0",
- "ai": "5.0.60",
+ "@ai-sdk/anthropic": "3.0.8",
+ "@ai-sdk/deepseek": "2.0.4",
+ "@ai-sdk/google": "3.0.5",
+ "@ai-sdk/openai": "3.0.21",
+ "@inquirer/prompts": "8.2.0",
+ "@openrouter/ai-sdk-provider": "2.1.1",
+ "ai": "6.0.17",
  "ansis": "4.2.0",
  "argparse": "2.0.1",
  "cli-table3": "0.6.5",
  "json5": "2.2.3",
  "magic-regexp": "0.10.0",
- "simple-git": "3.28.0",
- "zod": "4.1.11"
+ "simple-git": "3.30.0",
+ "zod": "4.3.6"
  },
  "devDependencies": {
- "@biomejs/biome": "2.2.5",
- "@candide/tsgolint": "1.4.0",
+ "@biomejs/biome": "2.3.13",
+ "@candide/tsgolint": "1.5.0",
  "@johnowennixon/add-shebangs": "1.1.0",
  "@johnowennixon/chmodx": "2.1.0",
  "@types/argparse": "2.0.17",
- "@types/node": "24.5.2",
- "@typescript/native-preview": "7.0.0-dev.20250925.1",
- "knip": "5.63.1",
- "markdownlint-cli2": "0.18.1",
+ "@types/node": "25.0.3",
+ "@typescript/native-preview": "7.0.0-dev.20260103.1",
+ "knip": "5.82.1",
+ "markdownlint-cli2": "0.20.0",
  "npm-run-all2": "8.0.4",
- "oxlint": "1.19.0",
- "rimraf": "6.0.1",
+ "oxlint": "1.42.0",
+ "rimraf": "6.1.2",
  "typescript": "5.9.3"
  }
  }
@@ -1,4 +1,12 @@
+ import { QUOTE_DOUBLE, QUOTE_SINGLE } from "./lib_char_punctuation.js";
  export const LEFT_DOUBLE_QUOTATION_MARK = "“";
  export const LEFT_SINGLE_QUOTATION_MARK = "‘";
  export const RIGHT_DOUBLE_QUOTATION_MARK = "”";
  export const RIGHT_SINGLE_QUOTATION_MARK = "’";
+ export function char_smart_remove(text) {
+ return text
+ .replaceAll(LEFT_DOUBLE_QUOTATION_MARK, QUOTE_DOUBLE)
+ .replaceAll(LEFT_SINGLE_QUOTATION_MARK, QUOTE_SINGLE)
+ .replaceAll(RIGHT_DOUBLE_QUOTATION_MARK, QUOTE_DOUBLE)
+ .replaceAll(RIGHT_SINGLE_QUOTATION_MARK, QUOTE_SINGLE);
+ }
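The new `char_smart_remove` helper in the hunk above normalizes typographic "smart" quotes back to plain ASCII quotes. A self-contained sketch of the same transformation, assuming `QUOTE_DOUBLE` and `QUOTE_SINGLE` from `lib_char_punctuation.js` (not shown in this diff) are the plain `"` and `'` characters:

```javascript
// Assumed values of QUOTE_DOUBLE / QUOTE_SINGLE from lib_char_punctuation.js.
const QUOTE_DOUBLE = '"';
const QUOTE_SINGLE = "'";

const LEFT_DOUBLE_QUOTATION_MARK = "\u201C";  // “
const LEFT_SINGLE_QUOTATION_MARK = "\u2018";  // ‘
const RIGHT_DOUBLE_QUOTATION_MARK = "\u201D"; // ”
const RIGHT_SINGLE_QUOTATION_MARK = "\u2019"; // ’

// Replace every smart quote with its plain ASCII equivalent.
function char_smart_remove(text) {
  return text
    .replaceAll(LEFT_DOUBLE_QUOTATION_MARK, QUOTE_DOUBLE)
    .replaceAll(LEFT_SINGLE_QUOTATION_MARK, QUOTE_SINGLE)
    .replaceAll(RIGHT_DOUBLE_QUOTATION_MARK, QUOTE_DOUBLE)
    .replaceAll(RIGHT_SINGLE_QUOTATION_MARK, QUOTE_SINGLE);
}

console.log(char_smart_remove("\u201Chello\u201D and \u2018world\u2019"));
// "hello" and 'world'
```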
@@ -1,5 +1,14 @@
  import { EMPTY } from "./lib_char_empty.js";
  import { COLON, DASH, SPACE } from "./lib_char_punctuation.js";
+ export const DATETIME_WEEKDAYS = {
+ SUNDAY: 0,
+ MONDAY: 1,
+ TUESDAY: 2,
+ WEDNESDAY: 3,
+ THURSDAY: 4,
+ FRIDAY: 5,
+ SATURDAY: 6,
+ };
  export function datetime_now() {
  return new Date();
  }
@@ -8,6 +17,11 @@ export function datetime_now_minus_days(days) {
  date.setDate(date.getDate() - days);
  return date;
  }
+ export function datetime_now_plus_days(days) {
+ const date = datetime_now();
+ date.setDate(date.getDate() + days);
+ return date;
+ }
  export function datetime_parse(s) {
  return new Date(s);
  }
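The new `DATETIME_WEEKDAYS` constants follow JavaScript's `Date.prototype.getDay()` numbering (Sunday = 0), and `datetime_now_plus_days` is the additive counterpart of the existing `datetime_now_minus_days`. A quick sketch with the additions reproduced locally:

```javascript
// Weekday numbering matches Date.prototype.getDay(): Sunday is 0.
const DATETIME_WEEKDAYS = {
  SUNDAY: 0, MONDAY: 1, TUESDAY: 2, WEDNESDAY: 3,
  THURSDAY: 4, FRIDAY: 5, SATURDAY: 6,
};

function datetime_now() {
  return new Date();
}

// Counterpart of datetime_now_minus_days: setDate handles month rollover.
function datetime_now_plus_days(days) {
  const date = datetime_now();
  date.setDate(date.getDate() + days);
  return date;
}

// A date exactly 7 days ahead falls on the same weekday.
console.log(datetime_now_plus_days(7).getDay() === datetime_now().getDay());
```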
@@ -4,8 +4,8 @@ const model_name_default = "gpt-4.1-mini";
  const model_name_options = [
  "claude-3.5-haiku", // fallback
  "deepseek-chat",
- "gemini-2.0-flash",
  "gemini-2.5-flash",
+ "gemini-3-flash-preview-low",
  "gpt-4.1-mini", // the best
  "gpt-4.1-nano",
  "gpt-5-mini",
@@ -13,7 +13,6 @@ const model_name_options = [
  "gpt-5-nano",
  "gpt-5-nano-minimal",
  "grok-code-fast-1",
- "llama-4-maverick@cerebras",
  ];
  export const diffdash_llm_model_details = llm_model_get_details({ llm_model_names: model_name_options });
  export const diffdash_llm_model_choices = llm_model_get_choices({ llm_model_details: diffdash_llm_model_details });
@@ -5,6 +5,8 @@ import { createOpenAI } from "@ai-sdk/openai";
  import { createOpenRouter } from "@openrouter/ai-sdk-provider";
  import { abort_with_error } from "./lib_abort.js";
  import { env_get } from "./lib_env.js";
+ // Disable AI SDK warning logs temporarily
+ globalThis.AI_SDK_LOG_WARNINGS = false;
  export function llm_api_get_api_key_env(llm_api_code) {
  switch (llm_api_code) {
  case "anthropic":
@@ -1,4 +1,4 @@
- import { generateObject, generateText, stepCountIs } from "ai";
+ import { generateText, Output, stepCountIs } from "ai";
  import { debug_channels, debug_inspect_when } from "./lib_debug.js";
  import { Duration } from "./lib_duration.js";
  import { env_get_empty, env_get_substitute } from "./lib_env.js";
@@ -93,8 +93,7 @@ export async function llm_chat_generate_object({ llm_config, user_prompt, system
93
93
  model: ai_sdk_language_model,
94
94
  system: system_prompt,
95
95
  prompt: user_prompt,
96
- output: "object",
97
- schema,
96
+ output: Output.object({ schema }),
98
97
  maxOutputTokens: max_output_tokens_env ?? max_output_tokens,
99
98
  temperature,
100
99
  providerOptions: provider_options,
@@ -102,8 +101,8 @@ export async function llm_chat_generate_object({ llm_config, user_prompt, system
  };
  debug_inspect_when(debug_channels.llm_inputs, llm_inputs, `LLM inputs object (for ${llm_model_name})`);
  // This is liable to throw an error
- const llm_outputs = await generateObject(llm_inputs);
+ const llm_outputs = await generateText(llm_inputs);
  debug_inspect_when(debug_channels.llm_outputs, llm_outputs, `LLM outputs object (for ${llm_model_name})`);
- const { object: generated_object, usage: total_usage, providerMetadata: provider_metadata } = llm_outputs;
+ const { output: generated_object, usage: total_usage, providerMetadata: provider_metadata } = llm_outputs;
  return { generated_object, total_usage, provider_metadata };
  }
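The two hunks above track the AI SDK v5 → v6 migration for structured output: `generateObject` is replaced by `generateText` with an `Output.object({ schema })` option, and the parsed value moves from `result.object` to `result.output`. A minimal sketch of just the result-shape change, using hand-built plain objects in place of real SDK results (no API call is made; the field layout beyond those shown is illustrative):

```javascript
// Illustrative AI SDK v5 generateObject result: parsed value under `object`.
const v5_result = {
  object: { summary: "Fix typo in README" },
  usage: { inputTokens: 120, outputTokens: 15 },
  providerMetadata: {},
};

// Illustrative AI SDK v6 generateText result with Output.object({ schema }):
// the parsed value now lives under `output`.
const v6_result = {
  output: { summary: "Fix typo in README" },
  usage: { inputTokens: 120, outputTokens: 15 },
  providerMetadata: {},
};

// Old destructuring (pre-migration):
const { object: old_generated } = v5_result;

// New destructuring, as in the hunk above:
const { output: new_generated } = v6_result;

console.log(old_generated.summary === new_generated.summary); // true
```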
@@ -1,10 +1,10 @@
  import { DOLLAR } from "./lib_char_punctuation.js";
  import { stdio_write_stdout_linefeed } from "./lib_stdio_write.js";
  import { tell_info, tell_warning } from "./lib_tell.js";
- import { TuiTable } from "./lib_tui_table.js";
+ import { LEFT, RIGHT, TuiTable } from "./lib_tui_table.js";
  export function llm_list_models({ llm_model_details }) {
  const headings = ["NAME", "API", "CONTEXT", "INPUT", "OUTPUT", "REASONING"];
- const alignments = ["left", "left", "right", "right", "right", "left"];
+ const alignments = [LEFT, LEFT, RIGHT, RIGHT, RIGHT, LEFT];
  const table = new TuiTable({ headings, alignments, compact: true });
  for (const detail of llm_model_details) {
  const { llm_model_name, llm_api_code, context_window, cents_input, cents_output, default_reasoning } = detail;
@@ -18,6 +18,7 @@ export function llm_list_models({ llm_model_details }) {
  table.push(row);
  }
  stdio_write_stdout_linefeed(table.toString());
+ tell_info(`This is a total of ${llm_model_details.length} models.`);
  tell_info("Prices are per million tokens.");
  tell_warning("Prices are best effort and are liable to change - always double-check with your LLM API provider.");
  }
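The table printed by `llm_list_models` shows `cents_input`/`cents_output`, which, per the `tell_info` note above, are prices per million tokens. A hypothetical cost estimate built from those fields (the helper name `estimate_cost_cents` is illustrative, not part of the package), using the gemini-3-flash-preview pricing from this diff (`cents_input: 50`, `cents_output: 300`):

```javascript
// Estimate a request cost in cents from per-million-token prices,
// following the LLM_MODEL_DETAILS convention: cents per 1,000,000 tokens.
function estimate_cost_cents({ cents_input, cents_output, input_tokens, output_tokens }) {
  return (input_tokens / 1_000_000) * cents_input
       + (output_tokens / 1_000_000) * cents_output;
}

// 10,000 input tokens and 500 output tokens on gemini-3-flash-preview pricing:
const cents = estimate_cost_cents({
  cents_input: 50,
  cents_output: 300,
  input_tokens: 10_000,
  output_tokens: 500,
});
console.log(cents); // 0.65 (about two-thirds of a cent)
```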
@@ -15,6 +15,15 @@ function provider_options_anthropic({ thinking }) {
  }
  : undefined;
  }
+ function provider_options_google({ thinking_level }) {
+ return {
+ google: {
+ thinkingConfig: {
+ thinkingLevel: thinking_level,
+ },
+ },
+ };
+ }
  function provider_options_openai({ reasoning_effort, }) {
  return {
  openai: {
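The new `provider_options_google` helper above builds the Google provider options that select a Gemini 3 thinking level ("low" or "high", as used by the new `gemini-3-*-preview-*` entries further down). A standalone sketch of the object it produces, with the helper reproduced from the hunk:

```javascript
// Builds providerOptions for the Google provider with an explicit
// thinking level, as wired into the gemini-3-*-preview-{low,high} entries.
function provider_options_google({ thinking_level }) {
  return {
    google: {
      thinkingConfig: {
        thinkingLevel: thinking_level,
      },
    },
  };
}

const opts = provider_options_google({ thinking_level: "low" });
console.log(JSON.stringify(opts));
// {"google":{"thinkingConfig":{"thinkingLevel":"low"}}}
```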
@@ -46,18 +55,31 @@ export const LLM_MODEL_DETAILS = [
  provider_options: provider_options_anthropic({ thinking: false }),
  },
  {
- llm_model_name: "claude-3.7-sonnet",
- llm_model_code: "claude-3-7-sonnet-latest",
+ llm_model_name: "claude-opus-4.5",
+ llm_model_code: "claude-opus-4-5",
  llm_api_code: "anthropic",
  context_window: 200_000,
  max_output_tokens: 64_000,
- cents_input: 300,
- cents_output: 1500,
+ cents_input: 300, // for input tokens <= 200K
+ cents_output: 1500, // for input tokens <= 200K
  default_reasoning: false,
  has_structured_json: true,
  recommended_temperature: undefined,
  provider_options: provider_options_anthropic({ thinking: false }),
  },
+ {
+ llm_model_name: "claude-opus-4.5-thinking",
+ llm_model_code: "claude-opus-4-5",
+ llm_api_code: "anthropic",
+ context_window: 200_000,
+ max_output_tokens: 64_000 - 1024,
+ cents_input: 300, // for input tokens <= 200K
+ cents_output: 1500, // for input tokens <= 200K
+ default_reasoning: false,
+ has_structured_json: true,
+ recommended_temperature: undefined,
+ provider_options: provider_options_anthropic({ thinking: true }),
+ },
  {
  llm_model_name: "claude-sonnet-4",
  llm_model_code: "claude-sonnet-4-0",
@@ -88,7 +110,7 @@ export const LLM_MODEL_DETAILS = [
  llm_model_name: "claude-sonnet-4.5",
  llm_model_code: "claude-sonnet-4-5",
  llm_api_code: "anthropic",
- context_window: 1_000_000,
+ context_window: 200_000, // 1_000_000 available with context-1m beta header
  max_output_tokens: 64_000,
  cents_input: 300, // for input tokens <= 200K
  cents_output: 1500, // for input tokens <= 200K
@@ -101,7 +123,7 @@ export const LLM_MODEL_DETAILS = [
  llm_model_name: "claude-sonnet-4.5-thinking",
  llm_model_code: "claude-sonnet-4-5",
  llm_api_code: "anthropic",
- context_window: 1_000_000,
+ context_window: 200_000, // 1_000_000 available with context-1m beta header
  max_output_tokens: 62_976, // = 64000 - 1024 used for reasoning
  cents_input: 300, // for input tokens <= 200K
  cents_output: 1500, // for input tokens <= 200K
@@ -110,19 +132,6 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: provider_options_anthropic({ thinking: true }),
  },
- {
- llm_model_name: "codestral-2508",
- llm_model_code: "mistralai/codestral-2508",
- llm_api_code: "openrouter",
- context_window: 256_000,
- max_output_tokens: 256_000,
- cents_input: 30,
- cents_output: 90,
- default_reasoning: false,
- has_structured_json: true,
- recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "mistral" }),
- },
  {
  llm_model_name: "deepseek-chat",
  llm_model_code: "deepseek-chat",
@@ -175,19 +184,6 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: provider_options_openrouter({ only: "mistral" }),
  },
- {
- llm_model_name: "gemini-2.0-flash",
- llm_model_code: "gemini-2.0-flash",
- llm_api_code: "google",
- context_window: 1_048_576,
- max_output_tokens: 8192,
- cents_input: 10,
- cents_output: 40,
- default_reasoning: false,
- has_structured_json: true,
- recommended_temperature: undefined,
- provider_options: undefined,
- },
  {
  llm_model_name: "gemini-2.5-flash",
  llm_model_code: "gemini-2.5-flash",
@@ -209,56 +205,69 @@
  max_output_tokens: 65_536,
  cents_input: 125,
  cents_output: 1000,
- default_reasoning: false,
+ default_reasoning: true,
  has_structured_json: true,
  recommended_temperature: undefined,
  provider_options: undefined,
  },
  {
- llm_model_name: "glm-4-32b@z-ai",
- llm_model_code: "z-ai/glm-4-32b",
- llm_api_code: "openrouter",
- context_window: 128_000,
- max_output_tokens: 128_000,
- cents_input: 10,
- cents_output: 10,
- default_reasoning: false,
- has_structured_json: false,
+ llm_model_name: "gemini-3-flash-preview-high",
+ llm_model_code: "gemini-3-flash-preview",
+ llm_api_code: "google",
+ context_window: 1_048_576,
+ max_output_tokens: 65_536,
+ cents_input: 50,
+ cents_output: 300,
+ default_reasoning: true,
+ has_structured_json: true,
  recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "z-ai" }),
+ provider_options: provider_options_google({ thinking_level: "high" }),
  },
  {
- llm_model_name: "glm-4.5@z-ai",
- llm_model_code: "z-ai/glm-4.5",
- llm_api_code: "openrouter",
- context_window: 128_000,
- max_output_tokens: 96_000,
- cents_input: 60,
- cents_output: 220,
+ llm_model_name: "gemini-3-flash-preview-low",
+ llm_model_code: "gemini-3-flash-preview",
+ llm_api_code: "google",
+ context_window: 1_048_576,
+ max_output_tokens: 65_536,
+ cents_input: 50,
+ cents_output: 300,
  default_reasoning: true,
- has_structured_json: false,
+ has_structured_json: true,
  recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "z-ai" }),
+ provider_options: provider_options_google({ thinking_level: "low" }),
  },
  {
- llm_model_name: "glm-4.5-air@z-ai",
- llm_model_code: "z-ai/glm-4.5-air",
- llm_api_code: "openrouter",
- context_window: 128_000,
- max_output_tokens: 96_000,
- cents_input: 20,
- cents_output: 110,
+ llm_model_name: "gemini-3-pro-preview-high",
+ llm_model_code: "gemini-3-pro-preview",
+ llm_api_code: "google",
+ context_window: 1_048_576,
+ max_output_tokens: 65_536,
+ cents_input: 200,
+ cents_output: 1200,
  default_reasoning: true,
- has_structured_json: false,
+ has_structured_json: true,
  recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "z-ai" }),
+ provider_options: provider_options_google({ thinking_level: "high" }),
+ },
+ {
+ llm_model_name: "gemini-3-pro-preview-low",
+ llm_model_code: "gemini-3-pro-preview",
+ llm_api_code: "google",
+ context_window: 1_048_576,
+ max_output_tokens: 65_536,
+ cents_input: 200,
+ cents_output: 1200,
+ default_reasoning: false,
+ has_structured_json: true,
+ recommended_temperature: undefined,
+ provider_options: provider_options_google({ thinking_level: "low" }),
  },
  {
- llm_model_name: "glm-4.6@z-ai",
- llm_model_code: "z-ai/glm-4.6",
+ llm_model_name: "glm-4.7@z-ai",
+ llm_model_code: "z-ai/glm-4.7",
  llm_api_code: "openrouter",
- context_window: 128_000,
- max_output_tokens: 96_000,
+ context_window: 200_000,
+ max_output_tokens: 131_072,
  cents_input: 60,
  cents_output: 220,
  default_reasoning: true,
@@ -513,19 +522,6 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: undefined,
  },
- {
- llm_model_name: "kimi-k2-0711@groq",
- llm_model_code: "moonshotai/kimi-k2",
- llm_api_code: "openrouter",
- context_window: 131_072,
- max_output_tokens: 16_384,
- cents_input: 100,
- cents_output: 300,
- default_reasoning: false,
- has_structured_json: false,
- recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "groq" }),
- },
  {
  llm_model_name: "kimi-k2-0711@moonshotai",
  llm_model_code: "moonshotai/kimi-k2",
@@ -553,30 +549,30 @@ export const LLM_MODEL_DETAILS = [
  provider_options: provider_options_openrouter({ only: "groq" }),
  },
  {
- llm_model_name: "llama-4-maverick@cerebras",
+ llm_model_name: "llama-4-maverick@groq",
  llm_model_code: "meta-llama/llama-4-maverick",
  llm_api_code: "openrouter",
- context_window: 32_768,
- max_output_tokens: 32_768,
+ context_window: 131_072,
+ max_output_tokens: 8192,
  cents_input: 20,
  cents_output: 60,
  default_reasoning: false,
  has_structured_json: true,
  recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "cerebras" }),
+ provider_options: provider_options_openrouter({ only: "groq" }),
  },
  {
- llm_model_name: "llama-4-scout@cerebras",
+ llm_model_name: "llama-4-scout@groq",
  llm_model_code: "meta-llama/llama-4-scout",
  llm_api_code: "openrouter",
- context_window: 32_000,
- max_output_tokens: 32_000,
- cents_input: 65,
- cents_output: 85,
+ context_window: 131_072,
+ max_output_tokens: 8192,
+ cents_input: 11,
+ cents_output: 34,
  default_reasoning: false,
  has_structured_json: true,
  recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "cerebras" }),
+ provider_options: provider_options_openrouter({ only: "groq" }),
  },
  {
  llm_model_name: "longcat-flash",
@@ -617,6 +613,19 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: undefined,
  },
+ {
+ llm_model_name: "minimax-m2.1",
+ llm_model_code: "minimax/minimax-m2.1",
+ llm_api_code: "openrouter",
+ context_window: 204_800,
+ max_output_tokens: 131_072,
+ cents_input: 30,
+ cents_output: 120,
+ default_reasoning: false,
+ has_structured_json: false,
+ recommended_temperature: undefined,
+ provider_options: provider_options_openrouter({ only: "minimax" }),
+ },
  {
  llm_model_name: "mistral-medium-3.1",
  llm_model_code: "mistralai/mistral-medium-3.1",
@@ -643,19 +652,6 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: provider_options_openrouter({ only: "cerebras" }),
  },
- {
- llm_model_name: "qwen3-235b-a22b-2507-thinking@cerebras",
- llm_model_code: "qwen/qwen3-235b-a22b-thinking-2507",
- llm_api_code: "openrouter",
- context_window: 131_072,
- max_output_tokens: 131_072,
- cents_input: 60,
- cents_output: 120,
- default_reasoning: true,
- has_structured_json: true,
- recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "cerebras" }),
- },
  {
  llm_model_name: "qwen3-coder@alibaba",
  llm_model_code: "qwen/qwen3-coder",
@@ -669,19 +665,6 @@ export const LLM_MODEL_DETAILS = [
  recommended_temperature: undefined,
  provider_options: provider_options_openrouter({ only: "alibaba/opensource" }),
  },
- {
- llm_model_name: "qwen3-coder@cerebras",
- llm_model_code: "qwen/qwen3-coder",
- llm_api_code: "openrouter",
- context_window: 131_072,
- max_output_tokens: 131_072,
- cents_input: 200,
- cents_output: 200,
- default_reasoning: false,
- has_structured_json: true,
- recommended_temperature: undefined,
- provider_options: provider_options_openrouter({ only: "cerebras" }),
- },
  {
  llm_model_name: "qwen-plus@alibaba",
  llm_model_code: "qwen/qwen-plus-2025-07-28",
@@ -26,19 +26,20 @@ export function llm_results_summary(all_results) {
  const { default_reasoning } = llm_model_detail;
  const { outputs } = result;
  const { total_usage, provider_metadata } = outputs;
+ const { reasoningTokens: reasoning_tokens } = total_usage.outputTokenDetails;
  const openrouter_provider = provider_metadata?.["openrouter"]?.["provider"];
  const tui_model = tui_justify_left(max_length_model, llm_model_name);
  const tui_seconds = tui_number_plain({ num: seconds, justify_left: 3 });
  const tui_input = tui_number_plain({ num: total_usage.inputTokens, justify_left: 5 });
  const tui_output = tui_number_plain({ num: total_usage.outputTokens, justify_left: 5 });
- const tui_reasoning = tui_number_plain({ num: total_usage.reasoningTokens, justify_left: 5, none: QUESTION });
+ const tui_reasoning = tui_number_plain({ num: reasoning_tokens, justify_left: 5, none: QUESTION });
  const tui_provider = tui_none_blank(openrouter_provider);
  const segments = [];
  segments.push(tui_model);
  segments.push(`seconds=${tui_seconds}`);
  segments.push(`input=${tui_input}`);
  segments.push(`output=${tui_output}`);
- if (default_reasoning || total_usage.reasoningTokens !== undefined) {
+ if (default_reasoning || reasoning_tokens !== undefined) {
  segments.push(`reasoning=${tui_reasoning}`);
  }
  if (openrouter_provider) {
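The hunk above follows another detail of the AI SDK v6 migration: the reasoning token count moves from `usage.reasoningTokens` into `usage.outputTokenDetails.reasoningTokens`. A small sketch of the relocated field, using a hand-built usage object in the new shape (the surrounding fields are illustrative):

```javascript
// Illustrative AI SDK v6 usage shape: reasoning tokens now sit under
// outputTokenDetails rather than directly on the usage object.
const total_usage = {
  inputTokens: 1200,
  outputTokens: 180,
  outputTokenDetails: { reasoningTokens: 96 },
};

// New-style read, mirroring the destructuring in the hunk above:
const { reasoningTokens: reasoning_tokens } = total_usage.outputTokenDetails;

// The old-style read now comes back undefined:
console.log(total_usage.reasoningTokens); // undefined
console.log(reasoning_tokens); // 96
```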
@@ -1,6 +1,9 @@
  import cli_table3 from "cli-table3";
  import { abort_with_error } from "./lib_abort.js";
  import { ansi_bold } from "./lib_ansi.js";
+ export const LEFT = "left";
+ export const CENTER = "center";
+ export const RIGHT = "right";
  export class TuiTable {
  table;
  columns_total;
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@johnowennixon/diffdash",
- "version": "1.10.0",
+ "version": "1.12.0",
  "description": "A command-line tool to generate Git commit messages using AI",
  "license": "0BSD",
  "author": "John Owen Nixon",
@@ -18,48 +18,17 @@
  "bin": {
  "diffdash": "dist/src/diffdash.js"
  },
- "dependencies": {
- "@ai-sdk/anthropic": "2.0.23",
- "@ai-sdk/deepseek": "1.0.20",
- "@ai-sdk/google": "2.0.17",
- "@ai-sdk/openai": "2.0.42",
- "@inquirer/prompts": "7.8.6",
- "@openrouter/ai-sdk-provider": "1.2.0",
- "ai": "5.0.60",
- "ansis": "4.2.0",
- "argparse": "2.0.1",
- "cli-table3": "0.6.5",
- "json5": "2.2.3",
- "magic-regexp": "0.10.0",
- "simple-git": "3.28.0",
- "zod": "4.1.11"
- },
- "devDependencies": {
- "@biomejs/biome": "2.2.5",
- "@candide/tsgolint": "1.4.0",
- "@johnowennixon/add-shebangs": "1.1.0",
- "@johnowennixon/chmodx": "2.1.0",
- "@types/argparse": "2.0.17",
- "@types/node": "24.5.2",
- "@typescript/native-preview": "7.0.0-dev.20250925.1",
- "knip": "5.63.1",
- "markdownlint-cli2": "0.18.1",
- "npm-run-all2": "8.0.4",
- "oxlint": "1.19.0",
- "rimraf": "6.0.1",
- "typescript": "5.9.3"
- },
  "scripts": {
  "build": "run-s -ls build:clean build:tsc build:shebang build:chmod",
  "build:chmod": "echo 'Changing bin files to be executable' && chmodx --package",
  "build:clean": "echo 'Removing dist' && rimraf dist",
  "build:shebang": "echo 'Fixing the shebangs' && add-shebangs --node --exclude 'dist/**/lib_*.js' 'dist/**/*.js'",
- "build:tsc": "echo 'Transpiling TypeScript to dist (using tsc)' && tsc --erasableSyntaxOnly --libReplacement false",
+ "build:tsc": "echo 'Transpiling TypeScript to dist (using tsc)' && tsc",
  "build:tsgo": "echo 'Transpiling TypeScript to dist (using tsgo)' && tsgo || (rimraf dist && false)",
  "fix": "run-s -ls fix:biome fix:markdownlint",
  "fix:biome": "echo 'Fixing with Biome' && biome check --write",
  "fix:docbot": "echo 'Fixing with DocBot' && docbot --prune --generate",
- "fix:markdownlint": "echo 'Fixing with markdownlint' && markdownlint-cli2 '**/*.md' --fix",
+ "fix:markdownlint": "echo 'Fixing with Markdownlint' && markdownlint-cli2 '**/*.md' --fix",
  "fix:oxlint": "echo 'Fixing with Oxlint' && oxlint --fix",
  "lint": "run-s -ls lint:biome lint:oxlint lint:tsgolint lint:knip lint:markdownlint",
  "lint:biome": "echo 'Linting with Biome' && biome check",
@@ -67,9 +36,40 @@
  "lint:knip": "echo 'Linting with Knip' && knip",
  "lint:markdownlint": "echo 'Linting with Markdownlint' && markdownlint-cli2 '**/*.md'",
  "lint:oxlint": "echo 'Linting with Oxlint' && oxlint",
- "lint:tsc": "echo 'Linting with tsc' && tsc --noEmit --erasableSyntaxOnly --libReplacement false",
+ "lint:tsc": "echo 'Linting with tsc' && tsc --noEmit",
  "lint:tsgo": "echo 'Linting with tsgo' && tsgo --noEmit",
  "lint:tsgolint": "echo 'Linting with tsgolint' && candide-tsgolint",
  "test": "run-s -ls lint build"
+ },
+ "dependencies": {
+ "@ai-sdk/anthropic": "3.0.8",
+ "@ai-sdk/deepseek": "2.0.4",
+ "@ai-sdk/google": "3.0.5",
+ "@ai-sdk/openai": "3.0.21",
+ "@inquirer/prompts": "8.2.0",
+ "@openrouter/ai-sdk-provider": "2.1.1",
+ "ai": "6.0.17",
+ "ansis": "4.2.0",
+ "argparse": "2.0.1",
+ "cli-table3": "0.6.5",
+ "json5": "2.2.3",
+ "magic-regexp": "0.10.0",
+ "simple-git": "3.30.0",
+ "zod": "4.3.6"
+ },
+ "devDependencies": {
+ "@biomejs/biome": "2.3.13",
+ "@candide/tsgolint": "1.5.0",
+ "@johnowennixon/add-shebangs": "1.1.0",
+ "@johnowennixon/chmodx": "2.1.0",
+ "@types/argparse": "2.0.17",
+ "@types/node": "25.0.3",
+ "@typescript/native-preview": "7.0.0-dev.20260103.1",
+ "knip": "5.82.1",
+ "markdownlint-cli2": "0.20.0",
+ "npm-run-all2": "8.0.4",
+ "oxlint": "1.42.0",
+ "rimraf": "6.1.2",
+ "typescript": "5.9.3"
  }
- }
+ }