reasonix 0.3.1 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,20 +6,25 @@
6
6
  [![downloads](https://img.shields.io/npm/dm/reasonix.svg)](https://www.npmjs.com/package/reasonix)
7
7
  [![node](https://img.shields.io/node/v/reasonix.svg)](./package.json)
8
8
 
9
- **The DeepSeek-native agent framework.** TypeScript. Ink TUI. No LangChain.
10
-
11
- Reasonix is not another generic agent wrapper. Every abstraction is justified
12
- by a DeepSeek-specific property — dirt-cheap tokens, R1 reasoning traces,
13
- automatic prefix caching, JSON mode. Generic frameworks treat DeepSeek as
14
- "OpenAI with a different base URL" and leave these advantages on the table.
15
- Reasonix leans into them.
9
+ **A DeepSeek-native AI coding assistant in your terminal.** Ink TUI. MCP
10
+ first-class. No LangChain.
16
11
 
17
12
  ```bash
18
- npx reasonix chat # first run prompts for your DeepSeek key
19
- # inside the TUI, type /help for everything else
13
+ npx reasonix
20
14
  ```
21
15
 
22
- No flag soup. All feature toggles live behind slash commands in the TUI.
16
+ One command. First run walks you through a 30-second wizard (API key →
17
+ preset → pick MCP servers from a checklist); every run after that drops
18
+ straight into chat with your tools wired up. Inside the chat, type `/help`.
19
+
20
+ Why bother with yet another agent framework? Because every abstraction
21
+ here earns its weight against a DeepSeek-specific property — dirt-cheap
22
+ tokens, R1 reasoning traces, automatic prefix caching, JSON mode.
23
+ Generic wrappers treat DeepSeek as "OpenAI with a different base URL"
24
+ and leave these advantages on the table. Reasonix leans into them:
25
+ on the same τ-bench-lite workload,
26
+ [**94.4% cache hit, ~40% cheaper tokens, 100% pass rate**](#validated-numbers)
27
+ vs. a cache-hostile baseline.
23
28
 
24
29
  ---
25
30
 
@@ -27,12 +32,15 @@ No flag soup. All feature toggles live behind slash commands in the TUI.
27
32
 
28
33
  | Feature | How it works | Opt in |
29
34
  |---|---|---|
35
+ | **Setup wizard** | First run of `npx reasonix`: pick preset, multi-select MCP servers from a curated catalog, saved to config so the next run just launches chat | always on (first run) |
36
+ | **MCP (stdio + SSE)** | Multi-server bridge — every MCP tool inherits Cache-First + repair + context-safety automatically. `reasonix mcp list` shows the catalog | always on |
30
37
  | **Cache-First Loop** | Immutable prefix + append-only log = prefix byte-stable across turns → DeepSeek's automatic prefix cache hits at 70–95% | always on |
31
- | **R1 Thought Harvesting** | Parses `reasoning_content` into typed `{ subgoals, hypotheses, uncertainties, rejectedPaths }` via a cheap V3 call | `--harvest` |
32
- | **Self-Consistency Branching** | Runs N parallel samples at spread temperatures; picks the one with the fewest flagged uncertainties | `--branch <N>` |
38
+ | **Context safety net** | Tool results capped at 32k chars · oversized sessions auto-heal on load · `/compact` to shrink further · ctx gauge in the status bar · Esc to abort exploration and get a forced summary | always on |
39
+ | **R1 Thought Harvesting** | Parses `reasoning_content` into typed `{ subgoals, hypotheses, uncertainties, rejectedPaths }` via a cheap V3 call | `/preset smart` |
40
+ | **Self-Consistency Branching** | Runs N parallel samples at spread temperatures; picks the one with the fewest flagged uncertainties | `/preset max` / `/branch N` |
33
41
  | **Tool-Call Repair** | Auto-flattens deep/wide schemas, scavenges tool calls leaked into `<think>`, repairs truncated JSON, breaks call-storms | always on |
34
42
  | **Retry layer** | Exponential backoff + jitter on 408/429/500/502/503/504 and network errors. 4xx auth errors don't retry | always on |
35
- | **Ink TUI** | Live cache-hit / cost panel. Streams R1 thinking to a compact preview. Renders Markdown (bold / lists / code / stripped LaTeX) | always on |
43
+ | **Ink TUI** | Live cache-hit / cost / context panel. Streams R1 thinking to a compact preview. Renders Markdown (bold / lists / code / stripped LaTeX) | always on |
36
44
 
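The Cache-First Loop row hinges on one invariant: the serialized message prefix must stay byte-identical across turns so DeepSeek's automatic prefix cache can hit. A minimal sketch of that idea (all names hypothetical, not Reasonix's actual API):

```typescript
// Hypothetical sketch: freeze the system prompt in an immutable prefix
// and only ever append to the log, so every request body starts with
// the exact same serialized bytes -- the property the prefix cache needs.
type Msg = { role: "system" | "user" | "assistant" | "tool"; content: string };

class CacheFirstSketch {
  private readonly prefix: readonly Msg[]; // immutable after construction
  private readonly log: Msg[] = [];        // append-only

  constructor(systemPrompt: string) {
    this.prefix = Object.freeze([{ role: "system", content: systemPrompt }]);
  }

  append(msg: Msg): void {
    this.log.push(msg); // never mutate or reorder earlier entries
  }

  buildMessages(): Msg[] {
    return [...this.prefix, ...this.log];
  }

  prefixBytes(): string {
    return JSON.stringify(this.prefix);
  }
}

const loop = new CacheFirstSketch("You are a coding assistant.");
const before = loop.prefixBytes();
loop.append({ role: "user", content: "turn 1" });
loop.append({ role: "assistant", content: "reply 1" });
const after = loop.prefixBytes();
console.log(before === after); // prefix stayed byte-stable across turns
```

Anything that rewrites earlier messages in place (summarizing old turns, re-rendering the system prompt) breaks this invariant and drops the cache hit rate to zero.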
37
45
  ---
38
46
 
@@ -91,10 +99,12 @@ with your own API key: `npx tsx benchmarks/tau-bench/runner.ts --repeats 3`.
91
99
 
92
100
  [r]: ./benchmarks/tau-bench/report.md
93
101
 
94
- ### Extends to MCP (v0.3-alpha)
102
+ ### MCP works out of the box
95
103
 
96
104
  Any [MCP](https://spec.modelcontextprotocol.io/) server's tools inherit
97
- the same Cache-First benefits. Two live runs, two data points:
105
+ Cache-First + repair + context-safety automatically. The wizard (`npx
106
+ reasonix`) lets you multi-select from a curated catalog — no flags, no
107
+ JSON-by-hand. Three live reference runs:
98
108
 
99
109
  | server | turns | tool calls | cache hit | cost | vs Claude |
100
110
  |---|---:|---:|---:|---:|---:|
@@ -103,40 +113,21 @@ the same Cache-First benefits. Two live runs, two data points:
103
113
  | **both concurrently** (`demo_add` + `fs_write_file`) | 5 | 4 | **81.1%** | $0.001852 | −95.9% |
104
114
 
105
115
  The third row is the ecosystem proof: two MCP servers running as
106
- separate subprocesses, tools from both exercised in one conversation
107
- (compute `17+25` with the demo server, write the result to a real file
108
- via the filesystem server). **One single prefix hash across all 5
109
- turns** — byte-stability survives concurrent MCP subprocesses.
116
+ separate subprocesses, tools from both exercised in one conversation.
117
+ **One single prefix hash across all 5 turns**: byte-stability survives
118
+ concurrent MCP subprocesses.
110
119
 
111
- **Reproduce without an API key** (replay the committed transcripts):
120
+ Reproduce without an API key (replay the committed transcripts):
112
121
 
113
122
  ```bash
114
123
  npx reasonix replay benchmarks/tau-bench/transcripts/mcp-demo.add.jsonl
115
124
  npx reasonix replay benchmarks/tau-bench/transcripts/mcp-filesystem.jsonl
116
125
  ```
117
126
 
118
- **Reproduce with your own key** (live, ~$0.002):
119
-
120
- ```bash
121
- # Don't know what MCP servers exist? Start here:
122
- reasonix mcp list
123
- # Prints a curated catalog (filesystem, fetch, github, sqlite, …) with
124
- # ready-to-paste --mcp commands.
125
-
126
- # One server:
127
- reasonix chat --mcp "filesystem=npx -y @modelcontextprotocol/server-filesystem /tmp/safe"
128
-
129
- # Multiple servers at once — each gets its own namespace prefix:
130
- reasonix chat \
131
- --mcp "fs=npx -y @modelcontextprotocol/server-filesystem /tmp/safe" \
132
- --mcp "mem=npx -y @modelcontextprotocol/server-memory"
133
- # Tools land in a shared registry as fs_read_file, mem_set, etc.
134
-
135
- # Remote / hosted MCP server — pass an http(s) URL instead of a command.
136
- # Reasonix opens an SSE stream and POSTs JSON-RPC to the endpoint the
137
- # server advertises (MCP 2024-11-05 HTTP+SSE transport).
138
- reasonix chat --mcp "kb=https://mcp.example.com/sse"
139
- ```
127
+ Supported transports: **stdio** (local `npx` or binary) and **HTTP+SSE**
128
+ (remote / hosted servers, MCP 2024-11-05 spec). Pass an `http(s)://`
129
+ URL to `--mcp` and Reasonix opens the SSE stream and POSTs JSON-RPC
130
+ to the endpoint the server advertises.
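The transport split described above can be sketched as a tiny spec parser (illustrative only; Reasonix's real parser may differ): a `name=target` pair routes to HTTP+SSE when the target is an `http(s)://` URL, and to stdio otherwise.

```typescript
// Illustrative only: split an "--mcp name=target" spec into the two
// transports the README describes. Not Reasonix's actual parser.
type McpSpec =
  | { name: string; transport: "stdio"; command: string }
  | { name: string; transport: "sse"; url: string };

function parseMcpSpecSketch(input: string): McpSpec {
  const eq = input.indexOf("=");
  if (eq <= 0) throw new Error(`expected name=target, got "${input}"`);
  const name = input.slice(0, eq).trim();
  const target = input.slice(eq + 1).trim();
  return /^https?:\/\//.test(target)
    ? { name, transport: "sse", url: target }
    : { name, transport: "stdio", command: target };
}

console.log(parseMcpSpecSketch("kb=https://mcp.example.com/sse").transport); // "sse"
console.log(
  parseMcpSpecSketch(
    "fs=npx -y @modelcontextprotocol/server-filesystem /tmp/safe"
  ).transport
); // "stdio"
```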
140
131
 
141
132
  [mcp]: ./benchmarks/tau-bench/transcripts/mcp-demo.add.jsonl
142
133
 
@@ -144,55 +135,66 @@ reasonix chat --mcp "kb=https://mcp.example.com/sse"
144
135
 
145
136
  ## Usage
146
137
 
147
- ### CLI
138
+ ### One command
148
139
 
149
140
  ```bash
150
- npx reasonix chat # auto-saves to session 'default'; resumes next time
151
- npx reasonix chat --session work # use a different named session
152
- npx reasonix chat --no-session # ephemeral — nothing persisted
153
- npx reasonix run "ask anything" # one-shot, streams to stdout
154
- npx reasonix stats session.jsonl # quick summary of a transcript
155
- npx reasonix replay chat.jsonl # pretty-print a transcript + rebuild cost/cache offline
156
- npx reasonix diff a.jsonl b.jsonl --md diff.md # compare two transcripts: cache/cost delta + first divergence
141
+ npx reasonix
157
142
  ```
158
143
 
159
- Sessions live as JSONL under `~/.reasonix/sessions/<name>.jsonl` every
160
- turn's message log is appended atomically, so killing the CLI never loses
161
- context. Inside the TUI: `/sessions` to list, `/forget` to delete the
162
- current one.
144
+ First run: a wizard asks for your API key, lets you pick a preset
145
+ (fast / smart / max), then offers a multi-select checklist of MCP
146
+ servers (filesystem, memory, github, puppeteer, and more). Your picks
147
+ are saved to `~/.reasonix/config.json`. Subsequent runs drop straight
148
+ into chat.
163
149
 
164
- ### Inside the chat — slash commands
150
+ ### Inside the chat
165
151
 
166
- A command strip runs under the input box so you don't have to memorize
167
- anything. Type `/help` for the full list. The biggest shortcut:
152
+ A status bar at the top shows cache hit %, cost, Claude-equivalent, and
153
+ the **context gauge** (`ctx 42k/131k (32%)`; yellow at 50%, red + a
154
+ `/compact` nudge at 80%). A command strip under the input lists the
155
+ slash commands:
168
156
 
169
157
  ```
170
- /preset fast deepseek-chat, no harvest, no branch (default)
171
- /preset smart reasoner + harvest (~10x cost)
172
- /preset max reasoner + harvest + branch 3 (~30x cost, slowest)
158
+ /help full list + hints
159
+ /preset <fast|smart|max> one-tap bundles (model + harvest + branch)
160
+ /mcp list attached MCP servers and tools
161
+ /compact [cap] shrink oversized tool results in history
162
+ /sessions · /forget list / delete saved sessions
163
+ /setup reconfigure (exits and tells you to run `reasonix setup`)
164
+ /clear · /exit
173
165
  ```
174
166
 
175
- One-tap switch between fast daily driver, careful thinker, and max-quality
176
- self-consistency. Individual knobs are available too:
167
+ **Esc while thinking**: abort the current exploration and force the
168
+ model to summarize what it already found. No more "model ran 24 tool
169
+ calls and gave up" — you get an answer every time.
177
170
 
178
- ```
179
- /status show current model / harvest / branch / stream
180
- /model <id> deepseek-chat or deepseek-reasoner
181
- /harvest [on|off] Pillar 2 parse R1 reasoning into typed plan state
182
- /branch <N|off> run N parallel samples per turn, pick most confident
183
- /clear clear displayed history (log is kept)
184
- /exit quit
185
- ```
171
+ Sessions live as JSONL under `~/.reasonix/sessions/<name>.jsonl` —
172
+ every message appended atomically, so killing the CLI never loses
173
+ context. Oversized tool results auto-heal on load, so poisoning a
174
+ session with one giant `read_file` doesn't brick your history.
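The two session guarantees above can be sketched in a few lines (hypothetical helpers, not Reasonix's actual code): each message is one appended JSONL line, and oversized tool results are truncated when the file is loaded back.

```typescript
// Hypothetical sketch: (1) each message is one JSONL line written with
// fs append semantics, so a killed process never leaves a half-written
// history; (2) oversized tool results are truncated ("healed") on load.
import { appendFileSync, readFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const TOOL_RESULT_CAP = 32_000; // chars, matching the cap in the feature table

function appendTurn(path: string, msg: { role: string; content: string }): void {
  appendFileSync(path, JSON.stringify(msg) + "\n"); // append-only, one line per message
}

function loadHealed(path: string): { role: string; content: string }[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .map((msg) =>
      msg.role === "tool" && msg.content.length > TOOL_RESULT_CAP
        ? { ...msg, content: msg.content.slice(0, TOOL_RESULT_CAP) + "\n[truncated on load]" }
        : msg
    );
}

// demo: a giant tool result is capped when the session is reloaded
const path = join(mkdtempSync(join(tmpdir(), "reasonix-sketch-")), "default.jsonl");
appendTurn(path, { role: "user", content: "hi" });
appendTurn(path, { role: "tool", content: "x".repeat(50_000) });
const healed = loadHealed(path);
console.log(healed[1].content.endsWith("[truncated on load]")); // true
```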
186
175
 
187
- The top panel shows active flags live: `· harvest · branch3` appear next to
188
- the model once enabled.
176
+ ### Advanced CLI subcommands and flags
189
177
 
190
- ### Flags (for automation / CI)
178
+ ```bash
179
+ npx reasonix setup # reconfigure any time
180
+ npx reasonix chat --session work # a different named session
181
+ npx reasonix chat --no-session # ephemeral — nothing persisted
182
+ npx reasonix run "ask anything" # one-shot, streams to stdout
183
+ npx reasonix stats session.jsonl # summarize a transcript
184
+ npx reasonix replay chat.jsonl # scrub a transcript + rebuild cost/cache
185
+ npx reasonix diff a.jsonl b.jsonl --md # compare two transcripts
186
+ npx reasonix mcp list # curated MCP server catalog
187
+ ```
191
188
 
192
- The same knobs are also available as CLI flags if you're scripting:
189
+ Power users can still bypass config and drive Reasonix with flags:
193
190
 
194
191
  ```bash
195
- npx reasonix chat -m deepseek-reasoner --harvest --branch 3 --transcript session.jsonl
192
+ npx reasonix chat \
193
+ --preset max \
194
+ --mcp "filesystem=npx -y @modelcontextprotocol/server-filesystem /tmp/safe" \
195
+ --mcp "kb=https://mcp.example.com/sse" \
196
+ --transcript session.jsonl \
197
+ --no-config # ignore ~/.reasonix/config.json (for CI / reproducing issues)
196
198
  ```
197
199
 
198
200
  ### Library
@@ -238,16 +240,19 @@ console.log(loop.stats.summary());
238
240
 
239
241
  ### Configuration
240
242
 
241
- On first run the CLI prompts for your DeepSeek API key and saves it to
242
- `~/.reasonix/config.json`. Alternatives:
243
+ The wizard handles everything on first run. If you'd rather use env vars
244
+ (CI, shared boxes, etc.):
243
245
 
244
246
  ```bash
245
- export DEEPSEEK_API_KEY=sk-... # env var (wins over config file)
247
+ export DEEPSEEK_API_KEY=sk-... # wins over ~/.reasonix/config.json
246
248
  export DEEPSEEK_BASE_URL=https://... # optional alternate endpoint
247
249
  ```
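The precedence rule above ("wins over" the config file) amounts to a one-line fallback; a hypothetical helper, not Reasonix's actual code:

```typescript
// Hypothetical sketch of the precedence described above: the
// DEEPSEEK_API_KEY env var wins over ~/.reasonix/config.json.
function resolveApiKey(
  env: { DEEPSEEK_API_KEY?: string },
  config: { apiKey?: string }
): string | undefined {
  return env.DEEPSEEK_API_KEY ?? config.apiKey;
}

console.log(resolveApiKey({ DEEPSEEK_API_KEY: "sk-env" }, { apiKey: "sk-file" })); // "sk-env"
console.log(resolveApiKey({}, { apiKey: "sk-file" })); // "sk-file"
```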
248
250
 
249
251
  Get a key (free credit on signup): <https://platform.deepseek.com/api_keys>
250
252
 
253
+ Re-run `npx reasonix setup` any time to add/remove MCP servers or switch
254
+ preset — your existing selections are pre-checked.
255
+
251
256
  ---
252
257
 
253
258
  ## Non-goals
@@ -269,7 +274,7 @@ cd reasonix
269
274
  npm install
270
275
  npm run dev chat # run CLI from source via tsx
271
276
  npm run build # tsup to dist/
272
- npm test # vitest (89 tests)
277
+ npm test # vitest (279 tests)
273
278
  npm run lint # biome
274
279
  npm run typecheck # tsc --noEmit
275
280
  ```
package/dist/cli/index.js CHANGED
@@ -1155,6 +1155,12 @@ var CacheFirstLoop = class {
1155
1155
  resumedMessageCount;
1156
1156
  _turn = 0;
1157
1157
  _streamPreference;
1158
+ /**
1159
+ * Set by {@link abort} to short-circuit the tool-call loop after the
1160
+ * current iteration. Reset at the start of each `step()` so an Esc
1161
+ * during one turn doesn't poison the next.
1162
+ */
1163
+ _aborted = false;
1158
1164
  constructor(opts) {
1159
1165
  this.client = opts.client;
1160
1166
  this.prefix = opts.prefix;
@@ -1266,12 +1272,42 @@ var CacheFirstLoop = class {
1266
1272
  if (pendingUser !== null) msgs.push({ role: "user", content: pendingUser });
1267
1273
  return msgs;
1268
1274
  }
1275
+ /**
1276
+ * Signal the currently-running {@link step} that the user wants to
1277
+ * stop exploring. Takes effect at the next iteration boundary — if a
1278
+ * tool call is mid-flight it will be allowed to finish, then the
1279
+ * loop diverts to the forced-summary path so the user gets an
1280
+ * answer instead of a cliff. Called by the TUI on Esc.
1281
+ */
1282
+ abort() {
1283
+ this._aborted = true;
1284
+ }
1269
1285
  async *step(userInput) {
1270
1286
  this._turn++;
1271
1287
  this.scratch.reset();
1288
+ this._aborted = false;
1272
1289
  let pendingUser = userInput;
1273
1290
  const toolSpecs = this.prefix.tools();
1291
+ const warnAt = Math.max(1, Math.floor(this.maxToolIters * 0.7));
1292
+ let warnedForIterBudget = false;
1274
1293
  for (let iter = 0; iter < this.maxToolIters; iter++) {
1294
+ if (this._aborted) {
1295
+ yield {
1296
+ turn: this._turn,
1297
+ role: "warning",
1298
+ content: `aborted at iter ${iter}/${this.maxToolIters} \u2014 forcing summary from what was gathered`
1299
+ };
1300
+ yield* this.forceSummaryAfterIterLimit({ reason: "aborted" });
1301
+ return;
1302
+ }
1303
+ if (!warnedForIterBudget && iter >= warnAt) {
1304
+ warnedForIterBudget = true;
1305
+ yield {
1306
+ turn: this._turn,
1307
+ role: "warning",
1308
+ content: `${iter}/${this.maxToolIters} tool calls used \u2014 approaching budget. Press Esc to force a summary now.`
1309
+ };
1310
+ }
1275
1311
  const messages = this.buildMessages(pendingUser);
1276
1312
  let assistantContent = "";
1277
1313
  let reasoningContent = "";
@@ -1459,9 +1495,9 @@ var CacheFirstLoop = class {
1459
1495
  };
1460
1496
  }
1461
1497
  }
1462
- yield* this.forceSummaryAfterIterLimit();
1498
+ yield* this.forceSummaryAfterIterLimit({ reason: "budget" });
1463
1499
  }
1464
- async *forceSummaryAfterIterLimit() {
1500
+ async *forceSummaryAfterIterLimit(opts = { reason: "budget" }) {
1465
1501
  try {
1466
1502
  const messages = this.buildMessages(null);
1467
1503
  const resp = await this.client.chat({
@@ -1470,7 +1506,8 @@ var CacheFirstLoop = class {
1470
1506
  // no tools → model is forced to answer in text
1471
1507
  });
1472
1508
  const summary = resp.content?.trim() || "(model returned no text; try a narrower question or raise --max-tool-iters)";
1473
- const annotated = `[tool-call budget (${this.maxToolIters}) reached \u2014 forcing summary from what I found]
1509
+ const reasonPrefix = opts.reason === "aborted" ? "[aborted by user (Esc) \u2014 summarizing what I found so far]" : `[tool-call budget (${this.maxToolIters}) reached \u2014 forcing summary from what I found]`;
1510
+ const annotated = `${reasonPrefix}
1474
1511
 
1475
1512
  ${summary}`;
1476
1513
  const summaryStats = this.stats.record(this._turn, this.model, resp.usage ?? new Usage());
@@ -1483,11 +1520,12 @@ ${summary}`;
1483
1520
  };
1484
1521
  yield { turn: this._turn, role: "done", content: summary };
1485
1522
  } catch (err) {
1523
+ const label = opts.reason === "aborted" ? "aborted by user" : `tool-call budget (${this.maxToolIters}) reached`;
1486
1524
  yield {
1487
1525
  turn: this._turn,
1488
1526
  role: "error",
1489
1527
  content: "",
1490
- error: `tool-call budget (${this.maxToolIters}) reached and the fallback summary call failed: ${err.message}. Run /clear and retry with a narrower question, or pass --max-tool-iters higher.`
1528
+ error: `${label} and the fallback summary call failed: ${err.message}. Run /clear and retry with a narrower question, or raise --max-tool-iters.`
1491
1529
  };
1492
1530
  yield { turn: this._turn, role: "done", content: "" };
1493
1531
  }
@@ -2533,14 +2571,14 @@ function parseMcpSpec(input) {
2533
2571
  }
2534
2572
 
2535
2573
  // src/index.ts
2536
- var VERSION = "0.3.1";
2574
+ var VERSION = "0.3.2";
2537
2575
 
2538
2576
  // src/cli/commands/chat.tsx
2539
2577
  import { render } from "ink";
2540
2578
  import React8, { useState as useState4 } from "react";
2541
2579
 
2542
2580
  // src/cli/ui/App.tsx
2543
- import { Box as Box6, Static, Text as Text6, useApp } from "ink";
2581
+ import { Box as Box6, Static, Text as Text6, useApp, useInput } from "ink";
2544
2582
  import React6, { useCallback, useEffect as useEffect2, useMemo, useRef, useState as useState2 } from "react";
2545
2583
 
2546
2584
  // src/cli/ui/EventLog.tsx
@@ -2771,6 +2809,9 @@ var EventRow = React3.memo(function EventRow2({ event }) {
2771
2809
  if (event.role === "info") {
2772
2810
  return /* @__PURE__ */ React3.createElement(Box3, null, /* @__PURE__ */ React3.createElement(Text3, { dimColor: true }, event.text));
2773
2811
  }
2812
+ if (event.role === "warning") {
2813
+ return /* @__PURE__ */ React3.createElement(Box3, null, /* @__PURE__ */ React3.createElement(Text3, { color: "yellow" }, "\u25B8 "), /* @__PURE__ */ React3.createElement(Text3, { color: "yellow" }, event.text));
2814
+ }
2774
2815
  return /* @__PURE__ */ React3.createElement(Box3, null, /* @__PURE__ */ React3.createElement(Text3, null, event.text));
2775
2816
  });
2776
2817
  function BranchBlock({ branch }) {
@@ -3059,6 +3100,7 @@ function App({
3059
3100
  const [streaming, setStreaming] = useState2(null);
3060
3101
  const [input, setInput] = useState2("");
3061
3102
  const [busy, setBusy] = useState2(false);
3103
+ const abortedThisTurn = useRef(false);
3062
3104
  const [summary, setSummary] = useState2({
3063
3105
  turns: 0,
3064
3106
  totalCostUsd: 0,
@@ -3126,6 +3168,13 @@ function App({
3126
3168
  ]);
3127
3169
  }
3128
3170
  }, [session, loop]);
3171
+ useInput((_input, key) => {
3172
+ if (!key.escape) return;
3173
+ if (!busy) return;
3174
+ if (abortedThisTurn.current) return;
3175
+ abortedThisTurn.current = true;
3176
+ loop.abort();
3177
+ });
3129
3178
  const prefixHash = loop.prefix.fingerprint;
3130
3179
  const writeTranscript = useCallback(
3131
3180
  (ev) => {
@@ -3171,6 +3220,7 @@ function App({
3171
3220
  const reasoningBuf = { current: "" };
3172
3221
  setStreaming({ id: assistantId, role: "assistant", text: "", streaming: true });
3173
3222
  setBusy(true);
3223
+ abortedThisTurn.current = false;
3174
3224
  const flush = () => {
3175
3225
  if (!contentBuf.current && !reasoningBuf.current) return;
3176
3226
  streamRef.text += contentBuf.current;
@@ -3243,6 +3293,11 @@ function App({
3243
3293
  ...prev,
3244
3294
  { id: `e-${Date.now()}`, role: "error", text: ev.error ?? ev.content }
3245
3295
  ]);
3296
+ } else if (ev.role === "warning") {
3297
+ setHistorical((prev) => [
3298
+ ...prev,
3299
+ { id: `w-${Date.now()}-${Math.random()}`, role: "warning", text: ev.content }
3300
+ ]);
3246
3301
  }
3247
3302
  }
3248
3303
  flush();
@@ -3267,7 +3322,7 @@ function App({
3267
3322
  ), /* @__PURE__ */ React6.createElement(Static, { items: historical }, (item) => /* @__PURE__ */ React6.createElement(EventRow, { key: item.id, event: item })), streaming ? /* @__PURE__ */ React6.createElement(Box6, { marginY: 1 }, /* @__PURE__ */ React6.createElement(EventRow, { event: streaming })) : null, /* @__PURE__ */ React6.createElement(PromptInput, { value: input, onChange: setInput, onSubmit: handleSubmit, disabled: busy }), /* @__PURE__ */ React6.createElement(CommandStrip, null));
3268
3323
  }
3269
3324
  function CommandStrip() {
3270
- return /* @__PURE__ */ React6.createElement(Box6, { paddingX: 2 }, /* @__PURE__ */ React6.createElement(Text6, { dimColor: true }, "/help \xB7 /preset ", "<fast|smart|max>", " \xB7 /mcp \xB7 /compact \xB7 /sessions \xB7 /setup \xB7 /clear \xB7 /exit"));
3325
+ return /* @__PURE__ */ React6.createElement(Box6, { paddingX: 2, flexDirection: "column" }, /* @__PURE__ */ React6.createElement(Text6, { dimColor: true }, "/help \xB7 /preset ", "<fast|smart|max>", " \xB7 /mcp \xB7 /compact \xB7 /sessions \xB7 /setup \xB7 /clear \xB7 /exit"), /* @__PURE__ */ React6.createElement(Text6, { dimColor: true }, "Esc (while thinking) \u2014 abort & summarize what was found so far"));
3271
3326
  }
3272
3327
  function describeRepair(repair) {
3273
3328
  const parts = [];
@@ -3404,7 +3459,7 @@ import { render as render2 } from "ink";
3404
3459
  import React11 from "react";
3405
3460
 
3406
3461
  // src/cli/ui/DiffApp.tsx
3407
- import { Box as Box9, Static as Static2, Text as Text9, useApp as useApp3, useInput } from "ink";
3462
+ import { Box as Box9, Static as Static2, Text as Text9, useApp as useApp3, useInput as useInput2 } from "ink";
3408
3463
  import React10, { useState as useState5 } from "react";
3409
3464
 
3410
3465
  // src/cli/ui/RecordView.tsx
@@ -3449,7 +3504,7 @@ function DiffApp({ report }) {
3449
3504
  const maxIdx = Math.max(0, report.pairs.length - 1);
3450
3505
  const initialIdx = report.firstDivergenceTurn ? report.pairs.findIndex((p) => p.turn === report.firstDivergenceTurn) : 0;
3451
3506
  const [idx, setIdx] = useState5(Math.max(0, initialIdx));
3452
- useInput((input, key) => {
3507
+ useInput2((input, key) => {
3453
3508
  if (input === "q" || key.ctrl && input === "c") {
3454
3509
  exit();
3455
3510
  return;
@@ -3629,13 +3684,13 @@ import { render as render3 } from "ink";
3629
3684
  import React13 from "react";
3630
3685
 
3631
3686
  // src/cli/ui/ReplayApp.tsx
3632
- import { Box as Box10, Static as Static3, Text as Text10, useApp as useApp4, useInput as useInput2 } from "ink";
3687
+ import { Box as Box10, Static as Static3, Text as Text10, useApp as useApp4, useInput as useInput3 } from "ink";
3633
3688
  import React12, { useMemo as useMemo2, useState as useState6 } from "react";
3634
3689
  function ReplayApp({ meta, pages }) {
3635
3690
  const { exit } = useApp4();
3636
3691
  const maxIdx = Math.max(0, pages.length - 1);
3637
3692
  const [idx, setIdx] = useState6(maxIdx);
3638
- useInput2((input, key) => {
3693
+ useInput3((input, key) => {
3639
3694
  if (input === "q" || key.ctrl && input === "c") {
3640
3695
  exit();
3641
3696
  return;
@@ -3990,12 +4045,12 @@ import { render as render4 } from "ink";
3990
4045
  import React16 from "react";
3991
4046
 
3992
4047
  // src/cli/ui/Wizard.tsx
3993
- import { Box as Box12, Text as Text12, useApp as useApp5, useInput as useInput4 } from "ink";
4048
+ import { Box as Box12, Text as Text12, useApp as useApp5, useInput as useInput5 } from "ink";
3994
4049
  import TextInput3 from "ink-text-input";
3995
4050
  import React15, { useState as useState8 } from "react";
3996
4051
 
3997
4052
  // src/cli/ui/Select.tsx
3998
- import { Box as Box11, Text as Text11, useInput as useInput3 } from "ink";
4053
+ import { Box as Box11, Text as Text11, useInput as useInput4 } from "ink";
3999
4054
  import React14, { useState as useState7 } from "react";
4000
4055
  function SingleSelect({
4001
4056
  items,
@@ -4008,7 +4063,7 @@ function SingleSelect({
4008
4063
  items.findIndex((i) => i.value === initialValue && !i.disabled)
4009
4064
  );
4010
4065
  const [index, setIndex] = useState7(initialIndex === -1 ? 0 : initialIndex);
4011
- useInput3((_input, key) => {
4066
+ useInput4((_input, key) => {
4012
4067
  if (key.upArrow) {
4013
4068
  setIndex((i) => findNextEnabled(items, i, -1));
4014
4069
  } else if (key.downArrow) {
@@ -4042,7 +4097,7 @@ function MultiSelect({
4042
4097
  return first === -1 ? 0 : first;
4043
4098
  });
4044
4099
  const [selected, setSelected] = useState7(new Set(initialSelected));
4045
- useInput3((input, key) => {
4100
+ useInput4((input, key) => {
4046
4101
  if (key.upArrow) {
4047
4102
  setIndex((i) => findNextEnabled(items, i, -1));
4048
4103
  } else if (key.downArrow) {
@@ -4128,7 +4183,7 @@ function Wizard({ onComplete, onCancel, existingApiKey, initial }) {
4128
4183
  catalogArgs: {}
4129
4184
  });
4130
4185
  const [error, setError] = useState8(null);
4131
- useInput4((_input, key) => {
4186
+ useInput5((_input, key) => {
4132
4187
  if (key.escape && step !== "saved" && onCancel) onCancel();
4133
4188
  });
4134
4189
  if (step === "apiKey") {
@@ -4290,13 +4345,13 @@ function McpArgsStep({
4290
4345
  )), error ? /* @__PURE__ */ React15.createElement(Box12, { marginTop: 1 }, /* @__PURE__ */ React15.createElement(Text12, { color: "red" }, error)) : null));
4291
4346
  }
4292
4347
  function ReviewConfirm({ onConfirm }) {
4293
- useInput4((_i, key) => {
4348
+ useInput5((_i, key) => {
4294
4349
  if (key.return) onConfirm();
4295
4350
  });
4296
4351
  return null;
4297
4352
  }
4298
4353
  function ExitOnEnter({ onExit }) {
4299
- useInput4((_i, key) => {
4354
+ useInput5((_i, key) => {
4300
4355
  if (key.return) onExit();
4301
4356
  });
4302
4357
  return null;