ace-interview-prep 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/README.md +44 -14
  2. package/dist/commands/dispute.js +226 -0
  3. package/dist/commands/setup.js +38 -6
  4. package/dist/index.js +7 -1
  5. package/dist/lib/llm.js +13 -8
  6. package/dist/prompts/question-generate.md +18 -0
  7. package/dist/prompts/test-dispute.md +54 -0
  8. package/package.json +1 -1
  9. package/questions/design-be/url-shortener/README.md +0 -23
  10. package/questions/design-be/url-shortener/notes.md +0 -27
  11. package/questions/design-be/url-shortener/scorecard.json +0 -1
  12. package/questions/design-fe/news-feed/README.md +0 -22
  13. package/questions/design-fe/news-feed/notes.md +0 -27
  14. package/questions/design-fe/news-feed/scorecard.json +0 -1
  15. package/questions/design-full/google-docs/README.md +0 -22
  16. package/questions/design-full/google-docs/notes.md +0 -27
  17. package/questions/design-full/google-docs/scorecard.json +0 -1
  18. package/questions/js-ts/debounce/README.md +0 -86
  19. package/questions/js-ts/debounce/scorecard.json +0 -9
  20. package/questions/js-ts/debounce/solution.test.ts +0 -128
  21. package/questions/js-ts/debounce/solution.ts +0 -4
  22. package/questions/leetcode-algo/two-sum/README.md +0 -58
  23. package/questions/leetcode-algo/two-sum/scorecard.json +0 -1
  24. package/questions/leetcode-algo/two-sum/solution.test.ts +0 -55
  25. package/questions/leetcode-algo/two-sum/solution.ts +0 -4
  26. package/questions/leetcode-ds/lru-cache/README.md +0 -70
  27. package/questions/leetcode-ds/lru-cache/scorecard.json +0 -1
  28. package/questions/leetcode-ds/lru-cache/solution.test.ts +0 -82
  29. package/questions/leetcode-ds/lru-cache/solution.ts +0 -14
  30. package/questions/react-apps/todo-app/App.test.tsx +0 -130
  31. package/questions/react-apps/todo-app/App.tsx +0 -10
  32. package/questions/react-apps/todo-app/README.md +0 -23
  33. package/questions/react-apps/todo-app/scorecard.json +0 -9
package/README.md CHANGED
@@ -1,10 +1,9 @@
  # ace

  [![npm version](https://img.shields.io/npm/v/ace-interview-prep.svg)](https://www.npmjs.com/package/ace-interview-prep)
- [![CI](https://github.com/neel/ace-interview-prep/actions/workflows/ci.yml/badge.svg)](https://github.com/neel/ace-interview-prep/actions/workflows/ci.yml)
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)

- A CLI tool for staff-engineer-level frontend interview preparation. Scaffolds questions with test cases, tracks progress with scorecards, and provides LLM-powered feedback.
+ A CLI tool for interview prep focusing on frontend. Scaffolds questions with test cases, tracks progress with scorecards, and provides LLM-powered feedback.

  ## Install

@@ -12,12 +11,6 @@ A CLI tool for staff-engineer-level frontend interview preparation. Scaffolds qu
  npm install -g ace-interview-prep
  ```

- Or run directly with npx:
-
- ```bash
- npx ace-interview-prep help
- ```
-
  ## Quick Start

  ### 1. Configure API Keys
@@ -27,24 +20,41 @@ ace setup
  ```

  Stores your OpenAI / Anthropic API keys in `~/.ace/config.json` (one-time, works across all workspaces).
+ If both keys are valid, `ace setup` prompts you to choose a default provider (`openai` or `anthropic`).

  ```bash
  # Non-interactive
  ace setup --openai-key sk-... --anthropic-key sk-ant-...
+
+ # Set default provider explicitly when both keys are configured
+ ace setup --openai-key sk-... --anthropic-key sk-ant-... --default-provider anthropic
  ```

  ### 2. Initialize a Workspace

- Navigate to any folder where you want to practice:
+ Navigate to any folder where you want to practice, then run:

  ```bash
  ace init
  ```

- Creates a `questions/` directory and vitest config files. Then install the test dependencies:
+ `ace init` bootstraps the workspace and installs dependencies for you. It:

  ```bash
- npm install vitest happy-dom @testing-library/jest-dom
+ # Creates/updates workspace files
+ # - questions/
+ # - package.json (adds test scripts + devDependencies if missing)
+ # - tsconfig.json
+ # - vitest.config.ts
+ # - vitest.setup.ts
+ #
+ # Installs dependencies automatically via npm install
+ ```
+
+ If you need to regenerate workspace files:
+
+ ```bash
+ ace init --force
  ```

  ### 3. Practice
@@ -69,6 +79,7 @@ ace list
  ### 4. Test, Review, Track

  All commands below work in three modes:
+
  - **Interactive** — run with no arguments to pick from a selectable list
  - **Direct** — pass a slug to target a specific question
  - **All** — pass `--all` to operate on every question
@@ -96,6 +107,23 @@ ace reset debounce # specific question
  ace reset --all # reset everything (with confirmation)
  ```

+ ### 5. Dispute Potentially Incorrect Tests
+
+ Use this when your implementation appears correct but a generated test assertion might be wrong.
+
+ ```bash
+ # Dispute interactively (pick a question)
+ ace dispute
+
+ # Dispute a specific question
+ ace dispute debounce
+
+ # Optional: force a provider for dispute analysis
+ ace dispute debounce --provider anthropic
+ ```
+
+ If the verdict says the test is incorrect (or ambiguous), ace can apply a corrected test file and re-run tests.
+
  ## Question Categories

  | Category | Slug | Type | Focus |
@@ -126,11 +154,13 @@ ace reset --all # reset everything (with confirmation)
  - `~/.ace/.env` — fallback (dotenv format)
  - Environment variables — final fallback

- **Workspace** — each workspace gets its own `questions/` directory and test config.
+ Typical `~/.ace/config.json` keys:

- ## Seed Questions
+ - `OPENAI_API_KEY`
+ - `ANTHROPIC_API_KEY`
+ - `default_provider` (set automatically or via `ace setup --default-provider ...`)

- Ships with 8 starter questions (one per category) so you can begin practicing immediately after install.
+ **Workspace** each workspace gets its own `questions/` directory and test config.

  ## Development

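The configuration bullets in the README diff above name the keys ace persists globally. For orientation, here is a minimal sketch of reading `~/.ace/config.json` after `ace setup` has run with both providers — the key names come from the diff above, and all values are invented:

```js
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical file contents (values invented):
// { "OPENAI_API_KEY": "sk-...", "ANTHROPIC_API_KEY": "sk-ant-...", "default_provider": "anthropic" }
const configPath = join(homedir(), ".ace", "config.json");
const config = JSON.parse(readFileSync(configPath, "utf-8"));
console.log(config.default_provider); // e.g. "anthropic"
```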
package/dist/commands/dispute.js ADDED
@@ -0,0 +1,226 @@
+ import fs from "node:fs";
+ import path from "node:path";
+ import { spawnSync } from "node:child_process";
+ import prompts from "prompts";
+ import chalk from "chalk";
+ import { findQuestion, readScorecard, writeScorecard, promptForSlug } from "../lib/scorecard.js";
+ import { CATEGORIES, isDesignCategory } from "../lib/categories.js";
+ import { chat, requireProvider } from "../lib/llm.js";
+ import { resolveWorkspaceRoot, isWorkspaceInitialized } from "../lib/paths.js";
+ const PROMPTS_DIR = path.resolve(import.meta.dirname, "../prompts");
+ function loadPrompt(name) {
+   return fs.readFileSync(path.join(PROMPTS_DIR, name), "utf-8");
+ }
+ function extractJSON(text) {
+   const match = text.match(/```json\s*([\s\S]*?)```/);
+   if (match) return match[1].trim();
+   const jsonMatch = text.match(/\{[\s\S]*\}/);
+   if (jsonMatch) return jsonMatch[0];
+   return text;
+ }
+ function parseArgs(args) {
+   let slug;
+   let provider;
+   for (let i = 0; i < args.length; i++) {
+     const arg = args[i];
+     if (arg === "--provider" && args[i + 1]) {
+       provider = args[++i];
+     } else if (!arg.startsWith("--")) {
+       slug = arg;
+     }
+   }
+   return { slug, provider };
+ }
+ function runTestsAndCapture(projectRoot, questionDir) {
+   const result = spawnSync("npx", ["vitest", "run", questionDir], {
+     cwd: projectRoot,
+     encoding: "utf-8"
+   });
+   const stdout = result.stdout || "";
+   const stderr = result.stderr || "";
+   const output = [stdout, stderr].filter(Boolean).join("\n");
+   const exitCode = typeof result.status === "number" ? result.status : 1;
+   return { output, exitCode };
+ }
+ async function run(args) {
+   const projectRoot = resolveWorkspaceRoot();
+   if (!isWorkspaceInitialized(projectRoot)) {
+     console.error(chalk.red("\nError: Workspace not initialized."));
+     console.error(chalk.dim("Run `ace init` in this folder first.\n"));
+     process.exit(1);
+   }
+   const parsed = parseArgs(args);
+   let selectedSlug = parsed.slug;
+   if (!selectedSlug) {
+     selectedSlug = await promptForSlug();
+     if (!selectedSlug) return;
+   }
+   const question = findQuestion(selectedSlug);
+   if (!question) {
+     console.error(chalk.red(`Question not found: ${selectedSlug}`));
+     return;
+   }
+   if (isDesignCategory(question.category)) {
+     console.log(chalk.yellow(`"${selectedSlug}" is a system design question \u2014 no tests to dispute.`));
+     return;
+   }
+   const config = CATEGORIES[question.category];
+   const readmePath = path.join(question.dir, "README.md");
+   const readme = fs.existsSync(readmePath) ? fs.readFileSync(readmePath, "utf-8") : "";
+   if (!readme.trim()) {
+     console.error(chalk.red("No README.md found for this question."));
+     return;
+   }
+   let solutionContent = "";
+   for (const f of config.solutionFiles) {
+     const fp = path.join(question.dir, f);
+     if (fs.existsSync(fp)) {
+       const content = fs.readFileSync(fp, "utf-8");
+       solutionContent += `
+ --- ${f} ---
+ ${content}
+ `;
+     }
+   }
+   if (!solutionContent.trim() || solutionContent.includes("// TODO: implement")) {
+     console.error(chalk.yellow("Solution appears to be the default stub. Write your solution first!"));
+     return;
+   }
+   let testContent = "";
+   let testFilePath = "";
+   for (const f of config.testFiles) {
+     const fp = path.join(question.dir, f);
+     if (fs.existsSync(fp)) {
+       testContent += fs.readFileSync(fp, "utf-8");
+       testFilePath = fp;
+     }
+   }
+   if (!testContent.trim()) {
+     console.error(chalk.red("No test file found for this question."));
+     return;
+   }
+   console.log(chalk.cyan(`
+ Running tests for "${selectedSlug}" to capture failures...
+ `));
+   const initialRun = runTestsAndCapture(projectRoot, question.dir);
+   const testOutput = initialRun.output;
+   if (initialRun.exitCode === 0) {
+     console.log(chalk.green("All tests are passing \u2014 nothing to dispute."));
+     return;
+   }
+   const provider = requireProvider(parsed.provider);
+   console.log(chalk.yellow("\nFailing tests detected. Sending to LLM for analysis...\n"));
+   const systemPrompt = loadPrompt("test-dispute.md");
+   const userContent = `## Problem Statement
+ ${readme}
+
+ ## Solution Code
+ ${solutionContent}
+
+ ## Test File
+ ${testContent}
+
+ ## Test Failure Output
+ \`\`\`
+ ${testOutput}
+ \`\`\``;
+   const messages = [
+     { role: "system", content: systemPrompt },
+     { role: "user", content: userContent }
+   ];
+   const spinner = ["|", "/", "-", "\\"];
+   let spinIdx = 0;
+   const interval = setInterval(() => {
+     process.stdout.write(`\r${chalk.cyan(spinner[spinIdx++ % spinner.length])} Analyzing...`);
+   }, 120);
+   let response;
+   try {
+     response = await chat(provider, messages, true);
+   } finally {
+     clearInterval(interval);
+     process.stdout.write("\r" + " ".repeat(30) + "\r");
+   }
+   let result;
+   try {
+     result = JSON.parse(extractJSON(response));
+   } catch {
+     console.error(chalk.red("Failed to parse LLM response."));
+     console.log(chalk.dim(response));
+     return;
+   }
+   const verdictColors = {
+     test_incorrect: chalk.green,
+     solution_incorrect: chalk.red,
+     ambiguous: chalk.yellow
+   };
+   const verdictLabels = {
+     test_incorrect: "Test is incorrect",
+     solution_incorrect: "Solution has a bug",
+     ambiguous: "Ambiguous specification"
+   };
+   const color = verdictColors[result.verdict] || chalk.white;
+   const label = verdictLabels[result.verdict] || result.verdict;
+   console.log(chalk.bold(`
+ Verdict: ${color(label)}
+ `));
+   console.log(result.summary);
+   console.log(chalk.dim("\n--- Details ---\n"));
+   console.log(result.details);
+   if (result.failingTests?.length) {
+     console.log(chalk.dim("\n--- Per-Test Breakdown ---\n"));
+     for (const t of result.failingTests) {
+       const tColor = verdictColors[t.verdict] || chalk.white;
+       console.log(` ${tColor("\u25CF")} ${chalk.bold(t.testName)}: ${tColor(verdictLabels[t.verdict] || t.verdict)}`);
+       console.log(` ${t.explanation}`);
+       if (t.fixedAssertion) {
+         console.log(chalk.dim(` Fix: ${t.fixedAssertion}`));
+       }
+     }
+   }
+   if ((result.verdict === "test_incorrect" || result.verdict === "ambiguous") && result.fixedTestCode) {
+     console.log("");
+     const { confirm } = await prompts({
+       type: "confirm",
+       name: "confirm",
+       message: "Apply the corrected test file?",
+       initial: true
+     });
+     if (confirm) {
+       fs.writeFileSync(testFilePath, result.fixedTestCode, "utf-8");
+       console.log(chalk.green(`
+ Test file updated: ${path.relative(projectRoot, testFilePath)}`));
+       console.log(chalk.cyan("\nRe-running tests to verify...\n"));
+       const verifyRun = runTestsAndCapture(projectRoot, question.dir);
+       const verifyOutput = verifyRun.output;
+       console.log(verifyOutput);
+       const scorecard = readScorecard(question.category, selectedSlug);
+       if (scorecard) {
+         const passMatch = verifyOutput.match(/(\d+)\s+passed/);
+         const failMatch = verifyOutput.match(/(\d+)\s+failed/);
+         const passed = passMatch ? parseInt(passMatch[1], 10) : 0;
+         const failed = failMatch ? parseInt(failMatch[1], 10) : 0;
+         const total = passed + failed;
+         if (total > 0) {
+           if (scorecard.attempts.length === 0) {
+             scorecard.attempts.push({ attempt: 1, testsTotal: 0, testsPassed: 0, llmScore: null });
+           }
+           const current = scorecard.attempts[scorecard.attempts.length - 1];
+           current.testsTotal = total;
+           current.testsPassed = passed;
+           scorecard.status = passed === total ? "solved" : "attempted";
+           writeScorecard(question.category, selectedSlug, scorecard);
+           const resultColor = passed === total ? chalk.green : chalk.yellow;
+           console.log(resultColor(`Scorecard updated: ${passed}/${total} tests passed`));
+         }
+       }
+     } else {
+       console.log(chalk.dim("No changes made."));
+     }
+   } else if (result.verdict === "solution_incorrect" && result.hint) {
+     console.log(chalk.dim("\n--- Hint ---\n"));
+     console.log(chalk.yellow(result.hint));
+   }
+ }
+ export {
+   run
+ };
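A detail worth noting in the new `dispute.js`: `extractJSON` falls back from a fenced json block, to the widest `{...}` span, to the raw text, which makes the command tolerant of models that ignore the fencing instruction. A standalone check of that same logic, with hypothetical model replies:

```js
// Same fallback order as extractJSON in dispute.js above:
// 1) fenced json block, 2) widest {...} span, 3) raw text.
function extractJSON(text) {
  const match = text.match(/```json\s*([\s\S]*?)```/);
  if (match) return match[1].trim();
  const jsonMatch = text.match(/\{[\s\S]*\}/);
  if (jsonMatch) return jsonMatch[0];
  return text;
}

const fence = "\u0060\u0060\u0060"; // three backticks, escaped to keep this example self-contained
const fencedReply = `Here you go:\n${fence}json\n{"verdict":"test_incorrect"}\n${fence}`;
const bareReply = 'Analysis: {"verdict":"ambiguous"} as discussed.';
console.log(JSON.parse(extractJSON(fencedReply)).verdict); // "test_incorrect"
console.log(JSON.parse(extractJSON(bareReply)).verdict);   // "ambiguous"
```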
package/dist/commands/setup.js CHANGED
@@ -93,17 +93,42 @@ async function run(args) {
      anthropicValid = result.valid;
      anthropicError = result.error;
    }
+   if (openaiValid && anthropicValid) {
+     let defaultProvider = parsed["default-provider"];
+     if (!defaultProvider) {
+       const currentDefault = final.default_provider || "openai";
+       const { selected } = await prompts({
+         type: "select",
+         name: "selected",
+         message: "Default LLM provider:",
+         choices: [
+           { title: "OpenAI", value: "openai" },
+           { title: "Anthropic", value: "anthropic" }
+         ],
+         initial: currentDefault === "anthropic" ? 1 : 0
+       });
+       defaultProvider = selected;
+     }
+     if (defaultProvider === "openai" || defaultProvider === "anthropic") {
+       saveGlobalAceConfig({ default_provider: defaultProvider });
+     }
+   } else if (openaiValid) {
+     saveGlobalAceConfig({ default_provider: "openai" });
+   } else if (anthropicValid) {
+     saveGlobalAceConfig({ default_provider: "anthropic" });
+   }
+   const updated = loadAceConfig();
    console.log(chalk.cyan("\n\u256D\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256E"));
    console.log(chalk.cyan("\u2502") + chalk.bold(" ace status") + " " + chalk.cyan("\u2502"));
    console.log(chalk.cyan("\u251C\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524"));
-   if (final.OPENAI_API_KEY) {
-     const detail = openaiValid ? maskApiKey(final.OPENAI_API_KEY) : openaiError || "validation failed";
+   if (updated.OPENAI_API_KEY) {
+     const detail = openaiValid ? maskApiKey(updated.OPENAI_API_KEY) : openaiError || "validation failed";
      printStatusLine("OpenAI key", openaiValid, detail);
    } else {
      printStatusLine("OpenAI key", null, "not configured");
    }
-   if (final.ANTHROPIC_API_KEY) {
-     const detail = anthropicValid ? maskApiKey(final.ANTHROPIC_API_KEY) : anthropicError || "validation failed";
+   if (updated.ANTHROPIC_API_KEY) {
+     const detail = anthropicValid ? maskApiKey(updated.ANTHROPIC_API_KEY) : anthropicError || "validation failed";
      printStatusLine("Anthropic key", anthropicValid, detail);
    } else {
      printStatusLine("Anthropic key", null, "not configured");
@@ -111,10 +136,17 @@ async function run(args) {
    console.log(chalk.cyan("\u251C\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524"));
    const ready = openaiValid === true || anthropicValid === true;
    printStatusLine("Ready", ready, ready ? "at least one provider configured" : "no valid API keys");
+   if (updated.default_provider) {
+     const providerName = updated.default_provider === "openai" ? "OpenAI" : "Anthropic";
+     printStatusLine("Default provider", true, providerName.toLowerCase());
+   }
    console.log(chalk.cyan("\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256F\n"));
    if (!ready && (openaiValid === false || anthropicValid === false)) {
-     console.log(chalk.yellow("Warning: No valid API keys configured."));
-     console.log(chalk.dim("Run `ace setup` again with valid keys to use LLM features.\n"));
+     console.log(chalk.yellow("No valid API keys configured. You need at least one to use LLM features.\n"));
+     console.log(chalk.dim("Generate an API key from either provider:\n"));
+     console.log(chalk.cyan(" OpenAI ") + chalk.dim(" https://platform.openai.com/api-keys"));
+     console.log(chalk.cyan(" Anthropic ") + chalk.dim(" https://console.anthropic.com/settings/keys"));
+     console.log(chalk.dim("\nThen run ") + chalk.cyan("ace setup") + chalk.dim(" again to configure your key.\n"));
    }
  }
  export {
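The branch added to `setup.js` above encodes a small decision table for the saved default. Distilled into a standalone sketch — the `promptUser` callback stands in for the interactive select and is hypothetical:

```js
// How setup decides what to persist as default_provider, per the diff above:
// both keys valid -> --default-provider flag if given, else ask the user
// one key valid   -> that provider, saved automatically
// no key valid    -> nothing saved
async function chooseDefaultProvider(openaiValid, anthropicValid, flagValue, promptUser) {
  if (openaiValid && anthropicValid) return flagValue ?? await promptUser();
  if (openaiValid) return "openai";
  if (anthropicValid) return "anthropic";
  return null;
}
```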
package/dist/index.js CHANGED
@@ -10,7 +10,8 @@ const COMMANDS = {
    test: () => import("./commands/test.js"),
    feedback: () => import("./commands/feedback.js"),
    reset: () => import("./commands/reset.js"),
-   score: () => import("./commands/score.js")
+   score: () => import("./commands/score.js"),
+   dispute: () => import("./commands/dispute.js")
  };
  function printHelp() {
    console.log(`
@@ -30,6 +31,7 @@ ${chalk.bold("Question Commands:")}
  ${chalk.green("feedback")} Get LLM code review or design review
  ${chalk.green("reset")} Reset a question to unanswered state
  ${chalk.green("score")} View scorecard for a question
+ ${chalk.green("dispute")} Challenge a potentially incorrect test case

  ${chalk.bold("Examples:")}

@@ -68,6 +70,10 @@ ${chalk.bold("Examples:")}
  ace reset
  ace reset debounce
  ace reset --all
+
+ ${chalk.dim("# Dispute a test you think is wrong")}
+ ace dispute
+ ace dispute debounce
  `);
  }
  async function main() {
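The `COMMANDS` map keeps startup fast by importing each command module lazily. Since `main()` itself is outside this hunk, the dispatcher below is only a sketch of how such a map is typically consumed, not the package's verbatim code:

```js
// Hypothetical dispatcher over the lazy COMMANDS map shown above.
async function dispatch(name, args) {
  const loader = COMMANDS[name];
  if (!loader) {
    printHelp(); // unknown command: fall back to help text
    return;
  }
  const mod = await loader(); // dynamic import happens only now
  await mod.run(args);        // command modules export run(args), e.g. dispute.js above
}
```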
package/dist/lib/llm.js CHANGED
@@ -17,9 +17,14 @@ function getAvailableProviders() {
    return providers;
  }
  function getDefaultProvider() {
-   const providers = getAvailableProviders();
-   if (providers.length === 0) return null;
-   return providers[0];
+   const config = getConfig();
+   const available = getAvailableProviders();
+   if (available.length === 0) return null;
+   if (config.default_provider && available.includes(config.default_provider)) {
+     return config.default_provider;
+   }
+   if (available.includes("openai")) return "openai";
+   return available[0];
  }
  function requireProvider(preferred) {
    const config = getConfig();
@@ -44,7 +49,7 @@ async function callOpenAI(messages, jsonMode = false) {
    const config = getConfig();
    const client = new OpenAI({ apiKey: config.OPENAI_API_KEY });
    const response = await client.chat.completions.create({
-     model: "gpt-4o",
+     model: "gpt-5.2",
      messages: messages.map((m) => ({ role: m.role, content: m.content })),
      temperature: 0.7,
      max_tokens: 4096,
@@ -58,7 +63,7 @@ async function callAnthropic(messages, _jsonMode = false) {
    const systemMsg = messages.find((m) => m.role === "system");
    const nonSystemMessages = messages.filter((m) => m.role !== "system");
    const response = await client.messages.create({
-     model: "claude-sonnet-4-20250514",
+     model: "claude-sonnet-4-5-20250929",
      max_tokens: 4096,
      ...systemMsg ? { system: systemMsg.content } : {},
      messages: nonSystemMessages.map((m) => ({
@@ -80,7 +85,7 @@ async function streamOpenAI(messages) {
    const config = getConfig();
    const client = new OpenAI({ apiKey: config.OPENAI_API_KEY });
    const stream = await client.chat.completions.create({
-     model: "gpt-4o",
+     model: "gpt-5.2",
      messages: messages.map((m) => ({ role: m.role, content: m.content })),
      temperature: 0.7,
      max_tokens: 4096,
@@ -101,7 +106,7 @@ async function streamAnthropic(messages) {
    const systemMsg = messages.find((m) => m.role === "system");
    const nonSystemMessages = messages.filter((m) => m.role !== "system");
    const stream = client.messages.stream({
-     model: "claude-sonnet-4-20250514",
+     model: "claude-sonnet-4-5-20250929",
      max_tokens: 4096,
      ...systemMsg ? { system: systemMsg.content } : {},
      messages: nonSystemMessages.map((m) => ({
@@ -140,7 +145,7 @@ async function validateAnthropicKey(apiKey) {
    try {
      const client = new Anthropic({ apiKey });
      await client.messages.create({
-       model: "claude-sonnet-4-20250514",
+       model: "claude-sonnet-4-5-20250929",
        max_tokens: 1,
        messages: [{ role: "user", content: "hi" }]
      });
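The rewritten `getDefaultProvider` gives the persisted `default_provider` priority, but only when that provider's key is actually available; otherwise it prefers OpenAI, then whatever is left. Restated as a pure function with hypothetical inputs:

```js
// Same resolution order as getDefaultProvider in the diff above.
function resolveProvider(configuredDefault, available) {
  if (available.length === 0) return null;
  if (configuredDefault && available.includes(configuredDefault)) return configuredDefault;
  if (available.includes("openai")) return "openai";
  return available[0];
}

console.log(resolveProvider("anthropic", ["openai", "anthropic"])); // "anthropic"
console.log(resolveProvider("anthropic", ["openai"]));              // "openai" (stale default is ignored)
console.log(resolveProvider(null, ["anthropic"]));                  // "anthropic"
```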
package/dist/prompts/question-generate.md CHANGED
@@ -66,6 +66,24 @@ Return a JSON object with:
  - No `signature`, `testCode`, or `solutionCode`
  - The description must include a clear **Requirements** section that candidates can use to structure their design

+ ## Common Test Mistakes to Avoid
+
+ Before finalizing `testCode`, mentally execute each test case against a correct implementation step-by-step. Verify that every expected value is accurate.
+
+ - **Wrong expected values**: Double-check computed outputs (sorted arrays, mathematical results, string transformations) by tracing through the algorithm by hand
+ - **Off-by-one errors**: Verify boundary indices, slice ranges, and loop counts in expected outputs
+ - **Incorrect sort order**: Ensure expected output matches the exact sort direction (ascending vs descending) and sort key specified in the problem
+ - **Async timing**: For debounce/throttle/timer tests, ensure timing assertions match the described delay behavior — account for whether the function fires on the leading edge, trailing edge, or both
+ - **Floating point**: Use `toBeCloseTo` instead of `toBe` for floating point comparisons
+ - **Reference vs value equality**: Use `toEqual` for deep object/array comparisons, not `toBe`
+ - **Hardcoded magic values**: Never guess an expected value — derive it from the problem constraints and input
+
+ If a test involves a computed result, add a brief inline comment explaining how the expected value was derived, e.g.:
+ ```
+ // [1,2,3] -> sum = 6, mean = 6/3 = 2
+ expect(mean([1, 2, 3])).toBe(2);
+ ```
+
  ## Quality Guidelines

  - Questions should be achievable within the suggested time for the category and difficulty
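The floating-point bullet in the new prompt section is the easiest one to trip over in generated tests. A runnable vitest illustration, written against the vitest API the workspace already uses:

```js
import { expect, test } from "vitest";

// IEEE-754 makes the "obvious" expected value fail under exact comparison:
// (0.1 + 0.2) / 2 === 0.15000000000000002, so toBe(0.15) would not pass.
test("mean of [0.1, 0.2] needs an approximate matcher", () => {
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  expect(mean([0.1, 0.2])).toBeCloseTo(0.15);
});
```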
package/dist/prompts/test-dispute.md ADDED
@@ -0,0 +1,54 @@
+ # Test Case Dispute Analyst
+
+ You are an expert code reviewer specializing in test correctness. A candidate believes their solution is correct but one or more test cases are failing. Your job is to determine whether the **test** or the **solution** is at fault.
+
+ ## Input
+
+ You will receive:
+ - **Problem statement**: The question README describing expected behavior
+ - **Solution code**: The candidate's implementation
+ - **Test file**: The full test file
+ - **Test failure output**: The actual vitest output showing which tests failed and why
+
+ ## Analysis Process
+
+ 1. Read the problem statement carefully to understand the **specified behavior**
+ 2. Trace the candidate's solution logic step-by-step
+ 3. For each failing test:
+    - Determine the **expected output** according to the problem statement (not the test)
+    - Trace the solution's actual output for that input
+    - Compare both to the test's expected value
+ 4. Classify the root cause as one of:
+    - `test_incorrect` — the test's expected value or assertion is wrong
+    - `solution_incorrect` — the solution has a bug
+    - `ambiguous` — the problem statement is unclear and both interpretations are valid
+
+ ## Output Format
+
+ **IMPORTANT**: Your response MUST be valid JSON wrapped in ```json code fences. No other text before or after.
+
+ ```json
+ {
+   "verdict": "test_incorrect | solution_incorrect | ambiguous",
+   "summary": "One-sentence summary of the finding",
+   "details": "Detailed explanation with step-by-step trace showing why the test or solution is wrong",
+   "failingTests": [
+     {
+       "testName": "name of the failing test",
+       "verdict": "test_incorrect | solution_incorrect | ambiguous",
+       "explanation": "Why this specific test is wrong or why the solution fails it",
+       "fixedAssertion": "The corrected expect(...) line, if test_incorrect. Omit if solution_incorrect."
+     }
+   ],
+   "fixedTestCode": "The complete corrected test file content (only if verdict is test_incorrect or ambiguous). Omit entirely if verdict is solution_incorrect.",
+   "hint": "A nudge toward the bug in the solution (only if verdict is solution_incorrect, without revealing the answer). Omit if test_incorrect."
+ }
+ ```
+
+ ## Rules
+
+ - Be precise: trace actual values, not hand-wavy reasoning
+ - When the problem statement is the source of truth, favor it over both the test and the solution
+ - If the test is wrong, provide the complete corrected test file — do not leave placeholders
+ - If the solution is wrong, give a helpful hint without giving away the full fix
+ - If ambiguous, explain both valid interpretations and provide a corrected test file that matches the more common/standard interpretation
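`dispute.js` currently trusts whatever `JSON.parse` returns from this prompt. If the output format above ever needed stricter enforcement, a consumer-side guard might look like this — a hypothetical sketch, not code shipped in the package:

```js
// Minimal shape check for the dispute verdict JSON described above.
const VERDICTS = ["test_incorrect", "solution_incorrect", "ambiguous"];

function isDisputeResult(value) {
  return (
    !!value &&
    VERDICTS.includes(value.verdict) &&
    typeof value.summary === "string" &&
    typeof value.details === "string" &&
    (value.failingTests === undefined || Array.isArray(value.failingTests))
  );
}
```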
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "ace-interview-prep",
-   "version": "0.1.3",
+   "version": "0.1.4",
    "description": "CLI tool for frontend interview preparation — scaffolds questions with tests, tracks progress, and provides LLM-powered feedback",
    "type": "module",
    "bin": {
package/questions/design-be/url-shortener/README.md DELETED
@@ -1,23 +0,0 @@
- # Design a URL Shortener
-
- **Category:** Backend Design
- **Difficulty:** Medium
- **Suggested Time:** ~40 minutes
-
- ---
-
- ## Problem
-
- Design a URL shortening service (like bit.ly) from a **backend perspective**.
-
- Consider the following aspects:
-
- - **Encoding scheme** — How to generate short codes (base62, UUID, etc.)
- - **Database design** — Schema for URLs, mappings, metadata
- - **Read/write ratio** — Optimize for heavy read traffic
- - **Caching** — Cache hot short URLs for fast redirects
- - **Analytics** — Click tracking, geographic data
- - **Expiration** — Optional TTL for short links
- - **Horizontal scaling** — Sharding, load balancing
-
- Walk through your design, data flow, and trade-offs. Use the `notes.md` file to capture your solution.
package/questions/design-be/url-shortener/notes.md DELETED
@@ -1,27 +0,0 @@
- # Design a URL Shortener — Design Notes
-
- ## Requirements
-
- ### Functional Requirements
- <!-- List the core features and user-facing requirements -->
-
- ### Non-Functional Requirements
- <!-- Performance, scalability, availability, security, etc. -->
-
- ## High-Level Architecture
- <!-- Describe the overall system architecture -->
-
- ## Data Model
- <!-- Define key data structures, schemas, etc. -->
-
- ## API Design
- <!-- Endpoint design, data contracts, etc. -->
-
- ## Deep Dive
- <!-- Pick 1-2 areas to go deep on -->
-
- ## Scaling Considerations
- <!-- How does this scale? -->
-
- ## Trade-offs
- <!-- What trade-offs did you make? -->
package/questions/design-be/url-shortener/scorecard.json DELETED
@@ -1 +0,0 @@
- {"title":"Design a URL Shortener","category":"design-be","difficulty":"medium","suggestedTime":40,"status":"untouched","attempts":[],"llmFeedback":null}
package/questions/design-fe/news-feed/README.md DELETED
@@ -1,22 +0,0 @@
- # Design a News Feed
-
- **Category:** Frontend Design
- **Difficulty:** Hard
- **Suggested Time:** ~55 minutes
-
- ---
-
- ## Problem
-
- Design a social media news feed (like Facebook or Twitter feed) from a **frontend perspective**.
-
- Consider the following aspects:
-
- - **Infinite scroll** — Loading more content as the user scrolls
- - **Real-time updates** — New posts appearing without a full refresh
- - **Optimistic UI** — Immediate feedback for likes, comments, shares
- - **Caching** — Stale-while-revalidate, cache invalidation strategies
- - **Virtualization** — Rendering only visible items for performance
- - **Accessibility** — Screen readers, keyboard navigation, focus management
-
- Walk through your design, data flow, and trade-offs. Use the `notes.md` file to capture your solution.