auq-mcp-server 0.1.5 → 0.1.6

package/README.md CHANGED
@@ -1,36 +1,64 @@
1
+ ![AUQ Demo](media/demo.png)
2
+
1
3
  # AUQ - ask-user-questions MCP
2
4
 
3
5
  [![npm version](https://img.shields.io/npm/v/auq-mcp-server.svg)](https://www.npmjs.com/package/auq-mcp-server)
4
6
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
5
- [![Build Status](https://img.shields.io/github/actions/workflow/status/paulp-o/ask-user-questions-mcp/test.yml)](https://github.com/paulp-o/ask-user-questions-mcp/actions)
6
-
7
- **A lightweight MCP server & CLI tool that allows your LLMs to ask questions to you in a clean, separate space with great terminal UX.**
7
+ [![Install MCP Server](https://cursor.com/deeplink/mcp-install-light.svg)](https://cursor.com/en-US/install-mcp?name=ask-user-questions&config=eyJlbnYiOnt9LCJjb21tYW5kIjoibnB4IC15IGF1cS1tY3Atc2VydmVyIHNlcnZlciJ9)
8
8
 
9
- ![AUQ Demo](docs/screenshot.png)
9
+ **A lightweight MCP server & CLI tool that lets your LLMs ask you questions in a clean, separate space with great terminal UX. Made for multi-agent parallel coding workflows.**
10
10
 
11
11
  ---
12
12
 
13
13
  ## What does it do?
14
14
 
15
- Through this MCP server, your LLM can generate question sets consisting of multiple-choice, single-choice, and free-text questions (with an "Other" option for custom input) while coding or working, and wait for your answers.
15
+ This MCP server lets your AI assistants generate clarifying question sets made up of multiple-choice and single-choice questions (with an "Other" option for custom input) while coding or working, and wait for your answers through a separate CLI tool without interrupting your workflow.
16
16
 
17
- You can keep the `auq` CLI running in advance, or start it when questions are pending. With simple arrow key navigation, you can select answers and send them back to the AI—all within a clean terminal interface.
17
+ You can keep the CLI running in advance, or start it when questions are pending. With simple arrow key navigation, you can select answers and send them back to the AI—all within a clean terminal interface.
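
For context, the question sets described above bundle 2-4 options per question, support single- and multi-select modes, and always leave room for a custom "Other" answer. A rough TypeScript sketch of that shape follows; the type and field names are illustrative only, not the package's actual zod schema.

```typescript
// Illustrative shape only - not the published schema (the package defines its own zod schemas).
interface IllustrativeOption {
  label: string;        // short answer text shown in the CLI
  description?: string; // optional explanation rendered under the option
}

interface IllustrativeQuestion {
  prompt: string;                // the clarifying question itself
  options: IllustrativeOption[]; // 2-4 choices; the CLI always adds an "Other" free-text entry
  multiSelect?: boolean;         // single-select by default, multi-select when true
}

// A question set is what one MCP tool call delivers and what the CLI answers (or rejects) as a unit.
interface IllustrativeQuestionSet {
  questions: IllustrativeQuestion[];
}
```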
18
18
 
19
- ## Why?
19
+ ## Background
20
20
 
21
- In AI-assisted coding, **clarifying questions** have always been recognized as a powerful prompt engineering technique to overcome LLM hallucination and generate more contextually appropriate code ([research paper](https://arxiv.org/abs/2308.13507)).
21
+ In AI-assisted coding, guiding LLMs to ask **clarifying questions** has been widely recognized as a powerful prompt engineering technique to overcome LLM hallucination and generate more contextually appropriate code [1].
22
22
 
23
- On October 18th, Claude Code 2.0.21 introduced an internal `ask-user-question` tool, and I loved this feature. Inspired by it, I created this project to overcome what I saw as a few limitations:
23
+ On October 18th, Claude Code 2.0.21 introduced an internal `ask-user-question` tool. Inspired by it, I decided to build a similar tool that is:
24
24
 
25
25
  - **Tool-agnostic** - Works with any MCP client (Claude Desktop, Cursor, etc.)
26
26
  - **Non-invasive** - Doesn't heavily integrate with your coding CLI workflow or occupy UI space
27
27
  - **Multi-agent friendly** - Supports receiving questions from multiple agents simultaneously in parallel workflows
28
28
 
29
- This is AUQ—a human-AI question-answer loop tool designed for modern AI coding workflows.
29
+ ---
30
+
31
+ ## ✨ Features
32
+
33
+ <https://github.com/user-attachments/assets/3a135a13-fcb1-4795-9a6b-f426fa079674>
34
+
35
+ ### 🖥️ CLI-Based
36
+
37
+ - **Lightweight**: Adds only ~150 tokens to your context per question
38
+ - **SSH-compatible**: Use over remote connections
39
+ - **Fast**: Instant startup, minimal resource usage
40
+
41
+ ### 📦 100% Local
42
+
43
+ Everything runs on your local file system. No data leaves your machine.
44
+
45
+ ### 🔄 Resumable & Stateless
46
+
47
+ The CLI app doesn't need to be running in advance. Whether the model calls the MCP first and you start the CLI later, or you keep it running—you can immediately answer pending questions in FIFO order.
48
+
49
+ ### ❌ Question Set Rejection with Feedback Loop
50
+
51
+ When the LLM asks about the wrong domain entirely, you can reject the question set, optionally providing a reason. The rejection feedback is sent back to the LLM, allowing it to ask more helpful questions or align on what's important for the project.
52
+
53
+ ### 📋 Question Set Queuing
54
+
55
+ Recent AI workflows often use parallel sub-agents for concurrent coding. AUQ handles multiple simultaneous LLM calls gracefully—when a new question set arrives while you're answering another, it's queued and processed sequentially. Perfect for multi-agent parallel coding workflows.
30
56
 
31
57
  ---
32
58
 
33
- ## 🚀 Quick Start
59
+ # Setup Instructions
60
+
61
+ ## 🚀 Step 1: Setup CLI
34
62
 
35
63
  ### Global Installation (Recommended)
36
64
 
@@ -53,13 +81,13 @@ npx auq
53
81
  ```
54
82
 
55
83
  **Session Storage:**
84
+
56
85
  - **Global install**: `~/Library/Application Support/auq/sessions` (macOS), `~/.local/share/auq/sessions` (Linux)
57
86
  - **Local install**: `.auq/sessions/` in your project root
58
- - **Override**: Set `AUQ_SESSION_DIR` environment variable
59
87
 
60
88
  ---
61
89
 
62
- ## 🔌 MCP Server Configuration
90
+ ## 🔌 Step 2: Setup MCP Server
63
91
 
64
92
  ### Cursor
65
93
 
@@ -91,30 +119,6 @@ Add to `.mcp.json` in your project root (for team-wide sharing):
91
119
 
92
120
  Or add to `~/.claude.json` for global access across all projects.
93
121
 
94
- **With environment variables:**
95
-
96
- ```json
97
- {
98
- "mcpServers": {
99
- "ask-user-questions": {
100
- "type": "stdio",
101
- "command": "npx",
102
- "args": ["-y", "auq-mcp-server", "server"],
103
- "env": {
104
- "AUQ_SESSION_DIR": "${AUQ_SESSION_DIR}"
105
- }
106
- }
107
- }
108
- }
109
- ```
110
-
111
- **Manage servers:**
112
- ```bash
113
- claude mcp list # View all servers
114
- claude mcp get ask-user-questions # Server details
115
- claude mcp remove ask-user-questions # Remove server
116
- ```
117
-
118
122
  **Verify setup:** Type `/mcp` in Claude Code to check server status.
119
123
 
120
124
  ### Codex CLI
@@ -165,33 +169,13 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS)
165
169
 
166
170
  ---
167
171
 
168
- ## ✨ Features
169
-
170
- ### 🖥️ CLI-Based
171
- - **Lightweight**: Adds only ~150 tokens to your context per question
172
- - **SSH-compatible**: Use over remote connections
173
- - **Fast**: Instant startup, minimal resource usage
174
-
175
- ### 📦 100% Local
176
- All information operates based on your local file system. No data leaves your machine.
177
-
178
- ### 🔄 Resumable & Stateless
179
- The CLI app doesn't need to be running in advance. Whether the model calls the MCP first and you start the CLI later, or you keep it running—you can immediately answer pending questions in FIFO order.
180
-
181
- ### ❌ Question Set Rejection with Feedback Loop
182
- When the LLM asks about the wrong domain entirely, you can reject the question set, optionally providing the reason to the LLM. The rejection feedback is sent back to the LLM, allowing it to ask more helpful questions or align on what's important for the project.
183
-
184
- ### 📋 Question Set Queuing
185
- Recent AI workflows often use parallel sub-agents for concurrent coding. AUQ handles multiple simultaneous LLM calls gracefully—when a new question set arrives while you're answering another, it's queued and processed sequentially. Perfect for multi-agent parallel coding workflows.
186
-
187
- ---
188
-
189
172
  ## 💻 Usage
190
173
 
191
174
  ### Starting the CLI tool
192
175
 
193
176
  ```bash
194
- auq
177
+ auq # if you installed globally
178
+ npx auq # if you installed locally
195
179
  ```
196
180
 
197
181
  Then just start working with your coding agent or AI assistant. You can prompt it to ask you questions with the tool it now has; it will mostly just get what you mean.
@@ -227,17 +211,17 @@ rm -rf .auq/sessions/*
227
211
  - [ ] Light & dark mode themes
228
212
  - [ ] MCP prompt mode switch (Anthropic style / minimal)
229
213
  - [ ] Custom color themes
230
- - [ ] Question history/recall
231
214
  - [ ] Multi-language support
232
- - [ ] Audio notifications (optional)
233
- - [ ] Export answers to file
215
+ - [ ] Audio notifications on new questions
216
+ - [ ] Simple option to prompt the LLM to ask (or not ask) more questions after answering
217
+ - [ ] Optional 'context' field provided by the LLM that describes the context of the questions; useful for multi-agent coding
234
218
 
235
219
  ---
236
220
 
237
-
238
221
  ## 📄 License
239
222
 
240
223
  MIT License - see [LICENSE](LICENSE) file for details.
241
224
 
242
225
  ---
243
226
 
227
+ [1] arXiv:2308.13507 <https://arxiv.org/abs/2308.13507>
package/dist/bin/auq.js CHANGED
@@ -1,4 +1,5 @@
1
1
  #!/usr/bin/env node
2
+ // import { exec } from "child_process";
2
3
  import { readFileSync } from "fs";
3
4
  import { dirname, join } from "path";
4
5
  import { fileURLToPath } from "url";
@@ -171,10 +172,13 @@ const App = () => {
171
172
  setToast({ message, type });
172
173
  };
173
174
  // Handle session completion
174
- const handleSessionComplete = (wasRejected = false) => {
175
+ const handleSessionComplete = (wasRejected = false, rejectionReason) => {
175
176
  // Show appropriate toast
176
177
  if (wasRejected) {
177
- showToast("Question set rejected", "info");
178
+ const message = rejectionReason
179
+ ? `**Question set rejected**\nRejection reason: ${rejectionReason}`
180
+ : "**Question set rejected**";
181
+ showToast(message, "info");
178
182
  }
179
183
  else {
180
184
  showToast("✓ Answers submitted successfully!", "success");
@@ -21,7 +21,7 @@ const server = new FastMCP({
21
21
  "returning formatted responses for continued reasoning. " +
22
22
  "Each question supports 2-4 multiple-choice options with descriptions, and users can always provide custom text input. " +
23
23
  "Both single-select and multi-select modes are supported.",
24
- version: "0.1.0",
24
+ version: "0.1.5",
25
25
  });
26
26
  // Define the question and option schemas
27
27
  const OptionSchema = z.object({
@@ -117,9 +117,9 @@ export const StepperView = ({ onComplete, sessionId, sessionRequest, }) => {
117
117
  try {
118
118
  const sessionManager = new SessionManager({ baseDir: getSessionDirectory() });
119
119
  await sessionManager.rejectSession(sessionId, reason);
120
- // Call onComplete with rejection flag
120
+ // Call onComplete with rejection flag and reason
121
121
  if (onComplete) {
122
- onComplete(true);
122
+ onComplete(true, reason);
123
123
  }
124
124
  }
125
125
  catch (error) {
@@ -14,6 +14,6 @@ export const Toast = ({ message, type = "success", onDismiss, duration = 2000, }
14
14
  }, [duration, onDismiss]);
15
15
  // Color based on type
16
16
  const color = type === "success" ? "green" : type === "error" ? "red" : "cyan";
17
- return (React.createElement(Box, { borderColor: color, borderStyle: "round", paddingX: 2, paddingY: 0.5 },
17
+ return (React.createElement(Box, { borderColor: color, borderStyle: "round", paddingX: 2, paddingY: 0 },
18
18
  React.createElement(Text, { bold: true, color: color }, message)));
19
19
  };
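
Taken together, the hunks above thread the user's rejection reason from `StepperView`'s `onComplete(true, reason)` call into `handleSessionComplete`, which now includes it in the toast. A condensed sketch of that flow, with the surrounding component code simplified and the wrapper names invented for illustration:

```typescript
// Condensed sketch of the 0.1.6 change; the wrapper functions are illustrative, not the shipped code.
type OnComplete = (wasRejected?: boolean, rejectionReason?: string) => void;
type ShowToast = (message: string, type: "success" | "error" | "info") => void;

// StepperView side: after the session is marked rejected, the reason travels with the callback.
function completeAsRejected(reason: string | undefined, onComplete?: OnComplete): void {
  onComplete?.(true, reason);
}

// App side: the reason (when present) is appended to the rejection toast.
function makeSessionCompleteHandler(showToast: ShowToast): OnComplete {
  return (wasRejected = false, rejectionReason) => {
    if (wasRejected) {
      const message = rejectionReason
        ? `**Question set rejected**\nRejection reason: ${rejectionReason}`
        : "**Question set rejected**";
      showToast(message, "info");
    } else {
      showToast("✓ Answers submitted successfully!", "success");
    }
  };
}
```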
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "auq-mcp-server",
3
- "version": "0.1.5",
3
+ "version": "0.1.6",
4
4
  "main": "dist/index.js",
5
5
  "bin": {
6
6
  "auq": "dist/bin/auq.js"