auq-mcp-server 0.1.5 → 0.1.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +47 -63
- package/dist/bin/auq.js +11 -5
- package/dist/src/server.js +1 -1
- package/dist/src/tui/components/StepperView.js +2 -2
- package/dist/src/tui/components/Toast.js +4 -3
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,36 +1,64 @@
+
+
 # AUQ - ask-user-questions MCP

 [](https://www.npmjs.com/package/auq-mcp-server)
 [](https://opensource.org/licenses/MIT)
-[](https://cursor.com/en-US/install-mcp?name=ask-user-questions&config=eyJlbnYiOnt9LCJjb21tYW5kIjoibnB4IC15IGF1cS1tY3Atc2VydmVyIHNlcnZlciJ9)

-
+**A lightweight MCP server & CLI tool that allows your LLMs to ask questions to you in a clean, separate space with great terminal UX. Made for multi-agent parallel coding workflows.**

 ---

 ## What does it do?

-
+This MCP server lets your AI assistants generate clarifying questions consisting of multiple-choice/single-choice questions (with an "Other" option for custom input) while coding or working, and wait for your answers through a separate CLI tool without messing up your workflow.

-You can keep the
+You can keep the CLI running in advance, or start it when questions are pending. With simple arrow key navigation, you can select answers and send them back to the AI—all within a clean terminal interface.

-##
+## Background

-In AI-assisted coding, **clarifying questions** have
+In AI-assisted coding, guiding LLMs to ask **clarifying questions** have been widely recognized as a powerful prompt engineering technique to overcome LLM hallucination and generate more contextually appropriate code [1].

-On October 18th, Claude Code 2.0.21 introduced an internal `ask-user-question` tool
+On October 18th, Claude Code 2.0.21 introduced an internal `ask-user-question` tool. Inspired by it, I decided to build a similar tool that is:

 - **Tool-agnostic** - Works with any MCP client (Claude Desktop, Cursor, etc.)
 - **Non-invasive** - Doesn't heavily integrate with your coding CLI workflow or occupy UI space
 - **Multi-agent friendly** - Supports receiving questions from multiple agents simultaneously in parallel workflows

-
+---
+
+## ✨ Features
+
+<https://github.com/user-attachments/assets/3a135a13-fcb1-4795-9a6b-f426fa079674>
+
+### 🖥️ CLI-Based
+
+- **Lightweight**: Adds only ~150 tokens to your context per question
+- **SSH-compatible**: Use over remote connections
+- **Fast**: Instant startup, minimal resource usage
+
+### 📦 100% Local
+
+All information operates based on your local file system. No data leaves your machine.
+
+### 🔄 Resumable & Stateless
+
+The CLI app doesn't need to be running in advance. Whether the model calls the MCP first and you start the CLI later, or you keep it running—you can immediately answer pending questions in FIFO order.
+
+### ❌ Question Set Rejection with Feedback Loop
+
+When the LLM asks about the wrong domain entirely, you can reject the question set, optionally providing the reason to the LLM. The rejection feedback is sent back to the LLM, allowing it to ask more helpful questions or align on what's important for the project.
+
+### 📋 Question Set Queuing
+
+Recent AI workflows often use parallel sub-agents for concurrent coding. AUQ handles multiple simultaneous LLM calls gracefully—when a new question set arrives while you're answering another, it's queued and processed sequentially. Perfect for multi-agent parallel coding workflows.

 ---

-
+# Setup Instructions
+
+## 🚀 Step 1: Setup CLI

 ### Global Installation (Recommended)

@@ -53,13 +81,13 @@ npx auq
 ```

 **Session Storage:**
+
 - **Global install**: `~/Library/Application Support/auq/sessions` (macOS), `~/.local/share/auq/sessions` (Linux)
 - **Local install**: `.auq/sessions/` in your project root
-- **Override**: Set `AUQ_SESSION_DIR` environment variable

 ---

-## 🔌 MCP Server
+## 🔌 Step 2: Setup MCP Server

 ### Cursor

@@ -91,30 +119,6 @@ Add to `.mcp.json` in your project root (for team-wide sharing):

 Or add to `~/.claude.json` for global access across all projects.

-**With environment variables:**
-
-```json
-{
-  "mcpServers": {
-    "ask-user-questions": {
-      "type": "stdio",
-      "command": "npx",
-      "args": ["-y", "auq-mcp-server", "server"],
-      "env": {
-        "AUQ_SESSION_DIR": "${AUQ_SESSION_DIR}"
-      }
-    }
-  }
-}
-```
-
-**Manage servers:**
-```bash
-claude mcp list                       # View all servers
-claude mcp get ask-user-questions     # Server details
-claude mcp remove ask-user-questions  # Remove server
-```
-
 **Verify setup:** Type `/mcp` in Claude Code to check server status.

 ### Codex CLI

@@ -165,33 +169,13 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS)

 ---

-## ✨ Features
-
-### 🖥️ CLI-Based
-- **Lightweight**: Adds only ~150 tokens to your context per question
-- **SSH-compatible**: Use over remote connections
-- **Fast**: Instant startup, minimal resource usage
-
-### 📦 100% Local
-All information operates based on your local file system. No data leaves your machine.
-
-### 🔄 Resumable & Stateless
-The CLI app doesn't need to be running in advance. Whether the model calls the MCP first and you start the CLI later, or you keep it running—you can immediately answer pending questions in FIFO order.
-
-### ❌ Question Set Rejection with Feedback Loop
-When the LLM asks about the wrong domain entirely, you can reject the question set, optionally providing the reason to the LLM. The rejection feedback is sent back to the LLM, allowing it to ask more helpful questions or align on what's important for the project.
-
-### 📋 Question Set Queuing
-Recent AI workflows often use parallel sub-agents for concurrent coding. AUQ handles multiple simultaneous LLM calls gracefully—when a new question set arrives while you're answering another, it's queued and processed sequentially. Perfect for multi-agent parallel coding workflows.
-
----
-
 ## 💻 Usage

 ### Starting the CLI tool

 ```bash
-auq
+auq      # if you installed globally
+npx auq  # if you installed locally
 ```

 Then just start working with your coding agent or AI assistant. You may prompt to ask questions with the tool the agent got; it will mostly just get what you mean.

@@ -227,17 +211,17 @@ rm -rf .auq/sessions/*
 - [ ] Light & dark mode themes
 - [ ] MCP prompt mode switch (Anthropic style / minimal)
 - [ ] Custom color themes
-- [ ] Question history/recall
 - [ ] Multi-language support
-- [ ] Audio notifications
-- [ ]
+- [ ] Audio notifications on new question
+- [ ] Simple option to prompt the LLM to/not ask more questions after answering.
+- [ ] Optional 'context' field privided by the LLM, that describes the context of the questions - will be useful for multi-agent coding

 ---

-
 ## 📄 License

 MIT License - see [LICENSE](LICENSE) file for details.

 ---

+[1] arXiv:2308.13507 <https://arxiv.org/abs/2308.13507>
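Both setup paths in the README reduce to registering the same stdio command with the MCP client. For reference, a minimal client entry assembled from the command and args shown in this diff (the removed example minus its `env` block; the exact config file location depends on your client) might look like:

```json
{
  "mcpServers": {
    "ask-user-questions": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "auq-mcp-server", "server"]
    }
  }
}
```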
package/dist/bin/auq.js
CHANGED
@@ -1,4 +1,5 @@
 #!/usr/bin/env node
+// import { exec } from "child_process";
 import { readFileSync } from "fs";
 import { dirname, join } from "path";
 import { fileURLToPath } from "url";
@@ -167,14 +168,19 @@ const App = () => {
         }
     });
     // Show toast notification
-    const showToast = (message, type = "success") => {
-        setToast({ message, type });
+    const showToast = (message, type = "success", title) => {
+        setToast({ message, type, title });
     };
     // Handle session completion
-    const handleSessionComplete = (wasRejected = false) => {
+    const handleSessionComplete = (wasRejected = false, rejectionReason) => {
         // Show appropriate toast
         if (wasRejected) {
-
+            if (rejectionReason) {
+                showToast(`Rejection reason: ${rejectionReason}`, "info", "Question set rejected");
+            }
+            else {
+                showToast("", "info", "Question set rejected");
+            }
         }
         else {
             showToast("✓ Answers submitted successfully!", "success");
@@ -207,7 +213,7 @@ const App = () => {
     return (React.createElement(Box, { flexDirection: "column", paddingX: 1 },
         React.createElement(Header, { pendingCount: sessionQueue.length }),
         toast && (React.createElement(Box, { marginBottom: 1, marginTop: 1 },
-            React.createElement(Toast, { message: toast.message, onDismiss: () => setToast(null), type: toast.type }))),
+            React.createElement(Toast, { message: toast.message, onDismiss: () => setToast(null), type: toast.type, title: toast.title }))),
         mainContent,
         showSessionLog && (React.createElement(Box, { marginTop: 1 },
             React.createElement(Text, { dimColor: true },
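The `dist/bin/auq.js` changes above thread the rejection reason through to the toast. A condensed, runnable sketch of that flow (taken from the diff; `setToast` is stubbed with a plain variable instead of React state):

```javascript
// Stand-in for the React state setter used in the real App component.
let toast = null;
const setToast = (t) => { toast = t; };

// showToast gained an optional third `title` parameter in 0.1.7.
const showToast = (message, type = "success", title) => {
    setToast({ message, type, title });
};

// handleSessionComplete now receives the rejection reason and surfaces it
// in an "info" toast titled "Question set rejected".
const handleSessionComplete = (wasRejected = false, rejectionReason) => {
    if (wasRejected) {
        if (rejectionReason) {
            showToast(`Rejection reason: ${rejectionReason}`, "info", "Question set rejected");
        }
        else {
            showToast("", "info", "Question set rejected");
        }
    }
    else {
        showToast("✓ Answers submitted successfully!", "success");
    }
};

handleSessionComplete(true, "wrong domain");
console.log(toast.title); // "Question set rejected"
```

This matches the render change in the same file, where `title: toast.title` is now passed through to the `Toast` component.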
package/dist/src/server.js
CHANGED
@@ -21,7 +21,7 @@ const server = new FastMCP({
         "returning formatted responses for continued reasoning. " +
         "Each question supports 2-4 multiple-choice options with descriptions, and users can always provide custom text input. " +
         "Both single-select and multi-select modes are supported.",
-    version: "0.1.5",
+    version: "0.1.7",
 });
 // Define the question and option schemas
 const OptionSchema = z.object({
package/dist/src/tui/components/StepperView.js
CHANGED
@@ -117,9 +117,9 @@ export const StepperView = ({ onComplete, sessionId, sessionRequest, }) => {
     try {
         const sessionManager = new SessionManager({ baseDir: getSessionDirectory() });
         await sessionManager.rejectSession(sessionId, reason);
-        // Call onComplete with rejection flag
+        // Call onComplete with rejection flag and reason
         if (onComplete) {
-            onComplete(true);
+            onComplete(true, reason);
         }
     }
     catch (error) {
package/dist/src/tui/components/Toast.js
CHANGED
@@ -4,7 +4,7 @@ import React, { useEffect } from "react";
  * Toast component for brief non-blocking notifications
  * Auto-dismisses after specified duration (default 2000ms)
  */
-export const Toast = ({ message, type = "success", onDismiss, duration = 2000, }) => {
+export const Toast = ({ message, type = "success", onDismiss, duration = 2000, title, }) => {
     // Auto-dismiss after duration
     useEffect(() => {
         const timer = setTimeout(() => {
@@ -14,6 +14,7 @@ export const Toast = ({ message, type = "success", onDismiss, duration = 2000, }
     }, [duration, onDismiss]);
     // Color based on type
     const color = type === "success" ? "green" : type === "error" ? "red" : "cyan";
-    return (React.createElement(Box, { borderColor: color, borderStyle: "round", paddingX: 2, paddingY: 0
-        React.createElement(Text, { bold: true, color: color },
+    return (React.createElement(Box, { borderColor: color, borderStyle: "round", paddingX: 2, paddingY: 0, flexDirection: "column" },
+        title && (React.createElement(Text, { bold: true, color: color }, title)),
+        React.createElement(Text, { color: color }, message)));
 };