pqs-mcp-server 1.0.1 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +61 -7
- package/package.json +1 -1
package/README.md — CHANGED

````diff
@@ -8,7 +8,9 @@ Score, optimize, and compare LLM prompts before they hit any model. Built on PEE
 
 ## Install
 
-Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
+### Claude Desktop
+
+Add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
 ```json
 {
   "mcpServers": {
````
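The Install hunk above cuts off inside the `mcpServers` object, so the full config entry is not visible in this diff. For orientation, a typical Claude Desktop entry for an npm-published MCP server looks like the sketch below — the `"pqs"` key and the `npx` invocation are assumptions for illustration, not taken from the package's README:

```json
{
  "mcpServers": {
    "pqs": {
      "command": "npx",
      "args": ["-y", "pqs-mcp-server"]
    }
  }
}
```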
````diff
@@ -20,16 +22,68 @@ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_
 }
 ```
 
+### Smithery
+```bash
+smithery mcp add onchaintel/pqs
+```
+
 ## Tools
 
-
-
-
+### score_prompt (Free — no API key needed)
+Score any prompt before it hits any model. Returns grade A-F, score out of 40, and percentile.
+
+**Example output:**
+```json
+{
+  "pqs_version": "1.0",
+  "prompt": "analyze this wallet",
+  "vertical": "crypto",
+  "score": 8,
+  "out_of": 40,
+  "grade": "D",
+  "upgrade": "Get full dimension breakdown at /api/score for $0.025 USDC via x402",
+  "powered_by": "PQS — pqs.onchainintel.net"
+}
+```
+
+### optimize_prompt ($0.025 USDC via x402)
+Score AND optimize any prompt. Returns full 8-dimension breakdown + optimized version.
 
-
+**Requires:** PQS API key (get one free at pqs.onchainintel.net)
 
-
+### compare_models ($1.25 USDC via x402)
+Compare Claude vs GPT-4o on the same prompt. Judged by a third model. Returns winner, scores, and recommendation.
+
+**Requires:** PQS API key (get one free at pqs.onchainintel.net)
+
+## Verticals
+
+Specify the domain context for more accurate scoring:
+
+- `software` — Software engineering, code, debugging
+- `content` — Content creation, copywriting, social media
+- `business` — Business analysis, finance, strategy
+- `education` — Education, research, academic writing
+- `science` — Scientific research, data analysis
+- `crypto` — Crypto trading, DeFi, onchain analysis
+- `general` — General purpose (default)
+
+## Quality Gate Pattern
+
+Use PQS as a pre-inference quality gate:
+```javascript
+const score = await fetch("https://pqs.onchainintel.net/api/score/free", {
+  method: "POST",
+  headers: { "Content-Type": "application/json" },
+  body: JSON.stringify({ prompt: userPrompt, vertical: "software" })
+});
+const { score: pqsScore } = await score.json();
+if (pqsScore < 28) throw new Error("Prompt quality too low — improve and retry");
+```
+
+Grade D or below (< 28/40) means the prompt will waste inference spend.
 
 ## Built by
 
-John / OnChainIntel — @OnChainAIIntel
+John / OnChainIntel — @OnChainAIIntel
+pqs.onchainintel.net
````
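The Quality Gate Pattern added in the hunk above throws when a score falls below 28. The same check can be factored into a reusable predicate; a minimal sketch, assuming the response shape shown in the `score_prompt` example (`GATE_THRESHOLD` and `passesGate` are illustrative names, not part of the package):

```javascript
// Pre-inference quality gate, factored from the README's pattern.
// The 28/40 threshold comes from the README's "Grade D or below
// (< 28/40)" rule; everything else here is illustrative.
const GATE_THRESHOLD = 28;

function passesGate(result) {
  // `result` follows the free endpoint's response shape:
  // { score, out_of, grade, ... }
  return result.score >= GATE_THRESHOLD;
}

console.log(passesGate({ score: 8, out_of: 40, grade: "D" }));  // false
console.log(passesGate({ score: 32, out_of: 40, grade: "B" })); // true
```

Callers can then decide whether to reject, retry, or route low-scoring prompts through `optimize_prompt` first.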
package/package.json — CHANGED

````diff
@@ -1,6 +1,6 @@
 {
   "name": "pqs-mcp-server",
-  "version": "1.0.1",
+  "version": "1.0.2",
   "description": "PQS (Prompt Quality Score) MCP server — score, optimize, and compare LLM prompts before they hit any model. x402-native, built on PEEM, RAGAS, G-Eval, and MT-Bench.",
   "main": "index.js",
   "scripts": {
````