pqs-mcp-server 1.0.0 → 1.0.2

Files changed (2)
  1. package/README.md +63 -7
  2. package/package.json +3 -3
package/README.md CHANGED
@@ -1,3 +1,5 @@
+ [![smithery badge](https://smithery.ai/badge/onchaintel/pqs)](https://smithery.ai/servers/onchaintel/pqs)
+
  # PQS MCP Server

  The world's first named AI prompt quality score — as an MCP server.
@@ -6,7 +8,9 @@ Score, optimize, and compare LLM prompts before they hit any model. Built on PEE

  ## Install

- Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
+ ### Claude Desktop
+
+ Add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
  ```json
  {
    "mcpServers": {
@@ -18,16 +22,68 @@ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_
  }
  ```

+ ### Smithery
+ ```bash
+ smithery mcp add onchaintel/pqs
+ ```
+
  ## Tools

- - **score_prompt** Free. Score any prompt, get grade + percentile. No API key needed.
- - **optimize_prompt** $0.025 USDC via x402. Full dimension breakdown + optimized prompt.
- - **compare_models** — $0.50 USDC via x402. Claude vs GPT-4o head-to-head.
+ ### score_prompt (Free, no API key needed)
+ Score any prompt before it hits any model. Returns grade A-F, score out of 40, and percentile.
+
+ **Example output:**
+ ```json
+ {
+   "pqs_version": "1.0",
+   "prompt": "analyze this wallet",
+   "vertical": "crypto",
+   "score": 8,
+   "out_of": 40,
+   "grade": "D",
+   "upgrade": "Get full dimension breakdown at /api/score for $0.025 USDC via x402",
+   "powered_by": "PQS — pqs.onchainintel.net"
+ }
+ ```

- ## Get an API Key
+ ### optimize_prompt ($0.025 USDC via x402)
+ Score AND optimize any prompt. Returns full 8-dimension breakdown + optimized version.

- pqs.onchainintel.net
+ **Requires:** PQS API key (get one free at pqs.onchainintel.net)
+
+ ### compare_models ($1.25 USDC via x402)
+ Compare Claude vs GPT-4o on the same prompt. Judged by a third model. Returns winner, scores, and recommendation.
+
+ **Requires:** PQS API key (get one free at pqs.onchainintel.net)
+
+ ## Verticals
+
+ Specify the domain context for more accurate scoring:
+
+ - `software` — Software engineering, code, debugging
+ - `content` — Content creation, copywriting, social media
+ - `business` — Business analysis, finance, strategy
+ - `education` — Education, research, academic writing
+ - `science` — Scientific research, data analysis
+ - `crypto` — Crypto trading, DeFi, onchain analysis
+ - `general` — General purpose (default)
+
+ ## Quality Gate Pattern
+
+ Use PQS as a pre-inference quality gate:
+ ```javascript
+ // Score the prompt before spending inference on it
+ const res = await fetch("https://pqs.onchainintel.net/api/score/free", {
+   method: "POST",
+   headers: { "Content-Type": "application/json" },
+   body: JSON.stringify({ prompt: userPrompt, vertical: "software" })
+ });
+ const { score: pqsScore } = await res.json();
+ if (pqsScore < 28) throw new Error("Prompt quality too low — improve and retry");
+ ```
+
+ Grade D or below (< 28/40) means the prompt will waste inference spend.
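
The gate above can be factored into a small reusable check. A minimal sketch: `passesQualityGate` and its `minScore` parameter are illustrative names, not part of the PQS API; the default threshold mirrors the 28/40 cutoff described above.

```javascript
// Sketch of a reusable quality gate. The argument is the parsed JSON
// returned by the free scoring endpoint (see example output above).
// Names here are illustrative, not part of the PQS API.
function passesQualityGate(scoreResponse, minScore = 28) {
  const { score, out_of: outOf = 40 } = scoreResponse;
  if (typeof score !== "number" || score < 0 || score > outOf) {
    throw new Error("Malformed PQS score response");
  }
  return score >= minScore;
}

// With the sample score_prompt output above (score 8/40, grade D):
console.log(passesQualityGate({ score: 8, out_of: 40 })); // false
```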

  ## Built by

- John / OnChainIntel — @OnChainAIIntel
+ John / OnChainIntel — @OnChainAIIntel
+ pqs.onchainintel.net
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "pqs-mcp-server",
-   "version": "1.0.0",
-   "description": "PQS (Prompt Quality Score) MCP server \u2014 score, optimize, and compare LLM prompts before they hit any model. x402-native, built on PEEM, RAGAS, G-Eval, and MT-Bench.",
+   "version": "1.0.2",
+   "description": "PQS (Prompt Quality Score) MCP server: score, optimize, and compare LLM prompts before they hit any model. x402-native, built on PEEM, RAGAS, G-Eval, and MT-Bench.",
    "main": "index.js",
    "scripts": {
      "start": "node index.js"
@@ -33,4 +33,4 @@
      "type": "git",
      "url": "https://github.com/OnChainAIIntel/pqs-mcp-server"
    }
- }
+ }