ppq-private-mode 0.1.0
- package/README.md +145 -0
- package/bin/server.ts +51 -0
- package/index.ts +247 -0
- package/lib/proxy.ts +314 -0
- package/openclaw.plugin.json +24 -0
- package/package.json +28 -0
- package/skills/private-mode/SKILL.md +88 -0
package/README.md
ADDED
@@ -0,0 +1,145 @@

# How to use PPQ's Private (TEE) AI models via our API and with OpenClaw

> Your queries are encrypted on your machine and only decrypted inside a hardware-secured enclave. Neither PPQ.AI nor any intermediary can read the content.

This guide walks you through setting up the PPQ private mode proxy. You can follow these steps from either a terminal or the OpenClaw chat interface.

> **Install via the OpenClaw chat interface or the command line:** Some users can install the plugin and update their config files simply by giving the OpenClaw chat interface these instructions; others run into problems. If you hit issues in the chat, open a terminal connected to your OpenClaw instance and follow the steps there instead.

## What You'll Need

- An OpenClaw installation (already running)
- A PPQ.AI API key -- get one at https://ppq.ai/api-docs (it starts with `sk-`)

## Step 1: Install the Private Mode Plugin

Run this command to install the plugin:

```bash
openclaw plugins install https://github.com/PayPerQ/ppq-private-mode-proxy
```

This downloads a local encryption proxy that handles all the cryptography on your machine.

To verify it installed correctly:

```bash
ls ~/.openclaw/plugins/ppq-private-mode/
```

You should see files like `index.ts`, `package.json`, etc.

## Step 2: Add Your API Key and Models to OpenClaw Config

Edit your OpenClaw config file at `~/.openclaw/openclaw.json` and add two things: a provider pointing to the local proxy, and your API key.

If you already have a `models.providers` section, merge this in -- don't overwrite your existing providers.

**Add this provider** under `models.providers`:

```json
"ppq-private": {
  "baseUrl": "http://127.0.0.1:8787/v1",
  "apiKey": "unused",
  "api": "openai-completions",
  "models": [
    { "id": "private/kimi-k2-5", "name": "private/kimi-k2-5" },
    { "id": "private/deepseek-r1-0528", "name": "private/deepseek-r1-0528" },
    { "id": "private/gpt-oss-120b", "name": "private/gpt-oss-120b" },
    { "id": "private/llama3-3-70b", "name": "private/llama3-3-70b" },
    { "id": "private/qwen3-vl-30b", "name": "private/qwen3-vl-30b" }
  ]
}
```

**Add this plugin config** (replace `YOUR_API_KEY` with your actual PPQ API key):

```json
"plugins": {
  "entries": {
    "ppq-private-mode": {
      "config": {
        "apiKey": "YOUR_API_KEY"
      }
    }
  }
}
```

**Chat interface shortcut:** You can tell OpenClaw: *"Please add the ppq-private provider to my openclaw.json config. Here's the JSON to merge in:"* and paste the blocks above. The AI can edit config files for you.
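If you would rather script the merge than hand-edit the file, the update is a straightforward object merge. The sketch below is a hypothetical helper (not part of the plugin): it takes a parsed `openclaw.json` object and returns a copy with the `ppq-private` provider and plugin entry added, without clobbering anything already configured.

```typescript
// Hypothetical helper: merge the ppq-private provider and plugin entry into a
// parsed openclaw.json object, preserving existing providers and plugin entries.
interface OpenClawConfig {
  models?: { providers?: Record<string, unknown> };
  plugins?: { entries?: Record<string, unknown> };
  [key: string]: unknown;
}

function addPrivateMode(config: OpenClawConfig, apiKey: string): OpenClawConfig {
  const modelIds = [
    "private/kimi-k2-5",
    "private/deepseek-r1-0528",
    "private/gpt-oss-120b",
    "private/llama3-3-70b",
    "private/qwen3-vl-30b",
  ];
  return {
    ...config,
    models: {
      ...config.models,
      providers: {
        ...config.models?.providers, // keep any existing providers
        "ppq-private": {
          baseUrl: "http://127.0.0.1:8787/v1",
          apiKey: "unused",
          api: "openai-completions",
          models: modelIds.map((id) => ({ id, name: id })),
        },
      },
    },
    plugins: {
      ...config.plugins,
      entries: {
        ...config.plugins?.entries, // keep any existing plugin entries
        "ppq-private-mode": { config: { apiKey } },
      },
    },
  };
}
```

Read the file with `JSON.parse`, pass the object through the helper, and write it back with `JSON.stringify(config, null, 2)`.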
## Step 3: Restart the Gateway

```bash
systemctl --user restart openclaw-gateway.service
```

This starts the local encryption proxy and connects it to your OpenClaw instance.

## Step 4: Switch to a Private Model

```bash
openclaw models set private/kimi-k2-5
```

That's it! Your queries are now end-to-end encrypted. You may need to start a new chat session for the model change to take effect.

## Available Private Models

| Model | Best For |
|-------|----------|
| `private/kimi-k2-5` | **Recommended.** Fast general tasks, 262K context window |
| `private/deepseek-r1-0528` | Reasoning and analysis |
| `private/gpt-oss-120b` | Budget-friendly general use |
| `private/llama3-3-70b` | Open-source tasks |
| `private/qwen3-vl-30b` | Vision + text, 262K context window |

Switch between them anytime with `openclaw models set <model-name>`.
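The proxy accepts both the `private/`-prefixed IDs above and bare names. The standalone sketch below mirrors the model-resolution step in `lib/proxy.ts` (you don't need to run it; it just shows how IDs are mapped to the enclave-internal names):

```typescript
// Mirrors the model resolution in lib/proxy.ts: map a user-facing model ID
// (with or without the "private/" prefix) to the enclave-internal ID.
const PRIVATE_MODEL_MAP: Record<string, string> = {
  "private/kimi-k2-5": "kimi-k2-5",
  "private/deepseek-r1-0528": "deepseek-r1-0528",
  "private/gpt-oss-120b": "gpt-oss-120b",
  "private/llama3-3-70b": "llama3-3-70b",
  "private/qwen3-vl-30b": "qwen3-vl-30b",
};

function resolveEnclaveModel(requested: string): string | null {
  if (PRIVATE_MODEL_MAP[requested]) return PRIVATE_MODEL_MAP[requested];
  const prefixed = `private/${requested}`;    // tolerate bare names
  return PRIVATE_MODEL_MAP[prefixed] ?? null; // null → the proxy returns HTTP 400
}
```

So `resolveEnclaveModel("kimi-k2-5")` and `resolveEnclaveModel("private/kimi-k2-5")` both resolve to `"kimi-k2-5"`, while an unknown name resolves to `null` and is rejected.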
## How It Works

The plugin runs a local proxy on your machine (port 8787) that:

1. **Verifies the enclave** -- performs hardware attestation to confirm it's talking to a genuine secure enclave, not an impersonator
2. **Encrypts your request** -- uses HPKE (RFC 9180) to encrypt the entire request body before it leaves your machine
3. **Sends the encrypted blob** -- PPQ.AI routes the encrypted data to the secure enclave. PPQ.AI only sees ciphertext.
4. **Enclave processes privately** -- the enclave decrypts your query, runs the AI model, and re-encrypts the response
5. **Your proxy decrypts** -- the response is decrypted locally on your machine

PPQ.AI handles billing via HTTP headers (your API key), so it never needs to see the actual content of your queries.

## Using Private Mode Alongside Regular Models

Your existing OpenClaw models continue working normally. Standard models (like Claude, GPT, etc.) use their regular API routes, while private models route through the encrypted proxy. Both can coexist in your config, and you can switch between them anytime.

## Troubleshooting

**"Authentication error" or "Protocol error"**
Your API key may be wrong, or your account balance may be low. Check at https://ppq.ai

**"Attestation failed"**
The secure enclave may be temporarily unavailable. Wait a few minutes and restart:
```bash
systemctl --user restart openclaw-gateway.service
```

**Port conflict on 8787**
If something else is using port 8787, set a different port in your plugin config in `openclaw.json` (and update the `baseUrl` of your `ppq-private` provider to match):
```json
"ppq-private-mode": {
  "config": {
    "apiKey": "YOUR_API_KEY",
    "port": 8788
  }
}
```
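Before picking a replacement port, you can check whether it is actually free. A small Node sketch (assumes Node 18+; not part of the plugin):

```typescript
import net from "node:net";

// Resolves true if nothing is listening on the given port at 127.0.0.1.
function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once("error", () => resolve(false)); // EADDRINUSE etc. → in use
    probe.listen(port, "127.0.0.1", () => {
      probe.close(() => resolve(true));        // bound successfully → free
    });
  });
}
```

If `await isPortFree(8787)` returns `false`, something else holds the default port and you should configure an alternative as shown above.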
**Plugin not found after install**
Make sure `~/.openclaw/plugins/ppq-private-mode/` exists. If not, re-run the install command from Step 1.

**Checking proxy status**
Use the `ppq_private_mode_status` tool in OpenClaw to verify the proxy is running and attestation succeeded.

## About

PPQ.AI provides pay-per-query AI inference with no subscriptions. Private models run inside secure enclaves with hardware-enforced memory encryption. Learn more at https://ppq.ai
package/bin/server.ts
ADDED
@@ -0,0 +1,51 @@

#!/usr/bin/env node
/**
 * Standalone entry point for the PPQ Private Mode proxy.
 *
 * Usage:
 *   PPQ_API_KEY=sk-xxx npx tsx bin/server.ts
 *
 * Environment variables:
 *   PPQ_API_KEY (required) — Your PPQ.AI API key from https://ppq.ai/api-docs
 *   PORT (optional) — Proxy port, default 8787
 *   PPQ_API_BASE (optional) — API base URL, default https://api.ppq.ai
 *   DEBUG (optional) — Set to "true" for verbose logging
 */

import { startProxy } from "../lib/proxy.js";

const apiKey = process.env.PPQ_API_KEY;
if (!apiKey) {
  console.error("Error: PPQ_API_KEY environment variable is required");
  console.error("Get your API key from https://ppq.ai/api-docs");
  process.exit(1);
}

const port = parseInt(process.env.PORT || "8787", 10);
const apiBase = process.env.PPQ_API_BASE || "https://api.ppq.ai";
const debug = process.env.DEBUG === "true";

const proxy = await startProxy(
  { apiKey, port, apiBase, debug },
  {
    info: (msg) => console.log(msg),
    error: (msg) => console.error(msg),
    debug: debug ? (msg) => console.log(`[debug] ${msg}`) : undefined,
  }
);

console.log("");
console.log("Send a test request:");
console.log(`  curl http://127.0.0.1:${proxy.port}/v1/chat/completions \\`);
console.log(`    -H "Content-Type: application/json" \\`);
console.log(`    -d '{"model":"private/kimi-k2-5","messages":[{"role":"user","content":"Hello"}]}'`);
console.log("");

// Graceful shutdown
for (const sig of ["SIGINT", "SIGTERM"] as const) {
  process.on(sig, async () => {
    console.log("\nShutting down...");
    await proxy.close();
    process.exit(0);
  });
}
package/index.ts
ADDED
@@ -0,0 +1,247 @@

/**
 * PPQ Private Mode — OpenClaw Plugin
 *
 * Registers a local proxy service that encrypts AI requests using EHBP
 * before forwarding to PPQ.AI's private inference endpoints. This ensures
 * end-to-end encryption: neither PPQ.AI nor any intermediary can read your queries.
 *
 * Install:
 *   1. Copy this plugin directory to your OpenClaw plugins folder
 *   2. Run `npm install` in the plugin directory
 *   3. Add your PPQ API key in OpenClaw settings
 *   4. Set model to any private model (e.g. private/kimi-k2-5)
 */

import { startProxy, type ProxyHandle, type ProxyConfig } from "./lib/proxy.js";

export const id = "ppq-private-mode";
export const name = "PPQ Private Mode";

interface PluginConfig {
  apiKey?: string;
  port?: number;
  apiBase?: string;
  debug?: boolean;
}

let proxy: ProxyHandle | null = null;

export default function register(api: any) {
  const pluginConfig: PluginConfig =
    api.config?.plugins?.entries?.["ppq-private-mode"]?.config || {};

  // ─── MCP Tool: check proxy status ───────────────────────────────────────────

  api.registerTool(
    {
      name: "ppq_private_mode_status",
      description:
        "Check status of the PPQ Private Mode proxy (attestation, health, endpoints)",
      parameters: { type: "object", properties: {} },
      async execute() {
        if (!pluginConfig.apiKey) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  status: "not_configured",
                  message:
                    "PPQ API key not configured. Set it in OpenClaw settings under PPQ Private Mode.",
                  setup: {
                    step1: "Get a PPQ.AI API key from https://ppq.ai/settings",
                    step2: 'Add your key to the plugin config: { "apiKey": "your-key" }',
                    step3: "Restart OpenClaw",
                  },
                }),
              },
            ],
          };
        }

        if (!proxy) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  status: "not_running",
                  message: "Private Mode proxy is not running. It may be starting up.",
                }),
              },
            ],
          };
        }

        // Health check
        try {
          const resp = await fetch(`http://127.0.0.1:${proxy.port}/health`);
          const health = await resp.json();

          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  status: "running",
                  healthy: resp.ok,
                  port: proxy.port,
                  baseUrl: `http://127.0.0.1:${proxy.port}`,
                  attestation: {
                    verified: !!proxy.verification,
                    enclaveHost: proxy.verification?.enclaveHost || null,
                    codeFingerprint: proxy.verification?.codeFingerprint || null,
                  },
                  endpoints: {
                    models: `http://127.0.0.1:${proxy.port}/v1/models`,
                    chat: `http://127.0.0.1:${proxy.port}/v1/chat/completions`,
                  },
                  availableModels: [
                    "private/kimi-k2-5",
                    "private/deepseek-r1-0528",
                    "private/gpt-oss-120b",
                    "private/llama3-3-70b",
                    "private/qwen3-vl-30b",
                  ],
                }),
              },
            ],
          };
        } catch {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  status: "unhealthy",
                  message: "Proxy is registered but health check failed.",
                }),
              },
            ],
          };
        }
      },
    },
    { optional: true }
  );

  // ─── Provider: expose private models to OpenClaw ────────────────────────────

  api.registerProvider({
    id: "ppq-private-mode",
    label: "PPQ Private Mode (End-to-End Encrypted)",
    docsPath: "./skills/private-mode",
    models: {
      baseUrl: `http://127.0.0.1:${pluginConfig.port || 8787}`,
      api: "openai-completions",
      models: [
        {
          id: "private/kimi-k2-5",
          name: "Kimi K2.5 (Private)",
          reasoning: false,
          input: ["text"],
          cost: { input: 2.48, output: 8.66, cacheRead: 0, cacheWrite: 0 },
          contextWindow: 262144,
          maxTokens: 8192,
        },
        {
          id: "private/deepseek-r1-0528",
          name: "DeepSeek R1 (Private)",
          reasoning: true,
          input: ["text"],
          cost: { input: 2.48, output: 8.66, cacheRead: 0, cacheWrite: 0 },
          contextWindow: 131072,
          maxTokens: 8192,
        },
        {
          id: "private/gpt-oss-120b",
          name: "GPT-OSS 120B (Private)",
          reasoning: false,
          input: ["text"],
          cost: { input: 1.24, output: 2.06, cacheRead: 0, cacheWrite: 0 },
          contextWindow: 131072,
          maxTokens: 8192,
        },
        {
          id: "private/llama3-3-70b",
          name: "Llama 3.3 70B (Private)",
          reasoning: false,
          input: ["text"],
          cost: { input: 2.89, output: 4.54, cacheRead: 0, cacheWrite: 0 },
          contextWindow: 131072,
          maxTokens: 8192,
        },
        {
          id: "private/qwen3-vl-30b",
          name: "Qwen3-VL 30B (Private)",
          reasoning: false,
          input: ["text", "image"],
          cost: { input: 2.06, output: 6.60, cacheRead: 0, cacheWrite: 0 },
          contextWindow: 262144,
          maxTokens: 8192,
        },
      ],
    },
    auth: [
      {
        id: "api_key",
        label: "PPQ.AI API Key",
        hint: "Get your API key from https://ppq.ai/settings",
        kind: "api_key" as const,
        async run(ctx: any) {
          const key = await ctx.prompter.text({
            message: "Enter your PPQ.AI API key:",
            validate: (v: string) =>
              v.trim().length > 0 ? undefined : "API key is required",
          });
          if (typeof key === "symbol") throw new Error("Setup cancelled");
          return {
            profiles: [{ profileId: "default", credential: { apiKey: key } }],
            defaultModel: "private/kimi-k2-5",
          };
        },
      },
    ],
  });

  // ─── Service: manage proxy lifecycle ────────────────────────────────────────

  api.registerService({
    id: "ppq-private-mode-service",

    async start() {
      if (!pluginConfig.apiKey) {
        api.logger.warn(
          "PPQ Private Mode: no API key configured. Skipping proxy startup."
        );
        return;
      }

      const config: ProxyConfig = {
        apiKey: pluginConfig.apiKey,
        port: pluginConfig.port || 8787,
        apiBase: pluginConfig.apiBase || "https://api.ppq.ai",
        debug: pluginConfig.debug || false,
      };

      try {
        proxy = await startProxy(config, {
          info: (msg) => api.logger.info(`[private-mode] ${msg}`),
          error: (msg) => api.logger.error(`[private-mode] ${msg}`),
          debug: config.debug
            ? (msg) => api.logger.info(`[private-mode:debug] ${msg}`)
            : undefined,
        });
      } catch (err: any) {
        api.logger.error(`Failed to start Private Mode proxy: ${err.message}`);
      }
    },

    async stop() {
      if (proxy) {
        await proxy.close();
        proxy = null;
      }
    },
  });
}
package/lib/proxy.ts
ADDED
@@ -0,0 +1,314 @@

/**
 * PPQ Private Mode Proxy
 *
 * Runs a local OpenAI-compatible HTTP server that transparently encrypts
 * requests using EHBP (SecureClient) before forwarding them to PPQ.AI's
 * private inference endpoints. The proxy handles attestation, encryption,
 * and response decryption — your client sees a standard OpenAI API at localhost.
 *
 * Flow:
 *   Client → localhost:{port}/v1/chat/completions
 *     → proxy encrypts body via EHBP (SecureClient.fetch)
 *     → api.ppq.ai/private/v1/chat/completions
 *     → secure enclave decrypts, runs inference
 *     → encrypted response streams back
 *     → proxy decrypts → plaintext stream to client
 */

import http from "node:http";
import type { SecureClient, VerificationDocument } from "tinfoil";

// ─── Types ───────────────────────────────────────────────────────────────────

export interface ProxyConfig {
  apiKey: string;
  port: number;
  apiBase: string;
  debug: boolean;
}

export interface ProxyHandle {
  port: number;
  server: http.Server;
  verification: VerificationDocument | null;
  close: () => Promise<void>;
}

export interface Logger {
  info: (msg: string) => void;
  error: (msg: string) => void;
  debug?: (msg: string) => void;
}

// ─── Constants ───────────────────────────────────────────────────────────────

const DEFAULT_PORT = 8787;
const DEFAULT_API_BASE = "https://api.ppq.ai";
const HEALTH_TIMEOUT_MS = 15_000;

/** Maps user-facing model IDs to enclave-internal model IDs */
const PRIVATE_MODEL_MAP: Record<string, string> = {
  "private/kimi-k2-5": "kimi-k2-5",
  "private/deepseek-r1-0528": "deepseek-r1-0528",
  "private/gpt-oss-120b": "gpt-oss-120b",
  "private/llama3-3-70b": "llama3-3-70b",
  "private/qwen3-vl-30b": "qwen3-vl-30b",
};

/** All available private model IDs (user-facing) */
const PRIVATE_MODELS = Object.keys(PRIVATE_MODEL_MAP);

/** OpenAI-format model list response */
const MODEL_LIST_RESPONSE = {
  object: "list",
  data: [
    {
      id: "private/kimi-k2-5",
      object: "model",
      created: 0,
      owned_by: "ppq-private",
    },
    {
      id: "private/deepseek-r1-0528",
      object: "model",
      created: 0,
      owned_by: "ppq-private",
    },
    {
      id: "private/gpt-oss-120b",
      object: "model",
      created: 0,
      owned_by: "ppq-private",
    },
    {
      id: "private/llama3-3-70b",
      object: "model",
      created: 0,
      owned_by: "ppq-private",
    },
    {
      id: "private/qwen3-vl-30b",
      object: "model",
      created: 0,
      owned_by: "ppq-private",
    },
  ],
};

// ─── Proxy server ────────────────────────────────────────────────────────────

export async function startProxy(config: ProxyConfig, logger: Logger): Promise<ProxyHandle> {
  const port = config.port || DEFAULT_PORT;
  const apiBase = config.apiBase || DEFAULT_API_BASE;

  // Dynamic import to avoid loading at module level
  const { SecureClient: SC } = await import("tinfoil");

  logger.info("Initializing encrypted connection to secure enclave...");

  const client = new SC({
    baseURL: `${apiBase}/private/`,
    attestationBundleURL: `${apiBase}/private`,
    transport: "ehbp",
  });

  // Perform attestation — verifies enclave code fingerprint
  await client.ready();

  let verification: VerificationDocument | null = null;
  try {
    verification = client.getVerificationDocument();
    logger.info(
      `Attestation verified — enclave: ${verification?.enclaveHost || "unknown"}, ` +
        `code fingerprint: ${verification?.codeFingerprint?.slice(0, 16) || "unknown"}...`
    );
  } catch {
    logger.info("Attestation completed (verification document unavailable)");
  }

  // Bind so the method keeps its `this` when called standalone
  const encryptedFetch = client.fetch.bind(client);

  const server = http.createServer(async (req, res) => {
    // CORS headers
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

    if (req.method === "OPTIONS") {
      res.writeHead(204);
      res.end();
      return;
    }

    const url = new URL(req.url || "/", `http://127.0.0.1:${port}`);

    // GET /health
    if (url.pathname === "/health" || url.pathname === "/") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ status: "ok", attestation: !!verification }));
      return;
    }

    // GET /v1/models
    if (url.pathname === "/v1/models" && req.method === "GET") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(MODEL_LIST_RESPONSE));
      return;
    }

    // POST /v1/chat/completions
    if (url.pathname === "/v1/chat/completions" && req.method === "POST") {
      try {
        const body = await readBody(req);
        const parsed = JSON.parse(body);

        // Resolve model
        let modelId: string = parsed.model || "private/kimi-k2-5";

        // Ensure model is a valid private model
        if (!PRIVATE_MODEL_MAP[modelId]) {
          // Try adding private/ prefix
          const prefixed = `private/${modelId}`;
          if (PRIVATE_MODEL_MAP[prefixed]) {
            modelId = prefixed;
          } else {
            res.writeHead(400, { "Content-Type": "application/json" });
            res.end(
              JSON.stringify({
                error: {
                  message: `Unknown model: ${parsed.model}. Available: ${PRIVATE_MODELS.join(", ")}`,
                  type: "invalid_request_error",
                },
              })
            );
            return;
          }
        }

        // Map to enclave-internal model ID
        const enclaveModelId = PRIVATE_MODEL_MAP[modelId];
        parsed.model = enclaveModelId;

        if (config.debug) {
          logger.debug?.(`→ ${modelId} (enclave: ${enclaveModelId}), stream: ${!!parsed.stream}`);
        }

        // Forward via SecureClient (EHBP-encrypted)
        const endpoint = `${apiBase}/private/v1/chat/completions`;
        const response = await encryptedFetch(endpoint, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${config.apiKey}`,
            "X-Private-Model": modelId,
            "x-query-source": "api",
          },
          body: JSON.stringify(parsed),
        });

        // Forward status and headers
        const responseHeaders: Record<string, string> = {
          "Content-Type": response.headers.get("content-type") || "application/json",
          "Access-Control-Allow-Origin": "*",
        };

        if (parsed.stream) {
          responseHeaders["Cache-Control"] = "no-cache";
          responseHeaders["Connection"] = "keep-alive";
        }

        res.writeHead(response.status, responseHeaders);

        // Stream the (decrypted) response body
        if (response.body) {
          const reader = response.body.getReader();
          try {
            while (true) {
              const { done, value } = await reader.read();
              if (done) break;
              res.write(value);
            }
          } catch (err: any) {
            if (config.debug) {
              logger.error(`Stream error: ${err.message}`);
            }
          } finally {
            res.end();
          }
        } else {
          const text = await response.text();
          res.end(text);
        }
      } catch (err: any) {
        // Handle non-encrypted error responses (auth/balance errors from proxy)
        if (err?.name === "ProtocolError") {
          logger.error(`Protocol error (likely auth/balance issue): ${err.message}`);
          res.writeHead(401, { "Content-Type": "application/json" });
          res.end(
            JSON.stringify({
              error: {
                message: "Authentication or balance error. Check your PPQ API key and account balance.",
                type: "authentication_error",
              },
            })
          );
          return;
        }

        logger.error(`Request error: ${err.message}`);
        res.writeHead(500, { "Content-Type": "application/json" });
        res.end(
          JSON.stringify({
            error: {
              message: err.message || "Internal proxy error",
              type: "proxy_error",
            },
          })
        );
      }
      return;
    }

    // 404 for everything else
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(
      JSON.stringify({
        error: {
          message: `Unknown endpoint: ${req.method} ${url.pathname}`,
          type: "invalid_request_error",
        },
      })
    );
  });

  // Start listening
  await new Promise<void>((resolve, reject) => {
    server.on("error", reject);
    server.listen(port, "127.0.0.1", () => {
      logger.info(`PPQ Private Mode proxy listening on http://127.0.0.1:${port}`);
      logger.info(`Endpoints: GET /v1/models, POST /v1/chat/completions`);
      resolve();
    });
  });

  return {
    port,
    server,
    verification,
    close: () =>
      new Promise<void>((resolve) => {
        server.close(() => resolve());
      }),
  };
}

// ─── Helpers ─────────────────────────────────────────────────────────────────

function readBody(req: http.IncomingMessage): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    req.on("data", (chunk) => chunks.push(chunk));
    req.on("end", () => resolve(Buffer.concat(chunks).toString("utf-8")));
    req.on("error", reject);
  });
}
package/openclaw.plugin.json
ADDED
@@ -0,0 +1,24 @@

{
  "id": "ppq-private-mode",
  "name": "PPQ Private Mode",
  "description": "End-to-end encrypted AI inference via PPQ.AI. Runs a local proxy that encrypts your queries before they leave your machine.",
  "version": "0.1.0",
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "apiKey": { "type": "string" },
      "port": { "type": "number" },
      "apiBase": { "type": "string" },
      "debug": { "type": "boolean" }
    },
    "required": ["apiKey"]
  },
  "uiHints": {
    "apiKey": { "label": "PPQ.AI API Key", "sensitive": true },
    "port": { "label": "Local Port", "placeholder": "8787" },
    "apiBase": { "label": "PPQ API Base URL", "placeholder": "https://api.ppq.ai" },
    "debug": { "label": "Debug Logging" }
  },
  "skills": ["./skills/private-mode"]
}
package/package.json
ADDED
@@ -0,0 +1,28 @@

{
  "name": "ppq-private-mode",
  "version": "0.1.0",
  "type": "module",
  "description": "End-to-end encrypted AI proxy for PPQ.AI private models. Works standalone or as an OpenClaw plugin.",
  "openclaw": {
    "extensions": ["./index.ts"]
  },
  "bin": {
    "ppq-private-proxy": "./bin/server.ts"
  },
  "scripts": {
    "start": "npx tsx bin/server.ts"
  },
  "files": ["index.ts", "lib", "bin", "skills", "openclaw.plugin.json"],
  "dependencies": {
    "tinfoil": "^1.0.0",
    "zod": "^4.0.0"
  },
  "peerDependencies": {
    "openclaw": ">=2026.1.0"
  },
  "devDependencies": {
    "tsx": "^4.0.0",
    "typescript": "^5.7.0",
    "@types/node": "^22.0.0"
  }
}
package/skills/private-mode/SKILL.md
ADDED
@@ -0,0 +1,88 @@

---
name: ppq-private-mode
description: Configure PPQ.AI Private Mode encrypted models in OpenClaw
user-invocable: true
---

# PPQ.AI Private Mode -- Configure Encrypted Models

This skill configures OpenClaw to use PPQ.AI's end-to-end encrypted models. The PPQ Private Mode plugin must already be installed before running this skill.

## Prerequisites

The user must have already installed the PPQ Private Mode plugin. If it is not installed, tell the user to run this command themselves:

    openclaw plugins install https://github.com/PayPerQ/ppq-private-mode-proxy

Then ask the user to come back and run this skill again after installing.

## Configuration Steps

### Step 1: Verify the plugin is installed

Check that the plugin exists:

```bash
ls ~/.openclaw/plugins/ppq-private-mode/
```

If the directory does not exist, stop and tell the user they need to install the plugin first (see Prerequisites above). Do not continue.

### Step 2: Get the user's PPQ API key

Ask the user for their PPQ.AI API key. They can find it at https://ppq.ai/api-docs -- it starts with `sk-`.

Do not proceed until the user provides their key.

### Step 3: Add the provider to OpenClaw config

Merge the following into `~/.openclaw/openclaw.json`, preserving all existing settings.

Add a new provider entry under `models.providers`:

```json
"ppq-private": {
  "baseUrl": "http://127.0.0.1:8787/v1",
  "apiKey": "unused",
  "api": "openai-completions",
  "models": [
    { "id": "private/kimi-k2-5", "name": "private/kimi-k2-5" },
    { "id": "private/deepseek-r1-0528", "name": "private/deepseek-r1-0528" },
    { "id": "private/gpt-oss-120b", "name": "private/gpt-oss-120b" },
    { "id": "private/llama3-3-70b", "name": "private/llama3-3-70b" },
    { "id": "private/qwen3-vl-30b", "name": "private/qwen3-vl-30b" }
  ]
}
```

Add the plugin entry under `plugins.entries`:

```json
"ppq-private-mode": {
  "config": {
    "apiKey": "<the key the user provided>"
  }
}
```

### Step 4: Restart the gateway

```bash
systemctl --user restart openclaw-gateway.service
```

### Step 5: Confirm to the user

Tell the user PPQ Private Mode is configured. Available encrypted models:

- `private/kimi-k2-5` -- Fast general tasks
- `private/deepseek-r1-0528` -- Reasoning and analysis
- `private/gpt-oss-120b` -- Cost-efficient general use
- `private/llama3-3-70b` -- Open-source tasks
- `private/qwen3-vl-30b` -- Vision plus text

Switch with: `openclaw models set private/kimi-k2-5`

## About

PPQ.AI provides pay-per-query AI with no subscriptions. Private models run inside hardware secure enclaves. Learn more at https://ppq.ai