@vaultgradient/pq-agent 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,152 @@
# @vaultgradient/pq-agent

**The agent that runs inside your network so AI agents can query your data — safely, auditably, and over a single outbound connection.**

`pq-agent` is half of [PipeQuery Cloud](https://pipequery.cloud). The other half is the hosted control plane your AI tools (Claude Desktop, Cursor, Copilot, custom MCP clients) connect to. The agent sits between them and your data sources, so:

- **Your data stays on your infra.** Source credentials live in *your* `pipequery.yaml`. The control plane never sees them.
- **No inbound ports.** The agent dials *out* to the control plane over WSS:443. Your firewall stays closed.
- **Every read is audited.** Each query produces a tamper-evident audit entry persisted in the control plane.

---

## Install

```bash
npm install -g @vaultgradient/pq-agent
```

Requires Node 20 or later.

## Configure

Create a `pipequery.yaml` next to your data:

```yaml
sources:
  orders:
    type: postgres
    url: ${DATABASE_URL}
    query: SELECT * FROM orders
    interval: 30s

endpoints:
  /top-orders:
    query: orders | sort(total desc) | first(10)
```

`pq-agent` reuses the [PipeQuery](https://github.com/andreadito/pipequery) source library: Postgres, MySQL, SQLite, Snowflake, ClickHouse, MongoDB, Kafka, REST APIs, WebSocket streams, CSV / JSON files, inline data. Connection strings support `${ENV_VAR}` interpolation so secrets stay out of YAML.
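The interpolation can be pictured with a small sketch (`interpolateEnv` is a hypothetical helper for illustration only; the real substitution presumably happens inside the PipeQuery source library, not via this function):

```javascript
// Illustrative sketch of ${ENV_VAR} interpolation in connection strings.
// NOTE: interpolateEnv is a hypothetical helper, not a pq-agent export.
function interpolateEnv(str, env = process.env) {
  return str.replace(/\$\{([A-Z0-9_]+)\}/gi, (match, name) => {
    if (env[name] === undefined) {
      throw new Error(`missing environment variable: ${name}`);
    }
    return env[name];
  });
}

// The secret never appears in pipequery.yaml itself:
const url = interpolateEnv("${DATABASE_URL}", {
  DATABASE_URL: "postgres://app:s3cret@db.internal:5432/orders",
});
```

The point of the pattern: the YAML file can be committed or shared, while the credential stays in the process environment of the machine running the agent.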

## Run

You need two things from your control-plane operator (or your own `pq-cloud-admin` if you self-host):

1. **Control-plane URL** — e.g. `wss://pipequery-cloud.fly.dev/agent`
2. **Agent token** — issued via `pq-cloud-admin issue-agent-token`. The token is shown **once**.

Then:

```bash
export PQ_AGENT_CONTROL_PLANE_URL=wss://your-cp.fly.dev/agent
export PQ_AGENT_ENROLLMENT_TOKEN=<your-agent-token>
export PQ_AGENT_CONFIG=./pipequery.yaml

pq-agent run
```

You should see:

```
[agent] loading config from /path/to/pipequery.yaml
[agent] loaded 1 source(s), 1 endpoint(s)
[agent] connected to wss://your-cp.fly.dev/agent (auth=enrollment)
```

The agent now serves MCP traffic from the control plane against your sources. Your AI client uses the **MCP URL** (e.g. `https://your-cp.fly.dev/mcp`) plus an **api_key** — issued separately, also via the admin CLI.
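On the client side, traffic to that MCP URL is JSON-RPC 2.0. A sketch of a request body (the `tools/list` method comes from the MCP spec; attaching the api_key as a Bearer header is an assumption for illustration, not something this package defines):

```javascript
// Hypothetical MCP call against the control plane's /mcp endpoint.
// The JSON-RPC body shape follows the MCP spec; the Authorization
// header is an assumption about how the api_key is presented.
const body = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {}
};

const request = {
  method: "POST",
  url: "https://your-cp.fly.dev/mcp",
  headers: {
    "content-type": "application/json",
    "authorization": "Bearer <api_key>"
  },
  body: JSON.stringify(body)
};
```

In practice your MCP client (Claude Desktop, Cursor, etc.) builds these frames for you; the sketch only shows what crosses the wire.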

### As a service (systemd / launchd)

For production, run `pq-agent` under a process supervisor. Example systemd unit:

```ini
[Unit]
Description=PipeQuery agent
After=network.target

[Service]
Environment=PQ_AGENT_CONTROL_PLANE_URL=wss://your-cp.fly.dev/agent
Environment=PQ_AGENT_ENROLLMENT_TOKEN=...
Environment=PQ_AGENT_CONFIG=/etc/pipequery/pipequery.yaml
ExecStart=/usr/bin/pq-agent run
Restart=on-failure
User=pipequery

[Install]
WantedBy=multi-user.target
```

---

## How it works

```
┌─────────────────────┐            ┌──────────────────────────┐
│ Claude Desktop      │   HTTPS    │ PipeQuery Cloud          │
│ Cursor / Copilot    │  ──────►   │ <your-cp>.fly.dev/mcp    │
│ (any MCP client)    │            │                          │
└─────────────────────┘            │  ┌────────────────────┐  │
                                   │  │ Tenant registry    │  │
                                   │  │ Audit log (Postgres)│ │
                                   │  └────────────────────┘  │
                                   │             │            │
                                   │             │  WSS (long-lived)
                                   │             ▼            │
                                   └─────────────┬────────────┘
                                                 │
                                        outbound :443 only
                                                 │
                                   ┌──────────────────────────┐
                                   │ pq-agent (this package)  │
                                   │ Inside YOUR network      │
                                   │                          │
                                   │  ┌────────────────────┐  │
                                   │  │ pipequery.yaml     │  │
                                   │  │ source credentials │  │
                                   │  └────────────────────┘  │
                                   │             │            │
                                   │             ▼            │
                                   │ Postgres / Snowflake /   │
                                   │ Kafka / REST APIs / etc  │
                                   └──────────────────────────┘
```

The agent dials the control plane over WSS:443. When the AI client makes an MCP tool call, the control plane routes it through the WebSocket to the agent; the agent executes the query locally using whichever source adapter is configured (with push-down to native SQL where supported), and the result streams back. Source credentials never cross the public internet — only the query results do.
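The frames on that WebSocket are JSON envelopes with `type`, `id`, and `payload` fields. The shapes below are taken from the shipped `dist/client.js`; the validation helper mirrors its `isInboundEnvelope` (this is an illustration, not an exported API):

```javascript
// A query request as it arrives at the agent over the WebSocket:
const inbound = {
  type: "request.query",
  id: "req-1",
  payload: { expression: "orders | sort(total desc) | first(10)" }
};

// Minimal validity check, mirroring isInboundEnvelope in dist/client.js:
function isInboundEnvelope(value) {
  return (
    typeof value === "object" && value !== null &&
    typeof value.type === "string" &&
    typeof value.id === "string" &&
    typeof value.payload === "object" && value.payload !== null
  );
}

// The agent replies with a data chunk followed by an end frame that
// carries row count, latency, and the push-down flag:
const outbound = [
  { type: "response.chunk", id: inbound.id, payload: { rows: [{ total: 99 }], chunk_index: 0 } },
  { type: "response.end", id: inbound.id, payload: { total_rows: 1, latency_ms: 12, pushed_down: false } }
];
```

Errors come back the same way, as a single `response.error` envelope with a `code` and `message` instead of the chunk/end pair.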

## CLI reference

```
pq-agent run [options]

Options:
  --url <url>          Control-plane WSS URL
                       (env: PQ_AGENT_CONTROL_PLANE_URL)
  --config <path>      Path to pipequery.yaml
                       (env: PQ_AGENT_CONFIG, default: ./pipequery.yaml)
  --token-file <path>  Where to read/write the agent token
                       (env: PQ_AGENT_TOKEN_FILE,
                        default: $XDG_DATA_HOME/pipequery/agent.token)
  --insecure           Allow ws:// for localhost-only dev. Refused
                       against any non-localhost host.
```

## Troubleshooting

- **`Refusing to connect: URL must use wss://`** — the production control plane uses TLS; `ws://` is allowed only for localhost dev.
- **`WS closed 4001`** — your agent token is invalid or revoked. Get a fresh one from your operator (`pq-cloud-admin issue-agent-token`).
- **`WS closed 4003`** — protocol version mismatch. Upgrade `pq-agent` to the latest version.
- **`AUDIT CHAIN BREAK` in the control-plane logs after a restart** — known v1 limitation: the agent's hash chain resets to zero on restart. The fix (disk persistence of the chain head) is on the roadmap.

## License

Proprietary. © Vault Gradient. Commercial licensing inquiries: [pipequery.cloud](https://pipequery.cloud).
@@ -0,0 +1,22 @@
import { AgentRuntime } from './runtime.js';
import '@vaultgradient/pipequery-cli/sources';

interface ClientOptions {
  url: string;
  token: string;
  runtime: AgentRuntime;
  tenantId?: string;
  agentId?: string;
  allowInsecure?: boolean;
  requestTimeoutMs?: number;
}
interface Client {
  start(): Promise<void>;
  stop(): Promise<void>;
  on(event: 'open', listener: () => void): void;
  on(event: 'close', listener: (code: number) => void): void;
  on(event: 'error', listener: (err: Error) => void): void;
}
declare function createClient(opts: ClientOptions): Client;

export { type Client, type ClientOptions, createClient };
package/dist/client.js ADDED
@@ -0,0 +1,501 @@
// src/client.ts
import { randomUUID as randomUUID2 } from "crypto";
import { EventEmitter } from "events";
import WebSocket from "ws";

// ../protocol/dist/errors.js
var WS_CLOSE_CODES = {
  GOING_AWAY: 1001,
  INTERNAL_ERROR: 1011,
  INVALID_AGENT_TOKEN: 4001,
  TENANT_NOT_ENROLLED: 4002,
  PROTOCOL_VERSION_MISMATCH: 4003
};
var NO_RECONNECT_CLOSE_CODES = /* @__PURE__ */ new Set([
  WS_CLOSE_CODES.INVALID_AGENT_TOKEN,
  WS_CLOSE_CODES.TENANT_NOT_ENROLLED,
  WS_CLOSE_CODES.PROTOCOL_VERSION_MISMATCH
]);

// ../protocol/dist/version.js
var PROTOCOL_VERSION = 1;
var PROTOCOL_VERSION_HEADER = "x-pq-protocol-version";

// src/audit.ts
import { createHash } from "crypto";
var ZERO_HASH = "0".repeat(64);
function canonicalJson(value) {
  if (value === null) return "null";
  if (typeof value === "number" || typeof value === "boolean") {
    return JSON.stringify(value);
  }
  if (typeof value === "string") return JSON.stringify(value);
  if (Array.isArray(value)) {
    return "[" + value.map(canonicalJson).join(",") + "]";
  }
  if (typeof value === "object") {
    const obj = value;
    const keys = Object.keys(obj).filter((k) => obj[k] !== void 0).sort();
    const parts = keys.map(
      (k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`
    );
    return "{" + parts.join(",") + "}";
  }
  throw new Error(`canonicalJson: unsupported value of type ${typeof value}`);
}
function computeHash(prevHash, entry) {
  return createHash("sha256").update(prevHash).update(canonicalJson(entry)).digest("hex");
}
var AuditEmitter = class {
  constructor(cfg) {
    this.cfg = cfg;
  }
  cfg;
  prevHash = ZERO_HASH;
  emit(partial) {
    const { ts: tsOverride, ...rest } = partial;
    const entry = {
      tenant_id: this.cfg.tenantId,
      agent_id: this.cfg.agentId,
      ts: tsOverride ?? (/* @__PURE__ */ new Date()).toISOString(),
      ...rest
    };
    const prev_hash = this.prevHash;
    const hash = computeHash(prev_hash, entry);
    this.prevHash = hash;
    return { entry, prev_hash, hash };
  }
  // For tests / observability.
  currentHash() {
    return this.prevHash;
  }
};

// src/dispatch.ts
import { randomUUID } from "crypto";
var REQUEST_TIMEOUT_MS = 3e4;
var MAX_ROWS = 1e5;
var MAX_BYTES = 10 * 1024 * 1024;
async function dispatch(inbound, opts) {
  switch (inbound.type) {
    case "ping":
      return { outbound: [{ type: "pong", id: inbound.id, payload: {} }] };
    case "request.list_sources":
      return wrapRequest(inbound.id, opts, () => listSources(opts.runtime));
    case "request.describe_source":
      return wrapRequest(
        inbound.id,
        opts,
        () => describeSource(opts.runtime, inbound.payload.name, inbound.payload.sample_size)
      );
    case "request.list_endpoints":
      return wrapRequest(inbound.id, opts, () => listEndpoints(opts.runtime));
    case "request.call_endpoint":
      return wrapRequest(
        inbound.id,
        opts,
        () => callEndpoint(opts.runtime, inbound.payload.path)
      );
    case "request.query":
      return wrapRequest(
        inbound.id,
        opts,
        () => runQuery(opts.runtime, inbound.payload.expression)
      );
    case "control.rotate_token":
    case "control.update_config":
    case "control.shutdown":
      return { outbound: [] };
  }
}
async function wrapRequest(requestId, opts, handler) {
  const timeoutMs = opts.timeoutMs ?? REQUEST_TIMEOUT_MS;
  const startedAt = Date.now();
  let result;
  try {
    result = await Promise.race([
      handler(),
      timeoutAfter(timeoutMs)
    ]);
  } catch (err) {
    if (err instanceof TimeoutError) {
      return {
        outbound: [
          {
            type: "response.error",
            id: requestId,
            payload: {
              code: "timeout",
              message: `request exceeded ${timeoutMs}ms`
            }
          }
        ]
      };
    }
    return {
      outbound: [
        {
          type: "response.error",
          id: requestId,
          payload: {
            code: "execution",
            message: err instanceof Error ? err.message : String(err)
          }
        }
      ]
    };
  }
  const latency_ms = Date.now() - startedAt;
  if (!result.ok) {
    return {
      outbound: [
        {
          type: "response.error",
          id: requestId,
          payload: { code: result.code, message: result.message }
        }
      ]
    };
  }
  const bytes = byteLengthOfJson(result.rows);
  if (result.rows.length > MAX_ROWS || bytes > MAX_BYTES) {
    return {
      outbound: [
        {
          type: "response.error",
          id: requestId,
          payload: {
            code: "too_large",
            message: `result exceeded cap (rows=${result.rows.length}, bytes=${bytes})`,
            details: {
              row_count: result.rows.length,
              bytes,
              max_rows: MAX_ROWS,
              max_bytes: MAX_BYTES
            }
          }
        }
      ]
    };
  }
  return {
    outbound: [
      {
        type: "response.chunk",
        id: requestId,
        payload: { rows: result.rows, chunk_index: 0 }
      },
      {
        type: "response.end",
        id: requestId,
        payload: {
          total_rows: result.rows.length,
          latency_ms,
          // The published SourceManager.runQuery() returns rows without
          // exposing whether push-down was used. Slice 4 reports false until
          // the OSS API surfaces this; the wire field is still in the spec
          // and consumed by future audit/billing observability.
          pushed_down: false
        }
      }
    ]
  };
}
var TimeoutError = class extends Error {
};
function timeoutAfter(ms) {
  return new Promise(
    (_, reject) => setTimeout(() => reject(new TimeoutError()), ms)
  );
}
function byteLengthOfJson(value) {
  return Buffer.byteLength(JSON.stringify(value), "utf8");
}
async function listSources(runtime) {
  const names = runtime.sources.getSourceNames();
  const statuses = runtime.sources.getAllStatuses();
  const rows = names.map((name) => ({
    name,
    status: statuses[name] ?? null
  }));
  return { ok: true, rows };
}
async function describeSource(runtime, name, sampleSize) {
  const status = runtime.sources.getSourceStatus(name);
  if (!status) {
    return { ok: false, code: "not_found", message: `source "${name}" is not configured` };
  }
  const data = runtime.sources.getSourceData(name) ?? [];
  const size = Math.min(data.length, Math.max(1, sampleSize ?? 5));
  const sample = data.slice(0, size);
  const fields = /* @__PURE__ */ new Set();
  for (const row of sample) {
    if (row && typeof row === "object") {
      for (const k of Object.keys(row)) fields.add(k);
    }
  }
  return {
    ok: true,
    rows: [
      {
        name,
        status,
        fields: [...fields],
        sample
      }
    ]
  };
}
async function listEndpoints(runtime) {
  const rows = [...runtime.endpoints.values()];
  return { ok: true, rows };
}
async function callEndpoint(runtime, path) {
  const endpoint = runtime.endpoints.get(path);
  if (!endpoint) {
    return { ok: false, code: "not_found", message: `endpoint "${path}" is not registered` };
  }
  return runQuery(runtime, endpoint.query);
}
async function runQuery(runtime, expression) {
  try {
    const result = await runtime.sources.runQuery(expression);
    const rows = Array.isArray(result) ? result : [result];
    return { ok: true, rows };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return { ok: false, code: "execution", message };
  }
}
function isInboundEnvelope(value) {
  if (typeof value !== "object" || value === null) return false;
  const env = value;
  return typeof env.type === "string" && typeof env.id === "string" && typeof env.payload === "object" && env.payload !== null;
}

// src/reconnect.ts
var Backoff = class {
  initialMs;
  ceilingMs;
  resetAfterStableMs;
  currentMs;
  constructor(opts = {}) {
    this.initialMs = opts.initialMs ?? 1e3;
    this.ceilingMs = opts.ceilingMs ?? 6e4;
    this.resetAfterStableMs = opts.resetAfterStableMs ?? 5 * 6e4;
    this.currentMs = this.initialMs;
  }
  // Returns the delay (ms) to wait before the next reconnect attempt, then
  // doubles the cap for the following attempt (up to ceiling).
  nextDelayMs(random = Math.random) {
    const delay = Math.floor(random() * this.currentMs);
    this.currentMs = Math.min(this.currentMs * 2, this.ceilingMs);
    return delay;
  }
  reset() {
    this.currentMs = this.initialMs;
  }
};

// src/client.ts
function createClient(opts) {
  const emitter = new EventEmitter();
  const backoff = new Backoff();
  const audit = new AuditEmitter({
    tenantId: opts.tenantId ?? "dev-tenant",
    agentId: opts.agentId ?? "dev-agent"
  });
  let socket = null;
  let stopped = false;
  let reconnectTimer = null;
  let stableTimer = null;
  function assertSafeUrl(url) {
    if (url.startsWith("wss://")) return;
    if (url.startsWith("ws://")) {
      if (!opts.allowInsecure) {
        throw new Error(
          `Refusing to connect: URL must use wss:// (got ${url}). Pass allowInsecure for local dev only.`
        );
      }
      let host;
      try {
        host = new URL(url).hostname;
      } catch {
        throw new Error(`Refusing to connect: invalid URL (${url}).`);
      }
      const localish = host === "localhost" || host === "127.0.0.1" || host === "::1" || host === "0.0.0.0" || host.endsWith(".local");
      if (!localish) {
        throw new Error(
          `Refusing to connect: --insecure is only allowed for localhost targets, got host="${host}". Use wss:// for ${url}.`
        );
      }
      return;
    }
    throw new Error(
      `Refusing to connect: URL must be ws:// or wss:// (got ${url}).`
    );
  }
  async function connect() {
    assertSafeUrl(opts.url);
    const ws = new WebSocket(opts.url, {
      headers: {
        Authorization: `Bearer ${opts.token}`,
        [PROTOCOL_VERSION_HEADER]: String(PROTOCOL_VERSION)
      }
    });
    socket = ws;
    ws.on("open", () => {
      stableTimer = setTimeout(
        () => backoff.reset(),
        backoff.resetAfterStableMs
      );
      emitter.emit("open");
    });
    ws.on("message", (raw) => {
      void handleFrame(ws, raw.toString());
    });
    ws.on("close", (code) => {
      if (stableTimer) {
        clearTimeout(stableTimer);
        stableTimer = null;
      }
      socket = null;
      emitter.emit("close", code);
      if (stopped) return;
      if (NO_RECONNECT_CLOSE_CODES.has(code)) {
        emitter.emit(
          "error",
          new Error(
            `WS closed ${code} \u2014 non-recoverable per \xA77. Re-enrollment or operator action required.`
          )
        );
        return;
      }
      scheduleReconnect();
    });
    ws.on("error", (err) => {
      emitter.emit("error", err);
    });
  }
  async function handleFrame(ws, raw) {
    let parsed;
    try {
      parsed = JSON.parse(raw);
    } catch (err) {
      emitter.emit("error", new Error(`Invalid JSON frame: ${String(err)}`));
      return;
    }
    if (!isInboundEnvelope(parsed)) {
      emitter.emit("error", new Error("Frame missing required envelope fields"));
      return;
    }
    const inbound = parsed;
    const shouldAudit = inbound.type.startsWith("request.");
    const start = Date.now();
    const result = await dispatch(inbound, {
      runtime: opts.runtime,
      ...opts.requestTimeoutMs !== void 0 && {
        timeoutMs: opts.requestTimeoutMs
      }
    });
    const latency_ms = Date.now() - start;
    for (const outbound of result.outbound) {
      send(ws, outbound);
    }
    if (shouldAudit) {
      const auditEnvelope = buildAuditEvent(
        inbound,
        result.outbound,
        latency_ms
      );
      send(ws, auditEnvelope);
    }
  }
  function buildAuditEvent(inbound, outbound, latency_ms) {
    const summary = summarizeOutbound(outbound);
    let expression;
    let endpoint_path;
    if (inbound.type === "request.query") expression = inbound.payload.expression;
    if (inbound.type === "request.call_endpoint") endpoint_path = inbound.payload.path;
    const auditEvent = audit.emit({
      request_id: inbound.id,
      caller_id: "unknown",
      // §13 spec gap — see audit.ts header
      message_type: inbound.type,
      ...expression !== void 0 && { expression },
      ...endpoint_path !== void 0 && { endpoint_path },
      pushed_down: summary.pushed_down,
      row_count: summary.row_count,
      bytes: summary.bytes,
      latency_ms,
      outcome: summary.outcome,
      ...summary.error_code !== void 0 && { error_code: summary.error_code }
    });
    return {
      type: "event.audit",
      id: randomUUID2(),
      payload: auditEvent
    };
  }
  function send(ws, envelope) {
    if (ws.readyState !== WebSocket.OPEN) return;
    ws.send(JSON.stringify(envelope));
  }
  function scheduleReconnect() {
    const delay = backoff.nextDelayMs();
    reconnectTimer = setTimeout(() => {
      reconnectTimer = null;
      if (stopped) return;
      void connect().catch((err) => {
        emitter.emit("error", err);
        scheduleReconnect();
      });
    }, delay);
  }
  return {
    async start() {
      stopped = false;
      await connect();
    },
    async stop() {
      stopped = true;
      if (reconnectTimer) {
        clearTimeout(reconnectTimer);
        reconnectTimer = null;
      }
      if (stableTimer) {
        clearTimeout(stableTimer);
        stableTimer = null;
      }
      if (socket) {
        socket.close();
        socket = null;
      }
    },
    on(event, listener) {
      emitter.on(event, listener);
    }
  };
}
function summarizeOutbound(outbound) {
  let row_count = 0;
  let bytes = 0;
  let pushed_down = false;
  let outcome = "success";
  let error_code;
  for (const env of outbound) {
    if (env.type === "response.error") {
      outcome = "error";
      error_code = env.payload.code;
    } else if (env.type === "response.chunk") {
      bytes += JSON.stringify(env.payload).length;
    } else if (env.type === "response.end") {
      row_count = env.payload.total_rows;
      pushed_down = env.payload.pushed_down;
    }
  }
  return error_code !== void 0 ? { outcome, error_code, row_count, bytes, pushed_down } : { outcome, row_count, bytes, pushed_down };
}
export {
  createClient
};