mock-mcp 0.0.1 β†’ 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 MCP Land

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,375 @@
# mock-mcp

![Node CI](https://github.com/mcpland/mock-mcp/workflows/Node%20CI/badge.svg)
[![npm](https://img.shields.io/npm/v/mock-mcp.svg)](https://www.npmjs.com/package/mock-mcp)
![license](https://img.shields.io/npm/l/mock-mcp)

Mock MCP server for AI-generated mock data. The project pairs a WebSocket batch bridge with MCP tooling so Cursor, Claude Desktop, or any compatible client can fulfill intercepted requests in real time.

## Table of Contents

- [Quick Start](#quick-start)
- [Why Mock MCP](#why-mock-mcp)
- [What Mock MCP Does](#what-mock-mcp-does)
- [Configure MCP Server](#configure-mcp-server)
- [Connect From Tests](#connect-from-tests)
- [Describe Requests with Metadata](#describe-requests-with-metadata)
- [MCP tools](#mcp-tools)
- [Available APIs](#available-apis)
- [Environment Variables](#environment-variables)
- [How It Works](#how-it-works)
- [Use the development scripts](#use-the-development-scripts)
- [License](#license)

## Quick Start

1. **Install the package.** Add mock-mcp as a dev dependency inside your project.

```bash
npm install -D mock-mcp
```

2. **Configure the Model Context Protocol server.** For example, Claude Desktop can launch the binary through npx:

```json
{
  "mock-mcp": {
    "command": "npx",
    "args": ["-y", "mock-mcp@latest"]
  }
}
```

3. **Connect from your tests.** Use `connect` to retrieve a mock client and request data for intercepted calls.

```ts
import { render, screen, fireEvent } from "@testing-library/react";
import { connect } from "mock-mcp";

const userSchema = {
  summary: "Fetch the current user",
  response: {
    type: "object",
    required: ["id", "name"],
    properties: {
      id: { type: "number" },
      name: { type: "string" },
    },
  },
};

it("example", async () => {
  const mockClient = await connect();
  const metadata = {
    schemaUrl: "https://example.com/openapi.json#/paths/~1user/get",
    schema: userSchema,
    instructions: "Respond with a single user described by the schema.",
  };

  // fetchMock comes from your HTTP mocking setup (e.g. fetch-mock)
  fetchMock.get("/user", () =>
    mockClient.requestMock("/user", "GET", { metadata })
  );

  const result = await fetch("/user");
  const data = await result.json();
  expect(data).toEqual({ id: 1, name: "Jane" });
}, 600_000); // 10 minute timeout for AI interaction
```

4. **Run with MCP enabled.** Prompt your AI client to run the persistent test command and provide mocks through the tools.

```
Please run the persistent test: `MOCK_MCP=true npm test test/example.test.tsx` and mock fetch data with mock-mcp
```

## Why Mock MCP

### The Problem with Traditional Mock Approaches

Testing modern web applications often feels like preparing for a battle: you need the right weapons (test cases), ammunition (mock data), and strategy (test logic). But creating mock data has always been the most tedious part:

```typescript
// Traditional approach: Manual fixture hell
const mockUsers = [
  { id: 1, name: "Alice", email: "alice@example.com", role: "admin", ... },
  { id: 2, name: "Bob", email: "bob@example.com", role: "user", ... },
  // ... 50 more lines of boring manual data entry
];
```

**Common Pain Points:**

| Challenge                   | Traditional Solutions           | Limitations                                |
| --------------------------- | ------------------------------- | ------------------------------------------ |
| **Creating Realistic Data** | Manual JSON files or faker.js   | ❌ Time-consuming, lacks business logic    |
| **Complex Scenarios**       | Hardcoded edge cases            | ❌ Difficult to maintain, brittle          |
| **Evolving Requirements**   | Update fixtures manually        | ❌ High maintenance cost                   |
| **Learning Curve**          | New team members write fixtures | ❌ Steep learning curve for complex domains |
| **CI/CD Integration**       | Static fixtures only            | ❌ Can't adapt to new scenarios            |

### The Mock MCP Innovation

Mock MCP introduces a **paradigm shift**: instead of treating mock data as static artifacts, it makes them **AI-generated, interactive, and evolvable**.

```
Traditional: Write Test → Create Fixtures → Run Test → Maintain Fixtures
                  ↑                                         ↓
                  └─────────────── Pain Loop ───────────────┘

Mock MCP:    Write Test → AI Generates Data → Run Test → Solidify Code
                  ↑                                         ↓
                  └─────────────── Evolution ───────────────┘
```

## What Mock MCP Does

Mock MCP pairs a WebSocket batch bridge with MCP tooling to move intercepted requests from tests to AI helpers and back again.

- **Batch-aware test client** collects every network interception inside a single macrotask and waits for the full response set.
- **MCP tooling** exposes `get_pending_batches` and `provide_batch_mock_data` so AI agents understand the waiting requests and push data back.
- **WebSocket bridge** connects the test runner to the MCP server while hiding transport details from both sides.
- **Timeouts, TTLs, and cleanup** guard the test runner from stale batches or disconnected clients.
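
The TTL guard in the last bullet can be sketched in isolation. This is a minimal, hypothetical pruning routine, not mock-mcp's actual code; the names `Batch` and `pruneStale` are illustrative:

```typescript
// Illustrative TTL cleanup: drop queued batches whose timestamp is older
// than ttlMs. Hypothetical names, not mock-mcp's API.
type Batch = { batchId: string; timestamp: number };

function pruneStale(
  batches: Map<string, Batch>,
  ttlMs: number,
  now: number = Date.now()
): string[] {
  const removed: string[] = [];
  for (const [id, batch] of batches) {
    if (now - batch.timestamp > ttlMs) {
      batches.delete(id); // Map allows deletion during for...of iteration
      removed.push(id);
    }
  }
  return removed;
}
```

A sweep like this would typically run on an interval so tests waiting on a dead batch fail fast instead of hanging.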

## Configure MCP Server

CLI flags keep the WebSocket bridge and the MCP transports aligned. Use them to adapt the server to your local ports while the Environment Variables section covers per-process overrides:

| Option         | Description                                                        | Default |
| -------------- | ------------------------------------------------------------------ | ------- |
| `--port`, `-p` | WebSocket port for test runners                                    | `3002`  |
| `--no-stdio`   | Disable the MCP stdio transport (useful for local debugging/tests) | enabled |

The CLI installs a SIGINT/SIGTERM handler so `Ctrl+C` shuts everything down cleanly.

**Add the server to MCP clients.** MCP clients such as Cursor or Claude Desktop need an entry in their configuration so they can launch the bridge:

```jsonc
{
  "mcpServers": {
    "mock-mcp": {
      "command": "npx",
      "args": ["-y", "mock-mcp@latest"],
      "env": {
        "MCP_SERVER_PORT": "3002" // 3002 is the default port
      }
    }
  }
}
```

Restart the client and confirm that the `mock-mcp` server exposes two tools.

## Connect From Tests

Tests call `connect` to spin up a `BatchMockCollector`, intercept HTTP calls, and wait for fulfilled data:

```ts
// tests/mocks.ts
import { connect } from "mock-mcp";

const mockClient = await connect({
  port: 3002,
  timeout: 60000,
});

await page.route("**/api/users", async (route) => {
  const url = new URL(route.request().url());
  const data = await mockClient.requestMock(
    url.pathname,
    route.request().method()
  );

  await route.fulfill({
    status: 200,
    contentType: "application/json",
    body: JSON.stringify(data),
  });
});
```

Batch behaviour stays automatic: additional `requestMock` calls issued in the same macrotask are grouped, forwarded, and resolved together.
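
The same-macrotask grouping can be illustrated independently of the library. The toy batcher below (hypothetical names, not mock-mcp's API) arms a `setTimeout(0)` on the first request and flushes everything queued in that tick as one batch:

```typescript
// Toy same-macrotask batcher: the first request() arms a setTimeout(0);
// every request() issued before that timer fires joins the same batch.
type Pending<T> = { id: number; resolve: (value: T) => void };

class TinyBatcher<T> {
  private queue: Pending<T>[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;
  private nextId = 0;

  constructor(private fulfill: (batch: Pending<T>[]) => void) {}

  request(): Promise<T> {
    return new Promise<T>((resolve) => {
      this.queue.push({ id: this.nextId++, resolve });
      if (!this.timer) {
        this.timer = setTimeout(() => {
          this.timer = null;
          this.fulfill(this.queue.splice(0)); // one flush per macrotask
        }, 0);
      }
    });
  }
}
```

Two `request()` calls made in the same tick produce a single `fulfill` invocation with a two-entry batch, mirroring how concurrent `requestMock` calls end up in one `BATCH_MOCK_REQUEST`.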

## Describe Requests with Metadata

`requestMock` accepts an optional third argument (`RequestMockOptions`) that is forwarded without modification to the MCP server. The most important field in that object is `metadata`, which lets the test process describe each request with the exact OpenAPI/JSON Schema fragment, sample payloads, or test context that the AI client needs to build a response.

When an MCP client calls `get_pending_batches`, every `requests[].metadata` entry from the test run is included in the response. That is the channel the LLM uses to understand the requested endpoint before supplying data through `provide_batch_mock_data`. Metadata is also persisted when batch logging is enabled, so you can audit what was sent to the model.

```ts
const listProductsSchema = {
  summary: "List products by popularity",
  response: {
    type: "array",
    items: {
      type: "object",
      required: ["id", "name", "price"],
      properties: {
        id: { type: "string" },
        name: { type: "string" },
        price: { type: "number" },
      },
    },
  },
};

await mockClient.requestMock("/api/products", "GET", {
  metadata: {
    // Link or embed the authoritative contract for the AI to follow.
    schemaUrl:
      "https://shop.example.com/openapi.json#/paths/~1api~1products/get",
    schema: listProductsSchema,
    instructions:
      "Return 3 popular products with stable ids so the UI can snapshot them.",
    testFile: expect.getState().testPath,
  },
});
```

**Tips for useful metadata**

- Embed the OpenAPI/JSON Schema snippet (or a reference URL) that describes the response structure for the intercepted endpoint.
- Include contextual hints such as the test name, scenario, user role, or seed data so the model can mirror your expected fixtures.
- Keep the metadata JSON-serializable and deterministic; large binary blobs or class instances will be dropped.
- Reuse helper functions to centralize schema definitions so each test only supplies the endpoint-specific instructions.
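
The last tip can be made concrete with a small helper. The names here (`schemaCatalogue`, `metadataFor`) are hypothetical, not part of mock-mcp; the point is that schemas live in one place and each call site supplies only its instructions:

```typescript
// Hypothetical helper: keep one JSON-serializable schema catalogue and merge
// in the per-test instructions at the call site.
const schemaCatalogue: Record<string, unknown> = {
  "GET /api/products": {
    summary: "List products by popularity",
    response: { type: "array", items: { type: "object" } },
  },
};

function metadataFor(
  method: string,
  endpoint: string,
  instructions: string
): Record<string, unknown> {
  return {
    schema: schemaCatalogue[`${method} ${endpoint}`],
    instructions,
  };
}

// Usage with requestMock (mockClient as in the examples above):
// mockClient.requestMock("/api/products", "GET", {
//   metadata: metadataFor("GET", "/api/products", "Return 3 popular products."),
// });
```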

## MCP tools

Two tools keep the queue visible to AI agents and deliver mocks back to waiting tests:

| Tool                      | Purpose                                    | Response                                                |
| ------------------------- | ------------------------------------------ | ------------------------------------------------------- |
| `get_pending_batches`     | Lists queued batches with request metadata | JSON string (array of `{batchId, timestamp, requests}`) |
| `provide_batch_mock_data` | Sends mock payloads for a specific batch   | JSON string reporting success                           |

Example payload for `provide_batch_mock_data`:

```jsonc
{
  "batchId": "batch-3",
  "mocks": [
    {
      "requestId": "req-7",
      "data": { "users": [{ "id": 1, "name": "Alice" }] }
    }
  ]
}
```

## Available APIs

The library exports primitives so you can embed the workflow inside bespoke runners or scripts:

- `TestMockMCPServer` starts and stops the WebSocket plus MCP tooling bridge programmatically.
- `BatchMockCollector` provides a low-level batching client used directly inside test environments.
- `connect(options)` instantiates `BatchMockCollector` and waits for the WebSocket connection to open.

Each class accepts logger overrides, timeout tweaks, and other ergonomics surfaced in the technical design.

## Environment Variables

| Variable          | Description                                                                  | Default |
| ----------------- | ---------------------------------------------------------------------------- | ------- |
| `MCP_SERVER_PORT` | Overrides the WebSocket port used by both the CLI and any spawned MCP host.  | `3002`  |
| `MOCK_MCP`        | Enables the test runner hook so intercepted requests are routed to mock-mcp. | unset   |

## How It Works

Three collaborating processes share responsibilities while staying loosely coupled:

| Process          | Responsibility                                        | Technology                               | Communication                              |
| ---------------- | ----------------------------------------------------- | ---------------------------------------- | ------------------------------------------ |
| **Test Process** | Executes test cases and intercepts HTTP requests      | Playwright/Puppeteer + WebSocket client  | WebSocket → MCP Server                     |
| **MCP Server**   | Coordinates batches and forwards data between parties | Node.js + WebSocket server + MCP SDK     | stdio ↔ MCP Client · WebSocket ↔ Test Flow |
| **MCP Client**   | Uses AI to produce mock data via MCP tools            | Cursor / Claude Desktop / custom clients | MCP protocol → MCP Server                  |

### Data flow sequence

The sequence below clarifies the message order between the three processes:

```
┌──────────────────┐          ┌──────────────────┐          ┌──────────────────┐
│   Test Process   │          │    MCP Server    │          │    MCP Client    │
│  (Browser Test)  │          │                  │          │       (AI)       │
└────────┬─────────┘          └────────┬─────────┘          └────────┬─────────┘
         │                             │                             │
         │ 1. Start Test               │                             │
         │    page.goto()              │                             │
         ├────────────────────────────►│                             │
         │                             │                             │
         │ 2. Trigger concurrent       │                             │
         │    requests                 │                             │
         │    fetch /api/users         │                             │
         │    fetch /api/products      │                             │
         │    fetch /api/orders        │                             │
         │    (Promises pending)       │                             │
         │                             │                             │
         │ 3. setTimeout(0) batches    │                             │
         │    BATCH_MOCK_REQUEST       │                             │
         │    [req-1, req-2, req-3]    │                             │
         ├════════════════════════════►│                             │
         │                             │                             │
         │    Test paused...           │ 4. Store batch in queue     │
         │    Awaiting mocks           │    pendingBatches.set()     │
         │                             │                             │
         │                             │ 5. Wait for MCP Client      │
         │                             │    to call tools            │
         │                             │                             │
         │                             │◄────────────────────────────┤
         │                             │ 6. Tool Call:               │
         │                             │    get_pending_batches      │
         │                             │                             │
         │                             │ 7. Return batch info        │
         │                             ├────────────────────────────►│
         │                             │    [{batchId, requests}]    │
         │                             │                             │
         │                             │          8. AI analyzes     │
         │                             │             Generates mocks │
         │                             │                             │
         │                             │◄────────────────────────────┤
         │                             │ 9. Tool Call:               │
         │                             │    provide_batch_mock_data  │
         │                             │    {mocks: [...]}           │
         │                             │                             │
         │ 10. BATCH_MOCK_RESPONSE     │                             │
         │     [mock-1, mock-2, ...]   │                             │
         │◄════════════════════════════┤                             │
         │                             │                             │
         │ 11. Batch resolve           │                             │
         │     req-1.resolve()         │                             │
         │     req-2.resolve()         │                             │
         │     req-3.resolve()         │                             │
         │                             │                             │
         │ 12. Test continues          │                             │
         │     Assertions &            │                             │
         │     Verification            │                             │
         │                             │                             │
         │ 13. Test Complete ✓         │                             │
         ▼                             ▼                             ▼

Protocol Summary:
─────────────────
- Test Process ←→ MCP Server: WebSocket/IPC
  Message types: BATCH_MOCK_REQUEST, BATCH_MOCK_RESPONSE

- MCP Server ←→ MCP Client: Stdio/JSON-RPC (MCP Protocol)
  Tools: get_pending_batches, provide_batch_mock_data

Key Features:
─────────────
✓ Batch processing of concurrent requests
✓ Non-blocking test execution during AI mock generation
✓ Real-time mock data generation by AI
✓ Automatic promise resolution after mock provision
```

## Use the development scripts

```bash
pnpm test   # runs Vitest suites
pnpm dev    # tsx watch mode for the CLI
pnpm lint   # eslint --ext .ts
```

Vitest suites spin up ephemeral WebSocket servers, so avoid running them concurrently with an already running instance on the same port.

## License

MIT
@@ -0,0 +1,82 @@
type Logger = Pick<Console, "log" | "warn" | "error"> & {
  debug?: (...args: unknown[]) => void;
};
export interface BatchMockCollectorOptions {
  /**
   * TCP port exposed by {@link TestMockMCPServer}.
   *
   * @default 8080
   */
  port?: number;
  /**
   * Timeout for individual mock requests in milliseconds.
   *
   * @default 60000
   */
  timeout?: number;
  /**
   * Delay (in milliseconds) that determines how long the collector waits before
   * flushing the current batch. Setting this to 0 mirrors the "flush on the next
   * macrotask" approach described in the technical design document.
   *
   * @default 0
   */
  batchDebounceMs?: number;
  /**
   * Maximum number of requests that may be included in a single batch payload.
   * Requests that exceed this limit will be split into multiple batches.
   *
   * @default 50
   */
  maxBatchSize?: number;
  /**
   * Optional custom logger. Defaults to `console`.
   */
  logger?: Logger;
}
export interface RequestMockOptions {
  body?: unknown;
  headers?: Record<string, string>;
  metadata?: Record<string, unknown>;
}
/**
 * Collects HTTP requests issued during a single macrotask and forwards them to
 * the MCP server as a batch for AI-assisted mock generation.
 */
export declare class BatchMockCollector {
  private readonly ws;
  private readonly pendingRequests;
  private readonly queuedRequestIds;
  private readonly timeout;
  private readonly batchDebounceMs;
  private readonly maxBatchSize;
  private readonly logger;
  private batchTimer;
  private requestIdCounter;
  private closed;
  private readyResolve?;
  private readyReject?;
  private readonly readyPromise;
  constructor(options?: BatchMockCollectorOptions);
  /**
   * Ensures the underlying WebSocket connection is ready for use.
   */
  waitUntilReady(): Promise<void>;
  /**
   * Request mock data for a specific endpoint/method pair.
   */
  requestMock<T = unknown>(endpoint: string, method: string, options?: RequestMockOptions): Promise<T>;
  /**
   * Close the underlying connection and fail all pending requests.
   */
  close(code?: number): Promise<void>;
  private setupWebSocket;
  private handleMessage;
  private resolveRequest;
  private enqueueRequest;
  private flushQueue;
  private sendBatch;
  private rejectRequest;
  private failAllPending;
}
export {};
@@ -0,0 +1,201 @@
import WebSocket from "ws";
import { BATCH_MOCK_REQUEST, BATCH_MOCK_RESPONSE } from "../types.js";
const DEFAULT_TIMEOUT = 60_000;
const DEFAULT_BATCH_DEBOUNCE_MS = 0;
const DEFAULT_MAX_BATCH_SIZE = 50;
const DEFAULT_PORT = 8080;
/**
 * Collects HTTP requests issued during a single macrotask and forwards them to
 * the MCP server as a batch for AI-assisted mock generation.
 */
export class BatchMockCollector {
  ws;
  pendingRequests = new Map();
  queuedRequestIds = new Set();
  timeout;
  batchDebounceMs;
  maxBatchSize;
  logger;
  batchTimer = null;
  requestIdCounter = 0;
  closed = false;
  readyResolve;
  readyReject;
  readyPromise;
  constructor(options = {}) {
    this.timeout = options.timeout ?? DEFAULT_TIMEOUT;
    this.batchDebounceMs = options.batchDebounceMs ?? DEFAULT_BATCH_DEBOUNCE_MS;
    this.maxBatchSize = options.maxBatchSize ?? DEFAULT_MAX_BATCH_SIZE;
    this.logger = options.logger ?? console;
    const port = options.port ?? DEFAULT_PORT;
    this.readyPromise = new Promise((resolve, reject) => {
      this.readyResolve = resolve;
      this.readyReject = reject;
    });
    const wsUrl = `ws://localhost:${port}`;
    this.ws = new WebSocket(wsUrl);
    this.setupWebSocket();
  }
  /**
   * Ensures the underlying WebSocket connection is ready for use.
   */
  async waitUntilReady() {
    return this.readyPromise;
  }
  /**
   * Request mock data for a specific endpoint/method pair.
   */
  async requestMock(endpoint, method, options = {}) {
    if (this.closed) {
      throw new Error("BatchMockCollector has been closed");
    }
    await this.waitUntilReady();
    const requestId = `req-${++this.requestIdCounter}`;
    const request = {
      requestId,
      endpoint,
      method,
      body: options.body,
      headers: options.headers,
      metadata: options.metadata,
    };
    return new Promise((resolve, reject) => {
      const timeoutId = setTimeout(() => {
        this.pendingRequests.delete(requestId);
        reject(new Error(`Mock request timed out after ${this.timeout}ms: ${method} ${endpoint}`));
      }, this.timeout);
      this.pendingRequests.set(requestId, {
        request,
        resolve: (data) => {
          resolve(data);
        },
        reject: (error) => {
          reject(error);
        },
        timeoutId,
      });
      this.enqueueRequest(requestId);
    });
  }
  /**
   * Close the underlying connection and fail all pending requests.
   */
  async close(code) {
    if (this.closed) {
      return;
    }
    this.closed = true;
    if (this.batchTimer) {
      clearTimeout(this.batchTimer);
      this.batchTimer = null;
    }
    this.queuedRequestIds.clear();
    const closePromise = new Promise((resolve) => {
      this.ws.once("close", () => resolve());
    });
    this.ws.close(code);
    this.failAllPending(new Error("BatchMockCollector has been closed"));
    await closePromise;
  }
  setupWebSocket() {
    this.ws.on("open", () => {
      this.logger.log("🔌 Connected to mock MCP WebSocket endpoint");
      this.readyResolve?.();
    });
    this.ws.on("message", (data) => this.handleMessage(data));
    this.ws.on("error", (error) => {
      this.logger.error("❌ WebSocket error:", error);
      this.readyReject?.(error instanceof Error ? error : new Error(String(error)));
      this.failAllPending(error instanceof Error ? error : new Error(String(error)));
    });
    this.ws.on("close", () => {
      this.logger.warn("🔌 WebSocket connection closed");
      this.failAllPending(new Error("WebSocket connection closed"));
    });
  }
  handleMessage(data) {
    let parsed;
    try {
      parsed = JSON.parse(data.toString());
    }
    catch (error) {
      this.logger.error("Failed to parse server message:", error);
      return;
    }
    if (parsed.type !== BATCH_MOCK_RESPONSE) {
      this.logger.warn("Received unsupported message type", parsed.type);
      return;
    }
    this.logger.debug?.(`📦 Received mock data for ${parsed.mocks.length} requests (batch ${parsed.batchId})`);
    for (const mock of parsed.mocks) {
      this.resolveRequest(mock);
    }
  }
  resolveRequest(mock) {
    const pending = this.pendingRequests.get(mock.requestId);
    if (!pending) {
      this.logger.warn(`Received mock for unknown request: ${mock.requestId}`);
      return;
    }
    clearTimeout(pending.timeoutId);
    this.pendingRequests.delete(mock.requestId);
    pending.resolve(mock.data);
  }
  enqueueRequest(requestId) {
    this.queuedRequestIds.add(requestId);
    if (this.batchTimer) {
      return;
    }
    this.batchTimer = setTimeout(() => {
      this.batchTimer = null;
      this.flushQueue();
    }, this.batchDebounceMs);
  }
  flushQueue() {
    const queuedIds = Array.from(this.queuedRequestIds);
    this.queuedRequestIds.clear();
    if (queuedIds.length === 0) {
      return;
    }
    for (let i = 0; i < queuedIds.length; i += this.maxBatchSize) {
      const chunkIds = queuedIds.slice(i, i + this.maxBatchSize);
      const requests = [];
      for (const id of chunkIds) {
        const pending = this.pendingRequests.get(id);
        if (pending) {
          requests.push(pending.request);
        }
      }
      if (requests.length > 0) {
        this.sendBatch(requests);
      }
    }
  }
  sendBatch(requests) {
    if (this.ws.readyState !== WebSocket.OPEN) {
      const error = new Error("WebSocket is not open");
      requests.forEach((request) => this.rejectRequest(request.requestId, error));
      return;
    }
    const payload = {
      type: BATCH_MOCK_REQUEST,
      requests,
    };
    this.logger.debug?.(`📤 Sending batch with ${requests.length} request(s) to MCP server`);
    this.ws.send(JSON.stringify(payload));
  }
  rejectRequest(requestId, error) {
    const pending = this.pendingRequests.get(requestId);
    if (!pending) {
      return;
    }
    clearTimeout(pending.timeoutId);
    this.pendingRequests.delete(requestId);
    pending.reject(error);
  }
  failAllPending(error) {
    for (const requestId of Array.from(this.pendingRequests.keys())) {
      this.rejectRequest(requestId, error);
    }
  }
}
@@ -0,0 +1,8 @@
import { BatchMockCollector } from "./batch-mock-collector.js";
import type { BatchMockCollectorOptions } from "./batch-mock-collector.js";
export type ConnectOptions = number | BatchMockCollectorOptions | undefined;
/**
 * Convenience helper that creates a {@link BatchMockCollector} and waits for the
 * underlying WebSocket connection to become ready before resolving.
 */
export declare const connect: (options?: ConnectOptions) => Promise<BatchMockCollector | void>;
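
Since `ConnectOptions` accepts either a bare port number or an options object, the implementation presumably normalizes the union before constructing the collector. A sketch of that normalization (the real code is not part of this diff; `BatchMockCollectorOptions` is reproduced minimally so the example is self-contained):

```typescript
// Sketch: fold `number | options | undefined` into one options shape,
// as connect() would likely do before `new BatchMockCollector(...)`.
interface BatchMockCollectorOptions {
  port?: number;
  timeout?: number;
}
type ConnectOptions = number | BatchMockCollectorOptions | undefined;

function normalizeConnectOptions(
  options?: ConnectOptions
): BatchMockCollectorOptions {
  if (typeof options === "number") {
    return { port: options }; // connect(3002) shorthand
  }
  return options ?? {};
}
```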