mock-mcp 0.2.3 → 0.3.0

package/README.md CHANGED
@@ -4,7 +4,7 @@
  [![npm](https://img.shields.io/npm/v/mock-mcp.svg)](https://www.npmjs.com/package/mock-mcp)
  ![license](https://img.shields.io/npm/l/mock-mcp)

- Mock MCP Server - AI-generated mock data. The project pairs a WebSocket batch bridge with MCP tooling so Cursor, Claude Desktop, or any compatible client can fulfill intercepted requests in real time.
+ Mock MCP Server - AI-generated mock data based on your **OpenAPI JSON Schema** definitions. The project pairs a WebSocket batch bridge with MCP tooling so Cursor, Claude Desktop, or any compatible client can fulfill intercepted requests in real time, ensuring strict contract compliance.

  ## Table of Contents

@@ -25,62 +25,66 @@ Mock MCP Server - AI-generated mock data. The project pairs a WebSocket batch br

  1. **Install the package.** Add mock-mcp as a dev dependency inside your project.

- ```bash
- npm install -D mock-mcp
- ```
+ ```bash
+ npm install -D mock-mcp
+ # or
+ yarn add -D mock-mcp
+ # or
+ pnpm add -D mock-mcp
+ ```

  2. **Configure the Model Context Protocol server.** For example, Claude Desktop can launch the binary through npx:

- ```json
- {
-   "mock-mcp": {
-     "command": "npx",
-     "args": ["-y", "mock-mcp@latest"]
-   }
- }
- ```
+ ```json
+ {
+   "mock-mcp": {
+     "command": "npx",
+     "args": ["-y", "mock-mcp@latest"]
+   }
+ }
+ ```

  3. **Connect from your tests.** Use `connect` to retrieve a mock client and request data for intercepted calls.

- ```ts
- import { render, screen, fireEvent } from "@testing-library/react";
- import { connect } from "mock-mcp";
-
- const userSchema = {
-   summary: "Fetch the current user",
-   response: {
-     type: "object",
-     required: ["id", "name"],
-     properties: {
-       id: { type: "number" },
-       name: { type: "string" },
-     },
-   },
- };
-
- it("example", async () => {
-   const mockClient = await connect();
-   const metadata = {
-     schemaUrl: "https://example.com/openapi.json#/paths/~1user/get",
-     schema: userSchema,
-     instructions: "Respond with a single user described by the schema.",
-   };
-
-   fetchMock.get("/user", () =>
-     mockClient.requestMock("/user", "GET", { metadata })
-   );
-
-   const result = await fetch("/user");
-   const data = await result.json();
-   expect(data).toEqual({ id: 1, name: "Jane" });
- }); // 10 minute timeout for AI interaction
- ```
+ ```ts
+ import { render, screen, fireEvent } from "@testing-library/react";
+ import { connect } from "mock-mcp";
+
+ const userSchema = {
+   summary: "Fetch the current user",
+   response: {
+     type: "object",
+     required: ["id", "name"],
+     properties: {
+       id: { type: "number" },
+       name: { type: "string" },
+     },
+   },
+ };
+
+ it("example", async () => {
+   const mockClient = await connect();
+   const metadata = {
+     schemaUrl: "https://example.com/openapi.json#/paths/~1user/get",
+     schema: userSchema,
+     instructions: "Respond with a single user described by the schema.",
+   };
+
+   fetchMock.get("/user", () =>
+     mockClient.requestMock("/user", "GET", { metadata }) // add mock via mock-mcp
+   );
+
+   const result = await fetch("/user");
+   const data = await result.json();
+   expect(data).toEqual({ id: 1, name: "Jane" });
+ }, 10 * 60 * 1000); // 10 minute timeout for AI interaction
+ ```
 
  4. **Run with MCP enabled.** Prompt your AI client to run the persistent test command and provide mocks through the tools.

- ```
- Please run the persistent test: `MOCK_MCP=true npm test test/example.test.tsx` and mock fetch data with mock-mcp
- ```
+ ```
+ Please run the persistent test: `MOCK_MCP=true npm test test/example.test.tsx` and mock fetch data with mock-mcp
+ ```

  ## Why Mock MCP

@@ -116,15 +120,20 @@ Traditional: Write Test → Create Fixtures → Run Test → Maintain Fixtures
                   ↑                              ↓
                   └──────────── Pain Loop ───────┘

- Mock MCP: Write Test → AI Generates Data → Run Test → Solidify Code
-
-                        └─────── Evolution ─────────┘
+ Mock MCP: Write Test → AI Generates Data (Schema-Compliant) → Run Test → Solidify Code
+
+                        └───────────── Evolution ────────────┘
  ```

+ ### Schema-Driven Accuracy
+
+ Unlike "hallucinated" mocks, Mock MCP uses your actual **OpenAPI JSON Schema** definitions to ground the AI. This ensures that generated data not only looks real but strictly adheres to your API contracts, catching integration issues early.
+
  ## What Mock MCP Does

  Mock MCP pairs a WebSocket batch bridge with MCP tooling to move intercepted requests from tests to AI helpers and back again.

+ - **Schema-aware generation** uses your provided metadata (OpenAPI JSON Schema) to ensure mocks match production behavior.
  - **Batch-aware test client** collects every network interception inside a single macrotask and waits for the full response set.
  - **MCP tooling** exposes `get_pending_batches` and `provide_batch_mock_data` so AI agents understand the waiting requests and push data back.
  - **WebSocket bridge** connects the test runner to the MCP server while hiding transport details from both sides.
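
The "single macrotask" grouping the first bullet describes can be illustrated with a small standalone sketch. This is not the library's code; the names (`enqueue`, `batches`) are illustrative, and the library's real debounce and batch-size limits are omitted:

```typescript
// Illustrative sketch of macrotask batching: calls enqueued during one
// macrotask are flushed together by a zero-delay timer.
const queue: string[] = [];
const batches: string[][] = [];
let flushScheduled = false;

function enqueue(id: string): void {
  queue.push(id);
  if (!flushScheduled) {
    flushScheduled = true;
    setTimeout(() => {
      // Runs after the current macrotask completes, so every synchronous
      // enqueue() call above lands in the same batch.
      batches.push(queue.splice(0));
      flushScheduled = false;
    }, 0);
  }
}

enqueue("GET /user");
enqueue("GET /orders"); // same macrotask, so it joins the same batch
```

Because the flush is deferred with a zero-delay timer, a test that fires several intercepted requests synchronously produces exactly one batch for the AI client to fill.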
@@ -189,9 +198,17 @@ await page.route("**/api/users", async (route) => {

  Batch behaviour stays automatic: additional `requestMock` calls issued in the same macrotask are grouped, forwarded, and resolved together.

+ Need to pause the test until everything in-flight resolves? Call `waitForPendingRequests` to block on the current set of pending requests (anything started after the call is not included):
+
+ ```ts
+ // After routing a few requests
+ await mockClient.waitForPendingRequests();
+ // Safe to assert on the results produced by the mocked responses
+ ```
+
  ## Describe Requests with Metadata

- `requestMock` accepts an optional third argument (`RequestMockOptions`) that is forwarded without modification to the MCP server. The most important field in that object is `metadata`, which lets the test process describe each request with the exact OpenAPI/JSON Schema fragment, sample payloads, or test context that the AI client needs to build a response.
+ `requestMock` accepts an optional third argument (`RequestMockOptions`) that is forwarded without modification to the MCP server. The most important field in that object is `metadata`, which lets the test process describe each request with the exact OpenAPI JSON Schema fragment, sample payloads, or test context that the AI client needs to build a response.

  When an MCP client calls `get_pending_batches`, every `requests[].metadata` entry from the test run is included in the response. That is the channel the LLM uses to understand the requested endpoint before supplying data through `provide_batch_mock_data`. Metadata is also persisted when batch logging is enabled, so you can audit what was sent to the model.

@@ -261,6 +278,7 @@ The library exports primitives so you can embed the workflow inside bespoke runn

  - `TestMockMCPServer` starts and stops the WebSocket plus MCP tooling bridge programmatically.
  - `BatchMockCollector` provides a low-level batching client used directly inside test environments.
+ - `BatchMockCollector.waitForPendingRequests()` waits for the currently pending mock requests to settle (resolves when all finish, rejects if any fail).
  - `connect(options)` instantiates `BatchMockCollector` and waits for the WebSocket connection to open.

  Each class accepts logger overrides, timeout tweaks, and other ergonomics surfaced in the technical design.
@@ -5,7 +5,7 @@ export interface BatchMockCollectorOptions {
      /**
       * TCP port exposed by {@link TestMockMCPServer}.
       *
-      * @default 8080
+      * @default 3002
       */
      port?: number;
      /**
@@ -66,6 +66,11 @@ export declare class BatchMockCollector {
       * Request mock data for a specific endpoint/method pair.
       */
      requestMock<T = unknown>(endpoint: string, method: string, options?: RequestMockOptions): Promise<T>;
+     /**
+      * Wait for all requests that are currently pending to settle. Requests
+      * created after this method is called are not included.
+      */
+     waitForPendingRequests(): Promise<void>;
      /**
       * Close the underlying connection and fail all pending requests.
       */
@@ -1,9 +1,10 @@
  import WebSocket from "ws";
  import { BATCH_MOCK_REQUEST, BATCH_MOCK_RESPONSE, } from "../types.js";
+ import { isEnabled } from "./util.js";
  const DEFAULT_TIMEOUT = 60_000;
  const DEFAULT_BATCH_DEBOUNCE_MS = 0;
  const DEFAULT_MAX_BATCH_SIZE = 50;
- const DEFAULT_PORT = 8080;
+ const DEFAULT_PORT = 3002;
  /**
   * Collects HTTP requests issued during a single macrotask and forwards them to
   * the MCP server as a batch for AI-assisted mock generation.
@@ -59,24 +60,45 @@ export class BatchMockCollector {
              headers: options.headers,
              metadata: options.metadata,
          };
+         let settleCompletion;
+         const completion = new Promise((resolve) => {
+             settleCompletion = resolve;
+         });
          return new Promise((resolve, reject) => {
              const timeoutId = setTimeout(() => {
-                 this.pendingRequests.delete(requestId);
-                 reject(new Error(`Mock request timed out after ${this.timeout}ms: ${method} ${endpoint}`));
+                 this.rejectRequest(requestId, new Error(`Mock request timed out after ${this.timeout}ms: ${method} ${endpoint}`));
              }, this.timeout);
              this.pendingRequests.set(requestId, {
                  request,
                  resolve: (data) => {
+                     settleCompletion({ status: "fulfilled", value: undefined });
                      resolve(data);
                  },
                  reject: (error) => {
+                     settleCompletion({ status: "rejected", reason: error });
                      reject(error);
                  },
                  timeoutId,
+                 completion,
              });
              this.enqueueRequest(requestId);
          });
      }
+     /**
+      * Wait for all requests that are currently pending to settle. Requests
+      * created after this method is called are not included.
+      */
+     async waitForPendingRequests() {
+         if (!isEnabled()) {
+             return;
+         }
+         const pendingCompletions = Array.from(this.pendingRequests.values()).map((pending) => pending.completion);
+         const results = await Promise.all(pendingCompletions);
+         const rejected = results.find((result) => result.status === "rejected");
+         if (rejected) {
+             throw rejected.reason;
+         }
+     }
      /**
       * Close the underlying connection and fail all pending requests.
       */
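
The `completion` bookkeeping added above is a standard deferred pattern: each request carries a promise that always resolves (never rejects) with a settlement record, so waiters can use `Promise.all` without short-circuiting on the first failure. A minimal standalone sketch, with illustrative names (`makeCompletion`, `waitForAll`) that are not the package's exports:

```typescript
// A settlement record mirrors the shape the collector stores per request.
type Settlement =
  | { status: "fulfilled" }
  | { status: "rejected"; reason: unknown };

// Create a promise plus an external settle function (a "deferred").
function makeCompletion() {
  let settle!: (s: Settlement) => void;
  const completion = new Promise<Settlement>((resolve) => {
    settle = resolve;
  });
  return { completion, settle };
}

// Wait for every completion, then rethrow the first failure, like
// waitForPendingRequests does above.
async function waitForAll(completions: Promise<Settlement>[]): Promise<void> {
  const results = await Promise.all(completions);
  const rejected = results.find((r) => r.status === "rejected");
  if (rejected && rejected.status === "rejected") {
    throw rejected.reason;
  }
}
```

Because the completion promise only ever resolves, `Promise.all` sees every request through to the end, and request failures surface as a thrown `reason` afterwards.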
@@ -1,11 +1,11 @@
  import { BatchMockCollector } from "./batch-mock-collector.js";
+ import { isEnabled } from "./util.js";
  /**
   * Convenience helper that creates a {@link BatchMockCollector} and waits for the
   * underlying WebSocket connection to become ready before resolving.
   */
  export const connect = async (options) => {
-     const isEnabled = process.env.MOCK_MCP !== undefined && process.env.MOCK_MCP !== "0";
-     if (!isEnabled) {
+     if (!isEnabled()) {
          console.log("[mock-mcp] Skipping (set MOCK_MCP=1 to enable)");
          return;
      }
@@ -0,0 +1 @@
+ export declare const isEnabled: () => boolean;
@@ -0,0 +1,3 @@
+ export const isEnabled = () => {
+     return process.env.MOCK_MCP !== undefined && process.env.MOCK_MCP !== "0";
+ };
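
The extracted gate treats any value other than unset or the literal string `"0"` as enabled, which is why `MOCK_MCP=true`, `MOCK_MCP=1`, and so on all work. A quick standalone check mirroring the util above:

```typescript
// Mirrors the isEnabled gate from src/client/util.ts: enabled unless
// MOCK_MCP is unset or exactly "0".
const isEnabled = (): boolean =>
  process.env.MOCK_MCP !== undefined && process.env.MOCK_MCP !== "0";

process.env.MOCK_MCP = "1";
console.log(isEnabled()); // true

process.env.MOCK_MCP = "0";
console.log(isEnabled()); // false

delete process.env.MOCK_MCP;
console.log(isEnabled()); // false
```

Note that `MOCK_MCP=""` (set but empty) also counts as enabled under this rule, since the variable is defined and not `"0"`.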
package/dist/connect.cjs CHANGED
@@ -41,11 +41,16 @@ var import_ws = __toESM(require("ws"), 1);
  var BATCH_MOCK_REQUEST = "BATCH_MOCK_REQUEST";
  var BATCH_MOCK_RESPONSE = "BATCH_MOCK_RESPONSE";

+ // src/client/util.ts
+ var isEnabled = () => {
+   return process.env.MOCK_MCP !== void 0 && process.env.MOCK_MCP !== "0";
+ };
+
  // src/client/batch-mock-collector.ts
  var DEFAULT_TIMEOUT = 6e4;
  var DEFAULT_BATCH_DEBOUNCE_MS = 0;
  var DEFAULT_MAX_BATCH_SIZE = 50;
- var DEFAULT_PORT = 8080;
+ var DEFAULT_PORT = 3002;
  var BatchMockCollector = class {
    ws;
    pendingRequests = /* @__PURE__ */ new Map();
@@ -97,10 +102,14 @@ var BatchMockCollector = class {
        headers: options.headers,
        metadata: options.metadata
      };
+     let settleCompletion;
+     const completion = new Promise((resolve) => {
+       settleCompletion = resolve;
+     });
      return new Promise((resolve, reject) => {
        const timeoutId = setTimeout(() => {
-         this.pendingRequests.delete(requestId);
-         reject(
+         this.rejectRequest(
+           requestId,
            new Error(
              `Mock request timed out after ${this.timeout}ms: ${method} ${endpoint}`
            )
@@ -109,16 +118,38 @@ var BatchMockCollector = class {
      this.pendingRequests.set(requestId, {
        request,
        resolve: (data) => {
+         settleCompletion({ status: "fulfilled", value: void 0 });
          resolve(data);
        },
        reject: (error) => {
+         settleCompletion({ status: "rejected", reason: error });
          reject(error);
        },
-       timeoutId
+       timeoutId,
+       completion
      });
      this.enqueueRequest(requestId);
    });
  }
+ /**
+  * Wait for all requests that are currently pending to settle. Requests
+  * created after this method is called are not included.
+  */
+ async waitForPendingRequests() {
+   if (!isEnabled()) {
+     return;
+   }
+   const pendingCompletions = Array.from(this.pendingRequests.values()).map(
+     (pending) => pending.completion
+   );
+   const results = await Promise.all(pendingCompletions);
+   const rejected = results.find(
+     (result) => result.status === "rejected"
+   );
+   if (rejected) {
+     throw rejected.reason;
+   }
+ }
  /**
   * Close the underlying connection and fail all pending requests.
   */
@@ -253,8 +284,7 @@ var BatchMockCollector = class {

  // src/client/connect.ts
  var connect = async (options) => {
-   const isEnabled = process.env.MOCK_MCP !== void 0 && process.env.MOCK_MCP !== "0";
-   if (!isEnabled) {
+   if (!isEnabled()) {
      console.log("[mock-mcp] Skipping (set MOCK_MCP=1 to enable)");
      return;
    }
@@ -5,7 +5,7 @@ interface BatchMockCollectorOptions {
    /**
     * TCP port exposed by {@link TestMockMCPServer}.
     *
-    * @default 8080
+    * @default 3002
     */
    port?: number;
    /**
@@ -66,6 +66,11 @@ declare class BatchMockCollector {
     * Request mock data for a specific endpoint/method pair.
     */
    requestMock<T = unknown>(endpoint: string, method: string, options?: RequestMockOptions): Promise<T>;
+   /**
+    * Wait for all requests that are currently pending to settle. Requests
+    * created after this method is called are not included.
+    */
+   waitForPendingRequests(): Promise<void>;
    /**
     * Close the underlying connection and fail all pending requests.
     */
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mock-mcp",
-   "version": "0.2.3",
+   "version": "0.3.0",
    "description": "An MCP server enabling LLMs to write integration tests through live test environment interaction",
    "main": "./dist/connect.cjs",
    "type": "module",
@@ -70,4 +70,4 @@
      "path": "cz-conventional-changelog"
    }
  }
- }
+ }