@h1deya/langchain-mcp-tools 0.2.9 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,18 +1,18 @@
1
- # MCP To LangChain Tools Conversion Utility / TypeScript [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/LICENSE) [![npm version](https://img.shields.io/npm/v/@h1deya/langchain-mcp-tools.svg)](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
1
+ # MCP to LangChain Tools Conversion Utility / TypeScript [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/LICENSE) [![npm version](https://img.shields.io/npm/v/@h1deya/langchain-mcp-tools.svg)](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools) [![network dependents](https://dependents.info/hideya/langchain-mcp-tools-ts/badge)](https://dependents.info/hideya/langchain-mcp-tools-ts)
2
2
 
3
3
  A simple, lightweight library intended to simplify the use of
4
4
  [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
5
5
  server tools with LangChain.
6
6
 
7
- Its simplicity and extra features for stdio MCP servers can make it useful as a basis for your own customizations.
8
- However, it only supports text results of tool calls and does not support MCP features other than tools.
7
+ Its simplicity and extra features, such as
8
+ [tools schema adjustments for LLM compatibility](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README.md#llm-provider-schema-compatibility)
9
+ and [tools invocation logging](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README.md#debugging-and-logging),
10
+ make it a useful basis for your experiments and customizations.
11
+ However, it only supports text results of tool calls and does not support MCP features other than Tools.
9
12
 
10
- LangChain's **official LangChain.js MCP Adapters** library,
11
- which supports comprehensive integration with LangChain, has been released at:
12
- - npmjs: https://www.npmjs.com/package/@langchain/mcp-adapters
13
- - github: https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-mcp-adapters`
14
-
15
- You may want to consider using the above if you don't have specific needs for this library.
13
+ [LangChain's **official LangChain.js MCP Adapters** library](https://www.npmjs.com/package/@langchain/mcp-adapters),
14
+ which supports comprehensive integration with LangChain, has been released.
15
+ You may want to consider using it if you don't have specific needs for this library.
16
16
 
17
17
  ## Introduction
18
18
 
@@ -22,24 +22,20 @@ server tools with LangChain / TypeScript.
22
22
 
23
23
  [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is the de facto industry standard
24
24
  that dramatically expands the scope of LLMs by enabling the integration of external tools and resources,
25
- including DBs, GitHub, Google Drive, Docker, Slack, Notion, Spotify, and more.
26
-
27
- There are quite a few useful MCP servers already available:
28
-
29
- - [MCP Server Listing on the Official Site](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#model-context-protocol-servers)
30
- - [MCP.so - Find Awesome MCP Servers and Clients](https://mcp.so/)
31
- - [Smithery: MCP Server Registry](https://smithery.ai/)
32
-
33
- This utility's goal is to make these massive numbers of MCP servers easily accessible from LangChain.
25
+ including DBs, cloud storage, GitHub, Docker, Slack, and more.
26
+ There are quite a few useful MCP servers already available.
27
+ See [MCP Server Listing on the Official Site](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#model-context-protocol-servers).
34
28
 
29
+ This utility's goal is to make these numerous MCP servers easily accessible from LangChain.
35
30
  It contains a utility function `convertMcpToLangchainTools()`.
36
31
  This async function handles parallel initialization of specified multiple MCP servers
37
32
  and converts their available tools into an array of LangChain-compatible tools.
33
+ It also performs LLM provider-specific schema transformations
34
+ to prevent [schema compatibility issues](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README.md#llm-provider-schema-compatibility).
38
35
 
39
36
  For detailed information on how to use this library, please refer to the following document:
40
- - ["Supercharging LangChain: Integrating 2000+ MCP with ReAct"](https://medium.com/@h1deya/supercharging-langchain-integrating-450-mcp-with-react-d4e467cbf41a)
41
-
42
- A python equivalent of this utility is available
37
+ ["Supercharging LangChain: Integrating 2000+ MCP with ReAct"](https://medium.com/@h1deya/supercharging-langchain-integrating-450-mcp-with-react-d4e467cbf41a).
38
+ A Python equivalent of this utility is available
43
39
  [here](https://pypi.org/project/langchain-mcp-tools)
44
40
 
45
41
  ## Prerequisites
@@ -86,7 +82,14 @@ const mcpServers: McpServersConfig = {
86
82
  },
87
83
  };
88
84
 
89
- const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers);
85
+ const { tools, cleanup } = await convertMcpToLangchainTools(
86
+ mcpServers, {
87
+ // Perform provider-specific JSON schema transformations to prevent schema compatibility issues
88
+ llmProvider: "google_gemini"
89
+ // llmProvider: "openai"
90
+ // llmProvider: "anthropic"
91
+ }
92
+ );
90
93
  ```
91
94
 
92
95
  This utility function initializes all specified MCP servers in parallel,
@@ -94,14 +97,20 @@ and returns LangChain Tools
94
97
  ([`tools: StructuredTool[]`](https://api.js.langchain.com/classes/_langchain_core.tools.StructuredTool.html))
95
98
  by gathering available MCP tools from the servers,
96
99
  and by wrapping them into LangChain tools.
100
+
101
+ When the `llmProvider` option is specified, it performs LLM provider-specific schema transformations
102
+ for MCP tools to prevent schema compatibility issues.
103
+ Set this option when you encounter schema-related warnings or errors during execution.
104
+ [See below](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README.md#llm-provider-schema-compatibility) for details.
105
+
97
106
  It also returns an async callback function (`cleanup: McpServerCleanupFn`)
98
107
  to be invoked to close all MCP server sessions when finished.
99
108
 
100
109
  The returned tools can be used with LangChain, e.g.:
101
110
 
102
111
  ```ts
103
- // import { ChatAnthropic } from "@langchain/anthropic";
104
- const llm = new ChatAnthropic({ model: "claude-sonnet-4-0" });
112
+ // import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
113
+ const llm = new ChatGoogleGenerativeAI({ model: "gemini-2.5-flash" });
105
114
 
106
115
  // import { createReactAgent } from "@langchain/langgraph/prebuilt";
107
116
  const agent = createReactAgent({
@@ -110,17 +119,35 @@ const agent = createReactAgent({
110
119
  });
111
120
  ```
112
121
 
122
+ A minimal but complete working usage example can be found
123
+ [in the langchain-mcp-tools-ts-usage repo](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts).
124
+
113
125
  For hands-on experimentation with MCP server integration,
114
- try [this LangChain application built with the utility](https://github.com/hideya/mcp-client-langchain-ts)
126
+ try [this MCP Client CLI tool built with this library](https://www.npmjs.com/package/@h1deya/mcp-client-cli)
115
127
 
116
- For detailed information on how to use this library, please refer to the following document:
117
- ["Supercharging LangChain: Integrating 2000+ MCP with ReAct"](https://medium.com/@h1deya/supercharging-langchain-integrating-450-mcp-with-react-d4e467cbf41a)
128
+ ## Building from Source
129
+
130
+ See [README_DEV.md](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README_DEV.md) for details.
118
131
 
119
132
  ## MCP Protocol Support
120
133
 
121
134
  This library supports **MCP Protocol version 2025-03-26** and maintains backwards compatibility with version 2024-11-05.
122
135
  It follows the [official MCP specification](https://modelcontextprotocol.io/specification/2025-03-26/) for transport selection and backwards compatibility.
123
136
 
137
+ ### Limitations
138
+
139
+ - **Tool Return Types**: Currently, only text results of tool calls are supported.
140
+ The library uses LangChain's `response_format: 'content'` (the default), which only supports text strings.
141
+ While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
142
+ - **MCP Features**: Only MCP [Tools](https://modelcontextprotocol.io/docs/concepts/tools) are supported. Other MCP features like Resources, Prompts, and Sampling are not implemented.
143
+
144
+ ### Notes
145
+
146
+ - **LLM Compatibility and Schema Transformations**: The library can perform schema transformations for LLM compatibility.
147
+ [See below](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README.md#llm-provider-schema-compatibility) for details.
148
+ - **Passing PATH Env Variable**: The library automatically adds the `PATH` environment variable to stdio server configurations if not explicitly provided, to ensure servers can find required executables.
149
+
150
+
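The PATH note above can be sketched as follows; the server name and command are illustrative, borrowed from the library's own examples (the library would add `PATH` automatically even if `env` omitted it):

```typescript
// Illustrative stdio server config; PATH is shown explicitly for clarity,
// but the library adds it to stdio server configs when not provided.
const mcpServers = {
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."],
    env: { PATH: process.env.PATH ?? "" },
  },
};
```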
124
151
  ## Features
125
152
 
126
153
  ### `stderr` Redirection for Local MCP Server
@@ -155,8 +182,6 @@ can be specified with the `"cwd"` key as follows:
155
182
  The key name `cwd` is derived from
156
183
  TypeScript SDK's [`StdioServerParameters`](https://github.com/modelcontextprotocol/typescript-sdk/blob/131776764536b5fdca642df51230a3746fb4ade0/src/client/stdio.ts#L39).
157
184
 
158
- **Note:** The library automatically adds the `PATH` environment variable to stdio servers if not explicitly provided to ensure servers can find required executables.
159
-
160
185
  ### Transport Selection Priority
161
186
 
162
187
  The library selects transports using the following priority order:
@@ -243,7 +268,8 @@ class MyOAuthProvider implements OAuthClientProvider {
243
268
  const mcpServers = {
244
269
  "secure-streamable-server": {
245
270
  url: "https://secure-mcp-server.example.com/mcp",
246
- transport: "streamable_http", // Optional: explicit transport
271
+ // To avoid auto protocol fallback, specify the protocol explicitly when using authentication
272
+ transport: "streamable_http", // or `type: "http",`
247
273
  streamableHTTPOptions: {
248
274
  // Provide an OAuth client provider
249
275
  authProvider: new MyOAuthProvider(),
@@ -271,57 +297,6 @@ Test implementations are provided:
271
297
  - MCP client uses this library: [streamable-http-auth-test-client.ts](https://github.com/hideya/langchain-mcp-tools-ts/tree/main/testfiles/streamable-http-auth-test-client.ts)
272
298
  - Test MCP Server: [streamable-http-auth-test-server.ts](https://github.com/hideya/langchain-mcp-tools-ts/tree/main/testfiles/streamable-http-auth-test-server.ts)
273
299
 
274
- ### Authentication Support for SSE Connections (Legacy)
275
-
276
- The library also supports authentication for SSE connections to MCP servers.
277
- Note that SSE transport is deprecated; Streamable HTTP is the recommended approach.
278
-
279
- To enable authentication, provide SSE options in your server configuration:
280
-
281
- ```ts
282
- import { OAuthClientProvider } from '@modelcontextprotocol/sdk/client/auth.js';
283
-
284
- // Implement your own OAuth client provider
285
- class MyOAuthProvider implements OAuthClientProvider {
286
- // Implementation details...
287
- }
288
-
289
- const mcpServers = {
290
- "secure-server": {
291
- url: "https://secure-mcp-server.example.com",
292
- sseOptions: {
293
- // Provide an OAuth client provider
294
- authProvider: new MyOAuthProvider(),
295
-
296
- // Optionally customize the initial SSE request
297
- eventSourceInit: {
298
- // Custom options
299
- },
300
-
301
- // Optionally customize recurring POST requests
302
- requestInit: {
303
- headers: {
304
- 'X-Custom-Header': 'custom-value'
305
- }
306
- }
307
- }
308
- }
309
- };
310
- ```
311
-
312
- Test implementations are provided:
313
-
314
- - **SSE Authentication Tests**:
315
- - MCP client uses this library: [sse-auth-test-client.ts](https://github.com/hideya/langchain-mcp-tools-ts/tree/main/testfiles/sse-auth-test-client.ts)
316
- - Test MCP Server: [sse-auth-test-server.ts](https://github.com/hideya/langchain-mcp-tools-ts/tree/main/testfiles/sse-auth-test-server.ts)
317
-
318
- ## Limitations
319
-
320
- - **Tool Return Types**: Currently, only text results of tool calls are supported.
321
- The library uses LangChain's `response_format: 'content'` (the default), which only supports text strings.
322
- While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
323
- - **MCP Features**: Only MCP [Tools](https://modelcontextprotocol.io/docs/concepts/tools) are supported. Other MCP features like Resources, Prompts, and Sampling are not implemented.
324
-
325
300
  ## Change Log
326
301
 
327
302
  Can be found [here](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/CHANGELOG.md)
@@ -330,76 +305,90 @@ Can be found [here](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/C
330
305
 
331
306
  ### Troubleshooting
332
307
 
333
- #### Common Configuration Errors
308
+ 1. **Enable debug logging**: Set `logLevel: "debug"` to see detailed connection and execution logs
309
+ 2. **Check server stderr**: For stdio MCP servers, use `stderr` redirection to capture server error output
310
+ 3. **Test explicit transports**: Try forcing specific transport types to isolate auto-detection issues
311
+ 4. **Verify server independently**: Test the MCP server with other clients (e.g., MCP Inspector)
312
+
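Step 1 in code, as a configuration sketch (assuming an existing `mcpServers` config like the examples above):

```typescript
// Enable verbose connection and tool-execution logs
const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers, {
  logLevel: "debug",
});
```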
313
+ ### LLM Provider Schema Compatibility
334
314
 
335
- **McpInitializationError: Cannot specify both 'command' and 'url'**
336
- - Remove either the `command` field (for URL-based servers) or the `url` field (for local stdio servers)
337
- - Use `command` for local MCP servers, `url` for remote servers
315
+ Different LLM providers have incompatible JSON Schema requirements for function calling:
338
316
 
339
- **McpInitializationError: URL protocol to be http: or https:**
340
- - Check that your URL starts with `http://` or `https://` when using HTTP transport
341
- - For WebSocket servers, use `ws://` or `wss://` URLs
317
+ - **OpenAI**: Requires optional fields to be nullable (`.optional()` + `.nullable()`)
318
+ for function calling (based on Structured Outputs API requirements,
319
+ strict enforcement coming in future SDK versions)
320
+ - **Google Gemini API**: Rejects nullable fields and `$defs` references, requires strict OpenAPI 3.0 subset compliance
321
+ - **Anthropic Claude**: Very relaxed schema requirements with no documented restrictions
342
322
 
343
- **McpInitializationError: command to be specified**
344
- - Add a `command` field when using stdio transport
345
- - Ensure the command path is correct and the executable exists
323
+ **Note**: Google Vertex AI provides OpenAI-compatible endpoints that support nullable fields.
346
324
 
347
- #### Transport Detection Issues
325
+ #### Real-World Impact
348
326
 
349
- **Transport detection failed**
350
- - Server may not support the MCP protocol correctly
351
- - Try specifying an explicit transport type (`transport: "streamable_http"` or `transport: "sse"`)
352
- - Check server documentation for supported transport types
327
+ This makes it difficult for developers to write universal schemas that work across providers.
353
328
 
354
- **Connection timeout or network errors**
355
- - Verify the server URL and port are correct
356
- - Check that the server is running and accessible
357
- - Ensure firewall/network settings allow the connection
329
+ Many MCP servers generate schemas that don't satisfy all providers' requirements.
330
+ For example, the official Notion MCP server [@notionhq/notion-mcp-server](https://www.npmjs.com/package/@notionhq/notion-mcp-server) (as of Jul 2, 2025) produces:
358
331
 
359
- #### Tool Execution Problems
332
+ **OpenAI Warnings:**
333
+ ```
334
+ Zod field at `#/definitions/API-get-users/properties/start_cursor` uses `.optional()` without `.nullable()` which is not supported by the API. See: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#all-fields-must-be-required
335
+ ... followed by many more
336
+ ```
360
337
 
361
- **Schema sanitization warnings for Gemini compatibility**
362
- - These are informational and generally safe to ignore
363
- - Consider updating the MCP server to use Gemini-compatible schemas
364
- - Warnings help identify servers that may need upstream fixes
338
+ **Gemini Errors:**
339
+ ```
340
+ GoogleGenerativeAIFetchError: [GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent: [400 Bad Request] * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[children].items.properties[paragraph].properties[rich_text].items.properties[mention].any_of[0].required: only allowed for OBJECT type
341
+ ... followed by many more
342
+ ```
365
343
 
366
- **Tool calls returning empty results**
367
- - Check server logs (use `stderr` redirection to capture them)
368
- - Verify tool parameters match the expected schema
369
- - Enable debug logging to see detailed tool execution information
344
+ #### Solution
370
345
 
371
- #### Debug Steps
346
+ A new option, `llmProvider`, has been introduced to perform provider-specific JSON schema transformations:
372
347
 
373
- 1. **Enable debug logging**: Set `logLevel: "debug"` to see detailed connection and execution logs
374
- 2. **Check server stderr**: For stdio MCP servers, use `stderr` redirection to capture server error output
375
- 3. **Test explicit transports**: Try forcing specific transport types to isolate auto-detection issues
376
- 4. **Verify server independently**: Test the MCP server with other clients (e.g., MCP Inspector)
348
+ ```typescript
349
+ const { tools, cleanup } = await convertMcpToLangchainTools(
350
+ mcpServers, {
351
+ llmProvider: "openai" // Makes optional fields nullable
352
+ // llmProvider: "google_gemini" // Applies Gemini's strict validation rules
353
+ // llmProvider: "anthropic" // No transformations needed
354
+ }
355
+ );
356
+ ```
357
+
358
+ **Features:**
359
+ - Generates INFO-level logs when transformations are applied
360
+ - Helps users identify potential MCP server schema improvements
361
+ - Falls back to the original schema when no `llmProvider` is specified
377
362
 
378
- #### Configuration Validation
363
+ #### Provider-Specific Transformations
379
364
 
380
- The library validates server configurations and will throw `McpInitializationError` for invalid configurations:
365
+ | Provider | Transformations Applied |
366
+ |----------|------------------------|
367
+ | `openai` | Makes optional fields nullable, handles union types |
368
+ | `google_gemini` | Filters invalid required fields, fixes anyOf variants, removes unsupported features |
369
+ | `anthropic` | Accepts schemas as-is (no transformations needed) |
381
370
 
382
- - **Cannot specify both `url` and `command`**: Use `command` for local servers or `url` for remote servers
383
- - **Transport type must match URL protocol**: e.g., `transport: "http"` requires `http:` or `https:` URL
384
- - **Transport requires appropriate configuration**: HTTP/WS transports need URLs, stdio transport needs command
371
+ For other providers, try without specifying the option:
385
372
 
386
- ### LLM Compatibility
373
+ ```typescript
374
+ const { tools, cleanup } = await convertMcpToLangchainTools(
375
+ mcpServers
376
+ );
377
+ ```
387
378
 
388
- The library automatically handles schema compatibility for different LLM providers:
379
+ #### References
389
380
 
390
- - **Google Gemini**: Sanitizes schemas to remove unsupported properties (logs warnings when changes are made)
391
- - **OpenAI Structured Outputs**: Makes optional fields nullable as required by OpenAI's specification
392
- - **Anthropic Claude**: Works with schemas as-is
393
- - **Other providers**: Generally compatible with standard JSON schemas
381
+ - [OpenAI Function Calling](https://platform.openai.com/docs/guides/function-calling)
382
+ - [Gemini API Schema Requirements](https://ai.google.dev/api/caching#Schema)
383
+ - [Anthropic Tool Use](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/overview)
394
384
 
395
- Schema transformations are applied automatically and logged at the `warn` level when changes are made, helping you identify which MCP servers might need upstream schema fixes for optimal compatibility.
396
385
 
397
386
  ### Resource Management
398
387
 
399
388
  The returned `cleanup` function properly handles resource cleanup:
400
389
 
401
390
  - Closes all MCP server connections concurrently
402
- - Logs any cleanup failures without throwing errors
391
+ - Logs any cleanup failures
403
392
  - Continues cleanup of remaining servers even if some fail
404
393
  - Should always be called when done using the tools
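The last point is worth a sketch: wrapping agent work in `try`/`finally` guarantees cleanup even when the agent throws. The function body here is a placeholder, not library code; only the `McpServerCleanupFn` shape comes from the library's type declarations:

```typescript
// Matches the cleanup function type declared by the library
type McpServerCleanupFn = () => Promise<void>;

// Placeholder for whatever work you do with the converted tools
async function useTools(cleanup: McpServerCleanupFn): Promise<string> {
  try {
    // ... run your LangChain agent with the converted tools here ...
    return "agent finished";
  } finally {
    await cleanup(); // always close MCP server sessions, even on error
  }
}
```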
405
394
 
@@ -440,3 +429,8 @@ const { tools, cleanup } = await convertMcpToLangchainTools(
440
429
  ```
441
430
 
442
431
  Available log levels: `"fatal" | "error" | "warn" | "info" | "debug" | "trace"`
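A custom logger can also be supplied via the `logger` option. A minimal console-backed sketch, assuming the `McpToolsLogger` interface declares `warn` and `error` alongside the `debug` and `info` methods shown in the type declarations:

```typescript
// Shape based on the library's McpToolsLogger declaration (warn/error assumed)
interface McpToolsLogger {
  debug(...args: unknown[]): void;
  info(...args: unknown[]): void;
  warn(...args: unknown[]): void;
  error(...args: unknown[]): void;
}

// Minimal implementation that prefixes each message with its level
function makePrefixLogger(prefix: string): McpToolsLogger {
  const log = (level: string) => (...args: unknown[]) =>
    console.log(`${prefix} [${level}]`, ...args);
  return {
    debug: log("debug"),
    info: log("info"),
    warn: log("warn"),
    error: log("error"),
  };
}
```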
432
+
433
+ ### For Developers
434
+
435
+ See [README_DEV.md](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/README_DEV.md)
436
+ for more information about development and testing.
@@ -80,6 +80,13 @@ export type SingleMcpServerConfig = CommandBasedConfig | UrlBasedConfig;
80
80
  export interface McpServersConfig {
81
81
  [key: string]: SingleMcpServerConfig;
82
82
  }
83
+ /**
84
+ * Logger interface for MCP tools operations.
85
+ * Provides structured logging capabilities for debugging and monitoring
86
+ * MCP server connections and tool executions.
87
+ *
88
+ * @public
89
+ */
83
90
  export interface McpToolsLogger {
84
91
  debug(...args: unknown[]): void;
85
92
  info(...args: unknown[]): void;
@@ -88,12 +95,42 @@ export interface McpToolsLogger {
88
95
  }
89
96
  /**
90
97
  * Options for configuring logging behavior.
98
+ * Controls the verbosity level of logging output during MCP operations.
91
99
  *
92
100
  * @public
93
101
  */
94
102
  export interface LogOptions {
103
+ /** Log verbosity level. Higher levels include all lower levels (e.g., "debug" includes "info", "warn", "error", "fatal") */
95
104
  logLevel?: "fatal" | "error" | "warn" | "info" | "debug" | "trace";
96
105
  }
106
+ /**
107
+ * Supported LLM providers for schema transformations.
108
+ * Each provider has specific JSON schema requirements that may conflict with MCP tool schemas.
109
+ *
110
+ * @public
111
+ */
112
+ export type LlmProvider = "openai" | "google_gemini" | "google_genai" | "anthropic" | "none";
113
+ /**
114
+ * Configuration options for converting MCP servers to LangChain tools.
115
+ * Extends LogOptions to include provider-specific schema transformations and custom logging.
116
+ *
117
+ * @public
118
+ */
119
+ export interface ConvertMcpToLangchainOptions extends LogOptions {
120
+ /** Custom logger implementation. If not provided, uses default Logger with specified logLevel */
121
+ logger?: McpToolsLogger;
122
+ /** LLM provider for schema compatibility transformations. Performs provider-specific JSON schema modifications to prevent compatibility issues */
123
+ llmProvider?: LlmProvider;
124
+ }
125
+ /**
126
+ * Cleanup function returned by convertMcpToLangchainTools.
127
+ * Properly terminates all MCP server connections and cleans up resources.
128
+ *
129
+ * @public
130
+ */
131
+ export interface McpServerCleanupFn {
132
+ (): Promise<void>;
133
+ }
97
134
  /**
98
135
  * Error interface for MCP-related errors.
99
136
  * Extends the standard Error interface with MCP-specific properties.
@@ -104,9 +141,6 @@ export interface McpError extends Error {
104
141
  serverName: string;
105
142
  details?: unknown;
106
143
  }
107
- export interface McpServerCleanupFn {
108
- (): Promise<void>;
109
- }
110
144
  /**
111
145
  * Error thrown when an MCP server initialization fails.
112
146
  * Contains details about the server that failed to initialize.
@@ -127,6 +161,10 @@ export declare class McpInitializationError extends Error implements McpError {
127
161
  * @param options.logLevel - Log verbosity level ("fatal" | "error" | "warn" | "info" | "debug" | "trace")
128
162
  * @param options.logger - Custom logger implementation that follows the McpToolsLogger interface.
129
163
  * If provided, overrides the default Logger instance.
164
+ * @param options.llmProvider - LLM provider for schema compatibility transformations.
165
+ * Performs provider-specific JSON schema modifications to prevent compatibility issues.
166
+ * Set to "openai" for OpenAI models, "google_gemini"/"google_genai" for Google models,
167
+ * "anthropic" for Claude models, or "none" for no transformation.
130
168
  *
131
169
  * @returns A promise that resolves to:
132
170
  * - tools: Array of StructuredTool instances ready for use with LangChain
@@ -138,17 +176,19 @@ export declare class McpInitializationError extends Error implements McpError {
138
176
  * @remarks
139
177
  * - Servers are initialized concurrently for better performance
140
178
  * - Configuration is validated and will throw errors for conflicts (e.g., both url and command specified)
179
+ * - Schema transformations are applied based on llmProvider to ensure compatibility
141
180
  * - The cleanup function continues with remaining servers even if some cleanup operations fail
142
181
  *
143
182
  * @example
144
183
  * const { tools, cleanup } = await convertMcpToLangchainTools({
145
184
  * filesystem: { command: "npx", args: ["-y", "@modelcontextprotocol/server-filesystem", "."] },
146
185
  * fetch: { command: "uvx", args: ["mcp-server-fetch"] }
186
+ * }, {
187
+ * llmProvider: "openai",
188
+ * logLevel: "debug"
147
189
  * });
148
190
  */
149
- export declare function convertMcpToLangchainTools(configs: McpServersConfig, options?: LogOptions & {
150
- logger?: McpToolsLogger;
151
- }): Promise<{
191
+ export declare function convertMcpToLangchainTools(configs: McpServersConfig, options?: ConvertMcpToLangchainOptions): Promise<{
152
192
  tools: StructuredTool[];
153
193
  cleanup: McpServerCleanupFn;
154
194
  }>;