langchain-mcp-tools 0.1.1__tar.gz → 0.1.2__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {langchain_mcp_tools-0.1.1/src/langchain_mcp_tools.egg-info → langchain_mcp_tools-0.1.2}/PKG-INFO +2 -78
- langchain_mcp_tools-0.1.2/README.md +81 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/pyproject.toml +11 -1
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/langchain_mcp_tools.py +108 -144
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2/src/langchain_mcp_tools.egg-info}/PKG-INFO +2 -78
- langchain_mcp_tools-0.1.1/README.md +0 -158
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/LICENSE +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/setup.cfg +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/__init__.py +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/py.typed +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/SOURCES.txt +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/dependency_links.txt +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/requires.txt +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/top_level.txt +0 -0
- {langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/tests/test_langchain_mcp_tools.py +0 -0
{langchain_mcp_tools-0.1.1/src/langchain_mcp_tools.egg-info → langchain_mcp_tools-0.1.2}/PKG-INFO
RENAMED
@@ -1,9 +1,10 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.1.1
+Version: 0.1.2
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
+Keywords: modelcontextprotocol,mcp,mcp-client,langchain,langchain-python,tool-call,tool-calling,python
 Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
@@ -102,80 +103,3 @@ A more realistic usage example can be found
 ## Limitations
 
 Currently, only text results of tool calls are supported.
-
-## Technical Details
-
-It was very tricky (for me) to get the parallel MCP server initialization
-to work, including successful final resource cleanup...
-
-I'm new to Python, so it is very possible that my ignorance is playing
-a big role here...
-I'll summarize the difficulties I faced below.
-The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
-Any comments pointing out something I am missing would be greatly appreciated!
-[(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
-
-1. Challenge:
-
-   A key requirement for parallel initialization is that each server must be
-   initialized in its own dedicated task - there's no way around this as far as
-   I know. However, this poses a challenge when combined with
-   `asynccontextmanager`.
-
-   - Resource management for `stdio_client` and `ClientSession` seems
-     to require relying exclusively on `asynccontextmanager` for cleanup,
-     with no manual cleanup options
-     (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
-   - Initializing multiple MCP servers in parallel requires a dedicated
-     `asyncio.Task` per server
-   - Server cleanup can be initiated later by a task other than the one
-     that initialized the resources, whereas `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-2. Solution:
-
-   The key insight is to keep the initialization tasks alive throughout the
-   session lifetime, rather than letting them complete after initialization.
-
-   By using `asyncio.Event`s for coordination, we can:
-   - Allow parallel initialization while maintaining proper context management
-   - Keep each initialization task running until explicit cleanup is requested
-   - Ensure cleanup occurs in the same task that created the resources
-   - Provide a clean interface for the caller to manage the lifecycle
-
-   Alternative Considered:
-   A generator/coroutine approach using a `finally` block for cleanup was
-   considered but rejected because:
-   - It turned out that the `finally` block in a generator/coroutine can be
-     executed by a different task than the one that ran the main body of
-     the code
-   - This breaks the requirement that `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-3. Task Lifecycle:
-
-   The following task lifecycle diagram illustrates how the above strategy
-   was implemented:
-   ```
-   [Task starts]
-        ↓
-   Initialize server & convert tools
-        ↓
-   Set ready_event (signals tools are ready)
-        ↓
-   await cleanup_event.wait() (keeps task alive)
-        ↓
-   When cleanup_event is set:
-   exit_stack.aclose() (cleanup in original task)
-   ```
-   This approach indeed enables parallel initialization while maintaining proper
-   async resource lifecycle management through context managers.
-   However, I'm afraid I'm twisting things around too much.
-   It usually means I'm doing something very wrong...
-
-   I think it is a natural assumption that the MCP SDK is designed with consideration
-   for parallel server initialization.
-   I'm not sure what I'm missing...
-   (FYI, with the TypeScript MCP SDK, parallel initialization was
-   [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
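The "Technical Details" section removed above describes running one `asyncio.Task` per server and coordinating it with a `ready_event`/`cleanup_event` pair so that `AsyncExitStack.aclose()` runs in the very task that entered the contexts. A minimal, self-contained sketch of that coordination pattern (illustration only, not this package's code; the hypothetical `fake_server` stands in for the `stdio_client`/`ClientSession` context managers):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager


@asynccontextmanager
async def fake_server(name: str):
    # Hypothetical stand-in for stdio_client() / ClientSession()
    print(f'{name}: initialized')
    try:
        yield name
    finally:
        print(f'{name}: cleaned up')


async def server_task(name: str,
                      ready: asyncio.Event,
                      cleanup: asyncio.Event) -> None:
    # The AsyncExitStack is created, entered, and closed in this one
    # task, satisfying the same-task constraint described above
    exit_stack = AsyncExitStack()
    await exit_stack.enter_async_context(fake_server(name))
    ready.set()                 # signal: initialization is done
    await cleanup.wait()        # keep the task alive until cleanup
    await exit_stack.aclose()   # cleanup runs in the originating task


async def main() -> None:
    names = ['filesystem', 'fetch']
    ready = [asyncio.Event() for _ in names]
    cleanup = [asyncio.Event() for _ in names]
    tasks = [asyncio.create_task(server_task(n, r, c))
             for n, r, c in zip(names, ready, cleanup)]
    await asyncio.gather(*(r.wait() for r in ready))  # parallel init
    # ... use the converted tools here ...
    for c in cleanup:
        c.set()                                       # request cleanup
    await asyncio.gather(*tasks)


asyncio.run(main())
```

Each task owns its exit stack from creation to `aclose()`, which is exactly the same-task constraint the removed notes call out.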
langchain_mcp_tools-0.1.2/README.md
ADDED
@@ -0,0 +1,81 @@
+# MCP To LangChain Tools Conversion Utility [](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)
+
+This package is intended to simplify the use of
+[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
+server tools with LangChain / Python.
+
+It contains a utility function `convert_mcp_to_langchain_tools()`.
+This async function handles parallel initialization of multiple specified MCP servers
+and converts their available tools into a list of LangChain-compatible tools.
+
+A TypeScript equivalent of this utility library is available
+[here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
+
+## Requirements
+
+- Python 3.11+
+
+## Installation
+
+```bash
+pip install langchain-mcp-tools
+```
+
+## Quick Start
+
+The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
+that follow the same structure as
+[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
+but only the contents of the `mcpServers` property,
+and is expressed as a `dict`, e.g.:
+
+```python
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
+    }
+}
+
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
+```
+
+This utility function initializes all specified MCP servers in parallel,
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
+by gathering available MCP tools from the servers,
+and by wrapping them into LangChain tools.
+It also returns an async callback function (`cleanup: McpServerCleanupFn`)
+to be invoked to close all MCP server sessions when finished.
+
+The returned tools can be used with LangChain, e.g.:
+
+```python
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
+```
+A simple usage example for experimentation can be found
+[here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
+
+A more realistic usage example can be found
+[here](https://github.com/hideya/mcp-client-langchain-py)
+
+
+## Limitations
+
+Currently, only text results of tool calls are supported.
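The Quick Start snippets in the new README are fragments. A runnable sketch that stitches them together, assuming the `langchain`, `langgraph`, and `langchain-mcp-tools` packages are installed, an Anthropic API key is configured, and `convert_mcp_to_langchain_tools` is exported from the package root (the prompt string is purely illustrative):

```python
import asyncio

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

from langchain_mcp_tools import convert_mcp_to_langchain_tools


async def main() -> None:
    mcp_configs = {
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }
    # Initialize the MCP server(s) and convert their tools
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_configs)
    try:
        llm = init_chat_model(
            model='claude-3-5-haiku-latest',
            model_provider='anthropic'
        )
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke(
            {'messages': [('user', 'Summarize https://example.com')]}
        )
        print(result['messages'][-1].content)
    finally:
        await cleanup()  # close all MCP server sessions


asyncio.run(main())
```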
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/pyproject.toml
RENAMED
@@ -1,7 +1,17 @@
 [project]
 name = "langchain-mcp-tools"
-version = "0.1.1"
+version = "0.1.2"
 description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
+keywords = [
+    "modelcontextprotocol",
+    "mcp",
+    "mcp-client",
+    "langchain",
+    "langchain-python",
+    "tool-call",
+    "tool-calling",
+    "python",
+]
 readme = "README.md"
 requires-python = ">=3.11"
 dependencies = [
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/langchain_mcp_tools.py
RENAMED
@@ -1,5 +1,8 @@
 # Standard library imports
-import asyncio
+from anyio.streams.memory import (
+    MemoryObjectReceiveStream,
+    MemoryObjectSendStream,
+)
 import logging
 import os
 import sys
@@ -13,6 +16,7 @@ from typing import (
     NoReturn,
     Tuple,
     Type,
+    TypeAlias,
 )
 
 # Third-party imports
@@ -21,6 +25,7 @@ try:
     from langchain_core.tools import BaseTool, ToolException
     from mcp import ClientSession, StdioServerParameters
     from mcp.client.stdio import stdio_client
+    import mcp.types as mcp_types
    from pydantic import BaseModel
    from pympler import asizeof
 except ImportError as e:
@@ -29,111 +34,34 @@
     sys.exit(1)
 
 
-
-
-
-
-
-
-
-
-
-A key requirement for parallel initialization is that each server must be
-initialized in its own dedicated task - there's no way around this as far as
-I know. However, this poses a challenge when combined with
-`asynccontextmanager`.
-
-- Resource management for `stdio_client` and `ClientSession` seems
-  to require relying exclusively on `asynccontextmanager` for cleanup,
-  with no manual cleanup options
-  (based on the mcp python-sdk impl as of Jan 14, 2025)
-- Initializing multiple MCP servers in parallel requires a dedicated
-  `asyncio.Task` per server
-- Server cleanup can be initiated later by a task other than the one that
-  initialized the resources, whereas `AsyncExitStack.aclose()` must be
-  called from the same task that created the context
-
-2. Solution:
-
-The key insight is to keep the initialization tasks alive throughout the
-session lifetime, rather than letting them complete after initialization.
-
-By using `asyncio.Event`s for coordination, we can:
-- Allow parallel initialization while maintaining proper context management
-- Keep each initialization task running until explicit cleanup is requested
-- Ensure cleanup occurs in the same task that created the resources
-- Provide a clean interface for the caller to manage the lifecycle
-
-Alternative Considered:
-A generator/coroutine approach using a `finally` block for cleanup was
-considered but rejected because:
-- It turned out that the `finally` block in a generator/coroutine can be
-  executed by a different task than the one that ran the main body of
-  the code
-- This breaks the requirement that `AsyncExitStack.aclose()` must be
-  called from the same task that created the context
-
-3. Task Lifecycle:
-
-The following task lifecycle diagram illustrates how the above strategy
-was implemented:
-```
-[Task starts]
-     ↓
-Initialize server & convert tools
-     ↓
-Set ready_event (signals tools are ready)
-     ↓
-await cleanup_event.wait() (keeps task alive)
-     ↓
-When cleanup_event is set:
-exit_stack.aclose() (cleanup in original task)
-```
-This approach indeed enables parallel initialization while maintaining proper
-async resource lifecycle management through context managers.
-However, I'm afraid I'm twisting things around too much.
-It usually means I'm doing something very wrong...
-
-I think it is a natural assumption that the MCP SDK is designed with consideration
-for parallel server initialization.
-I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
-pretty straightforward.
-"""
-
-
-async def spawn_mcp_server_tools_task(
+# Type alias for the bidirectional communication channels with the MCP server
+# FIXME: not defined in mcp.types, really?
+StdioTransport: TypeAlias = tuple[
+    MemoryObjectReceiveStream[mcp_types.JSONRPCMessage | Exception],
+    MemoryObjectSendStream[mcp_types.JSONRPCMessage]
+]
+
+
+async def spawn_mcp_server_and_get_transport(
     server_name: str,
     server_config: Dict[str, Any],
-
-    ready_event: asyncio.Event,
-    cleanup_event: asyncio.Event,
+    exit_stack: AsyncExitStack,
     logger: logging.Logger = logging.getLogger(__name__)
-) ->
-    """
-    and
-
-    This task initializes an MCP server connection, converts its tools
-    to LangChain format, and manages the connection lifecycle.
-    It adds the tools to the provided langchain_tools list and uses events
-    for synchronization.
+) -> StdioTransport:
+    """
+    Spawns an MCP server process and establishes communication channels.
 
     Args:
-        server_name:
-        server_config:
-
-
-            be appended
-        ready_event: Event to signal when tools are ready for use
-        cleanup_event: Event to trigger cleanup and connection closure
-        logger: Logger instance to use for logging events and errors.
-            Defaults to module logger.
+        server_name: Server instance name to use for better logging
+        server_config: Configuration dictionary for server setup
+        exit_stack: Context manager for cleanup handling
+        logger: Logger instance for debugging and monitoring
 
     Returns:
-
+        A tuple of receive and send streams for server communication
 
     Raises:
-        Exception: If
+        Exception: If server spawning fails
     """
     try:
         logger.info(f'MCP server "{server_name}": initializing with:',
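The `StdioTransport` alias introduced in this hunk names a pair of anyio memory object streams. A toy demonstration of such a stream pair, unrelated to any real MCP server:

```python
import anyio
from anyio.streams.memory import (
    MemoryObjectReceiveStream,
    MemoryObjectSendStream,
)


async def main() -> None:
    # create_memory_object_stream() returns a (send, receive) pair;
    # stdio_client() yields the analogous channels wired to the
    # spawned server's stdin/stdout
    send: MemoryObjectSendStream
    receive: MemoryObjectReceiveStream
    send, receive = anyio.create_memory_object_stream(max_buffer_size=1)
    async with send, receive:
        await send.send('{"jsonrpc": "2.0", "method": "ping"}')
        print(await receive.receive())


anyio.run(main)
```

In the real transport, the receive side carries `JSONRPCMessage` objects (or an `Exception`) parsed from the server's stdout, and the send side feeds its stdin.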
@@ -146,29 +74,59 @@ async def spawn_mcp_server_tools_task(
         if 'PATH' not in env:
             env['PATH'] = os.environ.get('PATH', '')
 
+        # Create server parameters with command, arguments and environment
         server_params = StdioServerParameters(
             command=server_config['command'],
             args=server_config.get('args', []),
             env=env
         )
 
+        # Initialize stdio client and register it with exit stack for cleanup
+        stdio_transport = await exit_stack.enter_async_context(
+            stdio_client(server_params)
+        )
+    except Exception as e:
+        logger.error(f'Error spawning MCP server: {str(e)}')
+        raise
+
+    return stdio_transport
+
+
+async def get_mcp_server_tools(
+    server_name: str,
+    stdio_transport: StdioTransport,
+    exit_stack: AsyncExitStack,
+    logger: logging.Logger = logging.getLogger(__name__)
+) -> List[BaseTool]:
+    """
+    Retrieves and converts MCP server tools to LangChain format.
+
+    Args:
+        server_name: Server instance name to use for better logging
+        stdio_transport: Communication channels tuple
+        exit_stack: Context manager for cleanup handling
+        logger: Logger instance for debugging and monitoring
+
+    Returns:
+        List of LangChain tools converted from MCP tools
+
+    Raises:
+        Exception: If tool conversion fails
+    """
+    try:
+        read, write = stdio_transport
+
         # Use an intermediate `asynccontextmanager` to log the cleanup message
         @asynccontextmanager
         async def log_before_aexit(context_manager, message):
+            """Helper context manager that logs before cleanup"""
             yield await context_manager.__aenter__()
             try:
                 logger.info(message)
             finally:
                 await context_manager.__aexit__(None, None, None)
 
-        # Initialize
-        exit_stack = AsyncExitStack()
-
-        stdio_transport = await exit_stack.enter_async_context(
-            stdio_client(server_params)
-        )
-        read, write = stdio_transport
-
+        # Initialize client session with cleanup logging
         session = await exit_stack.enter_async_context(
             log_before_aexit(
                 ClientSession(read, write),
@@ -182,11 +140,14 @@ async def spawn_mcp_server_tools_task(
         # Get MCP tools
         tools_response = await session.list_tools()
 
-        # Wrap MCP tools
+        # Wrap MCP tools into LangChain tools
+        langchain_tools: List[BaseTool] = []
         for tool in tools_response.tools:
+            # Define adapter class to convert MCP tool to LangChain format
             class McpToLangChainAdapter(BaseTool):
                 name: str = tool.name or 'NO NAME'
                 description: str = tool.description or ''
+                # Convert JSON schema to Pydantic model for argument validation
                 args_schema: Type[BaseModel] = jsonschema_to_pydantic(
                     tool.inputSchema
                 )
@@ -197,12 +158,17 @@ async def spawn_mcp_server_tools_task(
                 )
 
                 async def _arun(self, **kwargs: Any) -> Any:
+                    """
+                    Asynchronously executes the tool with given arguments.
+                    Logs input/output and handles errors.
+                    """
                     logger.info(f'MCP tool "{server_name}"/"{tool.name}"'
                                 f' received input:', kwargs)
                     result = await session.call_tool(self.name, kwargs)
                     if result.isError:
                         raise ToolException(result.content)
 
+                    # Log result size for monitoring
                     size = asizeof.asizeof(result.content)
                     logger.info(f'MCP tool "{server_name}"/"{tool.name}" '
                                 f'received result (size: {size})')
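The hunk above shows the adapter pattern: each MCP tool becomes a dynamically defined `BaseTool` subclass. A stripped-down, standalone sketch of the same pattern, with a hand-written `EchoArgs` schema standing in for the model `jsonschema_to_pydantic()` would generate, and a trivial body standing in for `session.call_tool()`:

```python
from typing import Any, Type

from langchain_core.tools import BaseTool
from pydantic import BaseModel


class EchoArgs(BaseModel):
    """Stands in for the schema derived from the MCP tool's inputSchema"""
    text: str


class EchoToolAdapter(BaseTool):
    name: str = 'echo'
    description: str = 'Echoes its input back'
    args_schema: Type[BaseModel] = EchoArgs

    def _run(self, **kwargs: Any) -> Any:
        # The MCP adapter is async-only; mirror that here
        raise NotImplementedError('use ainvoke()')

    async def _arun(self, **kwargs: Any) -> Any:
        # A real adapter would forward to session.call_tool(self.name, kwargs)
        return kwargs['text']
```

Invoking it with `await EchoToolAdapter().ainvoke({'text': 'hi'})` returns `'hi'`, with LangChain validating the arguments against `args_schema` first.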
@@ -210,24 +176,19 @@ async def spawn_mcp_server_tools_task(
 
             langchain_tools.append(McpToLangChainAdapter())
 
+        # Log available tools for debugging
         logger.info(f'MCP server "{server_name}": {len(langchain_tools)} '
                     f'tool(s) available:')
         for tool in langchain_tools:
             logger.info(f'- {tool.name}')
     except Exception as e:
-        logger.error(f'Error getting
+        logger.error(f'Error getting MCP tools: {str(e)}')
         raise
 
-
-    ready_event.set()
-
-    # Keep this task alive until cleanup is requested
-    await cleanup_event.wait()
-
-    # Cleanup the resources
-    await exit_stack.aclose()
+    return langchain_tools
 
 
+# Type hint for cleanup function
 McpServerCleanupFn = Callable[[], Awaitable[None]]
 
 
@@ -264,43 +225,46 @@ async def convert_mcp_to_langchain_tools(
         # Use tools...
         await cleanup()
     """
-    per_server_tools = []
-    ready_event_list = []
-    cleanup_event_list = []
 
-    #
-
+    # Initialize AsyncExitStack for managing multiple server lifecycles
+    stdio_transports: List[StdioTransport] = []
+    async_exit_stack = AsyncExitStack()
+
+    # Spawn all MCP servers concurrently
     for server_name, server_config in server_configs.items():
-
-
-
-
-
-        cleanup_event_list.append(cleanup_event)
-        task = asyncio.create_task(spawn_mcp_server_tools_task(
+        # NOTE: the following `await` only blocks until the server subprocess
+        # is spawned, i.e. after returning from the `await`, the spawned
+        # subprocess starts its initialization independently of (so in
+        # parallel with) the Python execution of the following lines.
+        stdio_transport = await spawn_mcp_server_and_get_transport(
             server_name,
             server_config,
-
-            ready_event,
-            cleanup_event,
+            async_exit_stack,
             logger
-        )
-
-
-    #
-
-
-
-
-
-
+        )
+        stdio_transports.append(stdio_transport)
+
+    # Convert tools from each server to LangChain format
+    langchain_tools: List[BaseTool] = []
+    for (server_name, server_config), stdio_transport in zip(
+        server_configs.items(),
+        stdio_transports,
+        strict=True
+    ):
+        tools = await get_mcp_server_tools(
+            server_name,
+            stdio_transport,
+            async_exit_stack,
+            logger
+        )
+        langchain_tools.extend(tools)
 
-    # Define a cleanup
-    # it is time to clean up the resources
+    # Define a cleanup function to properly shut down all servers
     async def mcp_cleanup() -> None:
-
-
+        """Closes all server connections and cleans up resources"""
+        await async_exit_stack.aclose()
 
+    # Log summary of initialized tools
     logger.info(f'MCP servers initialized: {len(langchain_tools)} tool(s) '
                 f'available in total')
     for tool in langchain_tools:
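The NOTE in the hunk above is the crux of the 0.1.2 rewrite: each `await` in the first loop returns as soon as the subprocess is spawned, so the servers initialize side by side even though the Python code runs sequentially in one task. A toy model of that two-phase shape (`spawn` and `handshake` are stand-ins, not this package's functions):

```python
import asyncio


async def spawn(name: str) -> str:
    # Stand-in for spawn_mcp_server_and_get_transport(): returns as
    # soon as the subprocess is "spawned"
    await asyncio.sleep(0)
    return f'{name}-transport'


async def handshake(transport: str) -> list[str]:
    # Stand-in for get_mcp_server_tools(): the per-server protocol
    # handshake and tool conversion
    await asyncio.sleep(0.1)
    return [f'{transport}-tool']


async def main() -> None:
    names = ['filesystem', 'fetch']
    # Phase 1: spawn everything; each await returns almost immediately,
    # so the subprocesses initialize concurrently with what follows
    transports = [await spawn(n) for n in names]
    # Phase 2: talk to each server, which by now is (mostly) ready
    tools: list[str] = []
    for transport in transports:
        tools.extend(await handshake(transport))
    print(tools)


asyncio.run(main())
```

Because a single `AsyncExitStack` lives in the caller's task, every context is entered and closed in that same task, removing the need for the per-server tasks and events of 0.1.1.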
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2/src/langchain_mcp_tools.egg-info}/PKG-INFO
RENAMED
@@ -1,9 +1,10 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.1.1
+Version: 0.1.2
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
+Keywords: modelcontextprotocol,mcp,mcp-client,langchain,langchain-python,tool-call,tool-calling,python
 Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
@@ -102,80 +103,3 @@ A more realistic usage example can be found
 ## Limitations
 
 Currently, only text results of tool calls are supported.
-
-## Technical Details
-
-It was very tricky (for me) to get the parallel MCP server initialization
-to work, including successful final resource cleanup...
-
-I'm new to Python, so it is very possible that my ignorance is playing
-a big role here...
-I'll summarize the difficulties I faced below.
-The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
-Any comments pointing out something I am missing would be greatly appreciated!
-[(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
-
-1. Challenge:
-
-   A key requirement for parallel initialization is that each server must be
-   initialized in its own dedicated task - there's no way around this as far as
-   I know. However, this poses a challenge when combined with
-   `asynccontextmanager`.
-
-   - Resource management for `stdio_client` and `ClientSession` seems
-     to require relying exclusively on `asynccontextmanager` for cleanup,
-     with no manual cleanup options
-     (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
-   - Initializing multiple MCP servers in parallel requires a dedicated
-     `asyncio.Task` per server
-   - Server cleanup can be initiated later by a task other than the one
-     that initialized the resources, whereas `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-2. Solution:
-
-   The key insight is to keep the initialization tasks alive throughout the
-   session lifetime, rather than letting them complete after initialization.
-
-   By using `asyncio.Event`s for coordination, we can:
-   - Allow parallel initialization while maintaining proper context management
-   - Keep each initialization task running until explicit cleanup is requested
-   - Ensure cleanup occurs in the same task that created the resources
-   - Provide a clean interface for the caller to manage the lifecycle
-
-   Alternative Considered:
-   A generator/coroutine approach using a `finally` block for cleanup was
-   considered but rejected because:
-   - It turned out that the `finally` block in a generator/coroutine can be
-     executed by a different task than the one that ran the main body of
-     the code
-   - This breaks the requirement that `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-3. Task Lifecycle:
-
-   The following task lifecycle diagram illustrates how the above strategy
-   was implemented:
-   ```
-   [Task starts]
-        ↓
-   Initialize server & convert tools
-        ↓
-   Set ready_event (signals tools are ready)
-        ↓
-   await cleanup_event.wait() (keeps task alive)
-        ↓
-   When cleanup_event is set:
-   exit_stack.aclose() (cleanup in original task)
-   ```
-   This approach indeed enables parallel initialization while maintaining proper
-   async resource lifecycle management through context managers.
-   However, I'm afraid I'm twisting things around too much.
-   It usually means I'm doing something very wrong...
-
-   I think it is a natural assumption that the MCP SDK is designed with consideration
-   for parallel server initialization.
-   I'm not sure what I'm missing...
-   (FYI, with the TypeScript MCP SDK, parallel initialization was
-   [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
langchain_mcp_tools-0.1.1/README.md
REMOVED
@@ -1,158 +0,0 @@
-# MCP To LangChain Tools Conversion Utility [](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)
-
-This package is intended to simplify the use of
-[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
-server tools with LangChain / Python.
-
-It contains a utility function `convert_mcp_to_langchain_tools()`.
-This async function handles parallel initialization of multiple specified MCP servers
-and converts their available tools into a list of LangChain-compatible tools.
-
-A TypeScript equivalent of this utility library is available
-[here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
-
-## Requirements
-
-- Python 3.11+
-
-## Installation
-
-```bash
-pip install langchain-mcp-tools
-```
-
-## Quick Start
-
-The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
-that follow the same structure as
-[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
-but only the contents of the `mcpServers` property,
-and is expressed as a `dict`, e.g.:
-
-```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
-}
-
-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
-```
-
-This utility function initializes all specified MCP servers in parallel,
-and returns LangChain Tools
-([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
-by gathering available MCP tools from the servers,
-and by wrapping them into LangChain tools.
-It also returns an async callback function (`cleanup: McpServerCleanupFn`)
-to be invoked to close all MCP server sessions when finished.
-
-The returned tools can be used with LangChain, e.g.:
-
-```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
-```
-A simple usage example for experimentation can be found
-[here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
-
-A more realistic usage example can be found
-[here](https://github.com/hideya/mcp-client-langchain-py)
-
-
-## Limitations
-
-Currently, only text results of tool calls are supported.
-
-## Technical Details
-
-It was very tricky (for me) to get the parallel MCP server initialization
-to work, including successful final resource cleanup...
-
-I'm new to Python, so it is very possible that my ignorance is playing
-a big role here...
-I'll summarize the difficulties I faced below.
-The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
-Any comments pointing out something I am missing would be greatly appreciated!
-[(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
-
-1. Challenge:
-
-   A key requirement for parallel initialization is that each server must be
-   initialized in its own dedicated task - there's no way around this as far as
-   I know. However, this poses a challenge when combined with
-   `asynccontextmanager`.
-
-   - Resource management for `stdio_client` and `ClientSession` seems
-     to require relying exclusively on `asynccontextmanager` for cleanup,
-     with no manual cleanup options
-     (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
-   - Initializing multiple MCP servers in parallel requires a dedicated
-     `asyncio.Task` per server
-   - Server cleanup can be initiated later by a task other than the one
-     that initialized the resources, whereas `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-2. Solution:
-
-   The key insight is to keep the initialization tasks alive throughout the
-   session lifetime, rather than letting them complete after initialization.
-
-   By using `asyncio.Event`s for coordination, we can:
-   - Allow parallel initialization while maintaining proper context management
-   - Keep each initialization task running until explicit cleanup is requested
-   - Ensure cleanup occurs in the same task that created the resources
-   - Provide a clean interface for the caller to manage the lifecycle
-
-   Alternative Considered:
-   A generator/coroutine approach using a `finally` block for cleanup was
-   considered but rejected because:
-   - It turned out that the `finally` block in a generator/coroutine can be
-     executed by a different task than the one that ran the main body of
-     the code
-   - This breaks the requirement that `AsyncExitStack.aclose()` must be
-     called from the same task that created the context
-
-3. Task Lifecycle:
-
-   The following task lifecycle diagram illustrates how the above strategy
-   was implemented:
-   ```
-   [Task starts]
-        ↓
-   Initialize server & convert tools
-        ↓
-   Set ready_event (signals tools are ready)
-        ↓
-   await cleanup_event.wait() (keeps task alive)
-        ↓
-   When cleanup_event is set:
-   exit_stack.aclose() (cleanup in original task)
-   ```
-   This approach indeed enables parallel initialization while maintaining proper
-   async resource lifecycle management through context managers.
-   However, I'm afraid I'm twisting things around too much.
-   It usually means I'm doing something very wrong...
-
-   I think it is a natural assumption that the MCP SDK is designed with consideration
-   for parallel server initialization.
-   I'm not sure what I'm missing...
-   (FYI, with the TypeScript MCP SDK, parallel initialization was
-   [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/LICENSE
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/setup.cfg
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/__init__.py
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools/py.typed
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/SOURCES.txt
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/dependency_links.txt
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/requires.txt
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/src/langchain_mcp_tools.egg-info/top_level.txt
RENAMED
File without changes
{langchain_mcp_tools-0.1.1 → langchain_mcp_tools-0.1.2}/tests/test_langchain_mcp_tools.py
RENAMED
File without changes