langchain-mcp-tools 0.0.2__tar.gz → 0.0.4__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- langchain_mcp_tools-0.0.4/PKG-INFO +172 -0
- langchain_mcp_tools-0.0.4/README.md +151 -0
- langchain_mcp_tools-0.0.4/langchain_mcp_tools.egg-info/PKG-INFO +172 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/top_level.txt +0 -1
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/pyproject.toml +4 -6
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/src/langchain_mcp_tools.py +22 -12
- langchain_mcp_tools-0.0.2/PKG-INFO +0 -106
- langchain_mcp_tools-0.0.2/README.md +0 -83
- langchain_mcp_tools-0.0.2/langchain_mcp_tools.egg-info/PKG-INFO +0 -106
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/LICENSE +0 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/SOURCES.txt +0 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/dependency_links.txt +0 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/requires.txt +0 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/setup.cfg +0 -0
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/tests/test_langchain_mcp_tools.py +0 -0
langchain_mcp_tools-0.0.4/PKG-INFO

@@ -0,0 +1,172 @@
+ Metadata-Version: 2.2
+ Name: langchain-mcp-tools
+ Version: 0.0.4
+ Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
+ Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
+ Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
+ Requires-Python: >=3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: jsonschema-pydantic>=0.6
+ Requires-Dist: langchain-core>=0.3.29
+ Requires-Dist: langchain>=0.3.14
+ Requires-Dist: langchain-anthropic>=0.3.1
+ Requires-Dist: langchain-groq>=0.2.3
+ Requires-Dist: langchain-openai>=0.3.0
+ Requires-Dist: langgraph>=0.2.62
+ Requires-Dist: mcp>=1.2.0
+ Requires-Dist: pyjson5>=1.6.8
+ Requires-Dist: pympler>=1.1
+ Requires-Dist: python-dotenv>=1.0.1
+ 
+ # MCP To LangChain Tools Conversion Utility / Python [](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)
+ 
+ This package is intended to simplify the use of
+ [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
+ server tools with LangChain / Python.
+ 
+ It contains a utility function `convert_mcp_to_langchain_tools()`.
+ This function handles parallel initialization of the specified MCP servers
+ and converts their available tools into a list of
+ [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+ 
+ A TypeScript version of this utility library is available
+ [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
+ 
+ ## Requirements
+ 
+ - Python 3.11+
+ 
+ ## Installation
+ 
+ ```bash
+ pip install langchain-mcp-tools
+ ```
+ 
+ ## Quick Start
+ 
+ The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
+ that follow the same structure as
+ [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
+ but only the contents of the `mcpServers` property,
+ expressed as a `dict`, e.g.:
+ 
+ ```python
+ mcp_configs = {
+     'filesystem': {
+         'command': 'npx',
+         'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+     },
+     'fetch': {
+         'command': 'uvx',
+         'args': ['mcp-server-fetch']
+     }
+ }
+ 
+ tools, cleanup = await convert_mcp_to_langchain_tools(
+     mcp_configs
+ )
+ ```
+ 
+ The utility function initializes all specified MCP servers in parallel,
+ gathers all their available tools,
+ and wraps them into [LangChain tools](https://js.langchain.com/docs/how_to/tool_calling/),
+ returned as a `List[BaseTool]`.
+ It also returns a cleanup callback function (`McpServerCleanupFn`)
+ to be invoked to close all MCP server sessions when finished.
+ 
+ The returned tools can be used with LangChain, e.g.:
+ 
+ ```python
+ # from langchain.chat_models import init_chat_model
+ llm = init_chat_model(
+     model='claude-3-5-haiku-latest',
+     model_provider='anthropic'
+ )
+ 
+ # from langgraph.prebuilt import create_react_agent
+ agent = create_react_agent(
+     llm,
+     tools
+ )
+ ```
+ <!-- A simple and experimentable usage example can be found
+ [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->
+ 
+ <!-- A more realistic usage example can be found
+ [here](https://github.com/hideya/langchain-mcp-client-ts) -->
+ 
+ A usage example can be found
+ [here](https://github.com/hideya/mcp-client-langchain-py)
+ 
+ ## Limitations
+ 
+ Currently, only text results of tool calls are supported.
+ 
+ ## Technical Details
+ 
+ It was very tricky (for me) to get the parallel MCP server initialization
+ to work successfully...
+ 
+ I'm new to Python, so it is very possible that my ignorance is playing
+ a big role here...
+ I'm summarizing below why it was difficult for me.
+ Any comments pointing out something I am missing would be greatly appreciated!
+ [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
+ 
+ 1. Core Challenge:
+    - Async resources management for `stdio_client` and `ClientSession` seems
+      to rely exclusively on `asynccontextmanager` for cleanup with no manual
+      cleanup options (based on the mcp python-sdk impl as of Jan 14, 2025)
+    - Initializing multiple MCP servers in parallel requires a dedicated
+      `asyncio.Task` per server
+    - Necessity of keeping sessions alive for later use after initialization
+    - Ensuring proper cleanup later in the same task that created them
+ 
+ 2. Solution Strategy:
+    A key requirement for parallel initialization is that each server must be
+    initialized in its own dedicated task - there's no way around this if we
+    want true parallel initialization. However, this creates a challenge since
+    we also need to maintain long-lived sessions and handle cleanup properly.
+ 
+    The key insight is to keep the initialization tasks alive throughout the
+    session lifetime, rather than letting them complete after initialization.
+ 
+    By using `asyncio.Event`s for coordination, we can:
+    - Allow parallel initialization while maintaining proper context management
+    - Keep each initialization task running until explicit cleanup is requested
+    - Ensure cleanup occurs in the same task that created the resources
+    - Provide a clean interface for the caller to manage the lifecycle
+ 
+    Alternative Considered:
+    A generator/coroutine approach using a `finally` block for cleanup was
+    considered but rejected because:
+    - It turned out that the `finally` block in a generator/coroutine can be
+      executed by a different task than the one that ran the main body of
+      the code
+    - This breaks the requirement that `AsyncExitStack.aclose()` must be
+      called from the same task that created the context
+ 
+ 3. Task Lifecycle:
+    To allow the initialization task to stay alive waiting for cleanup:
+    ```
+    [Task starts]
+         ↓
+    Initialize server & convert tools
+         ↓
+    Set ready_event (signals tools are ready)
+         ↓
+    await cleanup_event.wait() (keeps task alive)
+         ↓
+    When cleanup_event is set:
+    exit_stack.aclose() (cleanup in original task)
+    ```
+ 
+ This approach indeed enables parallel initialization while maintaining proper
+ async resource lifecycle management through context managers.
+ However, I'm afraid I'm twisting things around too much.
+ It usually means I'm doing something very wrong...
+ 
+ I think it is natural to assume that the MCP SDK is designed with
+ parallel server initialization in mind.
+ I'm not sure what I'm missing...
+ (FYI, with the TypeScript SDK, parallel server initialization was pretty straightforward)
{langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/pyproject.toml

@@ -1,7 +1,7 @@
  [project]
  name = "langchain-mcp-tools"
- version = "0.0.2"
- description = "MCP To LangChain Tools Conversion Utility"
+ version = "0.0.4"
+ description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
  readme = "README.md"
  requires-python = ">=3.11"
  dependencies = [
@@ -33,7 +33,5 @@ exclude = [
  ]
  
  [project.urls]
- "Homepage" = "https://github.com/hideya"
- "Bug Tracker" = "https://github.com/hideya/mcp-client-langchain-py/issues"
- "Documentation" = "https://github.com/hideya/mcp-client-langchain-py/blob/main/README.md"
- "Source Code" = "https://github.com/hideya/mcp-client-langchain-py"
+ "Bug Tracker" = "https://github.com/hideya/langchain-mcp-tools-py/issues"
+ "Source Code" = "https://github.com/hideya/langchain-mcp-tools-py"
{langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/src/langchain_mcp_tools.py

@@ -37,11 +37,12 @@ require context managers while enabling parallel initialization.
  The key aspects are:
  
  1. Core Challenge:
- - 
- rely exclusively on asynccontextmanager for cleanup with no manual
- options (based on the mcp python-sdk impl as of Jan 14, 2025
- - Initializing multiple servers in parallel
- 
+ - Async resources management for `stdio_client` and `ClientSession` seems
+   to rely exclusively on `asynccontextmanager` for cleanup with no manual
+   cleanup options (based on the mcp python-sdk impl as of Jan 14, 2025)
+ - Initializing multiple MCP servers in parallel requires a dedicated
+   `asyncio.Task` per server
+ - Necessity of keeping sessions alive for later use after initialization
  - Ensuring proper cleanup in the same task that created them
  
  2. Solution Strategy:

@@ -52,18 +53,19 @@ The key aspects are:
  
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
- By using 
+ By using `asyncio.Event`s for coordination, we can:
  - Allow parallel initialization while maintaining proper context management
  - Keep each initialization task running until explicit cleanup is requested
  - Ensure cleanup occurs in the same task that created the resources
  - Provide a clean interface for the caller to manage the lifecycle
  
  Alternative Considered:
- A generator/coroutine approach using 
+ A generator/coroutine approach using a `finally` block for cleanup was
  considered but rejected because:
- - 
- different task than the one that ran the main body of
- 
+ - It turned out that the `finally` block in a generator/coroutine can be
+   executed by a different task than the one that ran the main body of
+   the code
+ - This breaks the requirement that `AsyncExitStack.aclose()` must be
  called from the same task that created the context
  
  3. Task Lifecycle:

@@ -79,8 +81,16 @@ The key aspects are:
  When cleanup_event is set:
  exit_stack.aclose() (cleanup in original task)
  
- This 
- resource lifecycle management through context managers.
+ This approach indeed enables parallel initialization while maintaining proper
+ async resource lifecycle management through context managers.
+ However, I'm afraid I'm twisting things around too much.
+ It usually means I'm doing something very wrong...
+ 
+ I think it is natural to assume that the MCP SDK is designed with
+ parallel server initialization in mind.
+ I'm not sure what I'm missing...
+ (FYI, with the TypeScript SDK, parallel server initialization was quite
+ straightforward)
  """
  
  
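The rejected alternative described in the docstring above, where a generator's `finally` block runs in a different task than its body, can be reproduced with plain asyncio. This is my own illustrative script, not code from the package; the task names `'body'` and `'closer'` are arbitrary:

```python
import asyncio

async def demo():
    finally_task_name = None

    async def gen():
        nonlocal finally_task_name
        try:
            yield 'resource'
        finally:
            # record which task actually executes the cleanup
            finally_task_name = asyncio.current_task().get_name()

    g = gen()

    async def run_body():
        await g.__anext__()   # run the generator body up to the yield

    async def close_gen():
        await g.aclose()      # GeneratorExit is raised; finally runs HERE

    await asyncio.create_task(run_body(), name='body')
    await asyncio.create_task(close_gen(), name='closer')
    return finally_task_name

print(asyncio.run(demo()))  # prints 'closer', not 'body'
```

Since an `AsyncExitStack` entered inside such a generator would then be closed from the "wrong" task, this demonstrates why the package keeps each server's task alive and signals it with an event instead.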
langchain_mcp_tools-0.0.2/PKG-INFO

@@ -1,106 +0,0 @@
- Metadata-Version: 2.2
- Name: langchain-mcp-tools
- Version: 0.0.2
- Summary: MCP To LangChain Tools Conversion Utility
- Project-URL: Homepage, https://github.com/hideya
- Project-URL: Bug Tracker, https://github.com/hideya/mcp-client-langchain-py/issues
- Project-URL: Documentation, https://github.com/hideya/mcp-client-langchain-py/blob/main/README.md
- Project-URL: Source Code, https://github.com/hideya/mcp-client-langchain-py
- Requires-Python: >=3.11
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: jsonschema-pydantic>=0.6
- Requires-Dist: langchain-core>=0.3.29
- Requires-Dist: langchain>=0.3.14
- Requires-Dist: langchain-anthropic>=0.3.1
- Requires-Dist: langchain-groq>=0.2.3
- Requires-Dist: langchain-openai>=0.3.0
- Requires-Dist: langgraph>=0.2.62
- Requires-Dist: mcp>=1.2.0
- Requires-Dist: pyjson5>=1.6.8
- Requires-Dist: pympler>=1.1
- Requires-Dist: python-dotenv>=1.0.1
- 
- # MCP To LangChain Tools Conversion Utility / Python [](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)
- 
- This package is intended to simplify the use of
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
- server tools with LangChain / Python.
- 
- It contains a utility function `convertMcpToLangchainTools()`.
- This function handles parallel initialization of specified multiple MCP servers
- and converts their available tools into an array of
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
- 
- A typescript version of this MCP client is available
- [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
- 
- ## Requirements
- 
- - Python 3.11+
- 
- ## Installation
- 
- ```bash
- pip install langchain-mcp-tools
- ```
- 
- ## Quick Start
- 
- `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
- that follow the same structure as
- [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
- but only the contents of the `mcpServers` property,
- and is expressed as a `dict`, e.g.:
- 
- ```python
- mcp_configs = {
-     'filesystem': {
-         'command': 'npx',
-         'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-     },
-     'fetch': {
-         'command': 'uvx',
-         'args': ['mcp-server-fetch']
-     }
- }
- 
- tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
-     mcp_configs
- )
- ```
- 
- The utility function initializes all specified MCP servers in parallel,
- and returns LangChain Tools (`List[BaseTool]`)
- by gathering all available MCP server tools,
- and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
- It also returns a cleanup callback function (`McpServerCleanupFn`)
- to be invoked to close all MCP server sessions when finished.
- 
- The returned tools can be used with LangChain, e.g.:
- 
- ```python
- # from langchain.chat_models import init_chat_model
- llm = init_chat_model(
-     model='claude-3-5-haiku-latest',
-     model_provider='anthropic'
- )
- 
- # from langgraph.prebuilt import create_react_agent
- agent = create_react_agent(
-     llm,
-     tools
- )
- ```
- <!-- A simple and experimentable usage example can be found
- [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->
- 
- <!-- A more realistic usage example can be found
- [here](https://github.com/hideya/langchain-mcp-client-ts) -->
- 
- An usage example can be found
- [here](https://github.com/hideya/mcp-client-langchain-py)
- 
- ## Limitations
- 
- Currently, only text results of tool calls are supported.
@@ -1,83 +0,0 @@
# MCP To LangChain Tools Conversion Utility / Python [](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)

This package is intended to simplify the use of
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
server tools with LangChain / Python.

It contains a utility function `convertMcpToLangchainTools()`.
This function handles parallel initialization of specified multiple MCP servers
and converts their available tools into an array of
[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).

A typescript version of this MCP client is available
[here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)

## Requirements

- Python 3.11+

## Installation

```bash
pip install langchain-mcp-tools
```

## Quick Start

`convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
that follow the same structure as
[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
but only the contents of the `mcpServers` property,
and is expressed as a `dict`, e.g.:

```python
mcp_configs = {
    'filesystem': {
        'command': 'npx',
        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
    },
    'fetch': {
        'command': 'uvx',
        'args': ['mcp-server-fetch']
    }
}

tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
    mcp_configs
)
```

The utility function initializes all specified MCP servers in parallel,
and returns LangChain Tools (`List[BaseTool]`)
by gathering all available MCP server tools,
and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
It also returns a cleanup callback function (`McpServerCleanupFn`)
to be invoked to close all MCP server sessions when finished.

The returned tools can be used with LangChain, e.g.:

```python
# from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model='claude-3-5-haiku-latest',
    model_provider='anthropic'
)

# from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
    llm,
    tools
)
```

<!-- A simple and experimentable usage example can be found
[here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->

<!-- A more realistic usage example can be found
[here](https://github.com/hideya/langchain-mcp-client-ts) -->

An usage example can be found
[here](https://github.com/hideya/mcp-client-langchain-py)

## Limitations

Currently, only text results of tool calls are supported.
@@ -1,106 +0,0 @@
Metadata-Version: 2.2
Name: langchain-mcp-tools
Version: 0.0.2
Summary: MCP To LangChain Tools Conversion Utility
Project-URL: Homepage, https://github.com/hideya
Project-URL: Bug Tracker, https://github.com/hideya/mcp-client-langchain-py/issues
Project-URL: Documentation, https://github.com/hideya/mcp-client-langchain-py/blob/main/README.md
Project-URL: Source Code, https://github.com/hideya/mcp-client-langchain-py
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: jsonschema-pydantic>=0.6
Requires-Dist: langchain-core>=0.3.29
Requires-Dist: langchain>=0.3.14
Requires-Dist: langchain-anthropic>=0.3.1
Requires-Dist: langchain-groq>=0.2.3
Requires-Dist: langchain-openai>=0.3.0
Requires-Dist: langgraph>=0.2.62
Requires-Dist: mcp>=1.2.0
Requires-Dist: pyjson5>=1.6.8
Requires-Dist: pympler>=1.1
Requires-Dist: python-dotenv>=1.0.1

# MCP To LangChain Tools Conversion Utility / Python [](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [](https://pypi.org/project/langchain-mcp-tools/)

This package is intended to simplify the use of
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
server tools with LangChain / Python.

It contains a utility function `convertMcpToLangchainTools()`.
This function handles parallel initialization of specified multiple MCP servers
and converts their available tools into an array of
[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).

A typescript version of this MCP client is available
[here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)

## Requirements

- Python 3.11+

## Installation

```bash
pip install langchain-mcp-tools
```

## Quick Start

`convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
that follow the same structure as
[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
but only the contents of the `mcpServers` property,
and is expressed as a `dict`, e.g.:

```python
mcp_configs = {
    'filesystem': {
        'command': 'npx',
        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
    },
    'fetch': {
        'command': 'uvx',
        'args': ['mcp-server-fetch']
    }
}

tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
    mcp_configs
)
```

The utility function initializes all specified MCP servers in parallel,
and returns LangChain Tools (`List[BaseTool]`)
by gathering all available MCP server tools,
and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
It also returns a cleanup callback function (`McpServerCleanupFn`)
to be invoked to close all MCP server sessions when finished.

The returned tools can be used with LangChain, e.g.:

```python
# from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model='claude-3-5-haiku-latest',
    model_provider='anthropic'
)

# from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
    llm,
    tools
)
```

<!-- A simple and experimentable usage example can be found
[here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->

<!-- A more realistic usage example can be found
[here](https://github.com/hideya/langchain-mcp-client-ts) -->

An usage example can be found
[here](https://github.com/hideya/mcp-client-langchain-py)

## Limitations

Currently, only text results of tool calls are supported.
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/LICENSE: file without changes
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/SOURCES.txt: renamed, file without changes
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/dependency_links.txt: file without changes
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/langchain_mcp_tools.egg-info/requires.txt: renamed, file without changes
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/setup.cfg: file without changes
- {langchain_mcp_tools-0.0.2 → langchain_mcp_tools-0.0.4}/tests/test_langchain_mcp_tools.py: file without changes