langchain-mcp-tools 0.0.17.tar.gz → 0.1.1.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.0.17
+Version: 0.1.1
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: jsonschema-pydantic>=0.6
-Requires-Dist: langchain-core>=0.3.29
 Requires-Dist: langchain>=0.3.14
 Requires-Dist: langchain-anthropic>=0.3.1
 Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
 Requires-Dist: pympler>=1.1
 Requires-Dist: python-dotenv>=1.0.1
 Provides-Extra: dev
-Requires-Dist: twine>=6.0.1; extra == "dev"
+Requires-Dist: pytest>=8.3.4; extra == "dev"
+Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"
 
 # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
 
@@ -28,9 +28,8 @@ This package is intended to simplify the use of
 server tools with LangChain / Python.
 
 It contains a utility function `convert_mcp_to_langchain_tools()`.
-This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+This async function handles parallel initialization of specified multiple MCP servers
+and converts their available tools into a list of LangChain-compatible tools.
 
 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,25 +53,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:
 
 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}
 
-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```
 
 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -81,17 +80,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:
 
 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
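Taken together, the README hunks above describe a two-step contract: obtain `tools` plus a `cleanup` callback, hand the tools to an agent, and always invoke the callback when finished. A minimal sketch of that contract, using a hypothetical `fake_convert` stand-in (the real `convert_mcp_to_langchain_tools()` needs live MCP servers and the LangChain packages installed):

```python
import asyncio

# Hypothetical stand-in for convert_mcp_to_langchain_tools(); the real
# function spawns MCP servers and returns LangChain BaseTool instances.
async def fake_convert(server_configs):
    tools = [f"tool-from-{name}" for name in server_configs]

    async def cleanup() -> None:
        pass  # the real callback closes every MCP server session

    return tools, cleanup

async def main():
    tools, cleanup = await fake_convert({'filesystem': {}, 'fetch': {}})
    try:
        # hand `tools` to create_react_agent(llm, tools) and run the agent here
        return list(tools)
    finally:
        await cleanup()  # always release the server sessions when finished

print(asyncio.run(main()))
```

The `try`/`finally` mirrors what the README implies: the cleanup callback must run even if agent execution raises.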
@@ -113,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
-1. Core Challenge:
+1. Challenge:
 
 A key requirement for parallel initialization is that each server must be
 initialized in its own dedicated task - there's no way around this as far as
@@ -130,11 +129,11 @@ Any comments pointing out something I am missing would be greatly appreciated!
 (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
 - Initializing multiple MCP servers in parallel requires a dedicated
 `asyncio.Task` per server
-- Need to keep sessions alive for later use by different tasks
-  after initialization
-- Need to ensure proper cleanup later in the same task that created them
+- Server cleanup can be initiated later by a task other than the one
+  that initialized the resources, whereas `AsyncExitStack.aclose()` must be
+  called from the same task that created the context
 
-2. Solution Strategy:
+2. Solution:
 
 The key insight is to keep the initialization tasks alive throughout the
 session lifetime, rather than letting them complete after initialization.
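The rewritten bullet pins down the constraint precisely: `AsyncExitStack.aclose()` must run in the task that entered the contexts, even though the request to clean up may come from elsewhere. A self-contained sketch of the pattern this diff describes, with dummy `fake_session` context managers standing in for MCP sessions (all names here are illustrative):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

log: list[str] = []

@asynccontextmanager
async def fake_session(name: str):
    log.append(f"{name}: open")
    yield name
    log.append(f"{name}: close")

async def server_task(name, ready: asyncio.Event, cleanup: asyncio.Event):
    # Each "server" gets its own task that owns an AsyncExitStack for its
    # whole lifetime, so aclose() runs in the task that entered the contexts.
    stack = AsyncExitStack()
    await stack.enter_async_context(fake_session(name))
    ready.set()            # initialization done; tools could be collected now
    await cleanup.wait()   # stay alive so cleanup happens in this same task
    await stack.aclose()   # legal: same task that created the contexts

async def main():
    ready_events, cleanup_events, tasks = [], [], []
    for name in ("filesystem", "fetch"):
        ready, cleanup = asyncio.Event(), asyncio.Event()
        ready_events.append(ready)
        cleanup_events.append(cleanup)
        tasks.append(asyncio.create_task(server_task(name, ready, cleanup)))
    # Parallel initialization: wait on the ready events, not the tasks,
    # because the tasks deliberately stay alive.
    await asyncio.gather(*(e.wait() for e in ready_events))
    for c in cleanup_events:   # later, possibly from a different task:
        c.set()                # just signal; never call aclose() from here
    await asyncio.gather(*tasks)

asyncio.run(main())
print(log)
```

Signaling via events rather than closing directly is exactly what sidesteps the "attempted to exit cancel scope in a different task" style errors the author describes.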
@@ -178,5 +177,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -5,9 +5,8 @@ This package is intended to simplify the use of
 server tools with LangChain / Python.
 
 It contains a utility function `convert_mcp_to_langchain_tools()`.
-This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+This async function handles parallel initialization of specified multiple MCP servers
+and converts their available tools into a list of LangChain-compatible tools.
 
 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -31,25 +30,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:
 
 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}
 
-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```
 
 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -58,17 +57,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:
 
 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -90,11 +89,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
-1. Core Challenge:
+1. Challenge:
 
 A key requirement for parallel initialization is that each server must be
 initialized in its own dedicated task - there's no way around this as far as
@@ -107,11 +106,11 @@ Any comments pointing out something I am missing would be greatly appreciated!
 (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
 - Initializing multiple MCP servers in parallel requires a dedicated
 `asyncio.Task` per server
-- Need to keep sessions alive for later use by different tasks
-  after initialization
-- Need to ensure proper cleanup later in the same task that created them
+- Server cleanup can be initiated later by a task other than the one
+  that initialized the resources, whereas `AsyncExitStack.aclose()` must be
+  called from the same task that created the context
 
-2. Solution Strategy:
+2. Solution:
 
 The key insight is to keep the initialization tasks alive throughout the
 session lifetime, rather than letting them complete after initialization.
@@ -155,5 +154,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -1,12 +1,11 @@
 [project]
 name = "langchain-mcp-tools"
-version = "0.0.17"
+version = "0.1.1"
 description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
 readme = "README.md"
 requires-python = ">=3.11"
 dependencies = [
     "jsonschema-pydantic>=0.6",
-    "langchain-core>=0.3.29",
     "langchain>=0.3.14",
     "langchain-anthropic>=0.3.1",
     "langchain-groq>=0.2.3",
@@ -20,15 +19,15 @@ dependencies = [
 
 [project.optional-dependencies]
 dev = [
-    "twine>=6.0.1",
+    "pytest>=8.3.4",
+    "pytest-asyncio>=0.25.2",
 ]
 
+[tool.setuptools]
+package-dir = {"" = "src"}
+
 [tool.setuptools.packages.find]
-exclude = [
-    "tests*",
-    "docs*",
-    "examples*",
-]
+where = ["src"]
 
 [tool.setuptools.package-data]
 langchain_mcp_tools = ["py.typed"]
@@ -36,7 +36,7 @@ This code implements a specific pattern for managing async resources that
 require context managers while enabling parallel initialization.
 The key aspects are:
 
-1. Core Challenge:
+1. Challenge:
 
 A key requirement for parallel initialization is that each server must be
 initialized in its own dedicated task - there's no way around this as far as
@@ -49,11 +49,11 @@ The key aspects are:
 (based on the mcp python-sdk impl as of Jan 14, 2025)
 - Initializing multiple MCP servers in parallel requires a dedicated
 `asyncio.Task` per server
-- Need to keep sessions alive for later use by different tasks
-  after initialization
-- Need to ensure proper cleanup later in the same task that created them
+- Server cleanup can be initiated later by a task other than the one that
+  initialized the resources, whereas `AsyncExitStack.aclose()` must be
+  called from the same task that created the context
 
-2. Solution Strategy:
+2. Solution:
 
 The key insight is to keep the initialization tasks alive throughout the
 session lifetime, rather than letting them complete after initialization.
@@ -152,12 +152,16 @@ async def spawn_mcp_server_tools_task(
         env=env
     )
 
+    # Use an intermediate `asynccontextmanager` to log the cleanup message
     @asynccontextmanager
     async def log_before_aexit(context_manager, message):
         yield await context_manager.__aenter__()
-        logger.info(message)
-        await context_manager.__aexit__(None, None, None)
+        try:
+            logger.info(message)
+        finally:
+            await context_manager.__aexit__(None, None, None)
 
+    # Initialize the MCP server
     exit_stack = AsyncExitStack()
 
     stdio_transport = await exit_stack.enter_async_context(
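The `try`/`finally` added in this hunk guarantees that the wrapped context manager's `__aexit__` still runs if the step after `yield` raises. A runnable sketch of that behavior with a deliberately failing step (the `resource` and `fail` names are mine, standing in for the package's session context and logging call):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

closed = []

@asynccontextmanager
async def resource():
    yield "session"
    closed.append(True)   # runs when __aexit__ resumes the generator

@asynccontextmanager
async def log_before_aexit(context_manager, fail: bool):
    yield await context_manager.__aenter__()
    try:
        if fail:
            raise RuntimeError("logging failed")  # simulate a failing log call
    finally:
        # thanks to finally, the inner __aexit__ runs even on failure
        await context_manager.__aexit__(None, None, None)

async def main():
    stack = AsyncExitStack()
    try:
        await stack.enter_async_context(log_before_aexit(resource(), fail=True))
        await stack.aclose()
    except RuntimeError:
        pass  # the failure propagates, but the resource was already released

asyncio.run(main())
print(closed)
```

Without the `finally`, the simulated failure would skip `__aexit__` and leak the session.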
@@ -175,8 +179,10 @@ async def spawn_mcp_server_tools_task(
     await session.initialize()
     logger.info(f'MCP server "{server_name}": connected')
 
+    # Get MCP tools
     tools_response = await session.list_tools()
 
+    # Wrap MCP tools to into LangChain tools
     for tool in tools_response.tools:
         class McpToLangChainAdapter(BaseTool):
             name: str = tool.name or 'NO NAME'
@@ -212,10 +218,13 @@ async def spawn_mcp_server_tools_task(
                     logger.error(f'Error getting response: {str(e)}')
                     raise
 
+    # Set ready_event; signals tools are ready
     ready_event.set()
 
+    # Keep this task alive until cleanup is requested
     await cleanup_event.wait()
 
+    # Cleanup the resources
     await exit_stack.aclose()
 
 
@@ -259,6 +268,7 @@ async def convert_mcp_to_langchain_tools(
     ready_event_list = []
     cleanup_event_list = []
 
+    # Concurrently initialize all the MCP servers
     tasks = []
     for server_name, server_config in server_configs.items():
         server_tools_accumulator: List[BaseTool] = []
@@ -277,12 +287,16 @@ async def convert_mcp_to_langchain_tools(
         ))
         tasks.append(task)
 
+    # Wait for all tasks to finish filling in the `server_tools_accumulator`
     await asyncio.gather(*(event.wait() for event in ready_event_list))
 
+    # Flatten the tools list
     langchain_tools = [
         item for sublist in per_server_tools for item in sublist
     ]
 
+    # Define a cleanup callback to set cleanup_event and signal that
+    # it is time to clean up the resources
     async def mcp_cleanup() -> None:
         for event in cleanup_event_list:
             event.set()
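The comments added in this hunk outline the orchestration: await per-server ready events, flatten the per-server tool lists, and return a cleanup callback that merely sets events (the owning tasks perform the actual `aclose()`). A stripped-down sketch of just that bookkeeping, with the server tasks elided and placeholder tool names:

```python
import asyncio

async def orchestrate():
    # Per-server accumulator lists; in the real code these are filled by the
    # per-server tasks. The values below are illustrative placeholders.
    per_server_tools = [["fs.read", "fs.write"], ["fetch"]]
    cleanup_event_list = [asyncio.Event() for _ in per_server_tools]

    # Flatten the per-server tool lists into one flat list
    langchain_tools = [
        item for sublist in per_server_tools for item in sublist
    ]

    # The returned callback only signals; each server task performs its own
    # exit_stack.aclose() when its event is set
    async def mcp_cleanup() -> None:
        for event in cleanup_event_list:
            event.set()

    return langchain_tools, mcp_cleanup, cleanup_event_list

async def main():
    tools, cleanup, events = await orchestrate()
    await cleanup()
    return tools, all(e.is_set() for e in events)

tools, all_signaled = asyncio.run(main())
print(tools, all_signaled)
```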
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.0.17
+Version: 0.1.1
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: jsonschema-pydantic>=0.6
-Requires-Dist: langchain-core>=0.3.29
 Requires-Dist: langchain>=0.3.14
 Requires-Dist: langchain-anthropic>=0.3.1
 Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
 Requires-Dist: pympler>=1.1
 Requires-Dist: python-dotenv>=1.0.1
 Provides-Extra: dev
-Requires-Dist: twine>=6.0.1; extra == "dev"
+Requires-Dist: pytest>=8.3.4; extra == "dev"
+Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"
 
 # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
 
@@ -28,9 +28,8 @@ This package is intended to simplify the use of
 server tools with LangChain / Python.
 
 It contains a utility function `convert_mcp_to_langchain_tools()`.
-This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+This async function handles parallel initialization of specified multiple MCP servers
+and converts their available tools into a list of LangChain-compatible tools.
 
 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,25 +53,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:
 
 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}
 
-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```
 
 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -81,17 +80,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:
 
 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -113,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
-1. Core Challenge:
+1. Challenge:
 
 A key requirement for parallel initialization is that each server must be
 initialized in its own dedicated task - there's no way around this as far as
@@ -130,11 +129,11 @@ Any comments pointing out something I am missing would be greatly appreciated!
 (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
 - Initializing multiple MCP servers in parallel requires a dedicated
 `asyncio.Task` per server
-- Need to keep sessions alive for later use by different tasks
-  after initialization
-- Need to ensure proper cleanup later in the same task that created them
+- Server cleanup can be initiated later by a task other than the one
+  that initialized the resources, whereas `AsyncExitStack.aclose()` must be
+  called from the same task that created the context
 
-2. Solution Strategy:
+2. Solution:
 
 The key insight is to keep the initialization tasks alive throughout the
 session lifetime, rather than letting them complete after initialization.
@@ -178,5 +177,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -0,0 +1,12 @@
+LICENSE
+README.md
+pyproject.toml
+src/langchain_mcp_tools/__init__.py
+src/langchain_mcp_tools/langchain_mcp_tools.py
+src/langchain_mcp_tools/py.typed
+src/langchain_mcp_tools.egg-info/PKG-INFO
+src/langchain_mcp_tools.egg-info/SOURCES.txt
+src/langchain_mcp_tools.egg-info/dependency_links.txt
+src/langchain_mcp_tools.egg-info/requires.txt
+src/langchain_mcp_tools.egg-info/top_level.txt
+tests/test_langchain_mcp_tools.py
@@ -1,5 +1,4 @@
 jsonschema-pydantic>=0.6
-langchain-core>=0.3.29
 langchain>=0.3.14
 langchain-anthropic>=0.3.1
 langchain-groq>=0.2.3
@@ -11,4 +10,5 @@ pympler>=1.1
 python-dotenv>=1.0.1
 
 [dev]
-twine>=6.0.1
+pytest>=8.3.4
+pytest-asyncio>=0.25.2
@@ -0,0 +1,164 @@
+import pytest
+from unittest.mock import AsyncMock, MagicMock, patch
+from langchain_core.tools import BaseTool
+from langchain_mcp_tools.langchain_mcp_tools import (
+    convert_mcp_to_langchain_tools,
+)
+
+# Fix the asyncio mark warning by installing pytest-asyncio
+pytest_plugins = ('pytest_asyncio',)
+
+
+@pytest.fixture
+def mock_stdio_client():
+    with patch('langchain_mcp_tools.langchain_mcp_tools.stdio_client') as mock:
+        mock.return_value.__aenter__.return_value = (AsyncMock(), AsyncMock())
+        yield mock
+
+
+@pytest.fixture
+def mock_client_session():
+    with patch('langchain_mcp_tools.langchain_mcp_tools.ClientSession') \
+            as mock:
+        session = AsyncMock()
+        # Mock the list_tools response
+        session.list_tools.return_value = MagicMock(
+            tools=[
+                MagicMock(
+                    name="tool1",
+                    description="Test tool",
+                    inputSchema={"type": "object", "properties": {}}
+                )
+            ]
+        )
+        mock.return_value.__aenter__.return_value = session
+        yield mock
+
+
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_empty():
+    server_configs = {}
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+    assert isinstance(tools, list)
+    assert len(tools) == 0
+    await cleanup()
+
+
+"""
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_invalid_config():
+    server_configs = {"invalid": {"command": "nonexistent"}}
+    with pytest.raises(Exception):
+        await convert_mcp_to_langchain_tools(server_configs)
+"""
+
+
+"""
+@pytest.mark.asyncio
+async def test_convert_single_mcp_success(
+    mock_stdio_client,
+    mock_client_session
+):
+    # Test data
+    server_name = "test_server"
+    server_config = {
+        "command": "test_command",
+        "args": ["--test"],
+        "env": {"TEST_ENV": "value"}
+    }
+    langchain_tools = []
+    ready_event = asyncio.Event()
+    cleanup_event = asyncio.Event()
+
+    # Create task
+    task = asyncio.create_task(
+        convert_single_mcp_to_langchain_tools(
+            server_name,
+            server_config,
+            langchain_tools,
+            ready_event,
+            cleanup_event
+        )
+    )
+
+    # Wait for ready event
+    await asyncio.wait_for(ready_event.wait(), timeout=1.0)
+
+    # Verify tools were created
+    assert len(langchain_tools) == 1
+    assert isinstance(langchain_tools[0], BaseTool)
+    assert langchain_tools[0].name == "tool1"
+
+    # Trigger cleanup
+    cleanup_event.set()
+    await task
+"""
+
+
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_multiple_servers(
+    mock_stdio_client,
+    mock_client_session
+):
+    server_configs = {
+        "server1": {"command": "cmd1", "args": []},
+        "server2": {"command": "cmd2", "args": []}
+    }
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Verify correct number of tools created
+    assert len(tools) == 2  # One tool per server
+    assert all(isinstance(tool, BaseTool) for tool in tools)
+
+    # Test cleanup
+    await cleanup()
+
+
+"""
+@pytest.mark.asyncio
+async def test_tool_execution(mock_stdio_client, mock_client_session):
+    server_configs = {
+        "test_server": {"command": "test", "args": []}
+    }
+
+    # Mock the tool execution response
+    session = mock_client_session.return_value.__aenter__.return_value
+    session.call_tool.return_value = MagicMock(
+        isError=False,
+        content={"result": "success"}
+    )
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Test tool execution
+    result = await tools[0]._arun(test_param="value")
+    assert result == {"result": "success"}
+
+    # Verify tool was called with correct parameters
+    session.call_tool.assert_called_once_with("tool1", {"test_param": "value"})
+
+    await cleanup()
+"""
+
+
+@pytest.mark.asyncio
+async def test_tool_execution_error(mock_stdio_client, mock_client_session):
+    server_configs = {
+        "test_server": {"command": "test", "args": []}
+    }
+
+    # Mock error response
+    session = mock_client_session.return_value.__aenter__.return_value
+    session.call_tool.return_value = MagicMock(
+        isError=True,
+        content="Error message"
+    )
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Test tool execution error
+    with pytest.raises(Exception):
+        await tools[0]._arun(test_param="value")
+
+    await cleanup()
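The new test module opts into pytest-asyncio via `@pytest.mark.asyncio` plus a `pytest_plugins` line. As an aside (an assumption of mine, not part of this diff), pytest-asyncio can instead be configured once in `pyproject.toml`, which silences the mark warning project-wide and makes the per-test marks optional:

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
```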
@@ -1,11 +0,0 @@
-LICENSE
-README.md
-pyproject.toml
-langchain_mcp_tools/__init__.py
-langchain_mcp_tools/langchain_mcp_tools.py
-langchain_mcp_tools/py.typed
-langchain_mcp_tools.egg-info/PKG-INFO
-langchain_mcp_tools.egg-info/SOURCES.txt
-langchain_mcp_tools.egg-info/dependency_links.txt
-langchain_mcp_tools.egg-info/requires.txt
-langchain_mcp_tools.egg-info/top_level.txt