langchain-mcp-tools 0.0.16__tar.gz → 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: langchain-mcp-tools
- Version: 0.0.16
+ Version: 0.1.0
  Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
  Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
  Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
  Description-Content-Type: text/markdown
  License-File: LICENSE
  Requires-Dist: jsonschema-pydantic>=0.6
- Requires-Dist: langchain-core>=0.3.29
  Requires-Dist: langchain>=0.3.14
  Requires-Dist: langchain-anthropic>=0.3.1
  Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
  Requires-Dist: pympler>=1.1
  Requires-Dist: python-dotenv>=1.0.1
  Provides-Extra: dev
- Requires-Dist: twine>=6.0.1; extra == "dev"
+ Requires-Dist: pytest>=8.3.4; extra == "dev"
+ Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"
 
  # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
 
@@ -29,8 +29,7 @@ server tools with LangChain / Python.
 
  It contains a utility function `convert_mcp_to_langchain_tools()`.
  This function handles parallel initialization of specified multiple MCP servers
- and converts their available tools into a list of
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+ and converts their available tools into a list of LangChain-compatible tools.
 
  A typescript equivalent of this utility library is available
  [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,43 +53,44 @@ but only the contents of the `mcpServers` property,
  and is expressed as a `dict`, e.g.:
 
  ```python
- mcp_configs = {
- 'filesystem': {
- 'command': 'npx',
- 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
- },
- 'fetch': {
- 'command': 'uvx',
- 'args': ['mcp-server-fetch']
- }
+ mcp_configs = {
+ 'filesystem': {
+ 'command': 'npx',
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+ },
+ 'fetch': {
+ 'command': 'uvx',
+ 'args': ['mcp-server-fetch']
  }
+ }
 
- tools, cleanup = await convert_mcp_to_langchain_tools(
- mcp_configs
- )
+ tools, cleanup = await convert_mcp_to_langchain_tools(
+ mcp_configs
+ )
  ```
 
- The utility function initializes all specified MCP servers in parallel,
- and returns LangChain Tools (`List[BaseTool]`)
- by gathering all available MCP server tools,
- and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
- It also returns a cleanup callback function (`McpServerCleanupFn`)
+ This utility function initializes all specified MCP servers in parallel,
+ and returns LangChain Tools
+ ([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
+ by gathering available MCP tools from the servers,
+ and by wrapping them into LangChain tools.
+ It also returns an async callback function (`cleanup: McpServerCleanupFn`)
  to be invoked to close all MCP server sessions when finished.
 
  The returned tools can be used with LangChain, e.g.:
 
  ```python
- # from langchain.chat_models import init_chat_model
- llm = init_chat_model(
- model='claude-3-5-haiku-latest',
- model_provider='anthropic'
- )
-
- # from langgraph.prebuilt import create_react_agent
- agent = create_react_agent(
- llm,
- tools
- )
+ # from langchain.chat_models import init_chat_model
+ llm = init_chat_model(
+ model='claude-3-5-haiku-latest',
+ model_provider='anthropic'
+ )
+
+ # from langgraph.prebuilt import create_react_agent
+ agent = create_react_agent(
+ llm,
+ tools
+ )
  ```
  A simple and experimentable usage example can be found
  [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -112,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
  I'll summarize the difficulties I faced below.
  The source code is available
- [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+ [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
  Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
- 1. Core Challenge:
+ 1. Challenge:
 
  A key requirement for parallel initialization is that each server must be
  initialized in its own dedicated task - there's no way around this as far as
@@ -129,11 +129,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Need to keep sessions alive for later use by different tasks
- after initialization
- - Need to ensure proper cleanup later in the same task that created them
+ - Server cleanup can be initiated later by a task other than the one
+ that initialized the resources
 
- 2. Solution Strategy:
+ 2. Solution:
 
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
@@ -155,7 +154,8 @@ Any comments pointing out something I am missing would be greatly appreciated!
 
  3. Task Lifecycle:
 
- The following illustrates how to implement the above-mentioned strategy:
+ The following task lifecyle diagram illustrates how the above strategy
+ was impelemented:
  ```
  [Task starts]
 
@@ -176,5 +176,5 @@ It usually means I'm doing something very worng...
  I think it is a natural assumption that MCP SDK is designed with consideration
  for parallel server initialization.
  I'm not sure what I'm missing...
- (FYI, with the TypeScript MCP SDK, parallel initialization was
+ (FYI, with the TypeScript MCP SDK, parallel initialization was
  [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -6,8 +6,7 @@ server tools with LangChain / Python.
 
  It contains a utility function `convert_mcp_to_langchain_tools()`.
  This function handles parallel initialization of specified multiple MCP servers
- and converts their available tools into a list of
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+ and converts their available tools into a list of LangChain-compatible tools.
 
  A typescript equivalent of this utility library is available
  [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -31,43 +30,44 @@ but only the contents of the `mcpServers` property,
  and is expressed as a `dict`, e.g.:
 
  ```python
- mcp_configs = {
- 'filesystem': {
- 'command': 'npx',
- 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
- },
- 'fetch': {
- 'command': 'uvx',
- 'args': ['mcp-server-fetch']
- }
+ mcp_configs = {
+ 'filesystem': {
+ 'command': 'npx',
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+ },
+ 'fetch': {
+ 'command': 'uvx',
+ 'args': ['mcp-server-fetch']
  }
+ }
 
- tools, cleanup = await convert_mcp_to_langchain_tools(
- mcp_configs
- )
+ tools, cleanup = await convert_mcp_to_langchain_tools(
+ mcp_configs
+ )
  ```
 
- The utility function initializes all specified MCP servers in parallel,
- and returns LangChain Tools (`List[BaseTool]`)
- by gathering all available MCP server tools,
- and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
- It also returns a cleanup callback function (`McpServerCleanupFn`)
+ This utility function initializes all specified MCP servers in parallel,
+ and returns LangChain Tools
+ ([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
+ by gathering available MCP tools from the servers,
+ and by wrapping them into LangChain tools.
+ It also returns an async callback function (`cleanup: McpServerCleanupFn`)
  to be invoked to close all MCP server sessions when finished.
 
  The returned tools can be used with LangChain, e.g.:
 
  ```python
- # from langchain.chat_models import init_chat_model
- llm = init_chat_model(
- model='claude-3-5-haiku-latest',
- model_provider='anthropic'
- )
-
- # from langgraph.prebuilt import create_react_agent
- agent = create_react_agent(
- llm,
- tools
- )
+ # from langchain.chat_models import init_chat_model
+ llm = init_chat_model(
+ model='claude-3-5-haiku-latest',
+ model_provider='anthropic'
+ )
+
+ # from langgraph.prebuilt import create_react_agent
+ agent = create_react_agent(
+ llm,
+ tools
+ )
  ```
  A simple and experimentable usage example can be found
  [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -89,11 +89,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
  I'll summarize the difficulties I faced below.
  The source code is available
- [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+ [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
  Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
- 1. Core Challenge:
+ 1. Challenge:
 
  A key requirement for parallel initialization is that each server must be
  initialized in its own dedicated task - there's no way around this as far as
@@ -106,11 +106,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Need to keep sessions alive for later use by different tasks
- after initialization
- - Need to ensure proper cleanup later in the same task that created them
+ - Server cleanup can be initiated later by a task other than the one
+ that initialized the resources
 
- 2. Solution Strategy:
+ 2. Solution:
 
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
@@ -132,7 +131,8 @@ Any comments pointing out something I am missing would be greatly appreciated!
 
  3. Task Lifecycle:
 
- The following illustrates how to implement the above-mentioned strategy:
+ The following task lifecyle diagram illustrates how the above strategy
+ was impelemented:
  ```
  [Task starts]
 
@@ -153,5 +153,5 @@ It usually means I'm doing something very worng...
  I think it is a natural assumption that MCP SDK is designed with consideration
  for parallel server initialization.
  I'm not sure what I'm missing...
- (FYI, with the TypeScript MCP SDK, parallel initialization was
+ (FYI, with the TypeScript MCP SDK, parallel initialization was
  [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
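The README changes above settle on a two-value contract: `convert_mcp_to_langchain_tools()` returns `(tools, cleanup)`, and the async `cleanup` must be awaited when the agent is done. The sketch below only mirrors that documented call shape with a made-up stand-in (`convert_stub` and its extra `closed` list are illustrative, not the library's implementation), to show the try/finally discipline the cleanup callback invites:

```python
import asyncio

# Illustrative stand-in that mirrors the README's return contract
# (tools, cleanup); `closed` is added here only so this sketch can
# observe that cleanup actually ran.
async def convert_stub(mcp_configs: dict):
    closed: list[str] = []
    tools = [f"tool-for-{name}" for name in mcp_configs]

    async def cleanup() -> None:
        # The real cleanup closes every MCP server session.
        closed.extend(mcp_configs)

    return tools, cleanup, closed

async def main():
    mcp_configs = {
        'filesystem': {
            'command': 'npx',
            'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
        },
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }
    tools, cleanup, closed = await convert_stub(mcp_configs)
    try:
        pass  # hand `tools` to create_react_agent(llm, tools) here
    finally:
        await cleanup()  # release the MCP server sessions even on error
    return tools, closed

tools, closed = asyncio.run(main())
```

Running the agent inside try/finally guarantees the sessions are closed even if the agent raises.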
@@ -1,12 +1,11 @@
  [project]
  name = "langchain-mcp-tools"
- version = "0.0.16"
+ version = "0.1.0"
  description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
  readme = "README.md"
  requires-python = ">=3.11"
  dependencies = [
  "jsonschema-pydantic>=0.6",
- "langchain-core>=0.3.29",
  "langchain>=0.3.14",
  "langchain-anthropic>=0.3.1",
  "langchain-groq>=0.2.3",
@@ -20,15 +19,15 @@ dependencies = [
 
  [project.optional-dependencies]
  dev = [
- "twine>=6.0.1",
+ "pytest>=8.3.4",
+ "pytest-asyncio>=0.25.2",
  ]
 
+ [tool.setuptools]
+ package-dir = {"" = "src"}
+
  [tool.setuptools.packages.find]
- exclude = [
- "tests*",
- "docs*",
- "examples*",
- ]
+ where = ["src"]
 
  [tool.setuptools.package-data]
  langchain_mcp_tools = ["py.typed"]
@@ -36,7 +36,7 @@ This code implements a specific pattern for managing async resources that
  require context managers while enabling parallel initialization.
  The key aspects are:
 
- 1. Core Challenge:
+ 1. Challenge:
 
  A key requirement for parallel initialization is that each server must be
  initialized in its own dedicated task - there's no way around this as far as
@@ -46,14 +46,13 @@ The key aspects are:
  - Resources management for `stdio_client` and `ClientSession` seems
  to require relying exclusively on `asynccontextmanager` for cleanup,
  with no manual cleanup options
- (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
+ (based on the mcp python-sdk impl as of Jan 14, 2025)
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Need to keep sessions alive for later use by different tasks
- after initialization
- - Need to ensure proper cleanup later in the same task that created them
+ - Server cleanup can be initiated later by a task other than the one that
+ initialized the resources
 
- 2. Solution Strategy:
+ 2. Solution:
 
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
@@ -75,7 +74,8 @@ The key aspects are:
 
  3. Task Lifecycle:
 
- The following illustrates how to implement the above-mentioned strategy:
+ The following task lifecyle diagram illustrates how the above strategy
+ was impelemented:
  ```
  [Task starts]
 
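The docstring edited above describes the core pattern: each server's context managers are entered and exited inside one long-lived task, while `asyncio.Event`s let a different task signal when initialization is done and when cleanup should begin. That strategy can be sketched with stdlib asyncio alone; `fake_server` below is a hypothetical stand-in for `stdio_client`/`ClientSession`, not the library's API:

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical stand-in for stdio_client/ClientSession: a resource that
# can only be released by exiting its async context manager, in the same
# task that entered it.
@asynccontextmanager
async def fake_server(name, log):
    log.append(f"{name}: initialized")
    try:
        yield f"session-{name}"
    finally:
        log.append(f"{name}: cleaned up")

async def manage_server(name, log, sessions, ready, cleanup_requested):
    # Enter the context manager, publish the session, signal readiness,
    # then park until some other task requests cleanup. Exiting the
    # `async with` happens here, in the task that created the resource.
    async with fake_server(name, log) as session:
        sessions[name] = session
        ready.set()
        await cleanup_requested.wait()

async def main():
    log, sessions = [], {}
    ready = {n: asyncio.Event() for n in ("filesystem", "fetch")}
    cleanup_requested = asyncio.Event()
    tasks = [
        asyncio.create_task(
            manage_server(n, log, sessions, ready[n], cleanup_requested))
        for n in ready
    ]
    # Parallel initialization: block until every server has signaled.
    await asyncio.gather(*(e.wait() for e in ready.values()))
    # ... sessions are now usable from this (different) task ...
    cleanup_requested.set()  # cleanup initiated by a non-owner task
    await asyncio.gather(*tasks)  # each task unwinds its own context managers
    return log, sessions

log, sessions = asyncio.run(main())
```

Keeping `manage_server` parked on `cleanup_requested.wait()` is what keeps the sessions alive between initialization and cleanup, which is exactly the "tasks stay alive throughout the session lifetime" insight above.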
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: langchain-mcp-tools
- Version: 0.0.16
+ Version: 0.1.0
  Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
  Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
  Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
  Description-Content-Type: text/markdown
  License-File: LICENSE
  Requires-Dist: jsonschema-pydantic>=0.6
- Requires-Dist: langchain-core>=0.3.29
  Requires-Dist: langchain>=0.3.14
  Requires-Dist: langchain-anthropic>=0.3.1
  Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
  Requires-Dist: pympler>=1.1
  Requires-Dist: python-dotenv>=1.0.1
  Provides-Extra: dev
- Requires-Dist: twine>=6.0.1; extra == "dev"
+ Requires-Dist: pytest>=8.3.4; extra == "dev"
+ Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"
 
  # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
 
@@ -29,8 +29,7 @@ server tools with LangChain / Python.
 
  It contains a utility function `convert_mcp_to_langchain_tools()`.
  This function handles parallel initialization of specified multiple MCP servers
- and converts their available tools into a list of
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+ and converts their available tools into a list of LangChain-compatible tools.
 
  A typescript equivalent of this utility library is available
  [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,43 +53,44 @@ but only the contents of the `mcpServers` property,
  and is expressed as a `dict`, e.g.:
 
  ```python
- mcp_configs = {
- 'filesystem': {
- 'command': 'npx',
- 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
- },
- 'fetch': {
- 'command': 'uvx',
- 'args': ['mcp-server-fetch']
- }
+ mcp_configs = {
+ 'filesystem': {
+ 'command': 'npx',
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+ },
+ 'fetch': {
+ 'command': 'uvx',
+ 'args': ['mcp-server-fetch']
  }
+ }
 
- tools, cleanup = await convert_mcp_to_langchain_tools(
- mcp_configs
- )
+ tools, cleanup = await convert_mcp_to_langchain_tools(
+ mcp_configs
+ )
  ```
 
- The utility function initializes all specified MCP servers in parallel,
- and returns LangChain Tools (`List[BaseTool]`)
- by gathering all available MCP server tools,
- and by wrapping them into [LangChain Tools](https://js.langchain.com/docs/how_to/tool_calling/).
- It also returns a cleanup callback function (`McpServerCleanupFn`)
+ This utility function initializes all specified MCP servers in parallel,
+ and returns LangChain Tools
+ ([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
+ by gathering available MCP tools from the servers,
+ and by wrapping them into LangChain tools.
+ It also returns an async callback function (`cleanup: McpServerCleanupFn`)
  to be invoked to close all MCP server sessions when finished.
 
  The returned tools can be used with LangChain, e.g.:
 
  ```python
- # from langchain.chat_models import init_chat_model
- llm = init_chat_model(
- model='claude-3-5-haiku-latest',
- model_provider='anthropic'
- )
-
- # from langgraph.prebuilt import create_react_agent
- agent = create_react_agent(
- llm,
- tools
- )
+ # from langchain.chat_models import init_chat_model
+ llm = init_chat_model(
+ model='claude-3-5-haiku-latest',
+ model_provider='anthropic'
+ )
+
+ # from langgraph.prebuilt import create_react_agent
+ agent = create_react_agent(
+ llm,
+ tools
+ )
  ```
  A simple and experimentable usage example can be found
  [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -112,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
  I'll summarize the difficulties I faced below.
  The source code is available
- [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+ [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
  Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)
 
- 1. Core Challenge:
+ 1. Challenge:
 
  A key requirement for parallel initialization is that each server must be
  initialized in its own dedicated task - there's no way around this as far as
@@ -129,11 +129,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Need to keep sessions alive for later use by different tasks
- after initialization
- - Need to ensure proper cleanup later in the same task that created them
+ - Server cleanup can be initiated later by a task other than the one
+ that initialized the resources
 
- 2. Solution Strategy:
+ 2. Solution:
 
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
@@ -155,7 +154,8 @@ Any comments pointing out something I am missing would be greatly appreciated!
 
  3. Task Lifecycle:
 
- The following illustrates how to implement the above-mentioned strategy:
+ The following task lifecyle diagram illustrates how the above strategy
+ was impelemented:
  ```
  [Task starts]
 
@@ -176,5 +176,5 @@ It usually means I'm doing something very worng...
  I think it is a natural assumption that MCP SDK is designed with consideration
  for parallel server initialization.
  I'm not sure what I'm missing...
- (FYI, with the TypeScript MCP SDK, parallel initialization was
+ (FYI, with the TypeScript MCP SDK, parallel initialization was
  [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -0,0 +1,12 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/langchain_mcp_tools/__init__.py
+ src/langchain_mcp_tools/langchain_mcp_tools.py
+ src/langchain_mcp_tools/py.typed
+ src/langchain_mcp_tools.egg-info/PKG-INFO
+ src/langchain_mcp_tools.egg-info/SOURCES.txt
+ src/langchain_mcp_tools.egg-info/dependency_links.txt
+ src/langchain_mcp_tools.egg-info/requires.txt
+ src/langchain_mcp_tools.egg-info/top_level.txt
+ tests/test_langchain_mcp_tools.py
@@ -1,5 +1,4 @@
  jsonschema-pydantic>=0.6
- langchain-core>=0.3.29
  langchain>=0.3.14
  langchain-anthropic>=0.3.1
  langchain-groq>=0.2.3
@@ -11,4 +10,5 @@ pympler>=1.1
  python-dotenv>=1.0.1
 
  [dev]
- twine>=6.0.1
+ pytest>=8.3.4
+ pytest-asyncio>=0.25.2
@@ -0,0 +1,164 @@
+ import pytest
+ from unittest.mock import AsyncMock, MagicMock, patch
+ from langchain_core.tools import BaseTool
+ from langchain_mcp_tools.langchain_mcp_tools import (
+ convert_mcp_to_langchain_tools,
+ )
+
+ # Fix the asyncio mark warning by installing pytest-asyncio
+ pytest_plugins = ('pytest_asyncio',)
+
+
+ @pytest.fixture
+ def mock_stdio_client():
+ with patch('langchain_mcp_tools.langchain_mcp_tools.stdio_client') as mock:
+ mock.return_value.__aenter__.return_value = (AsyncMock(), AsyncMock())
+ yield mock
+
+
+ @pytest.fixture
+ def mock_client_session():
+ with patch('langchain_mcp_tools.langchain_mcp_tools.ClientSession') \
+ as mock:
+ session = AsyncMock()
+ # Mock the list_tools response
+ session.list_tools.return_value = MagicMock(
+ tools=[
+ MagicMock(
+ name="tool1",
+ description="Test tool",
+ inputSchema={"type": "object", "properties": {}}
+ )
+ ]
+ )
+ mock.return_value.__aenter__.return_value = session
+ yield mock
+
+
+ @pytest.mark.asyncio
+ async def test_convert_mcp_to_langchain_tools_empty():
+ server_configs = {}
+ tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+ assert isinstance(tools, list)
+ assert len(tools) == 0
+ await cleanup()
+
+
+ """
+ @pytest.mark.asyncio
+ async def test_convert_mcp_to_langchain_tools_invalid_config():
+ server_configs = {"invalid": {"command": "nonexistent"}}
+ with pytest.raises(Exception):
+ await convert_mcp_to_langchain_tools(server_configs)
+ """
+
+
+ """
+ @pytest.mark.asyncio
+ async def test_convert_single_mcp_success(
+ mock_stdio_client,
+ mock_client_session
+ ):
+ # Test data
+ server_name = "test_server"
+ server_config = {
+ "command": "test_command",
+ "args": ["--test"],
+ "env": {"TEST_ENV": "value"}
+ }
+ langchain_tools = []
+ ready_event = asyncio.Event()
+ cleanup_event = asyncio.Event()
+
+ # Create task
+ task = asyncio.create_task(
+ convert_single_mcp_to_langchain_tools(
+ server_name,
+ server_config,
+ langchain_tools,
+ ready_event,
+ cleanup_event
+ )
+ )
+
+ # Wait for ready event
+ await asyncio.wait_for(ready_event.wait(), timeout=1.0)
+
+ # Verify tools were created
+ assert len(langchain_tools) == 1
+ assert isinstance(langchain_tools[0], BaseTool)
+ assert langchain_tools[0].name == "tool1"
+
+ # Trigger cleanup
+ cleanup_event.set()
+ await task
+ """
+
+
+ @pytest.mark.asyncio
+ async def test_convert_mcp_to_langchain_tools_multiple_servers(
+ mock_stdio_client,
+ mock_client_session
+ ):
+ server_configs = {
+ "server1": {"command": "cmd1", "args": []},
+ "server2": {"command": "cmd2", "args": []}
+ }
+
+ tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+ # Verify correct number of tools created
+ assert len(tools) == 2  # One tool per server
+ assert all(isinstance(tool, BaseTool) for tool in tools)
+
+ # Test cleanup
+ await cleanup()
+
+
+ """
+ @pytest.mark.asyncio
+ async def test_tool_execution(mock_stdio_client, mock_client_session):
+ server_configs = {
+ "test_server": {"command": "test", "args": []}
+ }
+
+ # Mock the tool execution response
+ session = mock_client_session.return_value.__aenter__.return_value
+ session.call_tool.return_value = MagicMock(
+ isError=False,
+ content={"result": "success"}
+ )
+
+ tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+ # Test tool execution
+ result = await tools[0]._arun(test_param="value")
+ assert result == {"result": "success"}
+
+ # Verify tool was called with correct parameters
+ session.call_tool.assert_called_once_with("tool1", {"test_param": "value"})
+
+ await cleanup()
+ """
+
+
+ @pytest.mark.asyncio
+ async def test_tool_execution_error(mock_stdio_client, mock_client_session):
+ server_configs = {
+ "test_server": {"command": "test", "args": []}
+ }
+
+ # Mock error response
+ session = mock_client_session.return_value.__aenter__.return_value
+ session.call_tool.return_value = MagicMock(
+ isError=True,
+ content="Error message"
+ )
+
+ tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+ # Test tool execution error
+ with pytest.raises(Exception):
+ await tools[0]._arun(test_param="value")
+
+ await cleanup()
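The new test fixtures above lean on the fact that, since Python 3.8, `MagicMock` pre-configures async magic methods such as `__aenter__` as `AsyncMock`s, so a patched class can be driven through `async with`. A minimal, self-contained illustration of that patching pattern (no MCP or langchain involved; `mock_session_cls` is just a local stand-in for the patched `ClientSession`):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def demo():
    # Stand-in for a patched ClientSession class, as in the fixture above.
    mock_session_cls = MagicMock()
    session = AsyncMock()
    # The value assigned here is what `await session.list_tools()` returns.
    session.list_tools.return_value = MagicMock(tools=[])
    mock_session_cls.return_value.__aenter__.return_value = session

    # Code under test would do exactly this shape of call:
    async with mock_session_cls("read_stream", "write_stream") as s:
        result = await s.list_tools()
    return result.tools

tools = asyncio.run(demo())
```

One caveat worth knowing when reading the fixture: `MagicMock(name="tool1")` uses `name` for the mock's repr rather than setting a `.name` attribute, which is why this illustration avoids the `name` keyword.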
@@ -1,11 +0,0 @@
- LICENSE
- README.md
- pyproject.toml
- langchain_mcp_tools/__init__.py
- langchain_mcp_tools/langchain_mcp_tools.py
- langchain_mcp_tools/py.typed
- langchain_mcp_tools.egg-info/PKG-INFO
- langchain_mcp_tools.egg-info/SOURCES.txt
- langchain_mcp_tools.egg-info/dependency_links.txt
- langchain_mcp_tools.egg-info/requires.txt
- langchain_mcp_tools.egg-info/top_level.txt