langchain-mcp-tools 0.0.17__tar.gz → 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.0.17
+Version: 0.1.0
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: jsonschema-pydantic>=0.6
-Requires-Dist: langchain-core>=0.3.29
 Requires-Dist: langchain>=0.3.14
 Requires-Dist: langchain-anthropic>=0.3.1
 Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
 Requires-Dist: pympler>=1.1
 Requires-Dist: python-dotenv>=1.0.1
 Provides-Extra: dev
-Requires-Dist: twine>=6.0.1; extra == "dev"
+Requires-Dist: pytest>=8.3.4; extra == "dev"
+Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"

 # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)

@@ -29,8 +29,7 @@ server tools with LangChain / Python.

 It contains a utility function `convert_mcp_to_langchain_tools()`.
 This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+and converts their available tools into a list of LangChain-compatible tools.

 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,25 +53,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:

 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}

-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```

 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -81,17 +80,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:

 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -113,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

-1. Core Challenge:
+1. Challenge:

    A key requirement for parallel initialization is that each server must be
    initialized in its own dedicated task - there's no way around this as far as
@@ -130,11 +129,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
    (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
    - Initializing multiple MCP servers in parallel requires a dedicated
      `asyncio.Task` per server
-   - Need to keep sessions alive for later use by different tasks
-     after initialization
-   - Need to ensure proper cleanup later in the same task that created them
+   - Server cleanup can be initiated later by a task other than the one
+     that initialized the resources

-2. Solution Strategy:
+2. Solution:

    The key insight is to keep the initialization tasks alive throughout the
    session lifetime, rather than letting them complete after initialization.
@@ -178,5 +176,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
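The "Challenge"/"Solution" notes in the README hunks above describe a per-server task that enters the async context managers, signals readiness, then parks until cleanup is requested, so that `__aexit__` runs in the same task that ran `__aenter__`. A minimal, self-contained sketch of that pattern follows; it uses a stand-in context manager and illustrative names (`fake_session`, `manage_server`), not the library's actual implementation:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def fake_session(name: str):
    # Stand-in for the MCP SDK's stdio_client/ClientSession context managers;
    # the resource must be released in the same task that acquired it.
    yield f"session-{name}"


async def manage_server(name, sessions, ready, cleanup):
    # Keep this task alive for the whole session lifetime so that
    # __aexit__ later runs in the same task that ran __aenter__.
    async with fake_session(name) as session:
        sessions[name] = session
        ready.set()           # signal: initialization finished
        await cleanup.wait()  # park here until cleanup is requested
    # The context manager exits here, inside the original task.


async def main():
    sessions = {}
    cleanup = asyncio.Event()
    readies = {n: asyncio.Event() for n in ("filesystem", "fetch")}
    tasks = [asyncio.create_task(manage_server(n, sessions, r, cleanup))
             for n, r in readies.items()]
    # Servers initialize in parallel; wait until every one is ready.
    await asyncio.gather(*(r.wait() for r in readies.values()))
    print(sorted(sessions))  # sessions now usable from this (different) task
    # Later, possibly from yet another task: trigger cleanup and wait.
    cleanup.set()
    await asyncio.gather(*tasks)


asyncio.run(main())
# prints: ['fetch', 'filesystem']
```

The key design point mirrors the README: the initialization tasks never finish early; they hold their context managers open and only unwind when the shared cleanup event fires.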
@@ -6,8 +6,7 @@ server tools with LangChain / Python.

 It contains a utility function `convert_mcp_to_langchain_tools()`.
 This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+and converts their available tools into a list of LangChain-compatible tools.

 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -31,25 +30,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:

 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}

-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```

 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -58,17 +57,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:

 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -90,11 +89,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

-1. Core Challenge:
+1. Challenge:

    A key requirement for parallel initialization is that each server must be
    initialized in its own dedicated task - there's no way around this as far as
@@ -107,11 +106,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
    (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
    - Initializing multiple MCP servers in parallel requires a dedicated
      `asyncio.Task` per server
-   - Need to keep sessions alive for later use by different tasks
-     after initialization
-   - Need to ensure proper cleanup later in the same task that created them
+   - Server cleanup can be initiated later by a task other than the one
+     that initialized the resources

-2. Solution Strategy:
+2. Solution:

    The key insight is to keep the initialization tasks alive throughout the
    session lifetime, rather than letting them complete after initialization.
@@ -155,5 +153,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -1,12 +1,11 @@
 [project]
 name = "langchain-mcp-tools"
-version = "0.0.17"
+version = "0.1.0"
 description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
 readme = "README.md"
 requires-python = ">=3.11"
 dependencies = [
     "jsonschema-pydantic>=0.6",
-    "langchain-core>=0.3.29",
     "langchain>=0.3.14",
     "langchain-anthropic>=0.3.1",
     "langchain-groq>=0.2.3",
@@ -20,15 +19,15 @@ dependencies = [

 [project.optional-dependencies]
 dev = [
-    "twine>=6.0.1",
+    "pytest>=8.3.4",
+    "pytest-asyncio>=0.25.2",
 ]

+[tool.setuptools]
+package-dir = {"" = "src"}
+
 [tool.setuptools.packages.find]
-exclude = [
-    "tests*",
-    "docs*",
-    "examples*",
-]
+where = ["src"]

 [tool.setuptools.package-data]
 langchain_mcp_tools = ["py.typed"]
@@ -36,7 +36,7 @@ This code implements a specific pattern for managing async resources that
 require context managers while enabling parallel initialization.
 The key aspects are:

-1. Core Challenge:
+1. Challenge:

    A key requirement for parallel initialization is that each server must be
    initialized in its own dedicated task - there's no way around this as far as
@@ -49,11 +49,10 @@ The key aspects are:
    (based on the mcp python-sdk impl as of Jan 14, 2025)
    - Initializing multiple MCP servers in parallel requires a dedicated
      `asyncio.Task` per server
-   - Need to keep sessions alive for later use by different tasks
-     after initialization
-   - Need to ensure proper cleanup later in the same task that created them
+   - Server cleanup can be initiated later by a task other than the one that
+     initialized the resources

-2. Solution Strategy:
+2. Solution:

    The key insight is to keep the initialization tasks alive throughout the
    session lifetime, rather than letting them complete after initialization.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: langchain-mcp-tools
-Version: 0.0.17
+Version: 0.1.0
 Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
 Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
 Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -8,7 +8,6 @@ Requires-Python: >=3.11
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: jsonschema-pydantic>=0.6
-Requires-Dist: langchain-core>=0.3.29
 Requires-Dist: langchain>=0.3.14
 Requires-Dist: langchain-anthropic>=0.3.1
 Requires-Dist: langchain-groq>=0.2.3
@@ -19,7 +18,8 @@ Requires-Dist: pyjson5>=1.6.8
 Requires-Dist: pympler>=1.1
 Requires-Dist: python-dotenv>=1.0.1
 Provides-Extra: dev
-Requires-Dist: twine>=6.0.1; extra == "dev"
+Requires-Dist: pytest>=8.3.4; extra == "dev"
+Requires-Dist: pytest-asyncio>=0.25.2; extra == "dev"

 # MCP To LangChain Tools Conversion Utility [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/langchain-mcp-tools-py/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)

@@ -29,8 +29,7 @@ server tools with LangChain / Python.

 It contains a utility function `convert_mcp_to_langchain_tools()`.
 This function handles parallel initialization of specified multiple MCP servers
-and converts their available tools into a list of
-[LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
+and converts their available tools into a list of LangChain-compatible tools.

 A typescript equivalent of this utility library is available
 [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools)
@@ -54,25 +53,25 @@ but only the contents of the `mcpServers` property,
 and is expressed as a `dict`, e.g.:

 ```python
-mcp_configs = {
-    'filesystem': {
-        'command': 'npx',
-        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
-    },
-    'fetch': {
-        'command': 'uvx',
-        'args': ['mcp-server-fetch']
-    }
+mcp_configs = {
+    'filesystem': {
+        'command': 'npx',
+        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
+    },
+    'fetch': {
+        'command': 'uvx',
+        'args': ['mcp-server-fetch']
     }
+}

-tools, cleanup = await convert_mcp_to_langchain_tools(
-    mcp_configs
-)
+tools, cleanup = await convert_mcp_to_langchain_tools(
+    mcp_configs
+)
 ```

 This utility function initializes all specified MCP servers in parallel,
-and returns [LangChain Tools](https://python.langchain.com/api_reference/core/tools.html)
-(`tools: List[BaseTool]`)
+and returns LangChain Tools
+([`tools: List[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool))
 by gathering available MCP tools from the servers,
 and by wrapping them into LangChain tools.
 It also returns an async callback function (`cleanup: McpServerCleanupFn`)
@@ -81,17 +80,17 @@ to be invoked to close all MCP server sessions when finished.
 The returned tools can be used with LangChain, e.g.:

 ```python
-# from langchain.chat_models import init_chat_model
-llm = init_chat_model(
-    model='claude-3-5-haiku-latest',
-    model_provider='anthropic'
-)
-
-# from langgraph.prebuilt import create_react_agent
-agent = create_react_agent(
-    llm,
-    tools
-)
+# from langchain.chat_models import init_chat_model
+llm = init_chat_model(
+    model='claude-3-5-haiku-latest',
+    model_provider='anthropic'
+)
+
+# from langgraph.prebuilt import create_react_agent
+agent = create_react_agent(
+    llm,
+    tools
+)
 ```
 A simple and experimentable usage example can be found
 [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py)
@@ -113,11 +112,11 @@ I'm new to Python, so it is very possible that my ignorance is playing
 a big role here...
 I'll summarize the difficulties I faced below.
 The source code is available
-[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/langchain_mcp_tools/langchain_mcp_tools.py).
+[here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/src/langchain_mcp_tools/langchain_mcp_tools.py).
 Any comments pointing out something I am missing would be greatly appreciated!
 [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

-1. Core Challenge:
+1. Challenge:

    A key requirement for parallel initialization is that each server must be
    initialized in its own dedicated task - there's no way around this as far as
@@ -130,11 +129,10 @@ Any comments pointing out something I am missing would be greatly appreciated!
    (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
    - Initializing multiple MCP servers in parallel requires a dedicated
      `asyncio.Task` per server
-   - Need to keep sessions alive for later use by different tasks
-     after initialization
-   - Need to ensure proper cleanup later in the same task that created them
+   - Server cleanup can be initiated later by a task other than the one
+     that initialized the resources

-2. Solution Strategy:
+2. Solution:

    The key insight is to keep the initialization tasks alive throughout the
    session lifetime, rather than letting them complete after initialization.
@@ -178,5 +176,5 @@ It usually means I'm doing something very worng...
 I think it is a natural assumption that MCP SDK is designed with consideration
 for parallel server initialization.
 I'm not sure what I'm missing...
-(FYI, with the TypeScript MCP SDK, parallel initialization was
+(FYI, with the TypeScript MCP SDK, parallel initialization was
 [pretty straightforward](https://github.com/hideya/langchain-mcp-tools-ts/blob/main/src/langchain-mcp-tools.ts))
@@ -0,0 +1,12 @@
+LICENSE
+README.md
+pyproject.toml
+src/langchain_mcp_tools/__init__.py
+src/langchain_mcp_tools/langchain_mcp_tools.py
+src/langchain_mcp_tools/py.typed
+src/langchain_mcp_tools.egg-info/PKG-INFO
+src/langchain_mcp_tools.egg-info/SOURCES.txt
+src/langchain_mcp_tools.egg-info/dependency_links.txt
+src/langchain_mcp_tools.egg-info/requires.txt
+src/langchain_mcp_tools.egg-info/top_level.txt
+tests/test_langchain_mcp_tools.py
@@ -1,5 +1,4 @@
 jsonschema-pydantic>=0.6
-langchain-core>=0.3.29
 langchain>=0.3.14
 langchain-anthropic>=0.3.1
 langchain-groq>=0.2.3
@@ -11,4 +10,5 @@ pympler>=1.1
 python-dotenv>=1.0.1

 [dev]
-twine>=6.0.1
+pytest>=8.3.4
+pytest-asyncio>=0.25.2
@@ -0,0 +1,164 @@
+import pytest
+from unittest.mock import AsyncMock, MagicMock, patch
+from langchain_core.tools import BaseTool
+from langchain_mcp_tools.langchain_mcp_tools import (
+    convert_mcp_to_langchain_tools,
+)
+
+# Fix the asyncio mark warning by installing pytest-asyncio
+pytest_plugins = ('pytest_asyncio',)
+
+
+@pytest.fixture
+def mock_stdio_client():
+    with patch('langchain_mcp_tools.langchain_mcp_tools.stdio_client') as mock:
+        mock.return_value.__aenter__.return_value = (AsyncMock(), AsyncMock())
+        yield mock
+
+
+@pytest.fixture
+def mock_client_session():
+    with patch('langchain_mcp_tools.langchain_mcp_tools.ClientSession') \
+            as mock:
+        session = AsyncMock()
+        # Mock the list_tools response
+        session.list_tools.return_value = MagicMock(
+            tools=[
+                MagicMock(
+                    name="tool1",
+                    description="Test tool",
+                    inputSchema={"type": "object", "properties": {}}
+                )
+            ]
+        )
+        mock.return_value.__aenter__.return_value = session
+        yield mock
+
+
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_empty():
+    server_configs = {}
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+    assert isinstance(tools, list)
+    assert len(tools) == 0
+    await cleanup()
+
+
+"""
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_invalid_config():
+    server_configs = {"invalid": {"command": "nonexistent"}}
+    with pytest.raises(Exception):
+        await convert_mcp_to_langchain_tools(server_configs)
+"""
+
+
+"""
+@pytest.mark.asyncio
+async def test_convert_single_mcp_success(
+    mock_stdio_client,
+    mock_client_session
+):
+    # Test data
+    server_name = "test_server"
+    server_config = {
+        "command": "test_command",
+        "args": ["--test"],
+        "env": {"TEST_ENV": "value"}
+    }
+    langchain_tools = []
+    ready_event = asyncio.Event()
+    cleanup_event = asyncio.Event()
+
+    # Create task
+    task = asyncio.create_task(
+        convert_single_mcp_to_langchain_tools(
+            server_name,
+            server_config,
+            langchain_tools,
+            ready_event,
+            cleanup_event
+        )
+    )
+
+    # Wait for ready event
+    await asyncio.wait_for(ready_event.wait(), timeout=1.0)
+
+    # Verify tools were created
+    assert len(langchain_tools) == 1
+    assert isinstance(langchain_tools[0], BaseTool)
+    assert langchain_tools[0].name == "tool1"
+
+    # Trigger cleanup
+    cleanup_event.set()
+    await task
+"""
+
+
+@pytest.mark.asyncio
+async def test_convert_mcp_to_langchain_tools_multiple_servers(
+    mock_stdio_client,
+    mock_client_session
+):
+    server_configs = {
+        "server1": {"command": "cmd1", "args": []},
+        "server2": {"command": "cmd2", "args": []}
+    }
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Verify correct number of tools created
+    assert len(tools) == 2  # One tool per server
+    assert all(isinstance(tool, BaseTool) for tool in tools)
+
+    # Test cleanup
+    await cleanup()
+
+
+"""
+@pytest.mark.asyncio
+async def test_tool_execution(mock_stdio_client, mock_client_session):
+    server_configs = {
+        "test_server": {"command": "test", "args": []}
+    }
+
+    # Mock the tool execution response
+    session = mock_client_session.return_value.__aenter__.return_value
+    session.call_tool.return_value = MagicMock(
+        isError=False,
+        content={"result": "success"}
+    )
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Test tool execution
+    result = await tools[0]._arun(test_param="value")
+    assert result == {"result": "success"}
+
+    # Verify tool was called with correct parameters
+    session.call_tool.assert_called_once_with("tool1", {"test_param": "value"})
+
+    await cleanup()
+"""
+
+
+@pytest.mark.asyncio
+async def test_tool_execution_error(mock_stdio_client, mock_client_session):
+    server_configs = {
+        "test_server": {"command": "test", "args": []}
+    }
+
+    # Mock error response
+    session = mock_client_session.return_value.__aenter__.return_value
+    session.call_tool.return_value = MagicMock(
+        isError=True,
+        content="Error message"
+    )
+
+    tools, cleanup = await convert_mcp_to_langchain_tools(server_configs)
+
+    # Test tool execution error
+    with pytest.raises(Exception):
+        await tools[0]._arun(test_param="value")
+
+    await cleanup()
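One caveat worth noting about the fixtures in the new test file above: passing `name=` to the `MagicMock` constructor names the mock itself (it only affects the repr) and does not create a `.name` attribute, so `MagicMock(name="tool1").name` is a child mock rather than the string `"tool1"`. This is likely why the commented-out assertion `langchain_tools[0].name == "tool1"` would not pass as written. A minimal standalone demonstration (not part of the package):

```python
from unittest.mock import MagicMock

# `name` given to the constructor names the mock (visible in its repr);
# it does NOT set a `.name` attribute on the mock.
m = MagicMock(name="tool1")
print(isinstance(m.name, str))  # False: `.name` is a child MagicMock

# Assigning the attribute after construction (or using configure_mock)
# yields a real string, which is what tool-conversion code would expect.
tool = MagicMock(description="Test tool")
tool.name = "tool1"
print(tool.name)  # tool1
```

This is the behavior documented for `unittest.mock`; the fix is to set the attribute after construction, as shown.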
@@ -1,11 +0,0 @@
1
- LICENSE
2
- README.md
3
- pyproject.toml
4
- langchain_mcp_tools/__init__.py
5
- langchain_mcp_tools/langchain_mcp_tools.py
6
- langchain_mcp_tools/py.typed
7
- langchain_mcp_tools.egg-info/PKG-INFO
8
- langchain_mcp_tools.egg-info/SOURCES.txt
9
- langchain_mcp_tools.egg-info/dependency_links.txt
10
- langchain_mcp_tools.egg-info/requires.txt
11
- langchain_mcp_tools.egg-info/top_level.txt