langchain-mcp-tools 0.0.13.tar.gz → 0.0.15.tar.gz

This diff compares the contents of two publicly released versions of the package as they appear in their public registry; it is provided for informational purposes only.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: langchain-mcp-tools
- Version: 0.0.13
+ Version: 0.0.15
  Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
  Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
  Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -108,7 +108,7 @@ Currently, only text results of tool calls are supported.
  ## Technical Details

  It was very tricky (for me) to get the parallel MCP server initialization
- to work successfully...
+ to work, including successful final resource cleanup...

  I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
@@ -119,23 +119,24 @@ Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

  1. Core Challenge:
- - Async resources management for `stdio_client` and `ClientSession` seems
+
+ A key requirement for parallel initialization is that each server must be
+ initialized in its own dedicated task - there's no way around this as far as
+ I know. However, this poses a challenge when combined with
+ `asynccontextmanager`.
+
+ - Resources management for `stdio_client` and `ClientSession` seems
  to require relying exclusively on `asynccontextmanager` for cleanup,
  with no manual cleanup options
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Necessity of keeping sessions alive for later use by different tasks
+ - Need to keep sessions alive for later use by different tasks
  after initialization
- - Ensuring proper cleanup later in the same task that created them
+ - Need to ensure proper cleanup later in the same task that created them

  2. Solution Strategy:

- A key requirement for parallel initialization is that each server must be
- initialized in its own dedicated task - there's no way around this as far
- as I understand. However, this creates a challenge since we also need to
- maintain long-lived sessions and handle cleanup properly.
-
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
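
The hunks above describe the parallel-initialization strategy only in prose. As an illustration (not part of the diff or the package's source), here is a minimal sketch of that pattern using the MCP python-sdk's `stdio_client` and `ClientSession`: each server gets a dedicated `asyncio` task that enters the context managers, publishes the live session, and then blocks on an event until cleanup is requested, so the contexts are exited in the same task that entered them. The helper names (`keep_server_alive`, `cleanup_requested`), the dict-based bookkeeping, and the example server command are assumptions made for this sketch.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def keep_server_alive(
    name: str,
    params: StdioServerParameters,
    sessions: dict[str, ClientSession],
    ready: asyncio.Event,
    cleanup_requested: asyncio.Event,
) -> None:
    """Initialize one MCP server and keep its async contexts open in this task."""
    # Enter and exit both context managers inside this single task, so cleanup
    # later happens in the same task that created the resources.
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            sessions[name] = session        # hand the live session to the caller
            ready.set()                     # signal that initialization is done
            await cleanup_requested.wait()  # stay alive until cleanup is requested
    # Leaving the `async with` blocks here is what actually performs the cleanup.


async def main() -> None:
    # Hypothetical server configuration, for illustration only.
    configs = {
        "filesystem": StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "."],
        ),
    }
    sessions: dict[str, ClientSession] = {}
    ready_events = {name: asyncio.Event() for name in configs}
    cleanup_requested = asyncio.Event()
    tasks = [
        asyncio.create_task(
            keep_server_alive(name, params, sessions, ready_events[name], cleanup_requested)
        )
        for name, params in configs.items()
    ]

    # Parallel initialization: wait until every server has signaled readiness.
    await asyncio.gather(*(event.wait() for event in ready_events.values()))

    # Use the live sessions, e.g. list the tools each server exposes.
    for name, session in sessions.items():
        result = await session.list_tools()
        print(name, [tool.name for tool in result.tools])

    # Later: request cleanup and wait for each task to unwind its own contexts.
    cleanup_requested.set()
    await asyncio.gather(*tasks)


if __name__ == "__main__":
    asyncio.run(main())
```

Error handling and per-server timeouts are omitted, and the package's actual implementation may structure this differently (for example, around `asynccontextmanager`), as the hunks above suggest.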
 
@@ -156,7 +157,7 @@ Any comments pointing out something I am missing would be greatly appreciated!

  3. Task Lifecycle:

- To allow the initialization task to stay alive waiting for cleanup:
+ The following illustrates how to implement the above-mentioned strategy:
  ```
  [Task starts]

@@ -83,7 +83,7 @@ Currently, only text results of tool calls are supported.
  ## Technical Details

  It was very tricky (for me) to get the parallel MCP server initialization
- to work successfully...
+ to work, including successful final resource cleanup...

  I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
@@ -94,23 +94,24 @@ Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

  1. Core Challenge:
- - Async resources management for `stdio_client` and `ClientSession` seems
+
+ A key requirement for parallel initialization is that each server must be
+ initialized in its own dedicated task - there's no way around this as far as
+ I know. However, this poses a challenge when combined with
+ `asynccontextmanager`.
+
+ - Resources management for `stdio_client` and `ClientSession` seems
  to require relying exclusively on `asynccontextmanager` for cleanup,
  with no manual cleanup options
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Necessity of keeping sessions alive for later use by different tasks
+ - Need to keep sessions alive for later use by different tasks
  after initialization
- - Ensuring proper cleanup later in the same task that created them
+ - Need to ensure proper cleanup later in the same task that created them

  2. Solution Strategy:

- A key requirement for parallel initialization is that each server must be
- initialized in its own dedicated task - there's no way around this as far
- as I understand. However, this creates a challenge since we also need to
- maintain long-lived sessions and handle cleanup properly.
-
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.

@@ -131,7 +132,7 @@ Any comments pointing out something I am missing would be greatly appreciated!

  3. Task Lifecycle:

- To allow the initialization task to stay alive waiting for cleanup:
+ The following illustrates how to implement the above-mentioned strategy:
  ```
  [Task starts]

@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: langchain-mcp-tools
- Version: 0.0.13
+ Version: 0.0.15
  Summary: Model Context Protocol (MCP) To LangChain Tools Conversion Utility
  Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
  Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
@@ -108,7 +108,7 @@ Currently, only text results of tool calls are supported.
  ## Technical Details

  It was very tricky (for me) to get the parallel MCP server initialization
- to work successfully...
+ to work, including successful final resource cleanup...

  I'm new to Python, so it is very possible that my ignorance is playing
  a big role here...
@@ -119,23 +119,24 @@ Any comments pointing out something I am missing would be greatly appreciated!
  [(comment here)](https://github.com/hideya/langchain-mcp-tools-ts/issues)

  1. Core Challenge:
- - Async resources management for `stdio_client` and `ClientSession` seems
+
+ A key requirement for parallel initialization is that each server must be
+ initialized in its own dedicated task - there's no way around this as far as
+ I know. However, this poses a challenge when combined with
+ `asynccontextmanager`.
+
+ - Resources management for `stdio_client` and `ClientSession` seems
  to require relying exclusively on `asynccontextmanager` for cleanup,
  with no manual cleanup options
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Necessity of keeping sessions alive for later use by different tasks
+ - Need to keep sessions alive for later use by different tasks
  after initialization
- - Ensuring proper cleanup later in the same task that created them
+ - Need to ensure proper cleanup later in the same task that created them

  2. Solution Strategy:

- A key requirement for parallel initialization is that each server must be
- initialized in its own dedicated task - there's no way around this as far
- as I understand. However, this creates a challenge since we also need to
- maintain long-lived sessions and handle cleanup properly.
-
  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.

@@ -156,7 +157,7 @@ Any comments pointing out something I am missing would be greatly appreciated!

  3. Task Lifecycle:

- To allow the initialization task to stay alive waiting for cleanup:
+ The following illustrates how to implement the above-mentioned strategy:
  ```
  [Task starts]

@@ -1,12 +1,13 @@
  LICENSE
  README.md
  pyproject.toml
- langchain_mcp_tools/__init__.py
- langchain_mcp_tools/langchain_mcp_tools.py
- langchain_mcp_tools/py.typed
  langchain_mcp_tools.egg-info/PKG-INFO
  langchain_mcp_tools.egg-info/SOURCES.txt
  langchain_mcp_tools.egg-info/dependency_links.txt
  langchain_mcp_tools.egg-info/requires.txt
  langchain_mcp_tools.egg-info/top_level.txt
- tests/test_langchain_mcp_tools.py
+ src/langchain_mcp_tools/__init__.py
+ src/langchain_mcp_tools/langchain_mcp_tools.py
+ src/langchain_mcp_tools/py.typed
+ src/tests/__init__.py
+ src/tests/test_langchain_mcp_tools.py
@@ -1,6 +1,6 @@
  [project]
  name = "langchain-mcp-tools"
- version = "0.0.13"
+ version = "0.0.15"
  description = "Model Context Protocol (MCP) To LangChain Tools Conversion Utility"
  readme = "README.md"
  requires-python = ">=3.11"
@@ -37,21 +37,23 @@ require context managers while enabling parallel initialization.
  The key aspects are:

  1. Core Challenge:
- - Async resources management for `stdio_client` and `ClientSession` seems
+
+ A key requirement for parallel initialization is that each server must be
+ initialized in its own dedicated task - there's no way around this as far as
+ I know. However, this poses a challenge when combined with
+ `asynccontextmanager`.
+
+ - Resources management for `stdio_client` and `ClientSession` seems
  to require relying exclusively on `asynccontextmanager` for cleanup,
  with no manual cleanup options
  (based on [the mcp python-sdk impl as of Jan 14, 2025](https://github.com/modelcontextprotocol/python-sdk/tree/99727a9/src/mcp/client))
  - Initializing multiple MCP servers in parallel requires a dedicated
  `asyncio.Task` per server
- - Necessity of keeping sessions alive for later use by different tasks
+ - Need to keep sessions alive for later use by different tasks
  after initialization
- - Ensuring proper cleanup later in the same task that created them
+ - Need to ensure proper cleanup later in the same task that created them

  2. Solution Strategy:
- A key requirement for parallel initialization is that each server must be
- initialized in its own dedicated task - there's no way around this as far
- as I understand. However, this creates a challenge since we also need to
- maintain long-lived sessions and handle cleanup properly.

  The key insight is to keep the initialization tasks alive throughout the
  session lifetime, rather than letting them complete after initialization.
@@ -73,7 +75,7 @@ The key aspects are:

  3. Task Lifecycle:

- To allow the initialization task to stay alive waiting for cleanup:
+ The following illustrates how to implement the above-mentioned strategy:
  ```
  [Task starts]

@@ -0,0 +1,2 @@
+
+ # Test package initialization
@@ -1,3 +0,0 @@
- build
- dist
- langchain_mcp_tools