langchain-mcp-tools 0.0.1__tar.gz → 0.0.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,104 @@
1
+ Metadata-Version: 2.2
2
+ Name: langchain-mcp-tools
3
+ Version: 0.0.3
4
+ Summary: MCP To LangChain Tools Conversion Utility
5
+ Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
6
+ Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
7
+ Requires-Python: >=3.11
8
+ Description-Content-Type: text/markdown
9
+ License-File: LICENSE
10
+ Requires-Dist: jsonschema-pydantic>=0.6
11
+ Requires-Dist: langchain-core>=0.3.29
12
+ Requires-Dist: langchain>=0.3.14
13
+ Requires-Dist: langchain-anthropic>=0.3.1
14
+ Requires-Dist: langchain-groq>=0.2.3
15
+ Requires-Dist: langchain-openai>=0.3.0
16
+ Requires-Dist: langgraph>=0.2.62
17
+ Requires-Dist: mcp>=1.2.0
18
+ Requires-Dist: pyjson5>=1.6.8
19
+ Requires-Dist: pympler>=1.1
20
+ Requires-Dist: python-dotenv>=1.0.1
21
+
22
+ # MCP To LangChain Tools Conversion Utility / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
23
+
24
+ This package is intended to simplify the use of
25
+ [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
26
+ server tools with LangChain / Python.
27
+
28
+ It contains a utility function `convert_mcp_to_langchain_tools()`.
29
+ This function handles parallel initialization of the specified MCP servers
30
+ and converts their available tools into a list of
31
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/).
32
+
33
+ A TypeScript version of this utility is available
34
+ [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools).
35
+
36
+ ## Requirements
37
+
38
+ - Python 3.11+
39
+
40
+ ## Installation
41
+
42
+ ```bash
43
+ pip install langchain-mcp-tools
44
+ ```
45
+
46
+ ## Quick Start
47
+
48
+ The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
49
+ that follow the same structure as
50
+ [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
51
+ but contain only the contents of the `mcpServers` property,
52
+ expressed as a `dict`, e.g.:
53
+
54
+ ```python
55
+ mcp_configs = {
56
+ 'filesystem': {
57
+ 'command': 'npx',
58
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
59
+ },
60
+ 'fetch': {
61
+ 'command': 'uvx',
62
+ 'args': ['mcp-server-fetch']
63
+ }
64
+ }
65
+
66
+ tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
67
+ mcp_configs
68
+ )
69
+ ```
70
+
71
+ The utility function initializes all specified MCP servers in parallel,
72
+ gathers their available tools,
73
+ and wraps them into a list of
74
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/) (`List[BaseTool]`).
75
+ It also returns a cleanup callback function (`McpServerCleanupFn`)
76
+ to be invoked to close all MCP server sessions when finished.
77
+
78
+ The returned tools can be used with LangChain, e.g.:
79
+
80
+ ```python
81
+ # from langchain.chat_models import init_chat_model
82
+ llm = init_chat_model(
83
+ model='claude-3-5-haiku-latest',
84
+ model_provider='anthropic'
85
+ )
86
+
87
+ # from langgraph.prebuilt import create_react_agent
88
+ agent = create_react_agent(
89
+ llm,
90
+ tools
91
+ )
92
+ ```
93
+ <!-- A simple and experimentable usage example can be found
94
+ [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->
95
+
96
+ <!-- A more realistic usage example can be found
97
+ [here](https://github.com/hideya/langchain-mcp-client-ts) -->
98
+
99
+ A usage example can be found
100
+ [here](https://github.com/hideya/mcp-client-langchain-py).
101
+
102
+ ## Limitations
103
+
104
+ Currently, only text results of tool calls are supported.
@@ -0,0 +1,83 @@
1
+ # MCP To LangChain Tools Conversion Utility / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
2
+
3
+ This package is intended to simplify the use of
4
+ [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
5
+ server tools with LangChain / Python.
6
+
7
+ It contains a utility function `convert_mcp_to_langchain_tools()`.
8
+ This function handles parallel initialization of the specified MCP servers
9
+ and converts their available tools into a list of
10
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/).
11
+
12
+ A TypeScript version of this utility is available
13
+ [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools).
14
+
15
+ ## Requirements
16
+
17
+ - Python 3.11+
18
+
19
+ ## Installation
20
+
21
+ ```bash
22
+ pip install langchain-mcp-tools
23
+ ```
24
+
25
+ ## Quick Start
26
+
27
+ The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
28
+ that follow the same structure as
29
+ [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
30
+ but contain only the contents of the `mcpServers` property,
31
+ expressed as a `dict`, e.g.:
32
+
33
+ ```python
34
+ mcp_configs = {
35
+ 'filesystem': {
36
+ 'command': 'npx',
37
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
38
+ },
39
+ 'fetch': {
40
+ 'command': 'uvx',
41
+ 'args': ['mcp-server-fetch']
42
+ }
43
+ }
44
+
45
+ tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
46
+ mcp_configs
47
+ )
48
+ ```
49
+
50
+ The utility function initializes all specified MCP servers in parallel,
51
+ gathers their available tools,
52
+ and wraps them into a list of
53
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/) (`List[BaseTool]`).
54
+ It also returns a cleanup callback function (`McpServerCleanupFn`)
55
+ to be invoked to close all MCP server sessions when finished.
56
+
57
+ The returned tools can be used with LangChain, e.g.:
58
+
59
+ ```python
60
+ # from langchain.chat_models import init_chat_model
61
+ llm = init_chat_model(
62
+ model='claude-3-5-haiku-latest',
63
+ model_provider='anthropic'
64
+ )
65
+
66
+ # from langgraph.prebuilt import create_react_agent
67
+ agent = create_react_agent(
68
+ llm,
69
+ tools
70
+ )
71
+ ```
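The tools/cleanup pair returned above can be consumed as a simple pattern: use the tools, then always invoke the cleanup callback. The following sketch illustrates that flow with a hypothetical `fake_convert()` stand-in (not the real `convert_mcp_to_langchain_tools()`), so it runs without any MCP servers installed:

```python
import asyncio
from typing import Awaitable, Callable

# Hypothetical stand-in for convert_mcp_to_langchain_tools(): it returns a
# (tools, cleanup) pair like the real utility, but fabricates the tools so
# no MCP servers are needed to run this sketch.
McpServerCleanupFn = Callable[[], Awaitable[None]]
closed_servers: list[str] = []

async def fake_convert(configs: dict) -> tuple[list[str], McpServerCleanupFn]:
    tools = [f'{name}-tool' for name in configs]  # one fake tool per server

    async def cleanup() -> None:
        closed_servers.extend(configs)  # record which sessions were closed

    return tools, cleanup

async def main() -> list[str]:
    tools, mcp_cleanup = await fake_convert({'filesystem': {}, 'fetch': {}})
    try:
        # ... build an agent with `tools` and run queries here ...
        return tools
    finally:
        await mcp_cleanup()  # always close the MCP server sessions

print(asyncio.run(main()))  # → ['filesystem-tool', 'fetch-tool']
```

The `try`/`finally` ensures the server sessions are closed even if agent invocation raises.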
72
+ <!-- A simple and experimentable usage example can be found
73
+ [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->
74
+
75
+ <!-- A more realistic usage example can be found
76
+ [here](https://github.com/hideya/langchain-mcp-client-ts) -->
77
+
78
+ A usage example can be found
79
+ [here](https://github.com/hideya/mcp-client-langchain-py).
80
+
81
+ ## Limitations
82
+
83
+ Currently, only text results of tool calls are supported.
@@ -0,0 +1,104 @@
1
+ Metadata-Version: 2.2
2
+ Name: langchain-mcp-tools
3
+ Version: 0.0.3
4
+ Summary: MCP To LangChain Tools Conversion Utility
5
+ Project-URL: Bug Tracker, https://github.com/hideya/langchain-mcp-tools-py/issues
6
+ Project-URL: Source Code, https://github.com/hideya/langchain-mcp-tools-py
7
+ Requires-Python: >=3.11
8
+ Description-Content-Type: text/markdown
9
+ License-File: LICENSE
10
+ Requires-Dist: jsonschema-pydantic>=0.6
11
+ Requires-Dist: langchain-core>=0.3.29
12
+ Requires-Dist: langchain>=0.3.14
13
+ Requires-Dist: langchain-anthropic>=0.3.1
14
+ Requires-Dist: langchain-groq>=0.2.3
15
+ Requires-Dist: langchain-openai>=0.3.0
16
+ Requires-Dist: langgraph>=0.2.62
17
+ Requires-Dist: mcp>=1.2.0
18
+ Requires-Dist: pyjson5>=1.6.8
19
+ Requires-Dist: pympler>=1.1
20
+ Requires-Dist: python-dotenv>=1.0.1
21
+
22
+ # MCP To LangChain Tools Conversion Utility / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE) [![pypi version](https://img.shields.io/pypi/v/langchain-mcp-tools.svg)](https://pypi.org/project/langchain-mcp-tools/)
23
+
24
+ This package is intended to simplify the use of
25
+ [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
26
+ server tools with LangChain / Python.
27
+
28
+ It contains a utility function `convert_mcp_to_langchain_tools()`.
29
+ This function handles parallel initialization of the specified MCP servers
30
+ and converts their available tools into a list of
31
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/).
32
+
33
+ A TypeScript version of this utility is available
34
+ [here](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools).
35
+
36
+ ## Requirements
37
+
38
+ - Python 3.11+
39
+
40
+ ## Installation
41
+
42
+ ```bash
43
+ pip install langchain-mcp-tools
44
+ ```
45
+
46
+ ## Quick Start
47
+
48
+ The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
49
+ that follow the same structure as
50
+ [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
51
+ but contain only the contents of the `mcpServers` property,
52
+ expressed as a `dict`, e.g.:
53
+
54
+ ```python
55
+ mcp_configs = {
56
+ 'filesystem': {
57
+ 'command': 'npx',
58
+ 'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
59
+ },
60
+ 'fetch': {
61
+ 'command': 'uvx',
62
+ 'args': ['mcp-server-fetch']
63
+ }
64
+ }
65
+
66
+ tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
67
+ mcp_configs
68
+ )
69
+ ```
70
+
71
+ The utility function initializes all specified MCP servers in parallel,
72
+ gathers their available tools,
73
+ and wraps them into a list of
74
+ [LangChain-compatible tools](https://python.langchain.com/docs/how_to/tool_calling/) (`List[BaseTool]`).
75
+ It also returns a cleanup callback function (`McpServerCleanupFn`)
76
+ to be invoked to close all MCP server sessions when finished.
77
+
78
+ The returned tools can be used with LangChain, e.g.:
79
+
80
+ ```python
81
+ # from langchain.chat_models import init_chat_model
82
+ llm = init_chat_model(
83
+ model='claude-3-5-haiku-latest',
84
+ model_provider='anthropic'
85
+ )
86
+
87
+ # from langgraph.prebuilt import create_react_agent
88
+ agent = create_react_agent(
89
+ llm,
90
+ tools
91
+ )
92
+ ```
93
+ <!-- A simple and experimentable usage example can be found
94
+ [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/main/src/index.ts) -->
95
+
96
+ <!-- A more realistic usage example can be found
97
+ [here](https://github.com/hideya/langchain-mcp-client-ts) -->
98
+
99
+ A usage example can be found
100
+ [here](https://github.com/hideya/mcp-client-langchain-py).
101
+
102
+ ## Limitations
103
+
104
+ Currently, only text results of tool calls are supported.
@@ -0,0 +1,10 @@
1
+ LICENSE
2
+ README.md
3
+ pyproject.toml
4
+ langchain_mcp_tools.egg-info/PKG-INFO
5
+ langchain_mcp_tools.egg-info/SOURCES.txt
6
+ langchain_mcp_tools.egg-info/dependency_links.txt
7
+ langchain_mcp_tools.egg-info/requires.txt
8
+ langchain_mcp_tools.egg-info/top_level.txt
9
+ src/langchain_mcp_tools.py
10
+ tests/test_langchain_mcp_tools.py
@@ -1,4 +1,5 @@
1
1
  jsonschema-pydantic>=0.6
2
+ langchain-core>=0.3.29
2
3
  langchain>=0.3.14
3
4
  langchain-anthropic>=0.3.1
4
5
  langchain-groq>=0.2.3
@@ -1,11 +1,12 @@
1
1
  [project]
2
2
  name = "langchain-mcp-tools"
3
- version = "0.0.1"
4
- description = "MCP Client Using LangChain / Python"
3
+ version = "0.0.3"
4
+ description = "MCP To LangChain Tools Conversion Utility"
5
5
  readme = "README.md"
6
6
  requires-python = ">=3.11"
7
7
  dependencies = [
8
8
  "jsonschema-pydantic>=0.6",
9
+ "langchain-core>=0.3.29",
9
10
  "langchain>=0.3.14",
10
11
  "langchain-anthropic>=0.3.1",
11
12
  "langchain-groq>=0.2.3",
@@ -23,3 +24,14 @@ dev = [
23
24
  "pytest-asyncio>=0.25.2",
24
25
  "twine>=6.0.1",
25
26
  ]
27
+
28
+ [tool.setuptools.packages.find]
29
+ exclude = [
30
+ "tests*",
31
+ "docs*",
32
+ "examples*",
33
+ ]
34
+
35
+ [project.urls]
36
+ "Bug Tracker" = "https://github.com/hideya/langchain-mcp-tools-py/issues"
37
+ "Source Code" = "https://github.com/hideya/langchain-mcp-tools-py"
@@ -1,93 +0,0 @@
1
- Metadata-Version: 2.2
2
- Name: langchain-mcp-tools
3
- Version: 0.0.1
4
- Summary: MCP Client Using LangChain / Python
5
- Requires-Python: >=3.11
6
- Description-Content-Type: text/markdown
7
- License-File: LICENSE
8
- Requires-Dist: jsonschema-pydantic>=0.6
9
- Requires-Dist: langchain>=0.3.14
10
- Requires-Dist: langchain-anthropic>=0.3.1
11
- Requires-Dist: langchain-groq>=0.2.3
12
- Requires-Dist: langchain-openai>=0.3.0
13
- Requires-Dist: langgraph>=0.2.62
14
- Requires-Dist: mcp>=1.2.0
15
- Requires-Dist: pyjson5>=1.6.8
16
- Requires-Dist: pympler>=1.1
17
- Requires-Dist: python-dotenv>=1.0.1
18
-
19
- # MCP Client Using LangChain / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE)
20
-
21
- This simple [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
22
- client demonstrates MCP server invocations by LangChain ReAct Agent.
23
-
24
- It leverages a utility function `convert_mcp_to_langchain_tools()` from
25
- `langchain_mcp_tools`.
26
- This function handles parallel initialization of specified multiple MCP servers
27
- and converts their available tools into a list of
28
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
29
-
30
- LLMs from Anthropic, OpenAI and Groq are currently supported.
31
-
32
- A typescript version of this MCP client is available
33
- [here](https://github.com/hideya/mcp-client-langchain-ts)
34
-
35
- ## Requirements
36
-
37
- - Python 3.11+
38
- - [`uv`](https://docs.astral.sh/uv/) installation
39
- - API keys from [Anthropic](https://console.anthropic.com/settings/keys),
40
- [OpenAI](https://platform.openai.com/api-keys), and/or
41
- [Groq](https://console.groq.com/keys)
42
- as needed
43
-
44
- ## Setup
45
- 1. Install dependencies:
46
- ```bash
47
- make install
48
- ```
49
-
50
- 2. Setup API keys:
51
- ```bash
52
- cp .env.template .env
53
- ```
54
- - Update `.env` as needed.
55
- - `.gitignore` is configured to ignore `.env`
56
- to prevent accidental commits of the credentials.
57
-
58
- 3. Configure LLM and MCP Servers settings `llm_mcp_config.json5` as needed.
59
-
60
- - [The configuration file format](https://github.com/hideya/mcp-client-langchain-ts/blob/main/llm_mcp_config.json5)
61
- for MCP servers follows the same structure as
62
- [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
63
- with one difference: the key name `mcpServers` has been changed
64
- to `mcp_servers` to follow the snake_case convention
65
- commonly used in JSON configuration files.
66
- - The file format is [JSON5](https://json5.org/),
67
- where comments and trailing commas are allowed.
68
- - The format is further extended to replace `${...}` notations
69
- with the values of corresponding environment variables.
70
- - Keep all the credentials and private info in the `.env` file
71
- and refer to them with `${...}` notation as needed.
72
-
73
-
74
- ## Usage
75
-
76
- Run the app:
77
- ```bash
78
- make start
79
- ```
80
-
81
- Run in verbose mode:
82
- ```bash
83
- make start-v
84
- ```
85
-
86
- See commandline options:
87
- ```bash
88
- make start-h
89
- ```
90
-
91
- At the prompt, you can simply press Enter to use example queries that perform MCP server tool invocations.
92
-
93
- Example queries can be configured in `llm_mcp_config.json5`
@@ -1,75 +0,0 @@
1
- # MCP Client Using LangChain / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE)
2
-
3
- This simple [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
4
- client demonstrates MCP server invocations by LangChain ReAct Agent.
5
-
6
- It leverages a utility function `convert_mcp_to_langchain_tools()` from
7
- `langchain_mcp_tools`.
8
- This function handles parallel initialization of specified multiple MCP servers
9
- and converts their available tools into a list of
10
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
11
-
12
- LLMs from Anthropic, OpenAI and Groq are currently supported.
13
-
14
- A typescript version of this MCP client is available
15
- [here](https://github.com/hideya/mcp-client-langchain-ts)
16
-
17
- ## Requirements
18
-
19
- - Python 3.11+
20
- - [`uv`](https://docs.astral.sh/uv/) installation
21
- - API keys from [Anthropic](https://console.anthropic.com/settings/keys),
22
- [OpenAI](https://platform.openai.com/api-keys), and/or
23
- [Groq](https://console.groq.com/keys)
24
- as needed
25
-
26
- ## Setup
27
- 1. Install dependencies:
28
- ```bash
29
- make install
30
- ```
31
-
32
- 2. Setup API keys:
33
- ```bash
34
- cp .env.template .env
35
- ```
36
- - Update `.env` as needed.
37
- - `.gitignore` is configured to ignore `.env`
38
- to prevent accidental commits of the credentials.
39
-
40
- 3. Configure LLM and MCP Servers settings `llm_mcp_config.json5` as needed.
41
-
42
- - [The configuration file format](https://github.com/hideya/mcp-client-langchain-ts/blob/main/llm_mcp_config.json5)
43
- for MCP servers follows the same structure as
44
- [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
45
- with one difference: the key name `mcpServers` has been changed
46
- to `mcp_servers` to follow the snake_case convention
47
- commonly used in JSON configuration files.
48
- - The file format is [JSON5](https://json5.org/),
49
- where comments and trailing commas are allowed.
50
- - The format is further extended to replace `${...}` notations
51
- with the values of corresponding environment variables.
52
- - Keep all the credentials and private info in the `.env` file
53
- and refer to them with `${...}` notation as needed.
54
-
55
-
56
- ## Usage
57
-
58
- Run the app:
59
- ```bash
60
- make start
61
- ```
62
-
63
- Run in verbose mode:
64
- ```bash
65
- make start-v
66
- ```
67
-
68
- See commandline options:
69
- ```bash
70
- make start-h
71
- ```
72
-
73
- At the prompt, you can simply press Enter to use example queries that perform MCP server tool invocations.
74
-
75
- Example queries can be configured in `llm_mcp_config.json5`
@@ -1,274 +0,0 @@
1
- # Standard library imports
2
- import argparse
3
- import asyncio
4
- from enum import Enum
5
- import json
6
- import logging
7
- import sys
8
- from pathlib import Path
9
- from typing import (
10
- List,
11
- Optional,
12
- Dict,
13
- Any,
14
- cast,
15
- )
16
-
17
- # Third-party imports
18
- try:
19
- from dotenv import load_dotenv
20
- from langchain.chat_models import init_chat_model
21
- from langchain.schema import (
22
- AIMessage,
23
- BaseMessage,
24
- HumanMessage,
25
- SystemMessage,
26
- )
27
- from langchain_core.runnables.base import Runnable
28
- from langchain_core.messages.tool import ToolMessage
29
- from langgraph.prebuilt import create_react_agent
30
- except ImportError as e:
31
- print(f'\nError: Required package not found: {e}')
32
- print('Please ensure all required packages are installed\n')
33
- sys.exit(1)
34
-
35
- # Local application imports
36
- from config_loader import load_config
37
- from langchain_mcp_tools import (
38
- convert_mcp_to_langchain_tools,
39
- McpServerCleanupFn,
40
- )
41
-
42
- # Type definitions
43
- ConfigType = Dict[str, Any]
44
-
45
-
46
- # ANSI color escape codes
47
- class Colors(str, Enum):
48
- YELLOW = '\033[33m' # color to yellow
49
- CYAN = '\033[36m' # color to cyan
50
- RESET = '\033[0m' # reset color
51
-
52
- def __str__(self):
53
- return self.value
54
-
55
-
56
- def parse_arguments() -> argparse.Namespace:
57
- """Parse and return command line args for config path and verbosity."""
58
- parser = argparse.ArgumentParser(
59
- description='CLI Chat Application',
60
- formatter_class=argparse.ArgumentDefaultsHelpFormatter
61
- )
62
- parser.add_argument(
63
- '-c', '--config',
64
- default='llm_mcp_config.json5',
65
- help='path to config file',
66
- type=Path,
67
- metavar='PATH'
68
- )
69
- parser.add_argument(
70
- '-v', '--verbose',
71
- action='store_true',
72
- help='run with verbose logging'
73
- )
74
- return parser.parse_args()
75
-
76
-
77
- def init_logger(verbose: bool) -> logging.Logger:
78
- """Initialize and return a logger with appropriate verbosity level."""
79
- logging.basicConfig(
80
- level=logging.DEBUG if verbose else logging.INFO,
81
- format='\x1b[90m[%(levelname)s]\x1b[0m %(message)s'
82
- )
83
- return logging.getLogger()
84
-
85
-
86
- def print_colored(text: str, color: Colors, end: str = "\n") -> None:
87
- """Print text in specified color and reset afterwards."""
88
- print(f"{color}{text}{Colors.RESET}", end=end)
89
-
90
-
91
- def set_color(color: Colors) -> None:
92
- """Set terminal color."""
93
- print(color, end='')
94
-
95
-
96
- def clear_line() -> None:
97
- """Move up one line and clear it."""
98
- print('\x1b[1A\x1b[2K', end='')
99
-
100
-
101
- async def get_user_query(remaining_queries: List[str]) -> Optional[str]:
102
- """Get user input or next example query, handling empty inputs
103
- and quit commands."""
104
- set_color(Colors.YELLOW)
105
- query = input('Query: ').strip()
106
-
107
- if len(query) == 0:
108
- if len(remaining_queries) > 0:
109
- query = remaining_queries.pop(0)
110
- clear_line()
111
- print_colored(f'Example Query: {query}', Colors.YELLOW)
112
- else:
113
- set_color(Colors.RESET)
114
- print('\nPlease type a query, or "quit" or "q" to exit\n')
115
- return await get_user_query(remaining_queries)
116
-
117
- print(Colors.RESET) # Reset after input
118
-
119
- if query.lower() in ['quit', 'q']:
120
- print_colored('Goodbye!\n', Colors.CYAN)
121
- return None
122
-
123
- return query
124
-
125
-
126
- async def handle_conversation(
127
- agent: Runnable,
128
- messages: List[BaseMessage],
129
- example_queries: List[str],
130
- verbose: bool
131
- ) -> None:
132
- """Manage an interactive conversation loop between the user and AI agent.
133
-
134
- Args:
135
- agent (Runnable): The initialized ReAct agent that processes queries
136
- messages (List[BaseMessage]): List to maintain conversation history
137
- example_queries (List[str]): list of example queries that can be used
138
- when user presses Enter
139
- verbose (bool): Flag to control detailed output of tool responses
140
-
141
- Exception handling:
142
- - TypeError: Ensures response is in correct string format
143
- - General exceptions: Allows conversation to continue after errors
144
-
145
- The conversation continues until user types 'quit' or 'q'.
146
- """
147
- print('\nConversation started. '
148
- 'Type "quit" or "q" to end the conversation.\n')
149
- if len(example_queries) > 0:
150
- print('Example Queries (just type Enter to supply them one by one):')
151
- for ex_q in example_queries:
152
- print(f"- {ex_q}")
153
- print()
154
-
155
- while True:
156
- try:
157
- query = await get_user_query(example_queries)
158
- if not query:
159
- break
160
-
161
- messages.append(HumanMessage(content=query))
162
-
163
- result = await agent.ainvoke({
164
- 'messages': messages
165
- })
166
-
167
- result_messages = cast(List[BaseMessage], result['messages'])
168
- # the last message should be an AIMessage
169
- response = result_messages[-1].content
170
- if not isinstance(response, str):
171
- raise TypeError(
172
- f"Expected string response, got {type(response)}"
173
- )
174
-
175
- # check if msg one before is a ToolMessage
176
- message_one_before = result_messages[-2]
177
- if isinstance(message_one_before, ToolMessage):
178
- if verbose:
179
- # show tools call response
180
- print(message_one_before.content)
181
- # new line after tool call output
182
- print()
183
- print_colored(f"{response}\n", Colors.CYAN)
184
- messages.append(AIMessage(content=response))
185
-
186
- except Exception as e:
187
- print(f'Error getting response: {str(e)}')
188
- print('You can continue chatting or type "quit" to exit.')
189
-
190
-
191
- async def init_react_agent(
192
- config: ConfigType,
193
- logger: logging.Logger
194
- ) -> tuple[Runnable, List[BaseMessage], McpServerCleanupFn]:
195
- """Initialize and configure a ReAct agent for conversation handling.
196
-
197
- Args:
198
- config (ConfigType): Configuration dictionary containing LLM and
199
- MCP server settings
200
- logger (logging.Logger): Logger instance for initialization
201
- status updates
202
-
203
- Returns:
204
- tuple[Runnable, List[BaseMessage], McpServerCleanupFn]:
205
- Returns a tuple containing:
206
- - Configured ReAct agent ready for conversation
207
- - Initial message list (empty or with system prompt)
208
- - Cleanup function for MCP server connections
209
- """
210
- llm_config = config['llm']
211
- logger.info(f'Initializing model... {json.dumps(llm_config, indent=2)}\n')
212
-
213
- llm = init_chat_model(
214
- model=llm_config['model'],
215
- model_provider=llm_config['model_provider'],
216
- temperature=llm_config['temperature'],
217
- max_tokens=llm_config['max_tokens'],
218
- )
219
-
220
- mcp_configs = config['mcp_servers']
221
- logger.info(f'Initializing {len(mcp_configs)} MCP server(s)...\n')
222
- tools, mcp_cleanup = await convert_mcp_to_langchain_tools(
223
- mcp_configs,
224
- logger
225
- )
226
-
227
- agent = create_react_agent(
228
- llm,
229
- tools
230
- )
231
-
232
- messages: List[BaseMessage] = []
233
- system_prompt = llm_config.get('system_prompt')
234
- if system_prompt and isinstance(system_prompt, str):
235
- messages.append(SystemMessage(content=system_prompt))
236
-
237
- return agent, messages, mcp_cleanup
238
-
239
-
240
- async def run() -> None:
241
- """Main async function to set up and run the simple chat app."""
242
- mcp_cleanup: Optional[McpServerCleanupFn] = None
243
- try:
244
- load_dotenv()
245
- args = parse_arguments()
246
- logger = init_logger(args.verbose)
247
- config = load_config(args.config)
248
- example_queries = (
249
- config.get('example_queries')[:]
250
- if config.get('example_queries') is not None
251
- else []
252
- )
253
-
254
- agent, messages, mcp_cleanup = await init_react_agent(config, logger)
255
-
256
- await handle_conversation(
257
- agent,
258
- messages,
259
- example_queries,
260
- args.verbose
261
- )
262
-
263
- finally:
264
- if mcp_cleanup is not None:
265
- await mcp_cleanup()
266
-
267
-
268
- def main() -> None:
269
- """Entry point of the script."""
270
- asyncio.run(run())
271
-
272
-
273
- if __name__ == '__main__':
274
- main()
@@ -1,39 +0,0 @@
1
- import pyjson5 as json5
2
- from pathlib import Path
3
- from typing import TypedDict, Optional, Any
4
-
5
-
6
- class LLMConfig(TypedDict):
7
- """Type definition for LLM configuration."""
8
- model_provider: str
9
- model: Optional[str]
10
- temperature: Optional[float]
11
- system_prompt: Optional[str]
12
-
13
-
14
- class ConfigError(Exception):
15
- """Base exception for configuration related errors."""
16
- pass
17
-
18
-
19
- class ConfigFileNotFoundError(ConfigError):
20
- """Raised when the configuration file cannot be found."""
21
- pass
22
-
23
-
24
- class ConfigValidationError(ConfigError):
25
- """Raised when the configuration fails validation."""
26
- pass
27
-
28
-
29
- def load_config(config_path: str):
30
- """Load and validate configuration from JSON5 file.
31
- """
32
- config_file = Path(config_path)
33
- if not config_file.exists():
34
- raise ConfigFileNotFoundError(f"Config file {config_path} not found")
35
-
36
- with open(config_file, 'r', encoding='utf-8') as f:
37
- config: dict[str, Any] = json5.load(f)
38
-
39
- return config
@@ -1,93 +0,0 @@
1
- Metadata-Version: 2.2
2
- Name: langchain-mcp-tools
3
- Version: 0.0.1
4
- Summary: MCP Client Using LangChain / Python
5
- Requires-Python: >=3.11
6
- Description-Content-Type: text/markdown
7
- License-File: LICENSE
8
- Requires-Dist: jsonschema-pydantic>=0.6
9
- Requires-Dist: langchain>=0.3.14
10
- Requires-Dist: langchain-anthropic>=0.3.1
11
- Requires-Dist: langchain-groq>=0.2.3
12
- Requires-Dist: langchain-openai>=0.3.0
13
- Requires-Dist: langgraph>=0.2.62
14
- Requires-Dist: mcp>=1.2.0
15
- Requires-Dist: pyjson5>=1.6.8
16
- Requires-Dist: pympler>=1.1
17
- Requires-Dist: python-dotenv>=1.0.1
18
-
19
- # MCP Client Using LangChain / Python [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/hideya/mcp-langchain-client-ts/blob/main/LICENSE)
20
-
21
- This simple [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
22
- client demonstrates MCP server invocations by LangChain ReAct Agent.
23
-
24
- It leverages a utility function `convert_mcp_to_langchain_tools()` from
25
- `langchain_mcp_tools`.
26
- This function handles parallel initialization of specified multiple MCP servers
27
- and converts their available tools into a list of
28
- [LangChain-compatible tools](https://js.langchain.com/docs/how_to/tool_calling/).
29
-
30
- LLMs from Anthropic, OpenAI and Groq are currently supported.
31
-
32
- A typescript version of this MCP client is available
33
- [here](https://github.com/hideya/mcp-client-langchain-ts)
34
-
35
- ## Requirements
36
-
37
- - Python 3.11+
38
- - [`uv`](https://docs.astral.sh/uv/) installation
39
- - API keys from [Anthropic](https://console.anthropic.com/settings/keys),
40
- [OpenAI](https://platform.openai.com/api-keys), and/or
41
- [Groq](https://console.groq.com/keys)
42
- as needed
43
-
44
- ## Setup
45
- 1. Install dependencies:
46
- ```bash
47
- make install
48
- ```
49
-
50
- 2. Setup API keys:
51
- ```bash
52
- cp .env.template .env
53
- ```
54
- - Update `.env` as needed.
55
- - `.gitignore` is configured to ignore `.env`
56
- to prevent accidental commits of the credentials.
57
-
58
- 3. Configure LLM and MCP Servers settings `llm_mcp_config.json5` as needed.
59
-
60
- - [The configuration file format](https://github.com/hideya/mcp-client-langchain-ts/blob/main/llm_mcp_config.json5)
61
- for MCP servers follows the same structure as
62
- [Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
63
- with one difference: the key name `mcpServers` has been changed
64
- to `mcp_servers` to follow the snake_case convention
65
- commonly used in JSON configuration files.
66
- - The file format is [JSON5](https://json5.org/),
67
- where comments and trailing commas are allowed.
68
- - The format is further extended to replace `${...}` notations
69
- with the values of corresponding environment variables.
70
- - Keep all the credentials and private info in the `.env` file
71
- and refer to them with `${...}` notation as needed.
72
-
73
-
74
- ## Usage
75
-
76
- Run the app:
77
- ```bash
78
- make start
79
- ```
80
-
81
- Run in verbose mode:
82
- ```bash
83
- make start-v
84
- ```
85
-
86
- See commandline options:
87
- ```bash
88
- make start-h
89
- ```
90
-
91
- At the prompt, you can simply press Enter to use example queries that perform MCP server tool invocations.
92
-
93
- Example queries can be configured in `llm_mcp_config.json5`
@@ -1,12 +0,0 @@
1
- LICENSE
2
- README.md
3
- pyproject.toml
4
- src/cli_chat.py
5
- src/config_loader.py
6
- src/langchain_mcp_tools.py
7
- src/langchain_mcp_tools.egg-info/PKG-INFO
8
- src/langchain_mcp_tools.egg-info/SOURCES.txt
9
- src/langchain_mcp_tools.egg-info/dependency_links.txt
10
- src/langchain_mcp_tools.egg-info/requires.txt
11
- src/langchain_mcp_tools.egg-info/top_level.txt
12
- tests/test_langchain_mcp_tools.py
@@ -1,3 +0,0 @@
1
- cli_chat
2
- config_loader
3
- langchain_mcp_tools