mcp-cli 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
mcp_cli-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,113 @@
+ Metadata-Version: 2.1
+ Name: mcp-cli
+ Version: 0.1.0
+ Summary: A cli for the Model Context Provider
+ Keywords: llm,openai,claude,mcp,cli
+ Author-Email: Chris Hay <chrishayuk@googlemail.com>
+ License: MIT
+ Requires-Python: >=3.12
+ Requires-Dist: anyio>=4.6.2.post1
+ Requires-Dist: asyncio>=3.4.3
+ Requires-Dist: ollama>=0.4.2
+ Requires-Dist: openai>=1.55.3
+ Requires-Dist: python-dotenv>=1.0.1
+ Requires-Dist: requests>=2.32.3
+ Requires-Dist: rich>=13.9.4
+ Description-Content-Type: text/markdown
+ 
+ # Model Context Provider CLI
+ This repository contains a protocol-level CLI designed to interact with a Model Context Provider server. The client allows users to send commands, query data, and interact with various resources provided by the server.
+ 
+ ## Features
+ - Protocol-level communication with the Model Context Provider.
+ - Dynamic tool and resource exploration.
+ - Support for multiple providers and models:
+   - Providers: OpenAI, Ollama.
+   - Default models: `gpt-4o-mini` for OpenAI, `qwen2.5-coder` for Ollama.
+ 
+ ## Prerequisites
+ - Python 3.12 or higher.
+ - Required dependencies (see [Installation](#installation)).
+ - If using Ollama, you should have Ollama installed and running.
+ - If using OpenAI, you should have an API key set in your environment variables (`OPENAI_API_KEY=yourkey`).
+ 
+ ## Installation
+ 1. Clone the repository:
+ 
+ ```bash
+ git clone https://github.com/chrishayuk/mcp-cli
+ cd mcp-cli
+ ```
+ 
+ 2. Install UV:
+ 
+ ```bash
+ pip install uv
+ ```
+ 
+ 3. Resynchronize dependencies:
+ 
+ ```bash
+ uv sync --reinstall
+ ```
+ 
+ ## Usage
+ To start the client and interact with the SQLite server, run the following command:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ ### Command-line Arguments
+ - `--server`: Specifies the server configuration to use. Required.
+ - `--config-file`: (Optional) Path to the JSON configuration file. Defaults to `server_config.json`.
+ - `--provider`: (Optional) Specifies the provider to use (`openai` or `ollama`). Defaults to `openai`.
+ - `--model`: (Optional) Specifies the model to use. Defaults depend on the provider:
+   - `gpt-4o-mini` for OpenAI.
+   - `qwen2.5-coder` for Ollama.
+ 
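The names passed to `--server` refer to entries in the JSON configuration file. The schema is not shown in this package; a minimal `server_config.json` might look like the sketch below, where the `mcpServers` key, the `uvx` command, and the `test.db` path are assumptions based on common MCP client configurations:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "test.db"]
    }
  }
}
```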
+ ### Examples
+ Run the client with the default OpenAI provider and model:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ Run the client with a specific configuration and Ollama provider:
+ 
+ ```bash
+ uv run main.py --server sqlite --provider ollama --model llama3.2
+ ```
+ 
+ ## Interactive Mode
+ The client supports interactive mode, allowing you to execute commands dynamically. Type `help` for a list of available commands or `quit` to exit the program.
+ 
+ ## Supported Commands
+ - `ping`: Check if the server is responsive.
+ - `list-tools`: Display available tools.
+ - `list-resources`: Display available resources.
+ - `list-prompts`: Display available prompts.
+ - `call-tool`: Call a tool with JSON arguments.
+ - `chat`: Enter interactive chat mode.
+ - `clear`: Clear the terminal screen.
+ - `help`: Show a list of supported commands.
+ - `quit`/`exit`: Exit the client.
+ 
+ ### Chat Mode
+ To enter chat mode and interact with the server:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ In chat mode, you can use tools and query the server interactively. The provider and model used are specified during startup and displayed as follows:
+ 
+ ```
+ Entering chat mode using provider 'ollama' and model 'llama3.2'...
+ ```
+ 
+ #### Using OpenAI Provider
+ If you wish to use OpenAI models, set the `OPENAI_API_KEY` environment variable before running the client, either in a `.env` file or as an environment variable.
+ 
+ ## Contributing
+ Contributions are welcome! Please open an issue or submit a pull request with your proposed changes.
+ 
+ ## License
+ This project is licensed under the [MIT License](license.md).
@@ -0,0 +1,96 @@
+ # Model Context Provider CLI
+ This repository contains a protocol-level CLI designed to interact with a Model Context Provider server. The client allows users to send commands, query data, and interact with various resources provided by the server.
+ 
+ ## Features
+ - Protocol-level communication with the Model Context Provider.
+ - Dynamic tool and resource exploration.
+ - Support for multiple providers and models:
+   - Providers: OpenAI, Ollama.
+   - Default models: `gpt-4o-mini` for OpenAI, `qwen2.5-coder` for Ollama.
+ 
+ ## Prerequisites
+ - Python 3.12 or higher.
+ - Required dependencies (see [Installation](#installation)).
+ - If using Ollama, you should have Ollama installed and running.
+ - If using OpenAI, you should have an API key set in your environment variables (`OPENAI_API_KEY=yourkey`).
+ 
+ ## Installation
+ 1. Clone the repository:
+ 
+ ```bash
+ git clone https://github.com/chrishayuk/mcp-cli
+ cd mcp-cli
+ ```
+ 
+ 2. Install UV:
+ 
+ ```bash
+ pip install uv
+ ```
+ 
+ 3. Resynchronize dependencies:
+ 
+ ```bash
+ uv sync --reinstall
+ ```
+ 
+ ## Usage
+ To start the client and interact with the SQLite server, run the following command:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ ### Command-line Arguments
+ - `--server`: Specifies the server configuration to use. Required.
+ - `--config-file`: (Optional) Path to the JSON configuration file. Defaults to `server_config.json`.
+ - `--provider`: (Optional) Specifies the provider to use (`openai` or `ollama`). Defaults to `openai`.
+ - `--model`: (Optional) Specifies the model to use. Defaults depend on the provider:
+   - `gpt-4o-mini` for OpenAI.
+   - `qwen2.5-coder` for Ollama.
+ 
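The provider-dependent default can be restated as a small helper, matching the selection logic in the packaged `main.py` (the helper name `pick_model` is illustrative, not part of the package's API):

```python
def pick_model(model_flag, provider):
    """Return the explicit --model value, or the provider's default."""
    return model_flag or ("gpt-4o-mini" if provider == "openai" else "qwen2.5-coder")

print(pick_model(None, "openai"))        # gpt-4o-mini
print(pick_model(None, "ollama"))        # qwen2.5-coder
print(pick_model("llama3.2", "ollama"))  # llama3.2 (explicit flag wins)
```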
+ ### Examples
+ Run the client with the default OpenAI provider and model:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ Run the client with a specific configuration and Ollama provider:
+ 
+ ```bash
+ uv run main.py --server sqlite --provider ollama --model llama3.2
+ ```
+ 
+ ## Interactive Mode
+ The client supports interactive mode, allowing you to execute commands dynamically. Type `help` for a list of available commands or `quit` to exit the program.
+ 
+ ## Supported Commands
+ - `ping`: Check if the server is responsive.
+ - `list-tools`: Display available tools.
+ - `list-resources`: Display available resources.
+ - `list-prompts`: Display available prompts.
+ - `call-tool`: Call a tool with JSON arguments.
+ - `chat`: Enter interactive chat mode.
+ - `clear`: Clear the terminal screen.
+ - `help`: Show a list of supported commands.
+ - `quit`/`exit`: Exit the client.
+ 
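Internally, the interactive loop maps each command string to a handler and keeps running until `quit`/`exit`. A simplified sketch of that dispatch pattern (the handlers here are illustrative stand-ins, not the package's actual functions):

```python
def dispatch(command: str) -> bool:
    """Return False when the loop should stop, True otherwise."""
    handlers = {
        "ping": lambda: "pinging servers...",   # stand-in for send_ping
        "help": lambda: "showing help...",
        "clear": lambda: "clearing screen...",
    }
    if command in ("quit", "exit"):
        return False
    action = handlers.get(command)
    print(action() if action else f"Unknown command: {command}")
    return True

dispatch("ping")   # prints "pinging servers..."
dispatch("quit")   # returns False -> exit the loop
```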
+ ### Chat Mode
+ To enter chat mode and interact with the server:
+ 
+ ```bash
+ uv run main.py --server sqlite
+ ```
+ 
+ In chat mode, you can use tools and query the server interactively. The provider and model used are specified during startup and displayed as follows:
+ 
+ ```
+ Entering chat mode using provider 'ollama' and model 'llama3.2'...
+ ```
+ 
+ #### Using OpenAI Provider
+ If you wish to use OpenAI models, set the `OPENAI_API_KEY` environment variable before running the client, either in a `.env` file or as an environment variable.
+ 
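Since `python-dotenv` is a declared dependency, the key can come from either a `.env` file or the shell environment. A minimal, stdlib-only sketch of the lookup (the `sk-example` value is illustrative; in the real client a `load_dotenv()` call would populate the environment first):

```python
import os

# Illustrative fallback so the sketch runs standalone; do not hard-code real keys.
os.environ.setdefault("OPENAI_API_KEY", "sk-example")

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("OPENAI_API_KEY is not set")
print(f"Using key ending in ...{api_key[-4:]}")
```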
+ ## Contributing
+ Contributions are welcome! Please open an issue or submit a pull request with your proposed changes.
+ 
+ ## License
+ This project is licensed under the [MIT License](license.md).
@@ -0,0 +1,43 @@
+ [build-system]
+ requires = [
+     "pdm-backend",
+ ]
+ build-backend = "pdm.backend"
+ 
+ [project]
+ name = "mcp-cli"
+ version = "0.1.0"
+ description = "A cli for the Model Context Provider"
+ requires-python = ">=3.12"
+ readme = "README.md"
+ authors = [
+     { name = "Chris Hay", email = "chrishayuk@googlemail.com" },
+ ]
+ keywords = [
+     "llm",
+     "openai",
+     "claude",
+     "mcp",
+     "cli",
+ ]
+ dependencies = [
+     "anyio>=4.6.2.post1",
+     "asyncio>=3.4.3",
+     "ollama>=0.4.2",
+     "openai>=1.55.3",
+     "python-dotenv>=1.0.1",
+     "requests>=2.32.3",
+     "rich>=13.9.4",
+ ]
+ 
+ [project.license]
+ text = "MIT"
+ 
+ [project.scripts]
+ mcp-cli = "mcpcli.__main__:main"
+ 
+ [tool.ruff.lint]
+ unfixable = [
+     "I",
+     "F401",
+ ]
File without changes
@@ -0,0 +1,367 @@
+ import argparse
+ import asyncio
+ import json
+ import logging
+ import os
+ import signal
+ import sys
+ from typing import List
+ 
+ import anyio
+ 
+ # Rich imports
+ from rich import print
+ from rich.markdown import Markdown
+ from rich.panel import Panel
+ from rich.prompt import Prompt
+ 
+ from mcpcli.chat_handler import handle_chat_mode
+ from mcpcli.config import load_config
+ from mcpcli.messages.ping import send_ping
+ from mcpcli.messages.prompts import send_prompts_list
+ from mcpcli.messages.resources import send_resources_list
+ from mcpcli.messages.send_initialize_message import send_initialize
+ from mcpcli.messages.tools import send_call_tool, send_tools_list
+ from mcpcli.transport.stdio.stdio_client import stdio_client
+ 
+ # Default path for the configuration file
+ DEFAULT_CONFIG_FILE = "server_config.json"
+ 
+ # Configure logging
+ logging.basicConfig(
+     level=logging.CRITICAL,
+     # level=logging.DEBUG,
+     format="%(asctime)s - %(levelname)s - %(message)s",
+     stream=sys.stderr,
+ )
+ 
+ 
+ def signal_handler(sig, frame):
+     # Ignore subsequent SIGINT signals
+     signal.signal(signal.SIGINT, signal.SIG_IGN)
+ 
+     # pretty exit
+     print("\n[bold red]Goodbye![/bold red]")
+ 
+     # Immediately and forcibly kill the process
+     os.kill(os.getpid(), signal.SIGKILL)
+ 
+ 
+ # signal handler
+ signal.signal(signal.SIGINT, signal_handler)
+ 
+ 
+ async def handle_command(command: str, server_streams: List[tuple]) -> bool:
+     """Handle specific commands dynamically with multiple servers."""
+     try:
+         if command == "ping":
+             print("[cyan]\nPinging Servers...[/cyan]")
+             for i, (read_stream, write_stream) in enumerate(server_streams):
+                 result = await send_ping(read_stream, write_stream)
+                 server_num = i + 1
+                 if result:
+                     ping_md = f"## Server {server_num} Ping Result\n\n✅ **Server is up and running**"
+                     print(Panel(Markdown(ping_md), style="bold green"))
+                 else:
+                     ping_md = f"## Server {server_num} Ping Result\n\n❌ **Server ping failed**"
+                     print(Panel(Markdown(ping_md), style="bold red"))
+ 
+         elif command == "list-tools":
+             print("[cyan]\nFetching Tools List from all servers...[/cyan]")
+             for i, (read_stream, write_stream) in enumerate(server_streams):
+                 response = await send_tools_list(read_stream, write_stream)
+                 tools_list = response.get("tools", [])
+                 server_num = i + 1
+ 
+                 if not tools_list:
+                     tools_md = (
+                         f"## Server {server_num} Tools List\n\nNo tools available."
+                     )
+                 else:
+                     tools_md = f"## Server {server_num} Tools List\n\n" + "\n".join(
+                         [
+                             f"- **{t.get('name')}**: {t.get('description', 'No description')}"
+                             for t in tools_list
+                         ]
+                     )
+                 print(
+                     Panel(
+                         Markdown(tools_md),
+                         title=f"Server {server_num} Tools",
+                         style="bold cyan",
+                     )
+                 )
+ 
+         elif command == "call-tool":
+             tool_name = Prompt.ask(
+                 "[bold magenta]Enter tool name[/bold magenta]"
+             ).strip()
+             if not tool_name:
+                 print("[red]Tool name cannot be empty.[/red]")
+                 return True
+ 
+             arguments_str = Prompt.ask(
+                 "[bold magenta]Enter tool arguments as JSON (e.g., {'key': 'value'})[/bold magenta]"
+             ).strip()
+             try:
+                 arguments = json.loads(arguments_str)
+             except json.JSONDecodeError as e:
+                 print(f"[red]Invalid JSON arguments format:[/red] {e}")
+                 return True
+ 
+             print(f"[cyan]\nCalling tool '{tool_name}' with arguments:\n[/cyan]")
+             print(
+                 Panel(
+                     Markdown(f"```json\n{json.dumps(arguments, indent=2)}\n```"),
+                     style="dim",
+                 )
+             )
+ 
+             result = await send_call_tool(tool_name, arguments, server_streams)
+             if result.get("isError"):
+                 print(f"[red]Error calling tool:[/red] {result.get('error')}")
+             else:
+                 response_content = result.get("content", "No content")
+                 print(
+                     Panel(
+                         Markdown(f"### Tool Response\n\n{response_content}"),
+                         style="green",
+                     )
+                 )
+ 
+         elif command == "list-resources":
+             print("[cyan]\nFetching Resources List from all servers...[/cyan]")
+             for i, (read_stream, write_stream) in enumerate(server_streams):
+                 response = await send_resources_list(read_stream, write_stream)
+                 resources_list = response.get("resources", [])
+                 server_num = i + 1
+ 
+                 if not resources_list:
+                     resources_md = f"## Server {server_num} Resources List\n\nNo resources available."
+                 else:
+                     resources_md = f"## Server {server_num} Resources List\n"
+                     for r in resources_list:
+                         if isinstance(r, dict):
+                             json_str = json.dumps(r, indent=2)
+                             resources_md += f"\n```json\n{json_str}\n```"
+                         else:
+                             resources_md += f"\n- {r}"
+                 print(
+                     Panel(
+                         Markdown(resources_md),
+                         title=f"Server {server_num} Resources",
+                         style="bold cyan",
+                     )
+                 )
+ 
+         elif command == "list-prompts":
+             print("[cyan]\nFetching Prompts List from all servers...[/cyan]")
+             for i, (read_stream, write_stream) in enumerate(server_streams):
+                 response = await send_prompts_list(read_stream, write_stream)
+                 prompts_list = response.get("prompts", [])
+                 server_num = i + 1
+ 
+                 if not prompts_list:
+                     prompts_md = (
+                         f"## Server {server_num} Prompts List\n\nNo prompts available."
+                     )
+                 else:
+                     prompts_md = f"## Server {server_num} Prompts List\n\n" + "\n".join(
+                         [f"- {p}" for p in prompts_list]
+                     )
+                 print(
+                     Panel(
+                         Markdown(prompts_md),
+                         title=f"Server {server_num} Prompts",
+                         style="bold cyan",
+                     )
+                 )
+ 
+         elif command == "chat":
+             provider = os.getenv("LLM_PROVIDER", "openai")
+             model = os.getenv("LLM_MODEL", "gpt-4o-mini")
+ 
+             # Clear the screen first
+             if sys.platform == "win32":
+                 os.system("cls")
+             else:
+                 os.system("clear")
+ 
+             chat_info_text = (
+                 "Welcome to the Chat!\n\n"
+                 f"**Provider:** {provider} | **Model:** {model}\n\n"
+                 "Type 'exit' to quit."
+             )
+ 
+             print(
+                 Panel(
+                     Markdown(chat_info_text),
+                     style="bold cyan",
+                     title="Chat Mode",
+                     title_align="center",
+                 )
+             )
+             await handle_chat_mode(server_streams, provider, model)
+ 
+         elif command in ["quit", "exit"]:
+             print("\n[bold red]Goodbye![/bold red]")
+             return False
+ 
+         elif command == "clear":
+             if sys.platform == "win32":
+                 os.system("cls")
+             else:
+                 os.system("clear")
+ 
+         elif command == "help":
+             help_md = """
+ # Available Commands
+ 
+ - **ping**: Check if server is responsive
+ - **list-tools**: Display available tools
+ - **list-resources**: Display available resources
+ - **list-prompts**: Display available prompts
+ - **chat**: Enter chat mode
+ - **clear**: Clear the screen
+ - **help**: Show this help message
+ - **quit/exit**: Exit the program
+ 
+ **Note:** Commands use dashes (e.g., `list-tools` not `list tools`).
+ """
+             print(Panel(Markdown(help_md), style="yellow"))
+ 
+         else:
+             print(f"[red]\nUnknown command: {command}[/red]")
+             print("[yellow]Type 'help' for available commands[/yellow]")
+     except Exception as e:
+         print(f"\n[red]Error executing command:[/red] {e}")
+ 
+     return True
+ 
+ 
+ async def get_input():
+     """Get input asynchronously."""
+     loop = asyncio.get_event_loop()
+     return await loop.run_in_executor(None, lambda: input().strip().lower())
+ 
+ 
+ async def interactive_mode(server_streams: List[tuple]):
+     """Run the CLI in interactive mode with multiple servers."""
+     welcome_text = """
+ # Welcome to the Interactive MCP Command-Line Tool (Multi-Server Mode)
+ 
+ Type 'help' for available commands or 'quit' to exit.
+ """
+     print(Panel(Markdown(welcome_text), style="bold cyan"))
+ 
+     while True:
+         try:
+             command = Prompt.ask("[bold green]\n>[/bold green]").strip().lower()
+             if not command:
+                 continue
+             should_continue = await handle_command(command, server_streams)
+             if not should_continue:
+                 return
+         except EOFError:
+             break
+         except Exception as e:
+             print(f"\n[red]Error:[/red] {e}")
+ 
+ 
+ class GracefulExit(Exception):
+     """Custom exception for handling graceful exits."""
+ 
+     pass
+ 
+ 
+ async def main(config_path: str, server_names: List[str], command: str = None) -> None:
+     """Main function to manage server initialization, communication, and shutdown."""
+     # Clear screen before rendering anything
+     if sys.platform == "win32":
+         os.system("cls")
+     else:
+         os.system("clear")
+ 
+     # Load server configurations and establish connections for all servers
+     server_streams = []
+     context_managers = []
+     for server_name in server_names:
+         server_params = await load_config(config_path, server_name)
+ 
+         # Establish stdio communication for each server
+         cm = stdio_client(server_params)
+         (read_stream, write_stream) = await cm.__aenter__()
+         context_managers.append(cm)
+         server_streams.append((read_stream, write_stream))
+ 
+         init_result = await send_initialize(read_stream, write_stream)
+         if not init_result:
+             print(f"[red]Server initialization failed for {server_name}[/red]")
+             return
+ 
+     try:
+         if command:
+             # Single command mode
+             await handle_command(command, server_streams)
+         else:
+             # Interactive mode
+             await interactive_mode(server_streams)
+     finally:
+         # Clean up all streams
+         for cm in context_managers:
+             with anyio.move_on_after(1):  # wait up to 1 second
+                 # __aexit__ takes the three exception arguments; pass None on a clean exit
+                 await cm.__aexit__(None, None, None)
+ 
+ 
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description="MCP Command-Line Tool")
+ 
+     parser.add_argument(
+         "--config-file",
+         default=DEFAULT_CONFIG_FILE,
+         help="Path to the JSON configuration file containing server details.",
+     )
+ 
+     parser.add_argument(
+         "--server",
+         action="append",
+         dest="servers",
+         help="Server configuration(s) to use. Can be specified multiple times.",
+         default=[],
+     )
+ 
+     parser.add_argument(
+         "command",
+         nargs="?",
+         choices=["ping", "list-tools", "list-resources", "list-prompts"],
+         help="Command to execute (optional - if not provided, enters interactive mode).",
+     )
+ 
+     parser.add_argument(
+         "--provider",
+         choices=["openai", "ollama"],
+         default="openai",
+         help="LLM provider to use. Defaults to 'openai'.",
+     )
+ 
+     parser.add_argument(
+         "--model",
+         help=(
+             "Model to use. Defaults to 'gpt-4o-mini' for 'openai' and 'qwen2.5-coder' for 'ollama'."
+         ),
+     )
+ 
+     args = parser.parse_args()
+ 
+     model = args.model or (
+         "gpt-4o-mini" if args.provider == "openai" else "qwen2.5-coder"
+     )
+     os.environ["LLM_PROVIDER"] = args.provider
+     os.environ["LLM_MODEL"] = model
+ 
+     try:
+         result = anyio.run(main, args.config_file, args.servers, args.command)
+         sys.exit(result)
+     except Exception as e:
+         print(f"[red]Error occurred:[/red] {e}")
+         sys.exit(1)