casual-mcp 0.4.0__py3-none-any.whl → 0.6.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,399 +0,0 @@
- Metadata-Version: 2.4
- Name: casual-mcp
- Version: 0.4.0
- Summary: Multi-server MCP client for LLM tool orchestration
- Author: Alex Stansfield
- License: MIT
- Project-URL: Homepage, https://github.com/AlexStansfield/casual-mcp
- Project-URL: Repository, https://github.com/AlexStansfield/casual-mcp
- Project-URL: Issue Tracker, https://github.com/AlexStansfield/casual-mcp/issues
- Requires-Python: >=3.10
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: dateparser>=1.2.1
- Requires-Dist: fastapi>=0.115.12
- Requires-Dist: fastmcp>=2.12.4
- Requires-Dist: jinja2>=3.1.6
- Requires-Dist: ollama>=0.4.8
- Requires-Dist: openai>=1.78.0
- Requires-Dist: python-dotenv>=1.1.0
- Requires-Dist: requests>=2.32.3
- Requires-Dist: rich>=14.0.0
- Requires-Dist: typer>=0.19.2
- Requires-Dist: uvicorn>=0.34.2
- Provides-Extra: dev
- Requires-Dist: ruff; extra == "dev"
- Requires-Dist: black; extra == "dev"
- Requires-Dist: mypy; extra == "dev"
- Requires-Dist: pytest; extra == "dev"
- Requires-Dist: coverage; extra == "dev"
- Dynamic: license-file
-
- # 🧠 Casual MCP
-
- ![PyPI](https://img.shields.io/pypi/v/casual-mcp)
- ![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)
-
- **Casual MCP** is a Python framework for building, evaluating, and serving LLMs with tool-calling capabilities using [Model Context Protocol (MCP)](https://modelcontextprotocol.io).
- It includes:
-
- - ✅ A multi-server MCP client using [FastMCP](https://github.com/jlowin/fastmcp)
- - ✅ Provider support for OpenAI (and OpenAI compatible APIs)
- - ✅ A recursive tool-calling chat loop
- - ✅ System prompt templating with Jinja2
- - ✅ A basic API exposing a chat endpoint
-
- ## ✨ Features
-
- - Plug-and-play multi-server tool orchestration
- - Prompt templating with Jinja2
- - Configurable via JSON
- - CLI and API access
- - Extensible architecture
-
- ## 🔧 Installation
-
- ```bash
- pip install casual-mcp
- ```
-
- Or for development:
-
- ```bash
- git clone https://github.com/AlexStansfield/casual-mcp.git
- cd casual-mcp
- uv pip install -e .[dev]
- ```
-
- ## 🧩 Providers
-
- Providers allow access to LLMs. Currently, only an OpenAI provider is supplied. However, in the model configuration, you can supply an optional `endpoint` allowing you to use any OpenAI-compatible API (e.g., LM Studio).
-
- Ollama support is planned for a future version, along with support for custom pluggable providers via a standard interface.
-
- ## 🧩 System Prompt Templates
-
- System prompts are defined as [Jinja2](https://jinja.palletsprojects.com) templates in the `prompt-templates/` directory.
78
- They are used in the config file to specify a system prompt to use per model.
79
-
80
- This allows you to define custom prompts for each model — useful when using models that do not natively support tools. Templates are passed the tool list in the `tools` variable.
81
-
82
- ```jinja2
83
- # prompt-templates/example_prompt.j2
84
- Here is a list of functions in JSON format that you can invoke:
85
- [
86
- {% for tool in tools %}
87
- {
88
- "name": "{{ tool.name }}",
89
- "description": "{{ tool.description }}",
90
- "parameters": {
91
- {% for param_name, param in tool.inputSchema.items() %}
92
- "{{ param_name }}": {
93
- "description": "{{ param.description }}",
94
- "type": "{{ param.type }}"{% if param.default is defined %},
95
- "default": "{{ param.default }}"{% endif %}
96
- }{% if not loop.last %},{% endif %}
97
- {% endfor %}
98
- }
99
- }{% if not loop.last %},{% endif %}
100
- {% endfor %}
101
- ]
102
- ```
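-
- For instance, given a hypothetical `get_time` tool whose `inputSchema` holds a single `timezone` parameter (this tool is illustrative, not part of the package), the template above would render to roughly:
-
- ```
- Here is a list of functions in JSON format that you can invoke:
- [
-   {
-     "name": "get_time",
-     "description": "Get the current time in a timezone",
-     "parameters": {
-       "timezone": {
-         "description": "An IANA timezone name",
-         "type": "string"
-       }
-     }
-   }
- ]
- ```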
-
- ## ⚙️ Configuration File (`casual_mcp_config.json`)
-
- 📄 See the [Programmatic Usage](#-programmatic-usage) section to build configs and messages with typed models.
-
- The CLI and API can be configured using a `casual_mcp_config.json` file that defines:
-
- - 🔧 Available **models** and their providers
- - 🧰 Available **MCP tool servers**
- - 🧩 Optional tool namespacing behavior
-
- ### 🔸 Example
-
- ```json
- {
-   "models": {
-     "lm-qwen-3": {
-       "provider": "openai",
-       "endpoint": "http://localhost:1234/v1",
-       "model": "qwen3-8b",
-       "template": "lm-studio-native-tools"
-     },
-     "gpt-4.1": {
-       "provider": "openai",
-       "model": "gpt-4.1"
-     }
-   },
-   "servers": {
-     "time": {
-       "command": "python",
-       "args": ["mcp-servers/time/server.py"]
-     },
-     "weather": {
-       "url": "http://localhost:5050/mcp"
-     }
-   }
- }
- ```
-
- ### 🔹 `models`
-
- Each model has:
-
- - `provider`: `"openai"` (more to come)
- - `model`: the model name (e.g., `gpt-4.1`, `qwen3-8b`)
- - `endpoint`: required for custom OpenAI-compatible backends (e.g., LM Studio)
- - `template`: optional name used to apply model-specific tool calling formatting
-
- ### 🔹 `servers`
-
- Servers can either be local (over stdio) or remote.
-
- #### Local Config:
- - `command`: the command to run the server, e.g. `python`, `npm`
- - `args`: the arguments to pass to the server as a list, e.g. `["time/server.py"]`
- - Optional: `env` to set environment variables for the subprocess, and `system_prompt` to override the server prompt
-
- #### Remote Config:
- - `url`: the URL of the MCP server
- - Optional: `transport`: the transport type, one of `http`, `sse`, `streamable-http`. Defaults to `http`
-
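- A hedged sketch of how the optional fields above fit together (the `env`, `system_prompt`, and `transport` values here are illustrative, not from this README):
-
- ```json
- {
-   "servers": {
-     "time": {
-       "command": "python",
-       "args": ["mcp-servers/time/server.py"],
-       "env": { "TZ": "UTC" },
-       "system_prompt": "You are a precise time assistant."
-     },
-     "words": {
-       "url": "http://localhost:3000/mcp",
-       "transport": "sse"
-     }
-   }
- }
- ```
-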
- ## Environment Variables
-
- There are two environment variables:
- - `OPEN_AI_API_KEY`: required when using the `openai` provider; if you are using a local model behind an OpenAI-compatible API it can be any string
- - `TOOL_RESULT_FORMAT`: adjusts the format of the tool result given back to the LLM. Options are `result`, `function_result`, and `function_args_result`. Defaults to `result`
-
- You can set them using `export` or by creating a `.env` file.
-
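- For example, in your shell (the key value is a placeholder):
-
- ```bash
- export OPEN_AI_API_KEY="sk-..."   # any string will do for a local OpenAI-compatible server
- export TOOL_RESULT_FORMAT="function_result"
- ```
-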
- ## 🛠 CLI Reference
-
- ### `casual-mcp serve`
- Start the API server.
-
- **Options:**
- - `--host`: Host to bind (default `0.0.0.0`)
- - `--port`: Port to serve on (default `8000`)
-
- ### `casual-mcp servers`
- Loads the config and outputs the list of MCP servers you have configured.
-
- #### Example Output
185
- ```
186
- $ casual-mcp servers
187
- ┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━┓
188
- ┃ Name ┃ Type ┃ Command / Url ┃ Env ┃
189
- ┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━┩
190
- │ math │ local │ mcp-servers/math/server.py │ │
191
- │ time │ local │ mcp-servers/time-v2/server.py │ │
192
- │ weather │ local │ mcp-servers/weather/server.py │ │
193
- │ words │ remote │ https://localhost:3000/mcp │ │
194
- └─────────┴────────┴───────────────────────────────┴─────┘
195
- ```
-
- ### `casual-mcp models`
- Loads the config and outputs the list of models you have configured.
-
- #### Example Output
- ```
- $ casual-mcp models
- ┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
- ┃ Name              ┃ Provider ┃ Model                     ┃ Endpoint               ┃
- ┡━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
- │ lm-phi-4-mini     │ openai   │ phi-4-mini-instruct       │ http://kovacs:1234/v1  │
- │ lm-hermes-3       │ openai   │ hermes-3-llama-3.2-3b     │ http://kovacs:1234/v1  │
- │ lm-groq           │ openai   │ llama-3-groq-8b-tool-use  │ http://kovacs:1234/v1  │
- │ gpt-4o-mini       │ openai   │ gpt-4o-mini               │                        │
- │ gpt-4.1-nano      │ openai   │ gpt-4.1-nano              │                        │
- │ gpt-4.1-mini      │ openai   │ gpt-4.1-mini              │                        │
- │ gpt-4.1           │ openai   │ gpt-4.1                   │                        │
- └───────────────────┴──────────┴───────────────────────────┴────────────────────────┘
- ```
-
- ## 🧠 Programmatic Usage
-
- You can import and use the core framework in your own Python code.
-
- ### ✅ Exposed Interfaces
-
- #### `McpToolChat`
- Orchestrates LLM interaction with tools using a recursive loop.
-
- ```python
- from casual_mcp import McpToolChat
- from casual_mcp.models import SystemMessage, UserMessage
-
- chat = McpToolChat(mcp_client, provider, system_prompt)
-
- # The generate method takes a user prompt as a string
- response = await chat.generate("What time is it in London?")
-
- # The generate method can also take a session ID
- response = await chat.generate("What time is it in London?", "my-session-id")
-
- # The chat method takes a list of chat messages
- # note: the constructor's system prompt is ignored if one is sent in the messages, so there is no need to set it
- chat = McpToolChat(mcp_client, provider)
- messages = [
-     SystemMessage(content="You are a cool dude who likes to help the user"),
-     UserMessage(content="What time is it in London?")
- ]
- response = await chat.chat(messages)
- ```
-
- #### `ProviderFactory`
- Instantiates LLM providers based on the selected model config.
-
- ```python
- from casual_mcp import ProviderFactory
-
- provider_factory = ProviderFactory(mcp_client)
- provider = await provider_factory.get_provider("lm-qwen-3", model_config)
- ```
-
- #### `load_config`
- Loads your `casual_mcp_config.json` into a validated config object.
-
- ```python
- from casual_mcp import load_config
-
- config = load_config("casual_mcp_config.json")
- ```
-
- #### `load_mcp_client`
- Creates a multi-server FastMCP client from the config object.
-
- ```python
- from casual_mcp import load_mcp_client
-
- mcp_client = load_mcp_client(config)
- ```
-
- #### Model and Server Configs
-
- Exported models:
- - StdioServerConfig
- - RemoteServerConfig
- - OpenAIModelConfig
-
- Use these types to build valid configs:
-
- ```python
- from casual_mcp.models import OpenAIModelConfig, StdioServerConfig
-
- model = OpenAIModelConfig(model="llama3", endpoint="http://...")
- server = StdioServerConfig(command="python", args=["time/server.py"])
- ```
-
- #### Chat Messages
-
- Exported models:
- - AssistantMessage
- - SystemMessage
- - ToolResultMessage
- - UserMessage
-
- Use these types to build message chains:
-
- ```python
- from casual_mcp.models import SystemMessage, UserMessage
-
- messages = [
-     SystemMessage(content="You are a friendly tool calling assistant."),
-     UserMessage(content="What is the time?")
- ]
- ```
-
- ### Example
-
- ```python
- from casual_mcp import McpToolChat, load_config, load_mcp_client, ProviderFactory
- from casual_mcp.models import SystemMessage, UserMessage
-
- model = "gpt-4.1-nano"
- messages = [
-     SystemMessage(content="""You are a tool calling assistant.
- You have access to up-to-date information through the tools.
- Respond naturally and confidently, as if you already know all the facts."""),
-     UserMessage(content="Will I need to take my umbrella to London today?")
- ]
-
- # Load the Config from the File
- config = load_config("casual_mcp_config.json")
-
- # Set up the MCP Client
- mcp_client = load_mcp_client(config)
-
- # Get the Provider for the Model
- provider_factory = ProviderFactory(mcp_client)
- provider = await provider_factory.get_provider(model, config.models[model])
-
- # Perform the Chat and Tool calling
- chat = McpToolChat(mcp_client, provider)
- response_messages = await chat.chat(messages)
- ```
-
- ## 🚀 API Usage
-
- ### Start the API Server
-
- ```bash
- casual-mcp serve --host 0.0.0.0 --port 8000
- ```
-
- ### Chat
-
- #### Endpoint: `POST /chat`
-
- #### Request Body:
- - `model`: the LLM model to use
- - `messages`: the list of chat messages (system, assistant, user, etc.) to send, letting the client calling the API keep its own chat history and pass it in on each request
-
- #### Example:
- ```json
- {
-   "model": "gpt-4.1-nano",
-   "messages": [
-     {
-       "role": "user",
-       "content": "can you explain what the word consistent means?"
-     }
-   ]
- }
- ```
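-
- As a minimal client sketch using `requests` (assuming the server is running locally on port 8000; the response shape is not documented here, so the raw JSON is printed as-is):
-
- ```python
- import requests
-
- # Send a chat request with a full message list
- response = requests.post(
-     "http://localhost:8000/chat",
-     json={
-         "model": "gpt-4.1-nano",
-         "messages": [
-             {"role": "user", "content": "can you explain what the word consistent means?"}
-         ],
-     },
-     timeout=60,
- )
- print(response.json())
- ```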
-
- ### Generate
-
- The generate endpoint allows you to send a user prompt as a string.
-
- It also supports sessions, which keep a record of all messages in the session and feed them back into the LLM for context. Sessions are stored in memory, so they are cleared when the server is restarted.
-
- #### Endpoint: `POST /generate`
-
- #### Request Body:
- - `model`: the LLM model to use
- - `prompt`: the user prompt
- - `session_id`: an optional ID that stores all the messages from the session and provides them back to the LLM for context
-
- #### Example:
- ```json
- {
-   "session_id": "my-session",
-   "model": "gpt-4o-mini",
-   "prompt": "can you explain what the word consistent means?"
- }
- ```
-
- ### Get Session
-
- Returns all the messages from a session.
-
- #### Endpoint: `GET /generate/session/{session_id}`
-
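- A hedged sketch of the full session flow with `requests` (the server address and session ID are illustrative):
-
- ```python
- import requests
-
- BASE = "http://localhost:8000"
-
- # Two /generate calls sharing a session_id, so the second sees the first for context
- for prompt in ["What time is it in London?", "And in Tokyo?"]:
-     response = requests.post(
-         f"{BASE}/generate",
-         json={"model": "gpt-4o-mini", "prompt": prompt, "session_id": "my-session"},
-         timeout=60,
-     )
-     print(response.json())
-
- # Fetch every message recorded in the session
- session = requests.get(f"{BASE}/generate/session/my-session", timeout=60)
- print(session.json())
- ```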
-
- ## License
-
- This software is released under the [MIT License](LICENSE).
@@ -1,24 +0,0 @@
- casual_mcp/__init__.py,sha256=pInJdGkFqSH8RwbQq-9mc96GWIQjLrtExeXnTYGtNHw,327
- casual_mcp/cli.py,sha256=6P_d77qPbY43AW1Ix6FfbHyy6Qc6sFeFqGvXxJCW2_M,2090
- casual_mcp/logging.py,sha256=o3rvT8GLJKGlu0ieeC9TY_SRSEUY-VO8jRQZjx-sSvY,863
- casual_mcp/main.py,sha256=sclybY2zplIUekXx-thWQDrPIGhqg1Yey9fX-mPz6F4,3388
- casual_mcp/mcp_tool_chat.py,sha256=VLvYg-EMJjReMMXdb0gQx43d3aFI8MvDTOsLJ5pJjPE,4956
- casual_mcp/utils.py,sha256=Nea0aRbPyjqm7mIjffJtGP2NssE7BsdPleO-yiuAWPE,2964
- casual_mcp/models/__init__.py,sha256=qlKylcCyRJOSIVteU2feiLOigZoY-m-soVGp4NALM_c,538
- casual_mcp/models/config.py,sha256=ITu3WAPMad7i2CS3ljkHapjT8lLm7k6HFUF6N73U1oo,294
- casual_mcp/models/generation_error.py,sha256=n1mF3vc1Sg_9yIe603G1nTP395Tht8JMKHqdMWFNAn0,259
- casual_mcp/models/mcp_server_config.py,sha256=0OHsHUEKxRoCl21lsye4E5GoCNmdZWIZCOOthcTpdsE,539
- casual_mcp/models/messages.py,sha256=7C0SoCC6Ee970iHprpCpsKsQrwvM66e39o96wfYm1Y8,683
- casual_mcp/models/model_config.py,sha256=gN5hNDfbur_bHgrji87CcU2WgNZO-F3eveK4pVWVSAE,435
- casual_mcp/models/tool_call.py,sha256=BKMxcmyW7EmNoG1jgS9PXXvf6RQIHf7wB8fElEbc4gA,271
- casual_mcp/providers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- casual_mcp/providers/abstract_provider.py,sha256=TTEP3FeTxOtbD0By_k17UxS8cqxYCOGNRTRxYRrqGwc,292
- casual_mcp/providers/ollama_provider.py,sha256=IUSJFBtEYmza_-_7bk5YZKqed3N67l8A2lZEmHPiyHo,2581
- casual_mcp/providers/openai_provider.py,sha256=x50keKdD1fgC7mf1IrrvVwbK4T__mafCnTn-DL6w7wA,6198
- casual_mcp/providers/provider_factory.py,sha256=CyFHJ0mU2tjHqj04btF0SL0B3pf12LAJ52Msqsbnv_g,1766
- casual_mcp-0.4.0.dist-info/licenses/LICENSE,sha256=U3Zu2tkrh5vXdy7gIdE8WJGM9D4gGp3hohAAWdre-yo,1058
- casual_mcp-0.4.0.dist-info/METADATA,sha256=nUDsbv42OdLu_OLHvttU_V7hrl3w2ggxp3qeNdeHPEo,12932
- casual_mcp-0.4.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- casual_mcp-0.4.0.dist-info/entry_points.txt,sha256=X48Np2cwl-SlRQdV26y2vPZ-2tJaODgZeVtfpHho-zg,50
- casual_mcp-0.4.0.dist-info/top_level.txt,sha256=K4CiI0Jf8PHICjuQVm32HuNMB44kp8Lb02bbbdiH5bo,11
- casual_mcp-0.4.0.dist-info/RECORD,,