casual-mcp 0.3.1__py3-none-any.whl → 0.5.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- casual_mcp/__init__.py +8 -1
- casual_mcp/cli.py +24 -24
- casual_mcp/convert_tools.py +68 -0
- casual_mcp/logging.py +6 -2
- casual_mcp/main.py +30 -55
- casual_mcp/mcp_tool_chat.py +62 -49
- casual_mcp/models/__init__.py +13 -8
- casual_mcp/models/config.py +2 -2
- casual_mcp/models/generation_error.py +1 -1
- casual_mcp/models/model_config.py +3 -3
- casual_mcp/provider_factory.py +47 -0
- casual_mcp/tool_cache.py +114 -0
- casual_mcp/utils.py +18 -11
- casual_mcp-0.5.0.dist-info/METADATA +630 -0
- casual_mcp-0.5.0.dist-info/RECORD +20 -0
- {casual_mcp-0.3.1.dist-info → casual_mcp-0.5.0.dist-info}/WHEEL +1 -1
- casual_mcp/models/messages.py +0 -31
- casual_mcp/models/tool_call.py +0 -14
- casual_mcp/providers/__init__.py +0 -0
- casual_mcp/providers/abstract_provider.py +0 -15
- casual_mcp/providers/ollama_provider.py +0 -72
- casual_mcp/providers/openai_provider.py +0 -178
- casual_mcp/providers/provider_factory.py +0 -56
- casual_mcp-0.3.1.dist-info/METADATA +0 -398
- casual_mcp-0.3.1.dist-info/RECORD +0 -24
- {casual_mcp-0.3.1.dist-info → casual_mcp-0.5.0.dist-info}/entry_points.txt +0 -0
- {casual_mcp-0.3.1.dist-info → casual_mcp-0.5.0.dist-info}/licenses/LICENSE +0 -0
- {casual_mcp-0.3.1.dist-info → casual_mcp-0.5.0.dist-info}/top_level.txt +0 -0
@@ -1,398 +0,0 @@
Metadata-Version: 2.4
Name: casual-mcp
Version: 0.3.1
Summary: Multi-server MCP client for LLM tool orchestration
Author: Alex Stansfield
License: MIT
Project-URL: Homepage, https://github.com/AlexStansfield/casual-mcp
Project-URL: Repository, https://github.com/AlexStansfield/casual-mcp
Project-URL: Issue Tracker, https://github.com/AlexStansfield/casual-mcp/issues
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: dateparser>=1.2.1
Requires-Dist: fastapi>=0.115.12
Requires-Dist: fastmcp>=2.5.2
Requires-Dist: jinja2>=3.1.6
Requires-Dist: ollama>=0.4.8
Requires-Dist: openai>=1.78.0
Requires-Dist: python-dotenv>=1.1.0
Requires-Dist: requests>=2.32.3
Requires-Dist: rich>=14.0.0
Requires-Dist: uvicorn>=0.34.2
Provides-Extra: dev
Requires-Dist: ruff; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: coverage; extra == "dev"
Dynamic: license-file

# 🧠 Casual MCP

**Casual MCP** is a Python framework for building, evaluating, and serving LLMs with tool-calling capabilities using [Model Context Protocol (MCP)](https://modelcontextprotocol.io).
It includes:

- ✅ A multi-server MCP client using [FastMCP](https://github.com/jlowin/fastmcp)
- ✅ Provider support for OpenAI (and OpenAI-compatible APIs)
- ✅ A recursive tool-calling chat loop
- ✅ System prompt templating with Jinja2
- ✅ A basic API exposing a chat endpoint

## ✨ Features

- Plug-and-play multi-server tool orchestration
- Prompt templating with Jinja2
- Configurable via JSON
- CLI and API access
- Extensible architecture

## 🔧 Installation

```bash
pip install casual-mcp
```

Or for development:

```bash
git clone https://github.com/AlexStansfield/casual-mcp.git
cd casual-mcp
uv pip install -e .[dev]
```

## 🧩 Providers

Providers allow access to LLMs. Currently, only an OpenAI provider is supplied. However, in the model configuration you can supply an optional `endpoint`, allowing you to use any OpenAI-compatible API (e.g., LM Studio).

Ollama support is planned for a future version, along with support for custom pluggable providers via a standard interface.

## 🧩 System Prompt Templates

System prompts are defined as [Jinja2](https://jinja.palletsprojects.com) templates in the `prompt-templates/` directory.

They are used in the config file to specify a system prompt to use per model.

This allows you to define custom prompts for each model — useful when using models that do not natively support tools. Templates are passed the tool list in the `tools` variable.

```jinja2
# prompt-templates/example_prompt.j2
Here is a list of functions in JSON format that you can invoke:
[
{% for tool in tools %}
{
"name": "{{ tool.name }}",
"description": "{{ tool.description }}",
"parameters": {
{% for param_name, param in tool.inputSchema.items() %}
"{{ param_name }}": {
"description": "{{ param.description }}",
"type": "{{ param.type }}"{% if param.default is defined %},
"default": "{{ param.default }}"{% endif %}
}{% if not loop.last %},{% endif %}
{% endfor %}
}
}{% if not loop.last %},{% endif %}
{% endfor %}
]
```
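
For reference, such a template can be rendered with plain Jinja2 outside the framework. The sketch below assumes the `prompt-templates/` directory from above; the `get_time` tool entry is a hand-written stand-in for what an MCP server would advertise:

```python
# Minimal sketch: render a prompt template with a hypothetical tool list.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("prompt-templates"))
template = env.get_template("example_prompt.j2")

# Stand-in for the tool definitions an MCP client would return
tools = [
    {
        "name": "get_time",
        "description": "Get the current time for a city",
        "inputSchema": {
            "city": {"description": "City name", "type": "string"},
        },
    },
]

# Jinja2 resolves tool.name etc. via item lookup, so plain dicts work here
system_prompt = template.render(tools=tools)
print(system_prompt)
```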

## ⚙️ Configuration File (`casual_mcp_config.json`)

📄 See the [Programmatic Usage](#-programmatic-usage) section to build configs and messages with typed models.

The CLI and API can be configured using a `casual_mcp_config.json` file that defines:

- 🔧 Available **models** and their providers
- 🧰 Available **MCP tool servers**
- 🧩 Optional tool namespacing behavior

### 🔸 Example

```json
{
  "models": {
    "lm-qwen-3": {
      "provider": "openai",
      "endpoint": "http://localhost:1234/v1",
      "model": "qwen3-8b",
      "template": "lm-studio-native-tools"
    },
    "gpt-4.1": {
      "provider": "openai",
      "model": "gpt-4.1"
    }
  },
  "servers": {
    "time": {
      "command": "python",
      "args": ["mcp-servers/time/server.py"]
    },
    "weather": {
      "url": "http://localhost:5050/mcp"
    }
  }
}
```

### 🔹 `models`

Each model has:

- `provider`: `"openai"` (more to come)
- `model`: the model name (e.g., `gpt-4.1`, `qwen3-8b`)
- `endpoint`: required for custom OpenAI-compatible backends (e.g., LM Studio)
- `template`: optional name used to apply model-specific tool-calling formatting

### 🔹 `servers`

Servers can be either local (over stdio) or remote.

#### Local Config:
- `command`: the command to run the server, e.g. `python`, `npm`
- `args`: the arguments to pass to the server as a list, e.g. `["time/server.py"]`
- Optional: `env` for the subprocess environment, `system_prompt` to override the server prompt

#### Remote Config:
- `url`: the URL of the MCP server
- Optional: `transport`: one of `http`, `sse`, or `streamable-http` (defaults to `http`)
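
For illustration, a `servers` block combining both kinds might look like the sketch below. The field names come from the lists above; the script path, URL, and environment variable are hypothetical:

```json
{
  "servers": {
    "time": {
      "command": "python",
      "args": ["mcp-servers/time/server.py"],
      "env": { "TZ_DATABASE": "/usr/share/zoneinfo" }
    },
    "words": {
      "url": "http://localhost:3000/mcp",
      "transport": "streamable-http"
    }
  }
}
```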

## Environmental Variables

There are two environmental variables:

- `OPEN_AI_API_KEY`: required when using the `openai` provider; when using a local model with an OpenAI-compatible API it can be any string
- `TOOL_RESULT_FORMAT`: adjusts the format of the tool result given back to the LLM. Options are `result`, `function_result`, `function_args_result`. Defaults to `result`

You can set them using `export` or by creating a `.env` file.
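
For example, with `export` (the values below are placeholders; as noted above, the key only needs to be real when talking to the OpenAI API itself):

```bash
# Placeholder values for illustration
export OPEN_AI_API_KEY="sk-..."
export TOOL_RESULT_FORMAT="function_result"
```

The same two lines, without `export`, can go in a `.env` file instead.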

## 🛠 CLI Reference

### `casual-mcp serve`
Start the API server.

**Options:**
- `--host`: Host to bind (default `0.0.0.0`)
- `--port`: Port to serve on (default `8000`)

### `casual-mcp servers`
Loads the config and outputs the list of MCP servers you have configured.

#### Example Output
```
$ casual-mcp servers
┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━┓
┃ Name    ┃ Type   ┃ Command / Url                 ┃ Env ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━┩
│ math    │ local  │ mcp-servers/math/server.py    │     │
│ time    │ local  │ mcp-servers/time-v2/server.py │     │
│ weather │ local  │ mcp-servers/weather/server.py │     │
│ words   │ remote │ https://localhost:3000/mcp    │     │
└─────────┴────────┴───────────────────────────────┴─────┘
```

### `casual-mcp models`
Loads the config and outputs the list of models you have configured.

#### Example Output
```
$ casual-mcp models
┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Name              ┃ Provider ┃ Model                     ┃ Endpoint               ┃
┡━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ lm-phi-4-mini     │ openai   │ phi-4-mini-instruct       │ http://kovacs:1234/v1  │
│ lm-hermes-3       │ openai   │ hermes-3-llama-3.2-3b     │ http://kovacs:1234/v1  │
│ lm-groq           │ openai   │ llama-3-groq-8b-tool-use  │ http://kovacs:1234/v1  │
│ gpt-4o-mini       │ openai   │ gpt-4o-mini               │                        │
│ gpt-4.1-nano      │ openai   │ gpt-4.1-nano              │                        │
│ gpt-4.1-mini      │ openai   │ gpt-4.1-mini              │                        │
│ gpt-4.1           │ openai   │ gpt-4.1                   │                        │
└───────────────────┴──────────┴───────────────────────────┴────────────────────────┘
```

## 🧠 Programmatic Usage

You can import and use the core framework in your own Python code.

### ✅ Exposed Interfaces

#### `McpToolChat`
Orchestrates LLM interaction with tools using a recursive loop.

```python
from casual_mcp import McpToolChat
from casual_mcp.models import SystemMessage, UserMessage

chat = McpToolChat(mcp_client, provider, system_prompt)

# Generate method takes a user prompt
response = await chat.generate("What time is it in London?")

# Generate method with a session
response = await chat.generate("What time is it in London?", "my-session-id")

# Chat method takes a list of chat messages
# (note: the system prompt is ignored when a system message is sent in the
# messages, so there is no need to set it)
chat = McpToolChat(mcp_client, provider)
messages = [
    SystemMessage(content="You are a cool dude who likes to help the user"),
    UserMessage(content="What time is it in London?")
]
response = await chat.chat(messages)
```

#### `ProviderFactory`
Instantiates LLM providers based on the selected model config.

```python
from casual_mcp import ProviderFactory

provider_factory = ProviderFactory(mcp_client)
provider = await provider_factory.get_provider("lm-qwen-3", model_config)
```

#### `load_config`
Loads your `casual_mcp_config.json` into a validated config object.

```python
from casual_mcp import load_config

config = load_config("casual_mcp_config.json")
```

#### `load_mcp_client`
Creates a multi-server FastMCP client from the config object.

```python
from casual_mcp import load_mcp_client

mcp_client = load_mcp_client(config)
```

#### Model and Server Configs

Exported models:
- StdioServerConfig
- RemoteServerConfig
- OpenAIModelConfig

Use these types to build valid configs:

```python
from casual_mcp.models import OpenAIModelConfig, StdioServerConfig

model = OpenAIModelConfig(model="llama3", endpoint="http://...")
server = StdioServerConfig(command="python", args=["time/server.py"])
```

#### Chat Messages

Exported models:
- AssistantMessage
- SystemMessage
- ToolResultMessage
- UserMessage

Use these types to build message chains:

```python
from casual_mcp.models import SystemMessage, UserMessage

messages = [
    SystemMessage(content="You are a friendly tool calling assistant."),
    UserMessage(content="What is the time?")
]
```

### Example

```python
from casual_mcp import McpToolChat, load_config, load_mcp_client, ProviderFactory
from casual_mcp.models import SystemMessage, UserMessage

model = "gpt-4.1-nano"
messages = [
    SystemMessage(content="""You are a tool calling assistant.
You have access to up-to-date information through the tools.
Respond naturally and confidently, as if you already know all the facts."""),
    UserMessage(content="Will I need to take my umbrella to London today?")
]

# Load the config from the file
config = load_config("casual_mcp_config.json")

# Set up the MCP client
mcp_client = load_mcp_client(config)

# Get the provider for the model
provider_factory = ProviderFactory(mcp_client)
provider = await provider_factory.get_provider(model, config.models[model])

# Perform the chat and tool calling
chat = McpToolChat(mcp_client, provider)
response_messages = await chat.chat(messages)
```
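
The example above uses `await` at the top level, so it has to run inside an async function. A minimal runnable wrapper, assuming the same config file and model name:

```python
import asyncio

from casual_mcp import McpToolChat, ProviderFactory, load_config, load_mcp_client
from casual_mcp.models import UserMessage


async def main() -> None:
    # Load the config and build the multi-server MCP client
    config = load_config("casual_mcp_config.json")
    mcp_client = load_mcp_client(config)

    # Resolve a provider for the chosen model
    model = "gpt-4.1-nano"
    provider = await ProviderFactory(mcp_client).get_provider(model, config.models[model])

    # Run one chat turn with tool calling
    chat = McpToolChat(mcp_client, provider)
    response_messages = await chat.chat([UserMessage(content="What time is it in London?")])
    print(response_messages)


asyncio.run(main())
```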

## 🚀 API Usage

### Start the API Server

```bash
casual-mcp serve --host 0.0.0.0 --port 8000
```

### Chat

#### Endpoint: `POST /chat`

#### Request Body:
- `model`: the LLM model to use
- `messages`: a list of chat messages (system, assistant, user, etc.) to pass to the API, allowing you to keep your own chat session in the client calling the API

#### Example:
```json
{
  "model": "gpt-4.1-nano",
  "messages": [
    {
      "role": "user",
      "content": "can you explain what the word consistent means?"
    }
  ]
}
```
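
For instance, the same request sent with the `requests` library (already a dependency), assuming the server started above is listening on `localhost:8000`:

```python
import requests

payload = {
    "model": "gpt-4.1-nano",
    "messages": [
        {"role": "user", "content": "can you explain what the word consistent means?"}
    ],
}

# POST the chat request and print the JSON response
response = requests.post("http://localhost:8000/chat", json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```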

### Generate

The generate endpoint allows you to send a user prompt as a string.

It also supports sessions, which keep a record of all messages in the session and feed them back into the LLM for context. Sessions are stored in memory, so they are cleared when the server is restarted.

#### Endpoint: `POST /generate`

#### Request Body:
- `model`: the LLM model to use
- `prompt`: the user prompt
- `session_id`: an optional ID that stores all the messages from the session and provides them back to the LLM for context

#### Example:
```json
{
  "session_id": "my-session",
  "model": "gpt-4o-mini",
  "prompt": "can you explain what the word consistent means?"
}
```
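
A sketch of a two-turn exchange reusing a session, again with `requests` against a hypothetical `localhost:8000` server; the follow-up prompt relies on the in-memory session history described above:

```python
import requests

BASE = "http://localhost:8000"


def generate(prompt: str, session_id: str = "my-session") -> dict:
    """Send a prompt to /generate, reusing the same session for context."""
    resp = requests.post(
        f"{BASE}/generate",
        json={"session_id": session_id, "model": "gpt-4o-mini", "prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


generate("can you explain what the word consistent means?")
generate("can you use it in a sentence?")  # answered with the session history as context

# Retrieve every message stored for the session (see Get Session below)
history = requests.get(f"{BASE}/generate/session/my-session", timeout=60).json()
```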

### Get Session

Get all the messages from a session.

#### Endpoint: `GET /generate/session/{session_id}`

## License

This software is released under the [MIT License](LICENSE).

@@ -1,24 +0,0 @@
casual_mcp/__init__.py,sha256=pInJdGkFqSH8RwbQq-9mc96GWIQjLrtExeXnTYGtNHw,327
casual_mcp/cli.py,sha256=6P_d77qPbY43AW1Ix6FfbHyy6Qc6sFeFqGvXxJCW2_M,2090
casual_mcp/logging.py,sha256=o3rvT8GLJKGlu0ieeC9TY_SRSEUY-VO8jRQZjx-sSvY,863
casual_mcp/main.py,sha256=AzqQ6SUJsyKyMaqd3HIxLDozoftMd27KQAQNsfM9e2I,3385
casual_mcp/mcp_tool_chat.py,sha256=6MMRAEBDMRyw7-n1VGvIGdrh1ed2szZx8sC0MlR1g7I,4948
casual_mcp/utils.py,sha256=Nea0aRbPyjqm7mIjffJtGP2NssE7BsdPleO-yiuAWPE,2964
casual_mcp/models/__init__.py,sha256=qlKylcCyRJOSIVteU2feiLOigZoY-m-soVGp4NALM_c,538
casual_mcp/models/config.py,sha256=ITu3WAPMad7i2CS3ljkHapjT8lLm7k6HFUF6N73U1oo,294
casual_mcp/models/generation_error.py,sha256=n1mF3vc1Sg_9yIe603G1nTP395Tht8JMKHqdMWFNAn0,259
casual_mcp/models/mcp_server_config.py,sha256=0OHsHUEKxRoCl21lsye4E5GoCNmdZWIZCOOthcTpdsE,539
casual_mcp/models/messages.py,sha256=7C0SoCC6Ee970iHprpCpsKsQrwvM66e39o96wfYm1Y8,683
casual_mcp/models/model_config.py,sha256=gN5hNDfbur_bHgrji87CcU2WgNZO-F3eveK4pVWVSAE,435
casual_mcp/models/tool_call.py,sha256=BKMxcmyW7EmNoG1jgS9PXXvf6RQIHf7wB8fElEbc4gA,271
casual_mcp/providers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
casual_mcp/providers/abstract_provider.py,sha256=TTEP3FeTxOtbD0By_k17UxS8cqxYCOGNRTRxYRrqGwc,292
casual_mcp/providers/ollama_provider.py,sha256=IUSJFBtEYmza_-_7bk5YZKqed3N67l8A2lZEmHPiyHo,2581
casual_mcp/providers/openai_provider.py,sha256=uSjoqM-X9bVp_RVM8Ip6lqjZ7q3DdN0-p7o2HKrWxMI,6138
casual_mcp/providers/provider_factory.py,sha256=CyFHJ0mU2tjHqj04btF0SL0B3pf12LAJ52Msqsbnv_g,1766
casual_mcp-0.3.1.dist-info/licenses/LICENSE,sha256=U3Zu2tkrh5vXdy7gIdE8WJGM9D4gGp3hohAAWdre-yo,1058
casual_mcp-0.3.1.dist-info/METADATA,sha256=uqtEAq3-YfRInCxU79bwfBhrsGxFKbvUWAJ7D0XTA0g,12902
casual_mcp-0.3.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
casual_mcp-0.3.1.dist-info/entry_points.txt,sha256=X48Np2cwl-SlRQdV26y2vPZ-2tJaODgZeVtfpHho-zg,50
casual_mcp-0.3.1.dist-info/top_level.txt,sha256=K4CiI0Jf8PHICjuQVm32HuNMB44kp8Lb02bbbdiH5bo,11
casual_mcp-0.3.1.dist-info/RECORD,,

File without changes
File without changes
File without changes