fast-agent-mcp 0.1.10__py3-none-any.whl → 0.1.11__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
{fast_agent_mcp-0.1.10.dist-info → fast_agent_mcp-0.1.11.dist-info}/METADATA RENAMED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: fast-agent-mcp
-Version: 0.1.10
+Version: 0.1.11
 Summary: Define, Prompt and Test MCP enabled Agents and Workflows
 Author-email: Shaun Smith <fastagent@llmindset.co.uk>, Sarmad Qadri <sarmad@lastmileai.dev>
 License: Apache License
@@ -212,7 +212,7 @@ Requires-Python: >=3.10
 Requires-Dist: aiohttp>=3.11.13
 Requires-Dist: anthropic>=0.49.0
 Requires-Dist: fastapi>=0.115.6
-Requires-Dist: mcp>=1.4.1
+Requires-Dist: mcp>=1.5.0
 Requires-Dist: numpy>=2.2.1
 Requires-Dist: openai>=1.63.2
 Requires-Dist: opentelemetry-distro>=0.50b0
@@ -241,10 +241,9 @@ Provides-Extra: temporal
 Requires-Dist: temporalio>=1.8.0; extra == 'temporal'
 Description-Content-Type: text/markdown
 
-## fast-agent
-
 <p align="center">
 <a href="https://pypi.org/project/fast-agent-mcp/"><img src="https://img.shields.io/pypi/v/fast-agent-mcp?color=%2334D058&label=pypi" /></a>
+<a href="#"><img src="https://github.com/evalstate/fast-agent/actions/workflows/main-checks.yml/badge.svg" /></a>
 <a href="https://github.com/evalstate/fast-agent/issues"><img src="https://img.shields.io/github/issues-raw/evalstate/fast-agent" /></a>
 <a href="https://lmai.link/discord/mcp-agent"><img src="https://shields.io/discord/1089284610329952357" alt="discord" /></a>
 <img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/fast-agent-mcp?label=pypi%20%7C%20downloads"/>
@@ -253,16 +252,14 @@ Description-Content-Type: text/markdown
 
 ## Overview
 
-**`fast-agent`** enables you to create and interact with sophisticated Agents and Workflows in minutes.
+**`fast-agent`** enables you to create and interact with sophisticated Agents and Workflows in minutes. It is the first framework with complete, end-to-end tested MCP Feature support, including Sampling. Both Anthropic (Haiku, Sonnet, Opus) and OpenAI models (gpt-4o family, o1/o3 family) are supported.
 
 The simple declarative syntax lets you concentrate on composing your Prompts and MCP Servers to [build effective agents](https://www.anthropic.com/research/building-effective-agents).
 
-Evaluate how different models handle Agent and MCP Server calling tasks, then build multi-model workflows using the best provider for each task.
-
-`fast-agent` is now multi-modal, supporting Images and PDFs for both Anthropic and OpenAI endpoints (for supported models), via Prompts, Resources and MCP Tool Call results.
+`fast-agent` is multi-modal, supporting Images and PDFs for both Anthropic and OpenAI endpoints via Prompts, Resources and MCP Tool Call results. The inclusion of passthrough and playback LLMs enables rapid development and testing of the Python glue-code for your applications.
 
 > [!TIP]
-> `fast-agent` is now MCP Native! Coming Soon - Full Documentation Site.
+> `fast-agent` is now MCP Native! Coming Soon - Full Documentation Site and Further MCP Examples.
 
 ### Agent Application Development
 
@@ -272,7 +269,7 @@ Chat with individual Agents and Components before, during and after workflow exe
 
 Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project [here](https://llmindset.co.uk/resources/fast-agent/).
 
-![fast-agent](https://github.com/user-attachments/assets/3e692103-bf97-489a-b519-2d0fee036369)
+![2025-03-23-fast-agent](https://github.com/user-attachments/assets/8f6dbb69-43e3-4633-8e12-5572e9614728)
 
 ## Get started:
 
@@ -597,6 +594,14 @@ agent["greeter"].send("Good Evening!") # Dictionary access is supported
 
 Add Resources to prompts using either the inbuilt `prompt-server` or MCP Types directly. Convenience classes are made available to do so simply, for example:
 
+```python
+summary: str = await agent.with_resource(
+    "Summarise this PDF please",
+    "mcp_server",
+    "resource://fast-agent/sample.pdf",
+)
+```
+
 #### MCP Tool Result Conversion
 
 LLM APIs have restrictions on the content types that can be returned as Tool Calls/Function results via their Chat Completions APIs:
@@ -612,40 +617,33 @@ MCP Prompts are supported with `apply_prompt(name,arguments)`, which always retu
 
 Prompts can also be applied interactively through the interactive interface by using the `/prompt` command.
 
+### Sampling
+
+Sampling LLMs are configured per Client/Server pair. Specify the model name in fastagent.config.yaml as follows:
+
+```yaml
+mcp:
+  servers:
+    sampling_resource:
+      command: "uv"
+      args: ["run", "sampling_resource_server.py"]
+      sampling:
+        model: "haiku"
+```
+
 ### Secrets File
 
 > [!TIP]
 > fast-agent will look recursively for a fastagent.secrets.yaml file, so you only need to manage this at the root folder of your agent definitions.
 
+### Interactive Shell
+
+![fast-agent](https://github.com/user-attachments/assets/3e692103-bf97-489a-b519-2d0fee036369)
+
 ## Project Notes
 
 `fast-agent` builds on the [`mcp-agent`](https://github.com/lastmile-ai/mcp-agent) project by Sarmad Qadri.
 
-### llmindset.co.uk fork:
-
-- Addition of MCP Prompts including Prompt Server and agent save/replay ability.
-- Overhaul of Eval/Opt for Conversation Management
-- Removed instructor/double-llm calling - native structured outputs for OAI.
-- Improved handling of Parallel/Fan-In and response options
-- XML based generated prompts
-- "FastAgent" style prototyping, with per-agent models
-- API keys through Environment Variables
-- Warm-up / Post-Workflow Agent Interactions
-- Quick Setup
-- Interactive Prompt Mode
-- Simple Model Selection with aliases
-- User/Assistant and Tool Call message display
-- MCP Server Environment Variable support
-- MCP Roots support
-- Comprehensive Progress display
-- JSONL file logging with secret revocation
-- OpenAI o1/o3-mini support with reasoning level
-- Enhanced Human Input Messaging and Handling
-- Declarative workflows
-- Numerous defect fixes
-
-### Features to add (Committed)
-
-- Run Agent as MCP Server, with interop
-- Multi-part content types supporting Vision, PDF and multi-part Text.
-- Improved test automation (supported by prompt_server.py and augmented_llm_playback.py)
+### Contributing
+
+Contributions and PRs are welcome - feel free to raise issues to discuss. Full guidelines for contributing and a roadmap are coming very soon. Get in touch!
{fast_agent_mcp-0.1.10.dist-info → fast_agent_mcp-0.1.11.dist-info}/RECORD RENAMED
@@ -1,6 +1,6 @@
 mcp_agent/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 mcp_agent/app.py,sha256=0_C1xmNZlk9qZoewnNI_mC7sSfO9oJgkOyiKkQ62MHU,10606
-mcp_agent/config.py,sha256=OpPTsk9gNm2IA1laUomAMkGA-pAlp5uILQpEPBjavQs,10644
+mcp_agent/config.py,sha256=cEiY_J5MqKj23KkHtzP1h04yalaGgO2OiXErduiVf2M,10890
 mcp_agent/console.py,sha256=Gjf2QLFumwG1Lav__c07X_kZxxEUSkzV-1_-YbAwcwo,813
 mcp_agent/context.py,sha256=m1S5M9a2Kdxy5rEGG6Uwwmi19bDEpU6u-e5ZgPmVXfY,8031
 mcp_agent/context_dependent.py,sha256=TGqRLzYCOnsWGoaD1HtrliYtWo8MeaWCQk6ePUmyYCw,1446
@@ -17,7 +17,7 @@ mcp_agent/cli/commands/bootstrap.py,sha256=Rmwbuwl52eHfnya7fnwKk2J7nCsHpSh6irka4
 mcp_agent/cli/commands/config.py,sha256=32YTS5jmsYAs9QzAhjkG70_daAHqOemf4XbZBBSMz6g,204
 mcp_agent/cli/commands/setup.py,sha256=_SCpd6_PrixqbSaE72JQ7erIRkZnJGmh_3TvvwSzEiE,6392
 mcp_agent/core/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-mcp_agent/core/agent_app.py,sha256=6fzvExSmVSXyNo-Rq9Xvu0qUKjKHKjOpuhRfzCthV8o,29735
+mcp_agent/core/agent_app.py,sha256=coAbhzGT34SV_S0AsLbHuOkxyovOZFlpk_HUphRNU78,30807
 mcp_agent/core/agent_types.py,sha256=yKiMbv9QO2dduq4zXmoMZlOZpXJZhM4oNwIq1-134FE,318
 mcp_agent/core/agent_utils.py,sha256=QMvwmxZyCqYhBzSyL9xARsxTuwdmlyjQvrPpsH36HnQ,1888
 mcp_agent/core/decorators.py,sha256=dkAah1eIuYsEfQISDryG0u2GrzNnsO_jyN7lhpQfNlM,16191
@@ -28,7 +28,7 @@ mcp_agent/core/factory.py,sha256=MhlYS0G0IyFy_j46HVJdjEznJzfCFjx_NRhUPcbQIJI,190
 mcp_agent/core/fastagent.py,sha256=jJmO0DryFGwSkse_3q5Ll-5XONDvj7k_Oeb-ETBKFkA,19620
 mcp_agent/core/mcp_content.py,sha256=rXT2C5gP9qgC-TI5F362ZLJi_erzcEOnlP9D2ZKK0i0,6860
 mcp_agent/core/prompt.py,sha256=R-X3kptu3ehV_SQeiGnP6F9HMN-92I8e73gnkQ1tDVs,4317
-mcp_agent/core/proxies.py,sha256=a5tNv-EVcv67XNAkbzaybQVbRgkNEfhIkcveS1LBp2s,10242
+mcp_agent/core/proxies.py,sha256=qsIqyJgiIh-b9ehHiZrM39YutQFJPHaHO14GOMFE1KI,10289
 mcp_agent/core/types.py,sha256=Zhi9iW7uiOfdpSt9NC0FCtGRFtJPg4mpZPK2aYi7a7M,817
 mcp_agent/core/validation.py,sha256=x0fsx5eLTawASFm9MDtEukwGOj_RTdY1OW064UihMR8,8309
 mcp_agent/eval/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
@@ -51,17 +51,19 @@ mcp_agent/logging/rich_progress.py,sha256=IEVFdFGA0nwg6pSt9Ydni5LCNYZZPKYMe-6DCi
 mcp_agent/logging/tracing.py,sha256=jQivxKYl870oXakmyUk7TXuTQSvsIzpHwZlSQfy4b0c,5203
 mcp_agent/logging/transport.py,sha256=MFgiCQ-YFP0tSMhDMpZCj585vflWcMydM4oyCFduVf0,17203
 mcp_agent/mcp/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-mcp_agent/mcp/gen_client.py,sha256=u0HwdJiw9YCerS5JC7JDuGgBh9oTcLd7vv9vPjwibXc,3025
+mcp_agent/mcp/gen_client.py,sha256=D92Yo088CAeuWG6M82Vlkq0H8igUTw9SwwOQinZZCkg,3052
+mcp_agent/mcp/interfaces.py,sha256=hUA9R7RA1tF1td9RCfzWHBUVCLXF6FC1a4I1EZ5Fnh4,4629
 mcp_agent/mcp/mcp_activity.py,sha256=CajXCFWZ2cKEX9s4-HfNVAj471ePTVs4NOkvmIh65tE,592
-mcp_agent/mcp/mcp_agent_client_session.py,sha256=lfz38wzIoMfZyH3dAgclHohOVX0tR7Y2FCE2t7CVsPw,3956
+mcp_agent/mcp/mcp_agent_client_session.py,sha256=3xZbhr48YV5SkBTQGMdNrT_KIGWOBSFPqCZLCSOK2HA,4156
 mcp_agent/mcp/mcp_agent_server.py,sha256=xP09HZTeguJi4Fq0p3fjLBP55uSYe5AdqM90xCgn9Ho,1639
-mcp_agent/mcp/mcp_aggregator.py,sha256=NuFslY5-0as2VAfcg6t-k3sgpX-mh3AWttuS9KHL4n4,37684
+mcp_agent/mcp/mcp_aggregator.py,sha256=1DYZpmq1IJZo7cYKfahH6LeyVKuNkosGhSq6k59lrlM,37941
 mcp_agent/mcp/mcp_connection_manager.py,sha256=PdLia-rxbhUdAdEnW7TQbkf1qeI9RR3xhQw1j11Bi6o,13612
 mcp_agent/mcp/mime_utils.py,sha256=difepNR_gpb4MpMLkBRAoyhDk-AjXUHTiqKvT_VwS1o,1805
 mcp_agent/mcp/prompt_message_multipart.py,sha256=U7IN0JStmy26akTXcqE4x90oWzm8xs1qa0VeKIyPKmE,1962
 mcp_agent/mcp/prompt_serialization.py,sha256=StcXV7V4fqqtCmOCXGCyYXx5vpwNhL2xr3RG_awwdqI,16056
 mcp_agent/mcp/resource_utils.py,sha256=G9IBWyasxKKcbq3T_fSpM6mHE8PjBargEdfQnBPrkZY,6650
-mcp_agent/mcp/stdio.py,sha256=QJcxEw2CXJrhR7PHyhuwUekzaXoDng_cNjai-rdZNg0,4479
+mcp_agent/mcp/sampling.py,sha256=iHjjI5ViCe2CYm_7EtJiHr-WPYug6MQyAuBtru0AnkI,4601
+mcp_agent/mcp/stdio.py,sha256=fZr9yVqPvmPC8pkaf95rZtw0uD8BGND0UI_cUYyuSsE,4478
 mcp_agent/mcp/prompts/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 mcp_agent/mcp/prompts/__main__.py,sha256=gr1Tdz9fcK0EXjEuZg_BOnKUmvhYq5AH2lFZicVyNb0,237
 mcp_agent/mcp/prompts/prompt_server.py,sha256=6K4FeKNW_JApWUNB055gl8UnWyC1mvtl_kPEvgUnPjk,17348
@@ -72,8 +74,8 @@ mcp_agent/resources/examples/data-analysis/analysis-campaign.py,sha256=EG-HhaDHl
 mcp_agent/resources/examples/data-analysis/analysis.py,sha256=5zLoioZQNKUfXt1EXLrGX3TU06-0N06-L9Gtp9BIr6k,2611
 mcp_agent/resources/examples/data-analysis/fastagent.config.yaml,sha256=ini94PHyJCfgpjcjHKMMbGuHs6LIj46F1NwY0ll5HVk,1609
 mcp_agent/resources/examples/data-analysis/mount-point/WA_Fn-UseC_-HR-Employee-Attrition.csv,sha256=pcMeOL1_r8m8MziE6xgbBrQbjl5Ijo98yycZn7O-dlk,227977
-mcp_agent/resources/examples/internal/agent.py,sha256=4EXhVJcX5mw2LuDqmZL4B4SM0zxMFmMou7NCEeoVeQ0,391
-mcp_agent/resources/examples/internal/fastagent.config.yaml,sha256=NF-plJ2ZMLZL8_YfdwmfsvRyafgsNEEHzsjm_p8vNlY,1858
+mcp_agent/resources/examples/internal/agent.py,sha256=orShmYKkrjMc7qa3ZtfzoO80uOClZaPaw2Wvc4_FIH8,406
+mcp_agent/resources/examples/internal/fastagent.config.yaml,sha256=U2s0Asc06wC04FstKnBMeB3J5gIa3xa-Rao-1-74XTk,1935
 mcp_agent/resources/examples/internal/job.py,sha256=WEKIAANMEAuKr13__rYf3PqJeTAsNB_kqYqbqVYQlUM,4093
 mcp_agent/resources/examples/internal/prompt_category.py,sha256=b3tjkfrVIW1EPoDjr4mG87wlZ7D0Uju9eg6asXAYYpI,551
 mcp_agent/resources/examples/internal/prompt_sizing.py,sha256=UtQ_jvwS4yMh80PHhUQXJ9WXk-fqNYlqUMNTNkZosKM,2003
@@ -120,7 +122,7 @@ mcp_agent/workflows/llm/anthropic_utils.py,sha256=OFmsVmDQ22880duDWQrEeQEB47xtvu
 mcp_agent/workflows/llm/augmented_llm.py,sha256=9cWy-4yNG13w4oQgXmisgWTcm6aoJIRCYTX85Bkf-MI,30554
 mcp_agent/workflows/llm/augmented_llm_anthropic.py,sha256=opV4PTai2eoYUzJS0gCPGEy4pe-lT2Eo1Sao6Y_EIiY,20140
 mcp_agent/workflows/llm/augmented_llm_openai.py,sha256=OUSmvY2m6HU1JOK5nEzKDHpHReT0ffjoHDFHk6aYhoc,21002
-mcp_agent/workflows/llm/augmented_llm_passthrough.py,sha256=IoMNOKK9l46bp4OxfXrB4uK7_4X7ufjuFyXSQCH4YnM,6219
+mcp_agent/workflows/llm/augmented_llm_passthrough.py,sha256=aeQ2WWNIzdzgYWHijE-RWgzFzSUcRJNRv5zq0ug3B2U,7891
 mcp_agent/workflows/llm/augmented_llm_playback.py,sha256=5ypv3owJU6pscktqg9tkLQVKNgaA50e8OWmC1hAhrtE,4328
 mcp_agent/workflows/llm/llm_selector.py,sha256=G7pIybuBDwtmyxUDov_QrNYH2FoI0qFRu2JfoxWUF5Y,11045
 mcp_agent/workflows/llm/model_factory.py,sha256=UHePE5Ow03kpE44kjYtFGEhVFSYp0AY2yGri58yCBKU,7688
@@ -151,8 +153,8 @@ mcp_agent/workflows/swarm/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJW
 mcp_agent/workflows/swarm/swarm.py,sha256=-lAIeSWDqbGHGRPTvjiP9nIKWvxxy9DAojl9yQzO1Pw,11050
 mcp_agent/workflows/swarm/swarm_anthropic.py,sha256=pW8zFx5baUWGd5Vw3nIDF2oVOOGNorij4qvGJKdYPcs,1624
 mcp_agent/workflows/swarm/swarm_openai.py,sha256=wfteywvAGkT5bLmIxX_StHJq8144whYmCRnJASAjOes,1596
-fast_agent_mcp-0.1.10.dist-info/METADATA,sha256=Kum2eRyw2tDXTb1rG7JIf-2IrkOC-xWOnFyQpulgXq4,29760
-fast_agent_mcp-0.1.10.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
-fast_agent_mcp-0.1.10.dist-info/entry_points.txt,sha256=qPM7vwtN1_KmP3dXehxgiCxUBHtqP7yfenZigztvY-w,226
-fast_agent_mcp-0.1.10.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
-fast_agent_mcp-0.1.10.dist-info/RECORD,,
+fast_agent_mcp-0.1.11.dist-info/METADATA,sha256=ff0dlOdPoM72tfefKvN6bdVwszZIKE-5wIkSAI3qJTU,29678
+fast_agent_mcp-0.1.11.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+fast_agent_mcp-0.1.11.dist-info/entry_points.txt,sha256=qPM7vwtN1_KmP3dXehxgiCxUBHtqP7yfenZigztvY-w,226
+fast_agent_mcp-0.1.11.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
+fast_agent_mcp-0.1.11.dist-info/RECORD,,
mcp_agent/config.py CHANGED
@@ -18,6 +18,12 @@ class MCPServerAuthSettings(BaseModel):
     model_config = ConfigDict(extra="allow", arbitrary_types_allowed=True)
 
 
+class MCPSamplingSettings(BaseModel):
+    model: str = "haiku"
+
+    model_config = ConfigDict(extra="allow", arbitrary_types_allowed=True)
+
+
 class MCPRootSettings(BaseModel):
     """Represents a root directory configuration for an MCP server."""
 
@@ -81,6 +87,9 @@ class MCPServerSettings(BaseModel):
     env: Dict[str, str] | None = None
     """Environment variables to pass to the server process."""
 
+    sampling: MCPSamplingSettings | None = None
+    """Sampling settings for this Client/Server pair"""
+
 
 class MCPSettings(BaseModel):
     """Configuration for all MCP servers."""
mcp_agent/core/agent_app.py CHANGED
@@ -112,6 +112,35 @@ class AgentApp:
 
         proxy = self._agents[target]
         return await proxy.apply_prompt(prompt_name, arguments)
+
+    async def with_resource(
+        self,
+        prompt_content: Union[str, PromptMessageMultipart],
+        server_name: str,
+        resource_name: str,
+        agent_name: Optional[str] = None,
+    ) -> str:
+        """
+        Create a prompt with the given content and resource, then send it to the agent.
+
+        Args:
+            prompt_content: Either a string message or an existing PromptMessageMultipart
+            server_name: Name of the MCP server to retrieve the resource from
+            resource_name: Name or URI of the resource to retrieve
+            agent_name: The name of the agent to use (uses default if None)
+
+        Returns:
+            The agent's response as a string
+        """
+        target = agent_name or self._default
+        if not target:
+            raise ValueError("No default agent available")
+
+        if target not in self._agents:
+            raise ValueError(f"No agent named '{target}'")
+
+        proxy = self._agents[target]
+        return await proxy.with_resource(prompt_content, server_name, resource_name)
 
     async def prompt(self, agent_name: Optional[str] = None, default: str = "") -> str:
         """
mcp_agent/core/proxies.py CHANGED
@@ -1,6 +1,9 @@
 """
 Proxy classes for agent interactions.
 These proxies provide a consistent interface for interacting with different types of agents.
+
+FOR COMPATIBILITY WITH LEGACY MCP-AGENT CODE
+
 """
 
 from typing import List, Optional, Dict, Union, TYPE_CHECKING
mcp_agent/mcp/gen_client.py CHANGED
@@ -6,7 +6,7 @@ from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStre
 from mcp import ClientSession
 
 from mcp_agent.logging.logger import get_logger
-from mcp_agent.mcp_server_registry import ServerRegistry
+from mcp_agent.mcp.interfaces import ServerRegistryProtocol
 from mcp_agent.mcp.mcp_agent_client_session import MCPAgentClientSession
 
 logger = get_logger(__name__)
@@ -15,7 +15,7 @@ logger = get_logger(__name__)
 @asynccontextmanager
 async def gen_client(
     server_name: str,
-    server_registry: ServerRegistry,
+    server_registry: ServerRegistryProtocol,
     client_session_factory: Callable[
         [MemoryObjectReceiveStream, MemoryObjectSendStream, timedelta | None],
         ClientSession,
@@ -41,7 +41,7 @@ async def gen_client(
 
 async def connect(
     server_name: str,
-    server_registry: ServerRegistry,
+    server_registry: ServerRegistryProtocol,
     client_session_factory: Callable[
         [MemoryObjectReceiveStream, MemoryObjectSendStream, timedelta | None],
         ClientSession,
@@ -67,7 +67,7 @@ async def connect(
 
 async def disconnect(
     server_name: str | None,
-    server_registry: ServerRegistry,
+    server_registry: ServerRegistryProtocol,
 ) -> None:
     """
     Disconnect from the specified server. If server_name is None, disconnect from all servers.
mcp_agent/mcp/interfaces.py ADDED
@@ -0,0 +1,152 @@
+"""
+Interface definitions to prevent circular imports.
+This module defines protocols (interfaces) that can be used to break circular dependencies.
+"""
+
+from contextlib import asynccontextmanager
+from typing import Any, AsyncGenerator, Callable, Generic, List, Optional, Protocol, Type, TypeVar
+
+from mcp import ClientSession
+from mcp.types import CreateMessageRequestParams
+from pydantic import Field
+
+
+class ServerRegistryProtocol(Protocol):
+    """
+    Protocol defining the minimal interface of ServerRegistry needed by gen_client.
+    This allows gen_client to depend on this protocol rather than the full ServerRegistry class.
+    """
+
+    @asynccontextmanager
+    async def initialize_server(
+        self,
+        server_name: str,
+        client_session_factory=None,
+        init_hook=None,
+    ) -> AsyncGenerator[ClientSession, None]:
+        """Initialize a server and yield a client session."""
+        ...
+
+    @property
+    def connection_manager(self) -> "ConnectionManagerProtocol":
+        """Get the connection manager."""
+        ...
+
+
+class ConnectionManagerProtocol(Protocol):
+    """
+    Protocol defining the minimal interface of ConnectionManager needed.
+    """
+
+    async def get_server(
+        self,
+        server_name: str,
+        client_session_factory=None,
+    ):
+        """Get a server connection."""
+        ...
+
+    async def disconnect_server(self, server_name: str) -> None:
+        """Disconnect from a server."""
+        ...
+
+    async def disconnect_all_servers(self) -> None:
+        """Disconnect from all servers."""
+        ...
+
+
+# Type variables for generic protocols
+MessageParamT = TypeVar("MessageParamT")
+"""A type representing an input message to an LLM."""
+
+MessageT = TypeVar("MessageT")
+"""A type representing an output message from an LLM."""
+
+ModelT = TypeVar("ModelT")
+"""A type representing a structured output message from an LLM."""
+
+
+class RequestParams(CreateMessageRequestParams):
+    """
+    Parameters to configure the AugmentedLLM 'generate' requests.
+    """
+
+    messages: None = Field(exclude=True, default=None)
+    """
+    Ignored. 'messages' are removed from CreateMessageRequestParams
+    to avoid confusion with the 'message' parameter on 'generate' method.
+    """
+
+    maxTokens: int = 2048
+    """The maximum number of tokens to sample, as requested by the server."""
+
+    model: str | None = None
+    """
+    The model to use for the LLM generation.
+    If specified, this overrides the 'modelPreferences' selection criteria.
+    """
+
+    use_history: bool = True
+    """
+    Include the message history in the generate request.
+    """
+
+    max_iterations: int = 10
+    """
+    The maximum number of iterations to run the LLM for.
+    """
+
+    parallel_tool_calls: bool = True
+    """
+    Whether to allow multiple tool calls per iteration.
+    Also known as multi-step tool use.
+    """
+
+
+class AugmentedLLMProtocol(Protocol, Generic[MessageParamT, MessageT]):
+    """Protocol defining the interface for augmented LLMs"""
+
+    async def generate(
+        self,
+        message: str | MessageParamT | List[MessageParamT],
+        request_params: RequestParams | None = None,
+    ) -> List[MessageT]:
+        """Request an LLM generation, which may run multiple iterations, and return the result"""
+
+    async def generate_str(
+        self,
+        message: str | MessageParamT | List[MessageParamT],
+        request_params: RequestParams | None = None,
+    ) -> str:
+        """Request an LLM generation and return the string representation of the result"""
+
+    async def generate_structured(
+        self,
+        message: str | MessageParamT | List[MessageParamT],
+        response_model: Type[ModelT],
+        request_params: RequestParams | None = None,
+    ) -> ModelT:
+        """Request a structured LLM generation and return the result as a Pydantic model."""
+
+
+class ModelFactoryClassProtocol(Protocol):
+    """
+    Protocol defining the minimal interface of the ModelFactory class needed by sampling.
+    This allows sampling.py to depend on this protocol rather than the concrete ModelFactory class.
+    """
+
+    @classmethod
+    def create_factory(
+        cls, model_string: str, request_params: Optional[RequestParams] = None
+    ) -> Callable[..., AugmentedLLMProtocol[Any, Any]]:
+        """
+        Creates a factory function that can be used to construct an LLM instance.
+
+        Args:
+            model_string: The model specification string
+            request_params: Optional parameters to configure LLM behavior
+
+        Returns:
+            A factory function that can create an LLM instance
+        """
+        ...
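Because these are structural `typing.Protocol`s, callers can type against them without importing concrete classes. A small sketch under that assumption (not part of the package):

```python
from mcp_agent.mcp.interfaces import AugmentedLLMProtocol, RequestParams


async def summarise(llm: AugmentedLLMProtocol, text: str) -> str:
    # Any object with matching generate/generate_str/generate_structured
    # methods satisfies the protocol - no inheritance required.
    return await llm.generate_str(text, RequestParams(maxTokens=512))
```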
mcp_agent/mcp/mcp_agent_client_session.py CHANGED
@@ -24,6 +24,7 @@ from pydantic import AnyUrl
 from mcp_agent.config import MCPServerSettings
 from mcp_agent.context_dependent import ContextDependent
 from mcp_agent.logging.logger import get_logger
+from mcp_agent.mcp.sampling import sample
 
 logger = get_logger(__name__)
 
@@ -40,7 +41,12 @@ async def list_roots(ctx: ClientSession) -> ListRootsResult:
         and ctx.session.server_config.roots
     ):
         roots = [
-            Root(uri=AnyUrl(root.uri), name=root.name)
+            Root(
+                uri=AnyUrl(
+                    root.server_uri_alias or root.uri,
+                ),
+                name=root.name,
+            )
             for root in ctx.session.server_config.roots
         ]
         return ListRootsResult(roots=roots or [])
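Sketch of what this change enables, with illustrative values: when a configured root carries a `server_uri_alias`, the alias (not the local `uri`) is what gets advertised to the server.

```python
from mcp_agent.config import MCPRootSettings

root = MCPRootSettings(
    uri="file:///home/me/project",         # the real local root
    name="project",
    server_uri_alias="file:///workspace",  # what list_roots now reports to the server
)
```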
@@ -58,7 +64,9 @@ class MCPAgentClientSession(ClientSession, ContextDependent):
     """
 
     def __init__(self, *args, **kwargs):
-        super().__init__(*args, **kwargs, list_roots_callback=list_roots)
+        super().__init__(
+            *args, **kwargs, list_roots_callback=list_roots, sampling_callback=sample
+        )
         self.server_config: Optional[MCPServerSettings] = None
 
     async def send_request(
@@ -115,4 +123,4 @@ class MCPAgentClientSession(ClientSession, ContextDependent):
         )
         return await super().send_progress_notification(
             progress_token=progress_token, progress=progress, total=total
-        )
+        )
mcp_agent/mcp/mcp_aggregator.py CHANGED
@@ -16,6 +16,7 @@ from mcp.server.stdio import stdio_server
 from mcp.types import (
     CallToolResult,
     ListToolsResult,
+    TextContent,
     Tool,
     Prompt,
 )
@@ -459,7 +460,10 @@ class MCPAggregator(ContextDependent):
 
         if server_name is None or local_tool_name is None:
             logger.error(f"Error: Tool '{name}' not found")
-            return CallToolResult(isError=True, message=f"Tool '{name}' not found")
+            return CallToolResult(
+                isError=True,
+                content=[TextContent(type="text", text=f"Tool '{name}' not found")],
+            )
 
         logger.info(
             "Requesting tool call",
@@ -477,7 +481,10 @@ class MCPAggregator(ContextDependent):
             operation_name=local_tool_name,
             method_name="call_tool",
             method_args={"name": local_tool_name, "arguments": arguments},
-            error_factory=lambda msg: CallToolResult(isError=True, message=msg),
+            error_factory=lambda msg: CallToolResult(
+                isError=True,
+                content=[TextContent(type="text", text=msg)],
+            ),
         )
 
     async def get_prompt(
@@ -898,7 +905,10 @@ class MCPCompoundServer(Server):
             result = await self.aggregator.call_tool(name=name, arguments=arguments)
             return result.content
         except Exception as e:
-            return CallToolResult(isError=True, message=f"Error calling tool: {e}")
+            return CallToolResult(
+                isError=True,
+                content=[TextContent(type="text", text=f"Error calling tool: {e}")],
+            )
 
     async def _get_prompt(
         self, name: str = None, arguments: dict[str, str] = None
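The three hunks above share one fix: `mcp.types.CallToolResult` has no `message` field, so error text must be carried in `content`. A standalone sketch of the corrected shape:

```python
from mcp.types import CallToolResult, TextContent

err = CallToolResult(
    isError=True,
    # Error text travels as content items, not as a `message` attribute.
    content=[TextContent(type="text", text="Tool 'fetch' not found")],
)
```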
mcp_agent/mcp/sampling.py ADDED
@@ -0,0 +1,133 @@
+"""
+Module for handling MCP Sampling functionality without causing circular imports.
+This module is carefully designed to avoid circular imports in the agent system.
+"""
+
+from mcp import ClientSession
+from mcp.types import (
+    CreateMessageRequestParams,
+    CreateMessageResult,
+    TextContent,
+)
+
+from mcp_agent.logging.logger import get_logger
+from mcp_agent.mcp.interfaces import AugmentedLLMProtocol
+from mcp_agent.mcp.prompt_message_multipart import PromptMessageMultipart
+
+# Protocol is sufficient to describe the interface - no need for TYPE_CHECKING imports
+
+logger = get_logger(__name__)
+
+
+def create_sampling_llm(
+    mcp_ctx: ClientSession, model_string: str
+) -> AugmentedLLMProtocol:
+    """
+    Create an LLM instance for sampling without tools support.
+    This utility function creates a minimal LLM instance based on the model string.
+
+    Args:
+        mcp_ctx: The MCP ClientSession
+        model_string: The model to use (e.g. "passthrough", "claude-3-5-sonnet-latest")
+
+    Returns:
+        An initialized LLM instance ready to use
+    """
+    from mcp_agent.workflows.llm.model_factory import ModelFactory
+    from mcp_agent.agents.agent import Agent, AgentConfig
+
+    # Get application context from global state if available
+    # We don't try to extract it from mcp_ctx as they're different contexts
+    app_context = None
+    try:
+        from mcp_agent.context import get_current_context
+
+        app_context = get_current_context()
+    except Exception:
+        logger.warning("App context not available for sampling call")
+
+    # Create a minimal agent configuration
+    agent_config = AgentConfig(
+        name="sampling_agent",
+        instruction="You are a sampling agent.",
+        servers=[],  # No servers needed
+    )
+
+    # Create agent with our application context (not the MCP context)
+    # Set connection_persistence=False to avoid server connections
+    agent = Agent(
+        config=agent_config,
+        context=app_context,
+        server_names=[],  # Make sure no server connections are attempted
+        connection_persistence=False,  # Avoid server connection management
+    )
+
+    # Create the LLM using the factory
+    factory = ModelFactory.create_factory(model_string)
+    llm = factory(agent=agent)
+
+    # Attach the LLM to the agent
+    agent._llm = llm
+
+    return llm
+
+
+async def sample(
+    mcp_ctx: ClientSession, params: CreateMessageRequestParams
+) -> CreateMessageResult:
+    """
+    Handle sampling requests from the MCP protocol.
+    This function extracts the model from the server config and
+    returns a simple response using the specified model.
+    """
+    model = None
+    try:
+        # Extract model from server config
+        if (
+            hasattr(mcp_ctx, "session")
+            and hasattr(mcp_ctx.session, "server_config")
+            and mcp_ctx.session.server_config
+            and hasattr(mcp_ctx.session.server_config, "sampling")
+            and mcp_ctx.session.server_config.sampling.model
+        ):
+            model = mcp_ctx.session.server_config.sampling.model
+
+        if model is None:
+            raise ValueError("No model configured")
+
+        # Create an LLM instance using our utility function
+        llm = create_sampling_llm(mcp_ctx, model)
+
+        # Get user message from the request params
+        user_message = params.messages[0].content.text
+
+        # Create a multipart prompt message with the user's input
+        prompt = PromptMessageMultipart(
+            role="user", content=[TextContent(type="text", text=user_message)]
+        )
+
+        try:
+            # Use the LLM to generate a response
+            logger.info(f"Processing input: {user_message[:50]}...")
+            llm_response = await llm.generate_prompt(prompt, None)
+            logger.info(f"Generated response: {llm_response[:50]}...")
+        except Exception as e:
+            # If there's an error in LLM processing, fall back to echo
+            logger.error(f"Error generating response: {str(e)}")
+            llm_response = f"Echo response: {user_message}"
+
+        # Return the LLM-generated response
+        return CreateMessageResult(
+            role="assistant",
+            content=TextContent(type="text", text=llm_response),
+            model=model,
+            stopReason="endTurn",
+        )
+    except Exception as e:
+        logger.error(f"Error in sampling: {str(e)}")
+        return CreateMessageResult(
+            role="assistant",
+            content=TextContent(type="text", text=f"Error in sampling: {str(e)}"),
+            model=model or "unknown",
+            stopReason="error",
+        )
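A hedged smoke-test of the handler above. `ctx` stands in for the object fast-agent passes in (the code reads `ctx.session.server_config.sampling.model`, as wired by `MCPAgentClientSession`); the request types follow the MCP python-sdk.

```python
from mcp.types import CreateMessageRequestParams, SamplingMessage, TextContent

from mcp_agent.mcp.sampling import sample


async def demo(ctx) -> None:
    params = CreateMessageRequestParams(
        messages=[
            SamplingMessage(
                role="user", content=TextContent(type="text", text="Hello")
            )
        ],
        maxTokens=256,
    )
    result = await sample(ctx, params)
    print(result.content.text)  # sampled completion, or the echo fallback on error
```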
mcp_agent/mcp/stdio.py CHANGED
@@ -14,7 +14,7 @@ from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStre
 logger = get_logger(__name__)
 
 
-# TODO this will be removed when client library 1.4.2 is released
+# TODO this will be removed when client library with https://github.com/modelcontextprotocol/python-sdk/pull/343 is released
 @asynccontextmanager
 async def stdio_client_with_rich_stderr(server: StdioServerParameters):
     """
@@ -95,7 +95,6 @@ async def stdio_client_with_rich_stderr(server: StdioServerParameters):
         async with write_stream_reader:
             async for message in write_stream_reader:
                 json = message.model_dump_json(by_alias=True, exclude_none=True)
-                print(f"**********{id(process.stdin)}")
                 await process.stdin.send(
                     (json + "\n").encode(
                         encoding=server.encoding,
mcp_agent/resources/examples/internal/agent.py CHANGED
@@ -6,7 +6,7 @@ fast = FastAgent("FastAgent Example")
 
 
 # Define the agent
-@fast.agent(servers=["fetch"])
+@fast.agent(servers=["fetch", "mcp_hfspace"])
 async def main():
     # use the --model command line switch or agent arguments to change model
     async with fast.run() as agent:
mcp_agent/resources/examples/internal/fastagent.config.yaml CHANGED
@@ -50,3 +50,6 @@ mcp:
     category:
       command: "uv"
       args: ["run", "prompt_category.py"]
+    mcp_hfspace:
+      command: "npx"
+      args: ["@llmindset/mcp-hfspace"]
mcp_agent/workflows/llm/augmented_llm_passthrough.py CHANGED
@@ -1,5 +1,5 @@
 from typing import Any, List, Optional, Type, Union
-import json
+import json  # Import at the module level
 from mcp import GetPromptResult
 from mcp.types import PromptMessage
 from pydantic_core import from_json
@@ -52,6 +52,13 @@ class PassthroughLLM(AugmentedLLM):
         self.show_user_message(message, model="fastagent-passthrough", chat_turn=0)
         await self.show_assistant_message(message, title="ASSISTANT/PASSTHROUGH")
 
+        # Handle PromptMessage by concatenating all parts
+        if isinstance(message, PromptMessage):
+            parts_text = []
+            for part in message.content:
+                parts_text.append(str(part))
+            return "\n".join(parts_text)
+
         return str(message)
 
     async def _call_tool_and_return_result(self, command: str) -> str:
@@ -65,42 +72,73 @@ class PassthroughLLM(AugmentedLLM):
         Tool result as a string
         """
         try:
-            # Parse the tool name and optional arguments
-            parts = command.split(" ", 2)
-            if len(parts) < 2:
-                return "Error: Invalid format. Expected '***CALL_TOOL <tool_name> [arguments_json]'"
-
-            tool_name = parts[1].strip()
-            arguments = None
-
-            # Parse optional JSON arguments if provided
-            if len(parts) > 2:
-                try:
-                    arguments = json.loads(parts[2])
-                except json.JSONDecodeError:
-                    return f"Error: Invalid JSON arguments: {parts[2]}"
-
-            # Call the tool and get the result
-            self.logger.info(f"Calling tool {tool_name} with arguments {arguments}")
+            tool_name, arguments = self._parse_tool_command(command)
             result = await self.aggregator.call_tool(tool_name, arguments)
+            return self._format_tool_result(tool_name, result)
+        except Exception as e:
+            self.logger.error(f"Error calling tool: {str(e)}")
+            return f"Error calling tool: {str(e)}"
+
+    def _parse_tool_command(self, command: str) -> tuple[str, Optional[dict]]:
+        """
+        Parse a tool command string into tool name and arguments.
+
+        Args:
+            command: The command string in format "***CALL_TOOL <tool_name> [arguments_json]"
+
+        Returns:
+            Tuple of (tool_name, arguments_dict)
+
+        Raises:
+            ValueError: If command format is invalid
+        """
+        parts = command.split(" ", 2)
+        if len(parts) < 2:
+            raise ValueError(
+                "Invalid format. Expected '***CALL_TOOL <tool_name> [arguments_json]'"
+            )
+
+        tool_name = parts[1].strip()
+        arguments = None
 
-            # Format the result as a string
-            if result.isError:
-                return f"Error calling tool '{tool_name}': {result.message}"
+        if len(parts) > 2:
+            try:
+                arguments = json.loads(parts[2])
+            except json.JSONDecodeError:
+                raise ValueError(f"Invalid JSON arguments: {parts[2]}")
 
-            # Extract text content from result
-            result_text = []
+        self.logger.info(f"Calling tool {tool_name} with arguments {arguments}")
+        return tool_name, arguments
+
+    def _format_tool_result(self, tool_name: str, result) -> str:
+        """
+        Format tool execution result as a string.
+
+        Args:
+            tool_name: The name of the tool that was called
+            result: The result returned from the tool
+
+        Returns:
+            Formatted result as a string
+        """
+        if result.isError:
+            error_text = []
             for content_item in result.content:
                 if hasattr(content_item, "text"):
-                    result_text.append(content_item.text)
+                    error_text.append(content_item.text)
                 else:
-                    result_text.append(str(content_item))
+                    error_text.append(str(content_item))
+            error_message = "\n".join(error_text) if error_text else "Unknown error"
+            return f"Error calling tool '{tool_name}': {error_message}"
 
-            return "\n".join(result_text)
+        result_text = []
+        for content_item in result.content:
+            if hasattr(content_item, "text"):
+                result_text.append(content_item.text)
+            else:
+                result_text.append(str(content_item))
 
-        except Exception as e:
-            self.logger.error(f"Error calling tool: {str(e)}")
-            return f"Error calling tool: {str(e)}"
+        return "\n".join(result_text)
 
     async def generate_structured(
         self,
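For reference, the command format `_parse_tool_command` accepts can be exercised directly; `fetch` and the JSON payload are placeholders, and `llm` is assumed to be a `PassthroughLLM` instance:

```python
tool_name, arguments = llm._parse_tool_command(
    '***CALL_TOOL fetch {"url": "https://example.com"}'
)
assert tool_name == "fetch"
assert arguments == {"url": "https://example.com"}
```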
@@ -123,10 +161,25 @@ class PassthroughLLM(AugmentedLLM):
     async def generate_prompt(
         self, prompt: "PromptMessageMultipart", request_params: RequestParams | None
     ) -> str:
-        message = prompt.content[0].text if prompt.content else ""
-        if isinstance(message, str) and message.startswith("***CALL_TOOL "):
-            return await self._call_tool_and_return_result(message)
-        return await self.generate_str(message, request_params)
+        # Check if this prompt contains a tool call command
+        if (
+            prompt.content
+            and prompt.content[0].text
+            and prompt.content[0].text.startswith("***CALL_TOOL ")
+        ):
+            return await self._call_tool_and_return_result(prompt.content[0].text)
+
+        # Process all parts of the PromptMessageMultipart
+        parts_text = []
+        for part in prompt.content:
+            parts_text.append(str(part))
+
+        # If no parts found, return empty string
+        if not parts_text:
+            return ""
+
+        # Join all parts and process with generate_str
+        return await self.generate_str("\n".join(parts_text), request_params)
 
     async def apply_prompt_template(
         self, prompt_result: GetPromptResult, prompt_name: str