massgen 0.0.3__py3-none-any.whl → 0.1.0a1__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- massgen/__init__.py +142 -8
- massgen/adapters/__init__.py +29 -0
- massgen/adapters/ag2_adapter.py +483 -0
- massgen/adapters/base.py +183 -0
- massgen/adapters/tests/__init__.py +0 -0
- massgen/adapters/tests/test_ag2_adapter.py +439 -0
- massgen/adapters/tests/test_agent_adapter.py +128 -0
- massgen/adapters/utils/__init__.py +2 -0
- massgen/adapters/utils/ag2_utils.py +236 -0
- massgen/adapters/utils/tests/__init__.py +0 -0
- massgen/adapters/utils/tests/test_ag2_utils.py +138 -0
- massgen/agent_config.py +329 -55
- massgen/api_params_handler/__init__.py +10 -0
- massgen/api_params_handler/_api_params_handler_base.py +99 -0
- massgen/api_params_handler/_chat_completions_api_params_handler.py +176 -0
- massgen/api_params_handler/_claude_api_params_handler.py +113 -0
- massgen/api_params_handler/_response_api_params_handler.py +130 -0
- massgen/backend/__init__.py +39 -4
- massgen/backend/azure_openai.py +385 -0
- massgen/backend/base.py +341 -69
- massgen/backend/base_with_mcp.py +1102 -0
- massgen/backend/capabilities.py +386 -0
- massgen/backend/chat_completions.py +577 -130
- massgen/backend/claude.py +1033 -537
- massgen/backend/claude_code.py +1203 -0
- massgen/backend/cli_base.py +209 -0
- massgen/backend/docs/BACKEND_ARCHITECTURE.md +126 -0
- massgen/backend/{CLAUDE_API_RESEARCH.md → docs/CLAUDE_API_RESEARCH.md} +18 -18
- massgen/backend/{GEMINI_API_DOCUMENTATION.md → docs/GEMINI_API_DOCUMENTATION.md} +9 -9
- massgen/backend/docs/Gemini MCP Integration Analysis.md +1050 -0
- massgen/backend/docs/MCP_IMPLEMENTATION_CLAUDE_BACKEND.md +177 -0
- massgen/backend/docs/MCP_INTEGRATION_RESPONSE_BACKEND.md +352 -0
- massgen/backend/docs/OPENAI_GPT5_MODELS.md +211 -0
- massgen/backend/{OPENAI_RESPONSES_API_FORMAT.md → docs/OPENAI_RESPONSE_API_TOOL_CALLS.md} +3 -3
- massgen/backend/docs/OPENAI_response_streaming.md +20654 -0
- massgen/backend/docs/inference_backend.md +257 -0
- massgen/backend/docs/permissions_and_context_files.md +1085 -0
- massgen/backend/external.py +126 -0
- massgen/backend/gemini.py +1850 -241
- massgen/backend/grok.py +40 -156
- massgen/backend/inference.py +156 -0
- massgen/backend/lmstudio.py +171 -0
- massgen/backend/response.py +1095 -322
- massgen/chat_agent.py +131 -113
- massgen/cli.py +1504 -287
- massgen/config_builder.py +2165 -0
- massgen/configs/BACKEND_CONFIGURATION.md +458 -0
- massgen/configs/README.md +559 -216
- massgen/configs/ag2/ag2_case_study.yaml +27 -0
- massgen/configs/ag2/ag2_coder.yaml +34 -0
- massgen/configs/ag2/ag2_coder_case_study.yaml +36 -0
- massgen/configs/ag2/ag2_gemini.yaml +27 -0
- massgen/configs/ag2/ag2_groupchat.yaml +108 -0
- massgen/configs/ag2/ag2_groupchat_gpt.yaml +118 -0
- massgen/configs/ag2/ag2_single_agent.yaml +21 -0
- massgen/configs/basic/multi/fast_timeout_example.yaml +37 -0
- massgen/configs/basic/multi/gemini_4o_claude.yaml +31 -0
- massgen/configs/basic/multi/gemini_gpt5nano_claude.yaml +36 -0
- massgen/configs/{gemini_4o_claude.yaml → basic/multi/geminicode_4o_claude.yaml} +3 -3
- massgen/configs/basic/multi/geminicode_gpt5nano_claude.yaml +36 -0
- massgen/configs/basic/multi/glm_gemini_claude.yaml +25 -0
- massgen/configs/basic/multi/gpt4o_audio_generation.yaml +30 -0
- massgen/configs/basic/multi/gpt4o_image_generation.yaml +31 -0
- massgen/configs/basic/multi/gpt5nano_glm_qwen.yaml +26 -0
- massgen/configs/basic/multi/gpt5nano_image_understanding.yaml +26 -0
- massgen/configs/{three_agents_default.yaml → basic/multi/three_agents_default.yaml} +8 -4
- massgen/configs/basic/multi/three_agents_opensource.yaml +27 -0
- massgen/configs/basic/multi/three_agents_vllm.yaml +20 -0
- massgen/configs/basic/multi/two_agents_gemini.yaml +19 -0
- massgen/configs/{two_agents.yaml → basic/multi/two_agents_gpt5.yaml} +14 -6
- massgen/configs/basic/multi/two_agents_opensource_lmstudio.yaml +31 -0
- massgen/configs/basic/multi/two_qwen_vllm_sglang.yaml +28 -0
- massgen/configs/{single_agent.yaml → basic/single/single_agent.yaml} +1 -1
- massgen/configs/{single_flash2.5.yaml → basic/single/single_flash2.5.yaml} +1 -2
- massgen/configs/basic/single/single_gemini2.5pro.yaml +16 -0
- massgen/configs/basic/single/single_gpt4o_audio_generation.yaml +22 -0
- massgen/configs/basic/single/single_gpt4o_image_generation.yaml +22 -0
- massgen/configs/basic/single/single_gpt4o_video_generation.yaml +24 -0
- massgen/configs/basic/single/single_gpt5nano.yaml +20 -0
- massgen/configs/basic/single/single_gpt5nano_file_search.yaml +18 -0
- massgen/configs/basic/single/single_gpt5nano_image_understanding.yaml +17 -0
- massgen/configs/basic/single/single_gptoss120b.yaml +15 -0
- massgen/configs/basic/single/single_openrouter_audio_understanding.yaml +15 -0
- massgen/configs/basic/single/single_qwen_video_understanding.yaml +15 -0
- massgen/configs/debug/code_execution/command_filtering_blacklist.yaml +29 -0
- massgen/configs/debug/code_execution/command_filtering_whitelist.yaml +28 -0
- massgen/configs/debug/code_execution/docker_verification.yaml +29 -0
- massgen/configs/debug/skip_coordination_test.yaml +27 -0
- massgen/configs/debug/test_sdk_migration.yaml +17 -0
- massgen/configs/docs/DISCORD_MCP_SETUP.md +208 -0
- massgen/configs/docs/TWITTER_MCP_ENESCINAR_SETUP.md +82 -0
- massgen/configs/providers/azure/azure_openai_multi.yaml +21 -0
- massgen/configs/providers/azure/azure_openai_single.yaml +19 -0
- massgen/configs/providers/claude/claude.yaml +14 -0
- massgen/configs/providers/gemini/gemini_gpt5nano.yaml +28 -0
- massgen/configs/providers/local/lmstudio.yaml +11 -0
- massgen/configs/providers/openai/gpt5.yaml +46 -0
- massgen/configs/providers/openai/gpt5_nano.yaml +46 -0
- massgen/configs/providers/others/grok_single_agent.yaml +19 -0
- massgen/configs/providers/others/zai_coding_team.yaml +108 -0
- massgen/configs/providers/others/zai_glm45.yaml +12 -0
- massgen/configs/{creative_team.yaml → teams/creative/creative_team.yaml} +16 -6
- massgen/configs/{travel_planning.yaml → teams/creative/travel_planning.yaml} +16 -6
- massgen/configs/{news_analysis.yaml → teams/research/news_analysis.yaml} +16 -6
- massgen/configs/{research_team.yaml → teams/research/research_team.yaml} +15 -7
- massgen/configs/{technical_analysis.yaml → teams/research/technical_analysis.yaml} +16 -6
- massgen/configs/tools/code-execution/basic_command_execution.yaml +25 -0
- massgen/configs/tools/code-execution/code_execution_use_case_simple.yaml +41 -0
- massgen/configs/tools/code-execution/docker_claude_code.yaml +32 -0
- massgen/configs/tools/code-execution/docker_multi_agent.yaml +32 -0
- massgen/configs/tools/code-execution/docker_simple.yaml +29 -0
- massgen/configs/tools/code-execution/docker_with_resource_limits.yaml +32 -0
- massgen/configs/tools/code-execution/multi_agent_playwright_automation.yaml +57 -0
- massgen/configs/tools/filesystem/cc_gpt5_gemini_filesystem.yaml +34 -0
- massgen/configs/tools/filesystem/claude_code_context_sharing.yaml +68 -0
- massgen/configs/tools/filesystem/claude_code_flash2.5.yaml +43 -0
- massgen/configs/tools/filesystem/claude_code_flash2.5_gptoss.yaml +49 -0
- massgen/configs/tools/filesystem/claude_code_gpt5nano.yaml +31 -0
- massgen/configs/tools/filesystem/claude_code_single.yaml +40 -0
- massgen/configs/tools/filesystem/fs_permissions_test.yaml +87 -0
- massgen/configs/tools/filesystem/gemini_gemini_workspace_cleanup.yaml +54 -0
- massgen/configs/tools/filesystem/gemini_gpt5_filesystem_casestudy.yaml +30 -0
- massgen/configs/tools/filesystem/gemini_gpt5nano_file_context_path.yaml +43 -0
- massgen/configs/tools/filesystem/gemini_gpt5nano_protected_paths.yaml +45 -0
- massgen/configs/tools/filesystem/gpt5mini_cc_fs_context_path.yaml +31 -0
- massgen/configs/tools/filesystem/grok4_gpt5_gemini_filesystem.yaml +32 -0
- massgen/configs/tools/filesystem/multiturn/grok4_gpt5_claude_code_filesystem_multiturn.yaml +58 -0
- massgen/configs/tools/filesystem/multiturn/grok4_gpt5_gemini_filesystem_multiturn.yaml +58 -0
- massgen/configs/tools/filesystem/multiturn/two_claude_code_filesystem_multiturn.yaml +47 -0
- massgen/configs/tools/filesystem/multiturn/two_gemini_flash_filesystem_multiturn.yaml +48 -0
- massgen/configs/tools/mcp/claude_code_discord_mcp_example.yaml +27 -0
- massgen/configs/tools/mcp/claude_code_simple_mcp.yaml +35 -0
- massgen/configs/tools/mcp/claude_code_twitter_mcp_example.yaml +32 -0
- massgen/configs/tools/mcp/claude_mcp_example.yaml +24 -0
- massgen/configs/tools/mcp/claude_mcp_test.yaml +27 -0
- massgen/configs/tools/mcp/five_agents_travel_mcp_test.yaml +157 -0
- massgen/configs/tools/mcp/five_agents_weather_mcp_test.yaml +103 -0
- massgen/configs/tools/mcp/gemini_mcp_example.yaml +24 -0
- massgen/configs/tools/mcp/gemini_mcp_filesystem_test.yaml +23 -0
- massgen/configs/tools/mcp/gemini_mcp_filesystem_test_sharing.yaml +23 -0
- massgen/configs/tools/mcp/gemini_mcp_filesystem_test_single_agent.yaml +17 -0
- massgen/configs/tools/mcp/gemini_mcp_filesystem_test_with_claude_code.yaml +24 -0
- massgen/configs/tools/mcp/gemini_mcp_test.yaml +27 -0
- massgen/configs/tools/mcp/gemini_notion_mcp.yaml +52 -0
- massgen/configs/tools/mcp/gpt5_nano_mcp_example.yaml +24 -0
- massgen/configs/tools/mcp/gpt5_nano_mcp_test.yaml +27 -0
- massgen/configs/tools/mcp/gpt5mini_claude_code_discord_mcp_example.yaml +38 -0
- massgen/configs/tools/mcp/gpt_oss_mcp_example.yaml +25 -0
- massgen/configs/tools/mcp/gpt_oss_mcp_test.yaml +28 -0
- massgen/configs/tools/mcp/grok3_mini_mcp_example.yaml +24 -0
- massgen/configs/tools/mcp/grok3_mini_mcp_test.yaml +27 -0
- massgen/configs/tools/mcp/multimcp_gemini.yaml +111 -0
- massgen/configs/tools/mcp/qwen_api_mcp_example.yaml +25 -0
- massgen/configs/tools/mcp/qwen_api_mcp_test.yaml +28 -0
- massgen/configs/tools/mcp/qwen_local_mcp_example.yaml +24 -0
- massgen/configs/tools/mcp/qwen_local_mcp_test.yaml +27 -0
- massgen/configs/tools/planning/five_agents_discord_mcp_planning_mode.yaml +140 -0
- massgen/configs/tools/planning/five_agents_filesystem_mcp_planning_mode.yaml +151 -0
- massgen/configs/tools/planning/five_agents_notion_mcp_planning_mode.yaml +151 -0
- massgen/configs/tools/planning/five_agents_twitter_mcp_planning_mode.yaml +155 -0
- massgen/configs/tools/planning/gpt5_mini_case_study_mcp_planning_mode.yaml +73 -0
- massgen/configs/tools/web-search/claude_streamable_http_test.yaml +43 -0
- massgen/configs/tools/web-search/gemini_streamable_http_test.yaml +43 -0
- massgen/configs/tools/web-search/gpt5_mini_streamable_http_test.yaml +43 -0
- massgen/configs/tools/web-search/gpt_oss_streamable_http_test.yaml +44 -0
- massgen/configs/tools/web-search/grok3_mini_streamable_http_test.yaml +43 -0
- massgen/configs/tools/web-search/qwen_api_streamable_http_test.yaml +44 -0
- massgen/configs/tools/web-search/qwen_local_streamable_http_test.yaml +43 -0
- massgen/coordination_tracker.py +708 -0
- massgen/docker/README.md +462 -0
- massgen/filesystem_manager/__init__.py +21 -0
- massgen/filesystem_manager/_base.py +9 -0
- massgen/filesystem_manager/_code_execution_server.py +545 -0
- massgen/filesystem_manager/_docker_manager.py +477 -0
- massgen/filesystem_manager/_file_operation_tracker.py +248 -0
- massgen/filesystem_manager/_filesystem_manager.py +813 -0
- massgen/filesystem_manager/_path_permission_manager.py +1261 -0
- massgen/filesystem_manager/_workspace_tools_server.py +1815 -0
- massgen/formatter/__init__.py +10 -0
- massgen/formatter/_chat_completions_formatter.py +284 -0
- massgen/formatter/_claude_formatter.py +235 -0
- massgen/formatter/_formatter_base.py +156 -0
- massgen/formatter/_response_formatter.py +263 -0
- massgen/frontend/__init__.py +1 -2
- massgen/frontend/coordination_ui.py +471 -286
- massgen/frontend/displays/base_display.py +56 -11
- massgen/frontend/displays/create_coordination_table.py +1956 -0
- massgen/frontend/displays/rich_terminal_display.py +1259 -619
- massgen/frontend/displays/simple_display.py +9 -4
- massgen/frontend/displays/terminal_display.py +27 -68
- massgen/logger_config.py +681 -0
- massgen/mcp_tools/README.md +232 -0
- massgen/mcp_tools/__init__.py +105 -0
- massgen/mcp_tools/backend_utils.py +1035 -0
- massgen/mcp_tools/circuit_breaker.py +195 -0
- massgen/mcp_tools/client.py +894 -0
- massgen/mcp_tools/config_validator.py +138 -0
- massgen/mcp_tools/docs/circuit_breaker.md +646 -0
- massgen/mcp_tools/docs/client.md +950 -0
- massgen/mcp_tools/docs/config_validator.md +478 -0
- massgen/mcp_tools/docs/exceptions.md +1165 -0
- massgen/mcp_tools/docs/security.md +854 -0
- massgen/mcp_tools/exceptions.py +338 -0
- massgen/mcp_tools/hooks.py +212 -0
- massgen/mcp_tools/security.py +780 -0
- massgen/message_templates.py +342 -64
- massgen/orchestrator.py +1515 -241
- massgen/stream_chunk/__init__.py +35 -0
- massgen/stream_chunk/base.py +92 -0
- massgen/stream_chunk/multimodal.py +237 -0
- massgen/stream_chunk/text.py +162 -0
- massgen/tests/mcp_test_server.py +150 -0
- massgen/tests/multi_turn_conversation_design.md +0 -8
- massgen/tests/test_azure_openai_backend.py +156 -0
- massgen/tests/test_backend_capabilities.py +262 -0
- massgen/tests/test_backend_event_loop_all.py +179 -0
- massgen/tests/test_chat_completions_refactor.py +142 -0
- massgen/tests/test_claude_backend.py +15 -28
- massgen/tests/test_claude_code.py +268 -0
- massgen/tests/test_claude_code_context_sharing.py +233 -0
- massgen/tests/test_claude_code_orchestrator.py +175 -0
- massgen/tests/test_cli_backends.py +180 -0
- massgen/tests/test_code_execution.py +679 -0
- massgen/tests/test_external_agent_backend.py +134 -0
- massgen/tests/test_final_presentation_fallback.py +237 -0
- massgen/tests/test_gemini_planning_mode.py +351 -0
- massgen/tests/test_grok_backend.py +7 -10
- massgen/tests/test_http_mcp_server.py +42 -0
- massgen/tests/test_integration_simple.py +198 -0
- massgen/tests/test_mcp_blocking.py +125 -0
- massgen/tests/test_message_context_building.py +29 -47
- massgen/tests/test_orchestrator_final_presentation.py +48 -0
- massgen/tests/test_path_permission_manager.py +2087 -0
- massgen/tests/test_rich_terminal_display.py +14 -13
- massgen/tests/test_timeout.py +133 -0
- massgen/tests/test_v3_3agents.py +11 -12
- massgen/tests/test_v3_simple.py +8 -13
- massgen/tests/test_v3_three_agents.py +11 -18
- massgen/tests/test_v3_two_agents.py +8 -13
- massgen/token_manager/__init__.py +7 -0
- massgen/token_manager/token_manager.py +400 -0
- massgen/utils.py +52 -16
- massgen/v1/agent.py +45 -91
- massgen/v1/agents.py +18 -53
- massgen/v1/backends/gemini.py +50 -153
- massgen/v1/backends/grok.py +21 -54
- massgen/v1/backends/oai.py +39 -111
- massgen/v1/cli.py +36 -93
- massgen/v1/config.py +8 -12
- massgen/v1/logging.py +43 -127
- massgen/v1/main.py +18 -32
- massgen/v1/orchestrator.py +68 -209
- massgen/v1/streaming_display.py +62 -163
- massgen/v1/tools.py +8 -12
- massgen/v1/types.py +9 -23
- massgen/v1/utils.py +5 -23
- massgen-0.1.0a1.dist-info/METADATA +1287 -0
- massgen-0.1.0a1.dist-info/RECORD +273 -0
- massgen-0.1.0a1.dist-info/entry_points.txt +2 -0
- massgen/frontend/logging/__init__.py +0 -9
- massgen/frontend/logging/realtime_logger.py +0 -197
- massgen-0.0.3.dist-info/METADATA +0 -568
- massgen-0.0.3.dist-info/RECORD +0 -76
- massgen-0.0.3.dist-info/entry_points.txt +0 -2
- /massgen/backend/{Function calling openai responses.md → docs/Function calling openai responses.md} +0 -0
- {massgen-0.0.3.dist-info → massgen-0.1.0a1.dist-info}/WHEEL +0 -0
- {massgen-0.0.3.dist-info → massgen-0.1.0a1.dist-info}/licenses/LICENSE +0 -0
- {massgen-0.0.3.dist-info → massgen-0.1.0a1.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,177 @@
# MCP Integration in ClaudeBackend

## Overview

The `ClaudeBackend` in `massgen/backend/claude.py` features a robust and resilient Model Context Protocol (MCP) integration. Unlike other backends that might differentiate between multiple transport types, the Claude backend uses a **unified recursive model** to handle all external tools. It leverages a single `MultiMCPClient` to manage both `stdio` and `streamable-http` servers, providing a streamlined and powerful way to extend Claude's capabilities with custom tools.

The core of this implementation is the `_stream_mcp_recursive` function, which allows for multiple rounds of tool execution within a single user request, enabling complex, multi-step agentic workflows.

## Key Features

- **Recursive Execution Loop**: Enables multi-step tool usage where the model can call tools, process results, and call more tools in a continuous loop until it arrives at a final answer.
- **Unified Tool Management**: Seamlessly combines built-in Claude tools (Web Search, Code Execution), user-defined functions, and external MCP tools into a single, coherent toolset for the model.
- **Claude-Specific Format Conversion**: Automatically converts all tools into the specific `input_schema` format required by the Anthropic Messages API.
- **Resilience and Fault Tolerance**: Integrates a circuit breaker pattern to prevent calls to failing MCP servers and a retry mechanism with exponential backoff for individual function calls.
- **Advanced Streaming & UI Feedback**: Provides real-time status updates for MCP operations (connection, tool calls, completion) directly in the stream, allowing for a transparent user experience.
- **Graceful Error Handling**: If MCP servers fail during setup or execution, the backend automatically falls back to a non-MCP mode, ensuring the user still receives a response from the model.

## Architecture

### Architectural Flow

The Claude backend uses a simplified, unified architecture compared to multi-transport backends. All MCP tools (`stdio` and `streamable-http`) are managed by a single `MultiMCPClient`, and the core logic is contained within a recursive loop.

```mermaid
graph TD
    subgraph "Configuration Layer"
        A1[YAML Configuration] --> A2[mcp_servers]
    end

    subgraph "ClaudeBackend Layer"
        B1[__init__] --> B2[Circuit Breaker Setup]
        B2 --> B3[_setup_mcp_tools]
        B3 --> B4["MultiMCPClient<br/>(stdio/streamable-http)"]
    end

    subgraph "Recursive Execution Loop"
        C1[_stream_mcp_recursive] --> C2{Model Needs Tool?}
        C2 -->|Yes| C3[_execute_mcp_function_with_retry]
        C3 --> C4[Append Result to History]
        C4 --> C1
        C2 -->|No| C5[Yield Final Response]
    end

    subgraph "External Systems"
        D1[Anthropic API]
        D2[MCP Servers]
    end

    A1 --> B1
    B4 --> D2
    C1 --> D1
    C3 --> B4
```

### Unified MCP Client

The Claude backend does not separate MCP servers by transport type. Instead, it uses a single `MultiMCPClient` instance from `massgen.mcp_tools` to manage all configured `stdio` and `streamable-http` servers. This simplifies the architecture and allows for consistent handling of all external tools.

```python
# A single client manages all stdio and streamable-http servers
self._mcp_client: Optional[MultiMCPClient] = None

# A single function registry holds all discovered tools
self.functions: Dict[str, Function] = {}
```

### Recursive Execution Model

The backend's power lies in its recursive design. When the model uses an MCP tool, the backend doesn't simply return the result. Instead, it executes the tool, appends the result to the conversation history, and **calls the model again with the updated history**. This loop continues until the model generates a text response instead of another tool call.

This approach allows the agent to perform complex tasks that require sequential tool use, such as reading a file, analyzing its content, and then writing a summary to a new file.

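To make the recursion concrete, here is a minimal sketch of the loop in Python. It is an illustration only: the helper names `_call_claude` and `_to_stream_chunk`, and the exact message bookkeeping, are assumptions rather than the literal code in `claude.py`.

```python
# Simplified sketch of the recursive MCP streaming loop; not the literal
# implementation of ClaudeBackend._stream_mcp_recursive.
async def _stream_mcp_recursive(self, messages, tools):
    response = await self._call_claude(messages, tools)      # assumed wrapper around messages.create
    tool_uses = [b for b in response.content if b.type == "tool_use"]
    if not tool_uses:
        # Base case: no tool calls left, stream the final text back to the caller.
        yield self._to_stream_chunk(response)                 # assumed helper
        return

    # Echo the assistant turn (including its tool_use blocks) into the history,
    # then attach one tool_result per call.
    messages.append({"role": "assistant", "content": response.content})
    results = []
    for block in tool_uses:
        # Argument serialization/parsing is omitted here for brevity.
        output = await self._execute_mcp_function_with_retry(block.name, block.input)
        results.append({"type": "tool_result", "tool_use_id": block.id, "content": output})
    messages.append({"role": "user", "content": results})

    # Recursive case: call the model again with the updated history.
    async for chunk in self._stream_mcp_recursive(messages, tools):
        yield chunk
```
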
## Core Components

- **`ClaudeBackend.__init__`**: Initializes the backend, loads MCP server configurations from the agent config, and prepares the single circuit breaker for all MCP tools.

- **`_setup_mcp_tools`**: This async method is the heart of the connection process. It normalizes server configurations, filters out any servers currently blocked by the circuit breaker, and initializes the `MultiMCPClient` to connect to the available servers and discover their tools.

- **`_convert_mcp_tools_to_claude_format`**: A utility that transforms the discovered MCP functions into the JSON format that the Anthropic API expects for its `tools` parameter.

- **`_build_claude_api_params`**: Constructs the final payload for the Anthropic API call. It gathers all tools (built-in, user-defined, and MCP) and converts them into the correct format. It also handles message history conversion.

- **`_stream_mcp_recursive`**: The core of the agentic loop. This function:
  1. Sends the current message history to the Claude API.
  2. Streams the response, identifying any `tool_use` blocks.
  3. If an MCP tool is called, it executes it via `_execute_mcp_function_with_retry`.
  4. Appends the tool result to the message history.
  5. **Recursively calls itself** with the new, updated message history.
  6. If no MCP tools are called, it yields the final text response and terminates the loop.

- **`_execute_mcp_function_with_retry`**: Executes a single MCP function, handling JSON parsing, retries with backoff, and reporting success or failure to the circuit breaker.

- **`_handle_mcp_error_and_fallback`**: A crucial error handler. If the MCP client fails to connect or a tool fails catastrophically, this function catches the error, yields a user-friendly message, and re-runs the request in a non-MCP mode.

- **Resource Management (`__aenter__`/`__aexit__`)**: The backend uses an async context manager to ensure that MCP resources are properly managed.
  - The `__aenter__` method is responsible for calling `_setup_mcp_tools`, which establishes connections to the MCP servers before a request is processed.
  - The `__aexit__` method ensures that `cleanup_mcp` is called when the context is exited, guaranteeing that connections are closed and resources are released, even if errors occur.

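As a rough picture of the retry behaviour described for `_execute_mcp_function_with_retry`, a standalone sketch might look like this. The `client.call_tool` call and the circuit-breaker methods shown are illustrative assumptions, not massgen's exact interfaces.

```python
import asyncio
import json

# Rough sketch of retry-with-backoff around a single MCP call; the real method in
# claude.py also streams status updates and uses massgen's own circuit breaker API.
async def execute_mcp_function_with_retry(client, breaker, name, arguments_json, max_retries=3):
    args = json.loads(arguments_json or "{}")
    delay = 1.0
    for attempt in range(1, max_retries + 1):
        try:
            result = await client.call_tool(name, args)   # assumed MultiMCPClient call
            breaker.record_success(name)
            return result
        except Exception as exc:
            breaker.record_failure(name)
            if attempt == max_retries:
                return f"Error executing {name}: {exc}"
            await asyncio.sleep(delay)
            delay *= 2                                     # exponential backoff
```
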
## Execution Workflow

A typical request involving an MCP tool follows this sequence:

```mermaid
sequenceDiagram
    participant U as User
    participant CB as ClaudeBackend
    participant API as Anthropic API
    participant MCP as MCP Server

    U->>CB: stream_with_tools(messages)
    activate CB
    CB->>CB: _setup_mcp_tools()
    CB->>MCP: Connect & Discover Tools
    MCP-->>CB: Tools Registered
    CB->>CB: _stream_mcp_recursive(messages)
    activate CB
    CB->>API: messages.create(messages, tools)
    API-->>CB: Stream [tool_use block]
    CB->>MCP: _execute_mcp_function_with_retry()
    activate MCP
    MCP-->>CB: Tool Result
    deactivate MCP
    CB->>CB: Append Result to History
    CB->>API: messages.create(updated_messages, tools)
    API-->>CB: Stream [text content]
    deactivate CB
    CB-->>U: yield StreamChunk(content)
    deactivate CB
```

## Design Choices and Implications

- **Recursive Model**: The recursive approach is extremely powerful for creating autonomous agents that can solve multi-step problems. However, it carries a risk of long-running loops and high token costs. This is mitigated by the `MCPMessageManager`, which trims the conversation history to prevent uncontrolled growth.

- **Unified Client**: Using a single client for `stdio` and `streamable-http` keeps the architecture simple. The backend retains full control over the connection and can reach local development servers, but it would not automatically take advantage of a provider-native HTTP transport if Anthropic were to introduce one in the future.

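The bounded-history idea mentioned for `MCPMessageManager` can be pictured with a toy helper; the real class may trim by token count rather than message count, so treat this purely as an illustration.

```python
# Toy illustration of bounded conversation history; not the MCPMessageManager API.
def trim_history(messages: list[dict], max_messages: int = 40) -> list[dict]:
    """Keep the first (system) message plus the most recent turns."""
    if len(messages) <= max_messages:
        return messages
    head, tail = messages[:1], messages[-(max_messages - 1):]
    return head + tail
```
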
## Security and Performance

- **Network Flexibility**: Because the implementation uses the `mcp_tools` library for `streamable-http`, it is not subject to the same network restrictions as a native API implementation. It can connect to servers on `localhost`, private networks, and corporate VPNs, making it ideal for development and internal enterprise tools.
- **Asynchronous Operations**: The entire workflow is built on `asyncio` for non-blocking I/O, ensuring efficient handling of network requests and subprocesses.
- **Circuit Breaker**: Prevents the system from repeatedly trying to connect to a failing server, improving stability.
- **Bounded History**: The `MCPMessageManager` is used to trim the conversation history within the recursive loop, preventing infinite growth and excessive token consumption.
- **Thread Safety**: A lock (`_stats_lock`) is used to safely increment tool usage counters in the async environment.

## Configuration Example

The `ClaudeBackend` is configured via the same `mcp_servers` block in the agent's YAML configuration file. It supports `stdio` and `streamable-http` transport types.

```yaml
# Example configuration for the Claude backend
mcp_servers:
  # stdio transport: for local command-line tools
  file_system_tools:
    type: "stdio"
    command: "python"
    args: ["-m", "mcp_file_server"]
    # Optional: specify which tools to use from this server
    allowed_tools:
      - "read_file"
      - "write_file"
      - "list_directory"

  # streamable-http transport: for web-based streaming services
  web_search_service:
    type: "streamable-http"
    url: "https://api.custom_search.example.com/mcp"
    # Optional: add authorization headers
    authorization: "Bearer ${SEARCH_API_TOKEN}"
```

### Key Configuration Options

- **`type`**: Specifies the transport. For the `ClaudeBackend`, this can be `stdio` or `streamable-http`.
- **`command` / `args`**: Used for `stdio` servers to define the command to execute the local tool server.
- **`url`**: Used for `streamable-http` servers to specify the endpoint of the remote tool server.
- **`authorization`**: (Optional) Provides an authorization header for `streamable-http` servers. It supports environment variable substitution with the `${VAR_NAME}` syntax.
- **`allowed_tools` / `exclude_tools`**: (Optional) A list of tool names to explicitly include or exclude from a given server, allowing for fine-grained control over the available tools.

@@ -0,0 +1,352 @@
# MCP Integration in ResponseBackend (OpenAI Response API)

## Overview

The `ResponseBackend` class in `massgen/backend/response.py` implements a sophisticated **three-transport Model Context Protocol (MCP)** integration system that allows seamless connectivity with different types of MCP servers while maintaining fault tolerance and optimal performance.

## Three Transport Types

The system supports three distinct transport types, each serving different use cases:

### 1. **stdio Transport**
- **Purpose**: Process-based MCP servers running as local subprocesses
- **Implementation**: Uses `MultiMCPClient` from `massgen.mcp_tools`
- **Communication**: Standard input/output streams
- **Use Cases**: Local tools, file system operations, command-line utilities
- **Examples**: File operations, local development tools, system utilities

### 2. **streamable-http Transport**
- **Purpose**: Web-based MCP servers with streaming capabilities
- **Implementation**: Uses `MultiMCPClient` from `massgen.mcp_tools`
- **Communication**: HTTP with streaming support
- **Use Cases**: Remote web services, cloud APIs, streaming data sources
- **Examples**: Weather APIs, stock data, real-time information services

### 3. **http Transport**
- **Purpose**: Direct OpenAI-native MCP server connections
- **Implementation**: Uses OpenAI's built-in MCP client
- **Communication**: Direct HTTP between OpenAI and external servers
- **Use Cases**: Enterprise MCP servers, high-performance services
- **Examples**: Corporate MCP servers, specialized AI services

## Why Three Transport Types?

### 1. **Flexibility and Compatibility**
- **stdio**: Supports traditional command-line tools and legacy systems
- **streamable-http**: Enables modern web-based services with real-time capabilities
- **http**: Provides direct, high-performance connectivity for enterprise deployments

### 2. **Optimized Performance**
- **Local Execution**: stdio servers run locally with minimal latency
- **Streaming Efficiency**: streamable-http supports real-time data streaming
- **Direct Connectivity**: http transport eliminates intermediate processing

### 3. **Reliability and Fault Tolerance**
- **Circuit Breakers**: Each transport type has dedicated circuit breaker protection
- **Graceful Degradation**: Failure in one transport doesn't affect others
- **Automatic Fallback**: System continues operating even if some transports fail

## Architecture

### Transport Separation
```python
# Servers are separated by transport type during initialization
self._mcp_tools_servers: List[Dict[str, Any]] = []  # stdio + streamable-http
self._http_servers: List[Dict[str, Any]] = []        # Native OpenAI HTTP
```

### Dual Execution Modes
The backend operates in two distinct modes based on available MCP servers:

#### Mode 1: MCP Tools Mode (stdio + streamable-http)
- Uses custom `MultiMCPClient` for server management
- Implements function execution loop with retry logic
- Supports complex multi-turn conversations with tool calls
- Handles streaming responses with real-time function execution

#### Mode 2: HTTP-Only Mode (http transport)
- Uses OpenAI's native MCP client
- Direct server-to-server communication
- Simplified execution model
- Better performance for simple request-response patterns

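A compressed, illustrative view of how the two modes might be selected is shown below. The helper names (`_stream_with_mcp_tools`, `_stream_with_native_http_mcp`, `_stream_without_mcp`) are assumptions, and the sketch treats the modes as mutually exclusive even though in practice both kinds of tools may be present in a single request.

```python
# Illustrative mode selection; simplified relative to ResponseBackend.
async def stream_with_tools(self, messages, tools):
    if self._mcp_tools_servers:
        # Mode 1: MultiMCPClient-managed stdio/streamable-http servers with a
        # local function-execution loop.
        async for chunk in self._stream_with_mcp_tools(messages, tools):
            yield chunk
    elif self._http_servers:
        # Mode 2: hand the MCP server definitions to OpenAI's native client and
        # let the Response API call them directly.
        async for chunk in self._stream_with_native_http_mcp(messages, tools):
            yield chunk
    else:
        async for chunk in self._stream_without_mcp(messages, tools):
            yield chunk
```
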
## Key Components

### 1. **MCP Server Management**
```python
def _separate_mcp_servers_by_transport_type(self) -> None:
    """Separate MCP servers into local execution and HTTP transport types."""
```

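The snippet above shows only the signature. A minimal sketch of what the separation might look like follows; the configuration access (`self.config`) and error handling are assumptions for illustration.

```python
# Illustrative sketch only; the real method lives in massgen/backend/response.py.
def _separate_mcp_servers_by_transport_type(self) -> None:
    """Split configured servers into MultiMCPClient-managed vs OpenAI-native groups."""
    self._mcp_tools_servers = []
    self._http_servers = []
    for name, server in (self.config.get("mcp_servers") or {}).items():
        transport = server.get("type", "stdio")
        entry = {"name": name, **server}
        if transport in ("stdio", "streamable-http"):
            self._mcp_tools_servers.append(entry)   # handled by MultiMCPClient
        elif transport == "http":
            self._http_servers.append(entry)        # passed to OpenAI's native MCP client
        else:
            raise ValueError(f"Unknown MCP transport type: {transport}")
```
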
### 2. **Circuit Breaker System**
- **Dedicated Circuit Breakers**: Separate protection for each transport type
- **Automatic Failover**: Servers are skipped when circuit breakers trip
- **Health Monitoring**: Real-time tracking of server availability

```python
# Circuit breakers for different transport types
self._mcp_tools_circuit_breaker = None  # For stdio + streamable-http
self._http_circuit_breaker = None       # For OpenAI native http
```

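For readers unfamiliar with the pattern, a toy circuit breaker is sketched below. It is not the `massgen.mcp_tools.circuit_breaker` implementation; it only illustrates the trip-on-repeated-failure and cooldown behaviour the backend relies on.

```python
import time

class SimpleCircuitBreaker:
    """Toy circuit breaker illustrating the pattern; not the massgen implementation."""

    def __init__(self, failure_threshold: int = 3, cooldown_seconds: float = 60.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self._failures: dict[str, int] = {}
        self._opened_at: dict[str, float] = {}

    def record_success(self, server: str) -> None:
        self._failures.pop(server, None)
        self._opened_at.pop(server, None)

    def record_failure(self, server: str) -> None:
        self._failures[server] = self._failures.get(server, 0) + 1
        if self._failures[server] >= self.failure_threshold:
            self._opened_at[server] = time.monotonic()  # trip the breaker

    def is_open(self, server: str) -> bool:
        opened = self._opened_at.get(server)
        if opened is None:
            return False
        if time.monotonic() - opened >= self.cooldown_seconds:
            # Cooldown elapsed: allow the server to be retried.
            self.record_success(server)
            return False
        return True
```
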
### 3. **Function Conversion and Execution**
- **Automatic Tool Discovery**: Converts MCP tools to OpenAI function format
- **Retry Logic**: Exponential backoff for failed function calls
- **Error Handling**: Comprehensive error recovery mechanisms

```python
async def _execute_mcp_function_with_retry(
    self, function_name: str, arguments_json: str, max_retries: int = 3
) -> str:
```

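A hypothetical helper showing the shape of the MCP-to-OpenAI conversion is sketched below. The attribute names follow the MCP tool schema (`name`, `description`, `inputSchema`); the exact massgen function that performs this mapping may differ.

```python
# Illustrative conversion of discovered MCP tools into OpenAI function-tool entries.
def convert_mcp_tools_to_openai_functions(mcp_tools) -> list[dict]:
    functions = []
    for tool in mcp_tools:
        functions.append(
            {
                "type": "function",
                "name": tool.name,
                "description": tool.description or "",
                # MCP exposes a JSON Schema for arguments; the function tool
                # carries it under "parameters".
                "parameters": tool.inputSchema or {"type": "object", "properties": {}},
            }
        )
    return functions
```
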
## Execution Flow

### 1. **Enhanced Initialization with Error Recovery**
1. Parse MCP server configurations
2. Separate servers by transport type
3. Initialize circuit breakers for each transport
4. **Setup MultiMCPClient with error capture**:
   - Try to establish MCP connections
   - **Catch setup errors at top level** (no longer silently swallowed)
   - **Emit user-visible status messages** for setup failures
   - **Fall back gracefully** to non-MCP mode if setup fails
5. **Report connection status** to user via StreamChunk messages

### 2. **Intelligent Tool Processing**
1. Convert stdio/streamable-http tools to OpenAI function format
2. Convert http servers to OpenAI native MCP format
3. Apply circuit breaker filtering
4. **Check MCP availability** and notify if tools are unavailable
5. Combine with provider tools (web search, code interpreter)

### 3. **Request Processing with Multi-Level Fallback**
1. Choose execution mode based on available servers
2. Build API parameters with appropriate tool formats
3. **Handle setup-phase errors**: If MCP setup failed, use non-MCP streaming
4. Execute request with chosen transport
5. **Handle streaming-phase errors**: If MCP fails during execution, fall back to non-MCP
6. Handle streaming responses and function calls

### 4. **Robust Function Execution Loop** (stdio + streamable-http)
1. Detect function calls in streaming response
2. Execute functions with retry logic and exponential backoff
3. Update message history with results
4. **Handle function execution failures**: Continue with other functions or fall back
5. Continue iteration until completion or max iterations reached
6. **Provide status updates** for each execution step

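A compressed sketch of the loop described in step 4 above; the surrounding helpers (`_call_api`, `_extract_function_calls`) and the message format are simplified assumptions rather than the exact control flow in `response.py`.

```python
# Simplified sketch of the stdio/streamable-http function-execution loop.
async def run_function_loop(self, messages, tools, max_iterations: int = 10):
    for _ in range(max_iterations):
        response = await self._call_api(messages, tools)      # assumed helper
        calls = self._extract_function_calls(response)         # assumed helper
        if not calls:
            return response                                     # final answer reached
        for call in calls:
            output = await self._execute_mcp_function_with_retry(
                call.name, call.arguments_json
            )
            # Feed the tool output back so the next iteration can build on it
            # (the real code uses the Response API's function-call output format).
            messages.append({"role": "tool", "name": call.name, "content": output})
    raise RuntimeError("Reached max MCP tool iterations without a final response")
```
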
## Error Handling and Recovery

### 1. **Multi-Phase Error Detection**
The system implements comprehensive error handling across two critical phases:

#### Setup Phase Error Handling (`__aenter__`)
- **Top-Level Error Capture**: MCP setup errors in `__aenter__` are caught at the top level instead of being silently swallowed
- **User-Visible Notifications**: Clear status messages inform users when MCP setup fails
- **Graceful Fallback**: Automatic fallback to non-MCP streaming when setup errors occur
- **Error Propagation**: Setup errors no longer get silently swallowed

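A minimal sketch of this setup-phase capture, assuming a stored error flag and a module-level logger; the real `__aenter__` also emits the user-visible StreamChunk notices shown later in this document.

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative sketch of setup-phase error capture; attribute names are assumptions.
async def __aenter__(self):
    self._mcp_setup_error = None
    try:
        await self._setup_mcp_tools()        # connect stdio/streamable-http servers
    except Exception as exc:                 # capture at the top level, never swallow silently
        self._mcp_setup_error = exc          # remembered so streaming can fall back to non-MCP mode
        logger.warning("MCP setup failed; continuing without MCP: %s", exc)
    return self
```
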
#### Runtime Phase Error Handling (Streaming)
- **Circuit Breaker Integration**: Automatic detection of server failures during streaming
- **Temporary Exclusion**: Failed servers are temporarily excluded with status tracking
- **Gradual Recovery**: Servers are gradually reintroduced after cooldown periods

### 2. **Enhanced Retry Mechanisms**
- **Exponential Backoff**: Increasing delays between retries for function calls
- **Maximum Retry Limits**: Prevent infinite retry loops with configurable limits
- **Error Categorization**: Different handling for MCP-specific vs generic errors
- **Function-Level Retries**: Individual MCP function calls have dedicated retry logic

### 3. **Advanced Graceful Degradation**
- **Transport Isolation**: Failure in one transport doesn't affect others
- **Multi-Level Fallback**: Setup errors → streaming errors → non-MCP mode
- **User-Friendly Messages**: Clear, contextual error communication:

  ```python
  # Setup failure notification
  "⚠️ [MCP] Setup failed; continuing without MCP"

  # Runtime error with MCP tools
  "⚠️ [MCP Tool] Connection failed; falling back to non-MCP tools"

  # Successful MCP connection
  "✅ [MCP] Connected to {n} servers"
  ```

- **Provider Tool Preservation**: Web search and code interpreter tools remain available during MCP failures

## Configuration Example

```yaml
# Example configuration showing all three transport types
mcp_servers:
  # stdio transport - local command-line tools
  file_tools:
    type: "stdio"
    command: "python"
    args: ["-m", "mcp_file_server"]

  # streamable-http transport - web-based streaming services
  weather_service:
    type: "streamable-http"
    url: "https://api.weather.example.com/mcp"

  # http transport - direct OpenAI MCP servers with authorization
  enterprise_tools:
    type: "http"
    url: "https://mcp.enterprise.example.com"
    authorization: "${ENTERPRISE_MCP_TOKEN}"  # Environment variable substitution
    require_approval: "never"  # Optional: "never" (default) or "always"
    allowed_tools:  # Optional: limit to specific tools
      - "create_report"
      - "get_data"

  # HTTP MCP server that always requires approval
  secure_payment_service:
    type: "http"
    url: "https://mcp.stripe.com"
    authorization: "${STRIPE_OAUTH_ACCESS_TOKEN}"
    require_approval: "always"  # User must approve each tool call
```

### Configuration Options

#### require_approval
- **"never"**: Tools are executed automatically (default behavior)
- **"always"**: User must approve each tool execution before it runs
- **Purpose**: Control user interaction level for sensitive operations

#### authorization
- **Format**: Supports environment variable substitution with `${VAR_NAME}` pattern
- **Purpose**: Authenticate with HTTP MCP servers requiring API keys or OAuth tokens
- **Examples**: `"Bearer ${API_TOKEN}"`, `"${OAUTH_ACCESS_TOKEN}"`

#### allowed_tools
- **Purpose**: Limit server to specific tools (optional)
- **Format**: List of tool names to include
- **Use Case**: Restrict access to sensitive operations

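The `${VAR_NAME}` substitution described above can be pictured with a small helper like the following; it is illustrative only, and massgen's actual resolution logic may differ (for example, in how missing variables are reported).

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{([A-Za-z0-9_]+)\}")

def resolve_env_placeholders(value: str) -> str:
    """Replace ${VAR_NAME} placeholders with values from the environment."""
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        resolved = os.environ.get(name)
        if resolved is None:
            raise KeyError(f"Environment variable {name} is not set")
        return resolved
    return _ENV_PATTERN.sub(_sub, value)

# Example: "Bearer ${SEARCH_API_TOKEN}" becomes "Bearer sk-..." once the variable is exported.
```
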
## Performance Optimization

### 1. **Resource Management**
- **Connection Pooling**: Efficient connection reuse
- **Memory Management**: Bounded message history to prevent memory leaks
- **Async Processing**: Non-blocking operations for better performance

### 2. **Monitoring and Metrics**
- **Call Tracking**: Statistics on tool calls and failures
- **Performance Logging**: Detailed logging for optimization
- **Circuit Breaker Status**: Real-time health monitoring

### 3. **Scalability Features**
- **Multi-Server Support**: Simultaneous connections to multiple servers
- **Tool Filtering**: Selective tool inclusion/exclusion
- **Concurrent Execution**: Parallel tool execution when possible

## Security Considerations

### 1. **URL Validation**
- **Security Checks**: Validation of HTTP server URLs
- **Localhost Support**: Allowance for development environments
- **Private Network Support**: Support for internal corporate networks

### 2. **Command Sanitization**
- **Safe Execution**: Secure handling of stdio commands
- **Input Validation**: Validation of all function arguments
- **Sandboxing**: Isolation of potentially dangerous operations

## Limitations of OpenAI Native HTTP Transport

While the http transport type offers direct connectivity and high performance, it has significant limitations compared to stdio and streamable-http transports:

### 1. **Network Accessibility Restrictions**
- **No Local Servers**: OpenAI's native MCP client cannot access local HTTP servers running on `localhost`, `127.0.0.1`, or private IP addresses
- **Corporate Network Restrictions**: Servers behind corporate firewalls or in private networks are inaccessible
- **VPN/Proxy Limitations**: Some VPN configurations may block access to MCP servers
- **Geographic Restrictions**: OpenAI's network may have geographic limitations for certain server locations

### 2. **Security and Compliance Constraints**
- **OpenAI Security Policies**: All HTTP MCP servers must comply with OpenAI's security requirements and acceptable use policies
- **Content Filtering**: OpenAI may filter or block certain types of content or server responses
- **Rate Limiting**: Subject to OpenAI's rate limiting and usage policies
- **Data Privacy**: All server communications go through OpenAI's infrastructure

### 3. **Server Configuration Requirements**
- **HTTPS Mandatory**: All HTTP MCP servers must use HTTPS with valid SSL certificates
- **Domain Registration**: Servers must have properly registered domain names
- **Public Accessibility**: Servers must be publicly accessible on the internet
- **CORS Configuration**: Proper Cross-Origin Resource Sharing setup may be required

### 4. **Development and Testing Challenges**
- **Local Development**: Cannot test with local MCP servers during development
- **Staging Environments**: Staging servers in private networks are inaccessible
- **Debugging Complexity**: Harder to debug HTTP MCP issues due to limited visibility
- **Testing Overhead**: Requires deploying servers to public environments for testing

### 5. **Feature Limitations**
- **Streaming Support**: Limited or no support for streaming responses compared to streamable-http
- **Custom Headers**: Restrictions on custom HTTP headers and authentication methods
- **Session Management**: Limited control over session persistence and state management
- **Timeout Handling**: Fixed timeout configurations that cannot be customized

### 6. **Reliability and Control**
- **Dependency on OpenAI**: Relies on OpenAI's infrastructure and network stability
- **Limited Monitoring**: Reduced visibility into connection health and performance
- **Circuit Breaker Control**: Less granular control over circuit breaker behavior
- **Failover Options**: Limited failover capabilities when OpenAI's MCP client has issues

## When to Use Each Transport Type

### Use stdio Transport When:
- Working with local development tools
- Accessing file system operations
- Running command-line utilities
- Needing maximum control and visibility
- Working in isolated environments

### Use streamable-http Transport When:
- Accessing remote web services
- Needing real-time streaming capabilities
- Working with cloud-based APIs
- Requiring custom authentication
- Needing detailed debugging information

### Use http Transport When:
- Working with enterprise MCP servers
- Needing maximum performance
- Servers are publicly accessible
- Compliance with OpenAI policies is acceptable
- No local or private network access is required

## Migration Considerations

When migrating from stdio/streamable-http to http transport, consider:

1. **Server Deployment**: MCP servers must be deployed to public infrastructure
2. **Security Updates**: Ensure servers meet OpenAI's security requirements
3. **Testing Strategy**: Implement comprehensive testing for public accessibility
4. **Monitoring**: Add monitoring for public server availability and performance
5. **Fallback Plan**: Maintain stdio/streamable-http options as backup

## Benefits

### 1. **Maximum Compatibility**
- Supports all major MCP server types
- Works with existing tools and services
- Seamless integration with OpenAI's ecosystem

### 2. **High Reliability**
- Fault tolerance through multiple transport options
- Automatic recovery from failures
- Graceful degradation when issues occur

### 3. **Developer Experience**
- Simple configuration with YAML files
- Automatic tool discovery and conversion
- Comprehensive error handling and logging

### 4. **Production Ready**
- Circuit breaker patterns for stability
- Resource cleanup and memory management
- Comprehensive monitoring and metrics

This three-transport architecture provides a robust, flexible, and production-ready MCP integration system that can handle diverse use cases while maintaining high reliability and performance.