mcp-server-mas-sequential-thinking 0.2.1__py3-none-any.whl → 0.2.2__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- main.py +1 -1
- {mcp_server_mas_sequential_thinking-0.2.1.dist-info → mcp_server_mas_sequential_thinking-0.2.2.dist-info}/METADATA +7 -7
- mcp_server_mas_sequential_thinking-0.2.2.dist-info/RECORD +5 -0
- mcp_server_mas_sequential_thinking-0.2.1.dist-info/RECORD +0 -5
- {mcp_server_mas_sequential_thinking-0.2.1.dist-info → mcp_server_mas_sequential_thinking-0.2.2.dist-info}/WHEEL +0 -0
- {mcp_server_mas_sequential_thinking-0.2.1.dist-info → mcp_server_mas_sequential_thinking-0.2.2.dist-info}/entry_points.txt +0 -0
    
main.py CHANGED

@@ -308,7 +308,7 @@ def get_model_config() -> tuple[Type[Model], str, str]:
         ModelClass = DeepSeek
         # Use environment variables for DeepSeek model IDs if set, otherwise use defaults
         team_model_id = os.environ.get("DEEPSEEK_TEAM_MODEL_ID", "deepseek-chat")
-        agent_model_id = os.environ.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-reasoner")
+        agent_model_id = os.environ.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-chat")
         logger.info(f"Using DeepSeek: Team Model='{team_model_id}', Agent Model='{agent_model_id}'")
     elif provider == "groq":
         ModelClass = Groq
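For context, here is a minimal, self-contained sketch of the env-override pattern that `get_model_config()` appears to implement around the changed line. The `Model`, `DeepSeek`, and `Groq` classes below are stand-ins for the framework's real provider classes, the `LLM_PROVIDER` fallback of `"deepseek"` is an assumption, and the Groq defaults are taken from the README examples further down; none of this is authoritative beyond the lines visible in the hunk above.

```python
import os
from typing import Type


class Model:            # stand-in for the framework's base model class
    pass


class DeepSeek(Model):  # stand-in; main.py imports the real class from its agent framework
    pass


class Groq(Model):      # stand-in
    pass


def get_model_config() -> tuple[Type[Model], str, str]:
    """Resolve (model class, team model id, agent model id) from environment variables.

    Mirrors the pattern in the hunk above: each provider ships defaults that the
    *_TEAM_MODEL_ID / *_AGENT_MODEL_ID variables can override. Defaults are illustrative.
    """
    provider = os.environ.get("LLM_PROVIDER", "deepseek").lower()  # assumed default provider

    if provider == "deepseek":
        model_class: Type[Model] = DeepSeek
        team_model_id = os.environ.get("DEEPSEEK_TEAM_MODEL_ID", "deepseek-chat")
        # 0.2.2 switches the agent-side default to "deepseek-chat" as well.
        agent_model_id = os.environ.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-chat")
    elif provider == "groq":
        model_class = Groq
        team_model_id = os.environ.get("GROQ_TEAM_MODEL_ID", "llama3-70b-8192")
        agent_model_id = os.environ.get("GROQ_AGENT_MODEL_ID", "llama3-8b-8192")
    else:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")

    return model_class, team_model_id, agent_model_id
```

With no overrides set, this sketch resolves to `(DeepSeek, "deepseek-chat", "deepseek-chat")`, matching the new defaults shown in the hunk.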
{mcp_server_mas_sequential_thinking-0.2.1.dist-info → mcp_server_mas_sequential_thinking-0.2.2.dist-info}/METADATA CHANGED

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: mcp-server-mas-sequential-thinking
-Version: 0.2.1
+Version: 0.2.2
 Summary: MCP Agent Implementation for Sequential Thinking
 Author-email: Frad LEE <fradser@gmail.com>
 Requires-Python: >=3.10
@@ -146,11 +146,11 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
     # GROQ_TEAM_MODEL_ID="llama3-70b-8192"
     # GROQ_AGENT_MODEL_ID="llama3-8b-8192"
     # Example for DeepSeek:
-    # DEEPSEEK_TEAM_MODEL_ID="deepseek-
-    # DEEPSEEK_AGENT_MODEL_ID="deepseek-chat"
+    # DEEPSEEK_TEAM_MODEL_ID="deepseek-chat"  # Note: `deepseek-reasoner` is not recommended as it doesn't support function calling
+    # DEEPSEEK_AGENT_MODEL_ID="deepseek-chat"  # Recommended for specialists
     # Example for OpenRouter:
-    # OPENROUTER_TEAM_MODEL_ID="
-    # OPENROUTER_AGENT_MODEL_ID="
+    # OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1"
+    # OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat-v3-0324"
 
     # --- External Tools ---
     # Required ONLY if the Researcher agent is used and needs Exa
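These README lines document environment variables that the MCP client passes to the server through its `env` section. As a hedged illustration only, the mapping for a DeepSeek-backed setup after this release might look like the Python dict below; `DEEPSEEK_API_KEY` and `EXA_API_KEY` are assumed variable names (the hunk mentions the provider API key and Exa, but the exact names are not visible here).

```python
# Illustrative `env` mapping for a DeepSeek-backed setup after 0.2.2.
# The model-ID variable names come from the README lines above; the two
# API-key names are assumptions and the key values are placeholders.
env = {
    "LLM_PROVIDER": "deepseek",
    "DEEPSEEK_API_KEY": "sk-...",               # assumed name; replace with your key
    "DEEPSEEK_TEAM_MODEL_ID": "deepseek-chat",
    "DEEPSEEK_AGENT_MODEL_ID": "deepseek-chat",
    # "EXA_API_KEY": "...",                     # assumed name; only if the Researcher uses Exa
}
```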
@@ -159,8 +159,8 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
 
 **Note on Model Selection:**
 
-*   The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-
-*   The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-
+*   The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-r1`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it's slower or more expensive.
+*   The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-v3`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
 *   The defaults provided in `main.py` (e.g., `deepseek-reasoner` for agents when using DeepSeek) are starting points. Experimentation is encouraged to find the optimal balance for your specific use case.
 
 3.  **Install Dependencies:**
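To make the Coordinator/specialist split in that note concrete, here is a small sketch of how the two resolved model IDs could be wired up. `CoordinatorTeam` and `SpecialistAgent` are stand-in dataclasses rather than the framework's real `Team` and `Agent` classes, and only the roles named in the README note are shown.

```python
from dataclasses import dataclass, field


@dataclass
class SpecialistAgent:           # stand-in for the framework's Agent class
    role: str
    model_id: str


@dataclass
class CoordinatorTeam:           # stand-in for the framework's Team class
    model_id: str
    members: list[SpecialistAgent] = field(default_factory=list)


# TEAM_MODEL_ID drives the Coordinator; AGENT_MODEL_ID drives the specialists.
# Values shown are the new DeepSeek defaults; the note's trade-off is a stronger
# (possibly slower/costlier) model for the team and a cheaper one for the agents.
team_model_id = "deepseek-chat"
agent_model_id = "deepseek-chat"

team = CoordinatorTeam(
    model_id=team_model_id,
    members=[
        SpecialistAgent(role=role, model_id=agent_model_id)
        for role in ("Planner", "Researcher")  # "etc." per the README note
    ],
)
```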
mcp_server_mas_sequential_thinking-0.2.2.dist-info/RECORD ADDED

@@ -0,0 +1,5 @@
+main.py,sha256=VoFzDPitJlVwru3AfC3KH563h7TtUxcrM0OnvCUVQv8,44067
+mcp_server_mas_sequential_thinking-0.2.2.dist-info/METADATA,sha256=ZGYMdfaWS7-9VJXEbXaDAFKQc-iZwhi0rmu5LZ1YwfQ,15842
+mcp_server_mas_sequential_thinking-0.2.2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+mcp_server_mas_sequential_thinking-0.2.2.dist-info/entry_points.txt,sha256=wY2jq_6PmuqyKQzNnL6famc7DXnQiEhVnq3umzNVNiE,64
+mcp_server_mas_sequential_thinking-0.2.2.dist-info/RECORD,,
mcp_server_mas_sequential_thinking-0.2.1.dist-info/RECORD REMOVED

@@ -1,5 +0,0 @@
-main.py,sha256=Vm6SBMDmvFy9CwEmqI-ZqZ0YDLUgPA_E689La_Qc4Yo,44071
-mcp_server_mas_sequential_thinking-0.2.1.dist-info/METADATA,sha256=bIJGHFTRT2hldImbnA0xKyp4i8BRcc78WihpJAcMf5A,15807
-mcp_server_mas_sequential_thinking-0.2.1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
-mcp_server_mas_sequential_thinking-0.2.1.dist-info/entry_points.txt,sha256=wY2jq_6PmuqyKQzNnL6famc7DXnQiEhVnq3umzNVNiE,64
-mcp_server_mas_sequential_thinking-0.2.1.dist-info/RECORD,,
WHEEL and entry_points.txt: files without changes.