mcp-server-mas-sequential-thinking 0.2.1__tar.gz → 0.2.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: mcp-server-mas-sequential-thinking
-Version: 0.2.1
+Version: 0.2.2
 Summary: MCP Agent Implementation for Sequential Thinking
 Author-email: Frad LEE <fradser@gmail.com>
 Requires-Python: >=3.10
@@ -146,11 +146,11 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
 # GROQ_TEAM_MODEL_ID="llama3-70b-8192"
 # GROQ_AGENT_MODEL_ID="llama3-8b-8192"
 # Example for DeepSeek:
-# DEEPSEEK_TEAM_MODEL_ID="deepseek-reasoner" # Recommended for coordination
-# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
+# DEEPSEEK_TEAM_MODEL_ID="deepseek-chat" # Note: `deepseek-reasoner` is not recommended as it doesn't support function calling
+# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
 # Example for OpenRouter:
-# OPENROUTER_TEAM_MODEL_ID="anthropic/claude-3-haiku-20240307"
-# OPENROUTER_AGENT_MODEL_ID="google/gemini-flash-1.5"
+# OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1"
+# OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat-v3-0324"
 
 # --- External Tools ---
 # Required ONLY if the Researcher agent is used and needs Exa
@@ -159,8 +159,8 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
 
 **Note on Model Selection:**
 
-* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-reasoner`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it's slower or more expensive.
-* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-chat`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
+* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-r1`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it's slower or more expensive.
+* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-v3`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
 * The defaults provided in `main.py` (e.g., `deepseek-reasoner` for agents when using DeepSeek) are starting points. Experimentation is encouraged to find the optimal balance for your specific use case.
 
 3. **Install Dependencies:**
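The README change above reduces model selection to two environment variables with per-provider fallbacks. The following is a minimal sketch of that resolution logic, assuming a hypothetical helper name (`resolve_deepseek_models`); it only illustrates the new v0.2.2 defaults, not the project's actual function:

```python
def resolve_deepseek_models(env: dict) -> tuple[str, str]:
    """Hypothetical helper: resolve (team, agent) DeepSeek model IDs.

    Both roles now default to "deepseek-chat", since "deepseek-reasoner"
    does not support function calling and so cannot serve as the
    tool-calling Coordinator.
    """
    team_model_id = env.get("DEEPSEEK_TEAM_MODEL_ID", "deepseek-chat")
    agent_model_id = env.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-chat")
    return team_model_id, agent_model_id

# With no overrides set, both roles fall back to the default:
print(resolve_deepseek_models({}))
# An explicit override still wins:
print(resolve_deepseek_models({"DEEPSEEK_AGENT_MODEL_ID": "deepseek-v3"}))
```

Passing the environment as a plain dict keeps the sketch testable without mutating `os.environ`; the real code reads `os.environ` directly.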
@@ -127,11 +127,11 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
 # GROQ_TEAM_MODEL_ID="llama3-70b-8192"
 # GROQ_AGENT_MODEL_ID="llama3-8b-8192"
 # Example for DeepSeek:
-# DEEPSEEK_TEAM_MODEL_ID="deepseek-reasoner" # Recommended for coordination
-# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
+# DEEPSEEK_TEAM_MODEL_ID="deepseek-chat" # Note: `deepseek-reasoner` is not recommended as it doesn't support function calling
+# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
 # Example for OpenRouter:
-# OPENROUTER_TEAM_MODEL_ID="anthropic/claude-3-haiku-20240307"
-# OPENROUTER_AGENT_MODEL_ID="google/gemini-flash-1.5"
+# OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1"
+# OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat-v3-0324"
 
 # --- External Tools ---
 # Required ONLY if the Researcher agent is used and needs Exa
@@ -140,8 +140,8 @@ The `env` section should include the API key for your chosen `LLM_PROVIDER`.
 
 **Note on Model Selection:**
 
-* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-reasoner`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it's slower or more expensive.
-* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-chat`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
+* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-r1`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it's slower or more expensive.
+* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You might choose a faster or more cost-effective model (like `deepseek-v3`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
 * The defaults provided in `main.py` (e.g., `deepseek-reasoner` for agents when using DeepSeek) are starting points. Experimentation is encouraged to find the optimal balance for your specific use case.
 
 3. **Install Dependencies:**
@@ -130,11 +130,11 @@
 # GROQ_TEAM_MODEL_ID="llama3-70b-8192"
 # GROQ_AGENT_MODEL_ID="llama3-8b-8192"
 # Example for DeepSeek:
-# DEEPSEEK_TEAM_MODEL_ID="deepseek-reasoner" # Recommended for coordination
-# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialist agents
+# DEEPSEEK_TEAM_MODEL_ID="deepseek-chat" # Note: `deepseek-reasoner` is not recommended as it doesn't support function calling
+# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialist agents
 # Example for OpenRouter:
-# OPENROUTER_TEAM_MODEL_ID="anthropic/claude-3-haiku-20240307"
-# OPENROUTER_AGENT_MODEL_ID="google/gemini-flash-1.5"
+# OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1"
+# OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat-v3-0324"
 
 # --- External Tools ---
 # Required ONLY if the Researcher agent is used and needs Exa
@@ -143,9 +143,9 @@
 
 **Note on Model Selection:**
 
-* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-reasoner`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it may be slower or more expensive.
-* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You can choose a faster or more cost-effective model (like `deepseek-chat`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
-* The defaults provided in `main.py` (e.g., `deepseek-reasoner` for agents when using DeepSeek) are starting points. Experimentation is encouraged to find the optimal balance for your specific use case.
+* The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object itself). This role requires strong reasoning, synthesis, and delegation capabilities. Using a more powerful model (like `deepseek-r1`, `claude-3-opus`, or `gpt-4-turbo`) is often beneficial here, even if it may be slower or more expensive.
+* The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These agents handle more focused sub-tasks. You can choose a faster or more cost-effective model (like `deepseek-v3`, `claude-3-sonnet`, `llama3-70b`) for specialists, depending on the complexity of the tasks they typically handle and your budget/performance requirements.
+* The defaults provided in `main.py` (e.g., `deepseek-chat` when using DeepSeek) are starting points. Experimentation is encouraged to find the optimal balance for your specific use case.
 
 3. **Install Dependencies:**
@@ -308,7 +308,7 @@ def get_model_config() -> tuple[Type[Model], str, str]:
         ModelClass = DeepSeek
         # Use environment variables for DeepSeek model IDs if set, otherwise use defaults
         team_model_id = os.environ.get("DEEPSEEK_TEAM_MODEL_ID", "deepseek-chat")
-        agent_model_id = os.environ.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-reasoner")
+        agent_model_id = os.environ.get("DEEPSEEK_AGENT_MODEL_ID", "deepseek-chat")
         logger.info(f"Using DeepSeek: Team Model='{team_model_id}', Agent Model='{agent_model_id}'")
     elif provider == "groq":
         ModelClass = Groq
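The rationale given in the README hunks for this default change is that `deepseek-reasoner` lacks function calling, which the Coordinator needs in order to delegate to specialists via tool calls. A guard like the following (hypothetical, not part of the package) would catch that misconfiguration at startup rather than at the first failed delegation:

```python
import warnings

# Models stated in this changelog to lack function/tool calling.
# Assumption: this set is illustrative, not an exhaustive capability list.
NO_TOOL_CALLING = {"deepseek-reasoner"}

def check_team_model(model_id: str) -> str:
    """Hypothetical guard: warn and fall back to "deepseek-chat" when the
    requested Coordinator model cannot make tool calls."""
    if model_id in NO_TOOL_CALLING:
        warnings.warn(
            f"{model_id} does not support function calling; "
            "falling back to 'deepseek-chat' for the Team/Coordinator role."
        )
        return "deepseek-chat"
    return model_id

# A reasoner model is rejected for the Coordinator role:
print(check_team_model("deepseek-reasoner"))
# Any tool-capable model passes through unchanged:
print(check_team_model("deepseek-chat"))
```

Specialist agents that never emit tool calls would not need this check, which is why `deepseek-reasoner` could previously serve as the agent default.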
@@ -1,6 +1,6 @@
 [project]
 name = "mcp-server-mas-sequential-thinking"
-version = "0.2.1"
+version = "0.2.2"
 description = "MCP Agent Implementation for Sequential Thinking"
 readme = "README.md"
 requires-python = ">=3.10"
@@ -433,7 +433,7 @@ wheels = [
 
 [[package]]
 name = "mcp-server-mas-sequential-thinking"
-version = "0.2.0"
+version = "0.2.1"
 source = { editable = "." }
 dependencies = [
     { name = "agno" },