mcp-server-mas-sequential-thinking 0.2.1__tar.gz → 0.2.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,30 @@
+ ---
+ description:
+ globs:
+ alwaysApply: true
+ ---
+ # Project Structure and Key Components
+
+ This is a Python project implementing a Multi-Agent System (MAS) for sequential thinking using the **Agno** framework and served via **MCP**.
+
+ ## Core Files & Logic
+
+ * **Main Entry Point & Server Logic:** [`main.py`](mdc:main.py) - Sets up the FastMCP server, defines the `sequentialthinking` tool, instantiates the agents, and contains the primary coordination logic using the `Team` object from Agno.
+ * **Dependencies:** [`pyproject.toml`](mdc:pyproject.toml) and [`uv.lock`](mdc:uv.lock) define and lock project dependencies, preferably managed with `uv`.
+ * **Configuration:** Relies on environment variables (often stored in a `.env` file; see [`.gitignore`](mdc:.gitignore)) for API keys (Groq, DeepSeek, OpenRouter, Exa) and LLM model selection (`LLM_PROVIDER`, `*_MODEL_ID`, etc.).
+
+ ## Key Concepts & Libraries
+
+ * **Multi-Agent System (MAS):** Built using the **Agno** framework.
+ * **Coordinator:** A `Team` object manages the workflow.
+ * **Specialist Agents:** Roles such as Planner, Researcher, Analyzer, Critic, and Synthesizer handle sub-tasks (likely defined within [`main.py`](mdc:main.py) or imported modules).
+ * **Sequential Thinking Tool:** The primary functionality exposed is the `sequentialthinking` tool, which takes `ThoughtData` (likely defined via Pydantic models in [`main.py`](mdc:main.py) or related files) as input.
+ * **Data Validation:** Uses **Pydantic** for robust input and data-structure validation.
+ * **External Tools:** Can integrate with tools like Exa via the Researcher agent.
+
+ ## Documentation
+
+ * **Primary README:** [`README.md`](mdc:README.md)
+ * **Chinese README:** [`README.zh-CN.md`](mdc:README.zh-CN.md)
+
+ Understanding the interaction between the Coordinator (`Team` in `coordinate` mode) and the specialist agents within [`main.py`](mdc:main.py) is crucial for modifying the core sequential thinking logic. Refer to the [`README.md`](mdc:README.md) for detailed workflow explanations.
@@ -0,0 +1,27 @@
+ # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
+ FROM python:3.10-alpine
+
+ # Prevent Python from writing .pyc files and disable output buffering
+ ENV PYTHONDONTWRITEBYTECODE=1 \
+     PYTHONUNBUFFERED=1
+
+ # Install build dependencies
+ RUN apk add --no-cache gcc musl-dev libffi-dev openssl-dev
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy project files
+ COPY pyproject.toml ./
+ COPY main.py ./
+ COPY README.md ./
+
+ # Upgrade pip and install the hatchling build backend
+ RUN pip install --upgrade pip && \
+     pip install hatchling
+
+ # Build and install the project
+ RUN pip install .
+
+ # Command to run the MCP server
+ CMD ["mcp-server-mas-sequential-thinking"]
@@ -0,0 +1,329 @@
+ Metadata-Version: 2.4
+ Name: mcp-server-mas-sequential-thinking
+ Version: 0.2.3
+ Summary: MCP Agent Implementation for Sequential Thinking
+ Author-email: Frad LEE <fradser@gmail.com>
+ Requires-Python: >=3.10
+ Requires-Dist: agno
+ Requires-Dist: asyncio
+ Requires-Dist: exa-py
+ Requires-Dist: groq
+ Requires-Dist: mcp
+ Requires-Dist: python-dotenv
+ Provides-Extra: dev
+ Requires-Dist: black; extra == 'dev'
+ Requires-Dist: isort; extra == 'dev'
+ Requires-Dist: mypy; extra == 'dev'
+ Requires-Dist: pytest; extra == 'dev'
+ Description-Content-Type: text/markdown
+
+ # Sequential Thinking Multi-Agent System (MAS) ![](https://img.shields.io/badge/A%20FRAD%20PRODUCT-WIP-yellow)
+
+ [![smithery badge](https://smithery.ai/badge/@FradSer/mcp-server-mas-sequential-thinking)](https://smithery.ai/server/@FradSer/mcp-server-mas-sequential-thinking) [![Twitter Follow](https://img.shields.io/twitter/follow/FradSer?style=social)](https://twitter.com/FradSer) [![Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/) [![Framework](https://img.shields.io/badge/Framework-Agno-orange.svg)](https://github.com/agno-agi/agno)
+
+ English | [简体中文](README.zh-CN.md)
+
+ This project implements an advanced sequential thinking process using a **Multi-Agent System (MAS)** built with the **Agno** framework and served via **MCP**. It represents a significant evolution from simpler state-tracking approaches, leveraging coordinated, specialized agents for deeper analysis and problem decomposition.
+
+ ## Overview
+
+ This server provides a sophisticated `sequentialthinking` tool designed for complex problem-solving. Unlike [its predecessor](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking), this version utilizes a true Multi-Agent System (MAS) architecture where:
+
+ - **A Coordinating Agent** (the `Team` object in `coordinate` mode) manages the workflow.
+ - **Specialized Agents** (Planner, Researcher, Analyzer, Critic, Synthesizer) handle specific sub-tasks based on their defined roles and expertise.
+ - Incoming thoughts are actively **processed, analyzed, and synthesized** by the agent team, not just logged.
+ - The system supports complex thought patterns, including **revisions** of previous steps and **branching** to explore alternative paths.
+ - Integration with external tools like **Exa** (via the Researcher agent) allows for dynamic information gathering.
+ - Robust **Pydantic** validation ensures data integrity for thought steps.
+ - Detailed **logging** tracks the process, including agent interactions (handled by the Coordinator).
+
+ The goal is to achieve a higher quality of analysis and a more nuanced thinking process than is possible with a single agent or simple state tracking, by harnessing the power of specialized roles working collaboratively.
+
+ ## Key Differences from Original Version (TypeScript)
+
+ This Python/Agno implementation marks a fundamental shift from the original TypeScript version:
+
+ | Feature/Aspect | Python/Agno Version (Current) | TypeScript Version (Original) |
+ | :------------------ | :------------------------------------------------------------------- | :--------------------------------------------------- |
+ | **Architecture** | **Multi-Agent System (MAS)**; Active processing by a team of agents. | **Single-Class State Tracker**; Simple logging/storing. |
+ | **Intelligence** | **Distributed Agent Logic**; Embedded in specialized agents & Coordinator. | **External LLM Only**; No internal intelligence. |
+ | **Processing** | **Active Analysis & Synthesis**; Agents *act* on the thought. | **Passive Logging**; Merely recorded the thought. |
+ | **Frameworks** | **Agno (MAS) + FastMCP (Server)**; Uses a dedicated MAS library. | **MCP SDK only**. |
+ | **Coordination** | **Explicit Team Coordination Logic** (`Team` in `coordinate` mode). | **None**; No coordination concept. |
+ | **Validation** | **Pydantic Schema Validation**; Robust data validation. | **Basic Type Checks**; Less reliable. |
+ | **External Tools** | **Integrated (Exa via Researcher)**; Can perform research tasks. | **None**. |
+ | **Logging** | **Structured Python Logging (File + Console)**; Configurable. | **Console Logging with Chalk**; Basic. |
+ | **Language & Ecosystem** | **Python**; Leverages the Python AI/ML ecosystem. | **TypeScript/Node.js**. |
+
+ In essence, the system evolved from a passive thought *recorder* to an active thought *processor* powered by a collaborative team of AI agents.
+
+ ## How it Works (Coordinate Mode)
+
+ 1. **Initiation:** An external LLM uses the `sequential-thinking-starter` prompt to define the problem and initiate the process.
+ 2. **Tool Call:** The LLM calls the `sequentialthinking` tool with the first (or subsequent) thought, structured according to the `ThoughtData` Pydantic model.
+ 3. **Validation & Logging:** The tool receives the call, validates the input using Pydantic, logs the incoming thought, and updates the history/branch state via `AppContext`.
+ 4. **Coordinator Invocation:** The core thought content (along with context about revisions/branches) is passed to the `SequentialThinkingTeam`'s `arun` method.
+ 5. **Coordinator Analysis & Delegation:** The `Team` (acting as Coordinator) analyzes the input thought, breaks it down into sub-tasks, and delegates these sub-tasks to the *most relevant* specialist agents (e.g., Analyzer for analysis tasks, Researcher for information needs).
+ 6. **Specialist Execution:** Delegated agents execute their specific sub-tasks using their instructions, models, and tools (like `ThinkingTools` or `ExaTools`).
+ 7. **Response Collection:** Specialists return their results to the Coordinator.
+ 8. **Synthesis & Guidance:** The Coordinator synthesizes the specialists' responses into a single, cohesive output. This output may include recommendations for revision or branching based on the specialists' findings (especially from the Critic and Analyzer). It also provides guidance for the LLM on formulating the next thought.
+ 9. **Return Value:** The tool returns a JSON string containing the Coordinator's synthesized response, status, and updated context (branches, history length).
+ 10. **Iteration:** The calling LLM uses the Coordinator's response and guidance to formulate the next `sequentialthinking` tool call, potentially triggering revisions or branches as suggested.
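The flow in steps 2-9 can be sketched as a minimal Python handler. Everything below is a simplified stand-in: the dataclass `ThoughtData`, `AppContext`, and `stub_coordinator` replace the real Pydantic model, server context, and `SequentialThinkingTeam.arun` call in `main.py`.

```python
# Sketch of the tool-call flow (steps 2-9); names are illustrative stand-ins,
# not the project's actual implementation.
import json
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ThoughtData:
    thought: str
    thoughtNumber: int
    totalThoughts: int
    nextThoughtNeeded: bool
    isRevision: bool = False
    revisesThought: Optional[int] = None

@dataclass
class AppContext:
    history: List[ThoughtData] = field(default_factory=list)

def stub_coordinator(thought: str) -> str:
    # Stand-in for the Team's delegation/synthesis (steps 5-8).
    return f"Processed {thought!r}. Suggest refining the plan next."

def sequentialthinking(ctx: AppContext, data: ThoughtData) -> str:
    ctx.history.append(data)                   # step 3: validate/log/update state
    response = stub_coordinator(data.thought)  # steps 4-8: delegate and synthesize
    return json.dumps({                        # step 9: JSON string back to the LLM
        "coordinatorResponse": response,
        "status": "success",
        "thoughtHistoryLength": len(ctx.history),
    })

ctx = AppContext()
result = json.loads(sequentialthinking(
    ctx, ThoughtData("Plan the analysis", 1, 5, True)))
print(result["status"], result["thoughtHistoryLength"])
```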
+
+ ## Token Consumption Warning
+
+ ⚠️ **High Token Usage:** Due to the Multi-Agent System architecture, this tool consumes significantly **more tokens** than single-agent alternatives or the previous TypeScript version. Each `sequentialthinking` call invokes:
+
+ - The Coordinator agent (the `Team` itself).
+ - Multiple specialist agents (potentially Planner, Researcher, Analyzer, Critic, and Synthesizer, depending on the Coordinator's delegation).
+
+ This multi-agent processing leads to substantially higher token usage (potentially 3-6x or more per thought step) compared to single-agent or state-tracking approaches. Budget and plan accordingly; this tool prioritizes **analysis depth and quality** over token efficiency.
+
+ ## Prerequisites
+
+ - Python 3.10+
+ - Access to a compatible LLM API (configured for `agno`). The system currently supports:
+   - **Groq:** Requires `GROQ_API_KEY`.
+   - **DeepSeek:** Requires `DEEPSEEK_API_KEY`.
+   - **OpenRouter:** Requires `OPENROUTER_API_KEY`.
+   - Configure the desired provider using the `LLM_PROVIDER` environment variable (defaults to `deepseek`).
+ - An Exa API key (required only if using the Researcher agent's capabilities), set via the `EXA_API_KEY` environment variable.
+ - The `uv` package manager (recommended) or `pip`.
+
+ ## MCP Server Configuration (Client-Side)
+
+ This server runs as a standard executable script that communicates via stdio, as expected by MCP. The exact configuration method depends on your specific MCP client implementation; consult your client's documentation for details on integrating external tool servers.
+
+ The `env` section within your MCP client configuration should include the API key for your chosen `LLM_PROVIDER`.
+
+ ```json
+ {
+   "mcpServers": {
+     "mas-sequential-thinking": {
+       "command": "uvx", // Or "python", "path/to/venv/bin/python", etc.
+       "args": [
+         "mcp-server-mas-sequential-thinking" // Or the path to your main script, e.g., "main.py"
+       ],
+       "env": {
+         "LLM_PROVIDER": "deepseek", // Or "groq", "openrouter"
+         // "GROQ_API_KEY": "your_groq_api_key", // Only if LLM_PROVIDER="groq"
+         "DEEPSEEK_API_KEY": "your_deepseek_api_key", // Default provider
+         // "OPENROUTER_API_KEY": "your_openrouter_api_key", // Only if LLM_PROVIDER="openrouter"
+         "DEEPSEEK_BASE_URL": "your_base_url_if_needed", // Optional: if using a custom endpoint for DeepSeek
+         "EXA_API_KEY": "your_exa_api_key" // Only if using Exa
+       }
+     }
+   }
+ }
+ ```
+
+ ## Installation & Setup
+
+ ### Installing via Smithery
+
+ To install the Sequential Thinking Multi-Agent System for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@FradSer/mcp-server-mas-sequential-thinking):
+
+ ```bash
+ npx -y @smithery/cli install @FradSer/mcp-server-mas-sequential-thinking --client claude
+ ```
+
+ ### Manual Installation
+
+ 1. **Clone the repository:**
+
+    ```bash
+    git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
+    cd mcp-server-mas-sequential-thinking
+    ```
+
+ 2. **Set environment variables:**
+    Create a `.env` file in the project root directory, or export the variables directly into your environment:
+
+    ```dotenv
+    # --- LLM Configuration ---
+    # Select the LLM provider: "deepseek" (default), "groq", or "openrouter"
+    LLM_PROVIDER="deepseek"
+
+    # Provide the API key for the chosen provider:
+    # GROQ_API_KEY="your_groq_api_key"
+    DEEPSEEK_API_KEY="your_deepseek_api_key"
+    # OPENROUTER_API_KEY="your_openrouter_api_key"
+
+    # Optional: Base URL override (e.g., for custom DeepSeek endpoints)
+    # DEEPSEEK_BASE_URL="your_base_url_if_needed"
+
+    # Optional: Specify different models for the Team Coordinator and the specialist agents.
+    # Defaults are set in the code based on the provider if these are not set.
+    # Example for Groq:
+    # GROQ_TEAM_MODEL_ID="llama3-70b-8192"
+    # GROQ_AGENT_MODEL_ID="llama3-8b-8192"
+    # Example for DeepSeek:
+    # DEEPSEEK_TEAM_MODEL_ID="deepseek-chat" # Note: `deepseek-reasoner` is not recommended, as it does not support function calling
+    # DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
+    # Example for OpenRouter:
+    # OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1" # Example; adjust as needed
+    # OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat" # Example; adjust as needed
+
+    # --- External Tools ---
+    # Required ONLY if the Researcher agent is used and needs Exa
+    EXA_API_KEY="your_exa_api_key"
+    ```
+
+    **Note on model selection:**
+
+    - The `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object). This role benefits from strong reasoning, synthesis, and delegation capabilities; consider a more powerful model (e.g., `deepseek-chat`, `claude-3-opus`, `gpt-4-turbo`) here, balancing capability against cost and speed.
+    - The `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These handle focused sub-tasks, so a faster or more cost-effective model (e.g., `deepseek-chat`, `claude-3-sonnet`, `llama3-8b`) may be suitable, depending on task complexity and budget/performance needs.
+    - Defaults are provided in the code (e.g., in `main.py`) if these environment variables are not set. Experimentation is encouraged to find the optimal balance for your use case.
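As a rough illustration of how the provider and model-ID variables might be resolved, consider the sketch below. The defaults and the exact precedence are assumptions for illustration; the authoritative defaults live in the project code.

```python
# Illustrative resolution of LLM_PROVIDER and *_TEAM/AGENT_MODEL_ID.
# The default model IDs below are examples from this README, not the
# project's guaranteed defaults.
import os

DEFAULTS = {
    "deepseek": ("deepseek-chat", "deepseek-chat"),
    "groq": ("llama3-70b-8192", "llama3-8b-8192"),
    "openrouter": ("deepseek/deepseek-r1", "deepseek/deepseek-chat"),
}

def resolve_models(env=None):
    """Return (provider, team_model_id, agent_model_id) from an env mapping."""
    env = os.environ if env is None else env
    provider = env.get("LLM_PROVIDER", "deepseek").lower()
    team_default, agent_default = DEFAULTS[provider]
    prefix = provider.upper()
    # Explicit env vars take precedence over the per-provider defaults.
    team = env.get(f"{prefix}_TEAM_MODEL_ID", team_default)
    agent = env.get(f"{prefix}_AGENT_MODEL_ID", agent_default)
    return provider, team, agent

print(resolve_models({"LLM_PROVIDER": "groq"}))
```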
+
+ 3. **Install dependencies:**
+    It's highly recommended to use a virtual environment.
+
+    - **Using `uv` (recommended):**
+
+      ```bash
+      # Install uv if you don't have it:
+      # curl -LsSf https://astral.sh/uv/install.sh | sh
+      # source $HOME/.cargo/env # Or restart your shell
+
+      # Create and activate a virtual environment (optional but recommended)
+      python -m venv .venv
+      source .venv/bin/activate # On Windows use `.venv\Scripts\activate`
+
+      # Install dependencies
+      uv pip install -r requirements.txt
+      # Or, if a pyproject.toml exists with dependencies defined:
+      # uv pip install .
+      ```
+
+    - **Using `pip`:**
+
+      ```bash
+      # Create and activate a virtual environment (optional but recommended)
+      python -m venv .venv
+      source .venv/bin/activate # On Windows use `.venv\Scripts\activate`
+
+      # Install dependencies
+      pip install -r requirements.txt
+      # Or, if a pyproject.toml exists with dependencies defined:
+      # pip install .
+      ```
+
+ ## Usage
+
+ Ensure your environment variables are set and the virtual environment (if used) is active.
+
+ Run the server using one of the following methods:
+
+ 1. **Using `uv run` (recommended):**
+
+    ```bash
+    uv --directory /path/to/mcp-server-mas-sequential-thinking run mcp-server-mas-sequential-thinking
+    ```
+
+ 2. **Directly using Python:**
+
+    ```bash
+    python main.py
+    ```
+
+ The server will start and listen for requests via stdio, making the `sequentialthinking` tool available to compatible MCP clients configured to use it.
+
+ ### `sequentialthinking` Tool Parameters
+
+ The tool expects arguments matching the `ThoughtData` Pydantic model:
+
+ ```python
+ # Simplified representation from src/models.py
+ class ThoughtData(BaseModel):
+     thought: str                             # Content of the current thought/step
+     thoughtNumber: int                       # Sequence number (>= 1)
+     totalThoughts: int                       # Estimated total steps (>= 1; suggest >= 5)
+     nextThoughtNeeded: bool                  # Is another step required after this?
+     isRevision: bool = False                 # Is this revising a previous thought?
+     revisesThought: Optional[int] = None     # If isRevision, which thought number?
+     branchFromThought: Optional[int] = None  # If branching, from which thought?
+     branchId: Optional[str] = None           # Unique ID for the new branch being created
+     needsMoreThoughts: bool = False          # Signal if the estimate is too low before the last step
+ ```
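A hypothetical round-trip against the simplified model above, showing how Pydantic accepts a well-formed payload and rejects an out-of-range `thoughtNumber`. The `Field(ge=1)` constraints are assumptions inferred from the comments, not taken from the actual source.

```python
# Illustrative Pydantic validation for the (simplified) ThoughtData model.
# Constraints here are assumed; the real model lives in the project source.
from typing import Optional
from pydantic import BaseModel, Field, ValidationError

class ThoughtData(BaseModel):
    thought: str
    thoughtNumber: int = Field(ge=1)
    totalThoughts: int = Field(ge=1)
    nextThoughtNeeded: bool
    isRevision: bool = False
    revisesThought: Optional[int] = None
    branchFromThought: Optional[int] = None
    branchId: Optional[str] = None
    needsMoreThoughts: bool = False

# A well-formed first thought validates cleanly:
data = ThoughtData(
    thought="Plan the analysis of the problem",
    thoughtNumber=1,
    totalThoughts=5,
    nextThoughtNeeded=True,
)
print(data.thoughtNumber, data.isRevision)

# An out-of-range sequence number is rejected at the boundary:
try:
    ThoughtData(thought="x", thoughtNumber=0, totalThoughts=5, nextThoughtNeeded=True)
except ValidationError:
    print("validation failed")
```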
+
+ ### Interacting with the Tool (Conceptual Example)
+
+ An LLM would interact with this tool iteratively:
+
+ 1. **LLM:** Uses a starter prompt (like `sequential-thinking-starter`) with the problem definition.
+ 2. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 1`, the initial `thought` (e.g., "Plan the analysis..."), an estimated `totalThoughts`, and `nextThoughtNeeded: True`.
+ 3. **Server:** The MAS processes the thought. The Coordinator synthesizes responses from specialists and provides guidance (e.g., "Analysis plan complete. Suggest researching X next. No revisions recommended yet.").
+ 4. **LLM:** Receives the JSON response containing `coordinatorResponse`.
+ 5. **LLM:** Formulates the next thought based on the `coordinatorResponse` (e.g., "Research X using available tools...").
+ 6. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 2`, the new `thought`, a potentially updated `totalThoughts`, and `nextThoughtNeeded: True`.
+ 7. **Server:** The MAS processes. The Coordinator synthesizes (e.g., "Research complete. Findings suggest a flaw in thought #1's assumption. RECOMMENDATION: Revise thought #1...").
+ 8. **LLM:** Receives the response and notes the recommendation.
+ 9. **LLM:** Formulates a revision thought.
+ 10. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 3`, the revision `thought`, `isRevision: True`, `revisesThought: 1`, and `nextThoughtNeeded: True`.
+ 11. ... and so on, potentially branching or extending the process as needed.
+
+ ### Tool Response Format
+
+ The tool returns a JSON string containing:
+
+ ```json
+ {
+   "processedThoughtNumber": int, // The thought number that was just processed
+   "estimatedTotalThoughts": int, // The current estimate of total thoughts
+   "nextThoughtNeeded": bool, // Whether the process indicates more steps are needed
+   "coordinatorResponse": "...", // Synthesized output from the agent team, including analysis, findings, and guidance for the next step
+   "branches": ["main", "branch-id-1"], // List of active branch IDs
+   "thoughtHistoryLength": int, // Total number of thoughts processed so far (across all branches)
+   "branchDetails": {
+     "currentBranchId": "main", // The ID of the branch the processed thought belongs to
+     "branchOriginThought": null | int, // The thought number where the current branch diverged (null for 'main')
+     "allBranches": { // Count of thoughts in each active branch
+       "main": 5,
+       "branch-id-1": 2
+     }
+   },
+   "isRevision": bool, // Was the processed thought a revision?
+   "revisesThought": null | int, // Which thought number was revised (if isRevision is true)
+   "isBranch": bool, // Did this thought start a new branch?
+   "status": "success | validation_error | failed", // Outcome status
+   "error": null | "Error message..." // Error details if status is not 'success'
+ }
+ ```
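A client-side sketch of consuming a response of this shape. The payload below is fabricated to match the documented format, and the revision check is one possible heuristic, not part of the tool contract.

```python
# Parsing the JSON string returned by the tool; the payload is an
# illustrative example matching the documented response format.
import json

raw = json.dumps({
    "processedThoughtNumber": 2,
    "estimatedTotalThoughts": 5,
    "nextThoughtNeeded": True,
    "coordinatorResponse": "Research complete. RECOMMENDATION: Revise thought #1.",
    "branches": ["main"],
    "thoughtHistoryLength": 2,
    "isRevision": False,
    "revisesThought": None,
    "isBranch": False,
    "status": "success",
    "error": None,
})

resp = json.loads(raw)
if resp["status"] != "success":
    raise RuntimeError(resp["error"])

# A caller might scan the synthesized guidance for revision hints
# before formulating the next thought:
needs_revision = "RECOMMENDATION: Revise" in resp["coordinatorResponse"]
print(resp["nextThoughtNeeded"], needs_revision)
```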
+
+ ## Logging
+
+ - Logs are written to `~/.sequential_thinking/logs/sequential_thinking.log` by default. (This may be adjustable in the logging setup code.)
+ - Uses Python's standard `logging` module.
+ - Includes a rotating file handler (e.g., a 10 MB limit with 5 backups) and a console handler (typically at INFO level).
+ - Logs include timestamps, levels, logger names, and messages, including structured representations of the thoughts being processed.
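The setup described above corresponds roughly to the following sketch. The path, size limit, and backup count mirror this section; the project's actual logging code may differ.

```python
# Illustrative logging setup: rotating file handler plus console handler,
# matching the defaults described in this section (assumed, not verbatim).
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

def setup_logging() -> logging.Logger:
    log_dir = Path.home() / ".sequential_thinking" / "logs"
    log_dir.mkdir(parents=True, exist_ok=True)

    logger = logging.getLogger("sequential_thinking")
    logger.setLevel(logging.DEBUG)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")

    file_handler = RotatingFileHandler(
        log_dir / "sequential_thinking.log",
        maxBytes=10 * 1024 * 1024,  # rotate at ~10 MB
        backupCount=5,              # keep 5 rotated backups
    )
    file_handler.setFormatter(fmt)

    console = logging.StreamHandler()
    console.setLevel(logging.INFO)  # console stays at INFO; file gets everything
    console.setFormatter(fmt)

    logger.addHandler(file_handler)
    logger.addHandler(console)
    return logger

log = setup_logging()
log.info("Processing thought #%d", 1)
```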
+
+ ## Development
+
+ 1. **Clone the repository:** (as in Installation)
+
+    ```bash
+    git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
+    cd mcp-server-mas-sequential-thinking
+    ```
+
+ 2. **Set up a virtual environment:** (recommended)
+
+    ```bash
+    python -m venv .venv
+    source .venv/bin/activate # On Windows use `.venv\Scripts\activate`
+    ```
+
+ 3. **Install dependencies (including dev):**
+    Ensure your `requirements-dev.txt` or `pyproject.toml` specifies development tools (such as `pytest`, `ruff`, `black`, and `mypy`).
+
+    ```bash
+    # Using uv
+    uv pip install -r requirements.txt
+    uv pip install -r requirements-dev.txt # Or install extras if defined in pyproject.toml: uv pip install -e ".[dev]"
+
+    # Using pip
+    pip install -r requirements.txt
+    pip install -r requirements-dev.txt # Or install extras if defined in pyproject.toml: pip install -e ".[dev]"
+    ```
+
+ 4. **Run checks:**
+    Execute linters, formatters, and tests (adjust the commands to your project setup).
+
+    ```bash
+    # Example commands (replace with the actual commands used in the project)
+    ruff check . --fix
+    black .
+    mypy .
+    pytest
+    ```
+
+ 5. **Contribution:**
+    (Consider adding contribution guidelines: branching strategy, pull request process, code style.)
+
+ ## License
+
+ MIT