mcp-server-mas-sequential-thinking 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,11 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
.env
@@ -0,0 +1,243 @@
Metadata-Version: 2.4
Name: mcp-server-mas-sequential-thinking
Version: 0.1.0
Summary: MCP Agent Implementation for Sequential Thinking
Author-email: Frad LEE <fradser@gmail.com>
Requires-Python: >=3.10
Requires-Dist: agno
Requires-Dist: asyncio
Requires-Dist: exa-py
Requires-Dist: mcp
Requires-Dist: python-dotenv
Provides-Extra: dev
Requires-Dist: black; extra == 'dev'
Requires-Dist: isort; extra == 'dev'
Requires-Dist: mypy; extra == 'dev'
Requires-Dist: pytest; extra == 'dev'
Description-Content-Type: text/markdown

# Sequential Thinking Multi-Agent System (MAS) ![](https://img.shields.io/badge/A%20FRAD%20PRODUCT-WIP-yellow)

[![Twitter Follow](https://img.shields.io/twitter/follow/FradSer?style=social)](https://twitter.com/FradSer) [![Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/) [![Framework](https://img.shields.io/badge/Framework-Agno-orange.svg)](https://github.com/agno-agi/agno)

English | [简体中文](README.zh-CN.md)

This project implements an advanced sequential thinking process using a **Multi-Agent System (MAS)** built with the **Agno** framework and served via **MCP**. It represents a significant evolution from simpler state-tracking approaches, leveraging coordinated specialized agents for deeper analysis and problem decomposition.

## Overview

This server provides a sophisticated `sequentialthinking` tool designed for complex problem-solving. Unlike [its predecessor](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking), this version utilizes a true Multi-Agent System (MAS) architecture where:

* **A Coordinating Agent** (the `Team` object in `coordinate` mode) manages the workflow.
* **Specialized Agents** (Planner, Researcher, Analyzer, Critic, Synthesizer) handle specific sub-tasks based on their defined roles and expertise.
* Incoming thoughts are actively **processed, analyzed, and synthesized** by the agent team, not just logged.
* The system supports complex thought patterns, including **revisions** of previous steps and **branching** to explore alternative paths.
* Integration with external tools like **Exa** (via the Researcher agent) allows for dynamic information gathering.
* Robust **Pydantic** validation ensures data integrity for thought steps.
* Detailed **logging** tracks the process, including agent interactions (handled by the coordinator).

The goal is to achieve a higher quality of analysis and a more nuanced thinking process than is possible with a single agent or simple state tracking, by harnessing the power of specialized roles working collaboratively.

## Key Differences from Original Version (TypeScript)

This Python/Agno implementation marks a fundamental shift from the original TypeScript version:

| Feature/Aspect | Python/Agno Version (Current) | TypeScript Version (Original) |
| :--- | :--- | :--- |
| **Architecture** | **Multi-Agent System (MAS)**; active processing by a team of agents. | **Single-class state tracker**; simple logging/storing. |
| **Intelligence** | **Distributed agent logic**; embedded in specialized agents and Coordinator. | **External LLM only**; no internal intelligence. |
| **Processing** | **Active analysis and synthesis**; agents *act* on the thought. | **Passive logging**; merely recorded the thought. |
| **Frameworks** | **Agno (MAS) + FastMCP (server)**; uses a dedicated MAS library. | **MCP SDK only**. |
| **Coordination** | **Explicit team coordination logic** (`Team` in `coordinate` mode). | **None**; no coordination concept. |
| **Validation** | **Pydantic schema validation**; robust data validation. | **Basic type checks**; less reliable. |
| **External Tools** | **Integrated (Exa via Researcher)**; can perform research tasks. | **None**. |
| **Logging** | **Structured Python logging (file + console)**; configurable. | **Console logging with Chalk**; basic. |
| **Language & Ecosystem** | **Python**; leverages the Python AI/ML ecosystem. | **TypeScript/Node.js**. |

In essence, the system evolved from a passive thought *recorder* to an active thought *processor* powered by a collaborative team of AI agents.

## How it Works (Coordinate Mode)

1. **Initiation:** An external LLM uses the `sequential-thinking-starter` prompt to define the problem and initiate the process.
2. **Tool Call:** The LLM calls the `sequentialthinking` tool with the first (or subsequent) thought, structured according to the `ThoughtData` model.
3. **Validation & Logging:** The tool receives the call, validates the input using Pydantic, logs the incoming thought, and updates the history/branch state via `AppContext`.
4. **Coordinator Invocation:** The core thought content (with context about revisions/branches) is passed to the `SequentialThinkingTeam`'s `arun` method.
5. **Coordinator Analysis & Delegation:** The `Team` (acting as Coordinator) analyzes the input thought, breaks it into sub-tasks, and delegates these sub-tasks to the *most relevant* specialist agents (e.g., Analyzer for analysis tasks, Researcher for information needs).
6. **Specialist Execution:** Delegated agents execute their specific sub-tasks using their instructions, models, and tools (like `ThinkingTools` or `ExaTools`).
7. **Response Collection:** Specialists return their results to the Coordinator.
8. **Synthesis & Guidance:** The Coordinator synthesizes the specialists' responses into a single, cohesive output. It may include recommendations for revision or branching based on the specialists' findings (especially from the Critic and Analyzer). It also adds guidance for the LLM on formulating the next thought.
9. **Return Value:** The tool returns a JSON string containing the Coordinator's synthesized response, status, and updated context (branches, history length).
10. **Iteration:** The calling LLM uses the Coordinator's response and guidance to formulate the next `sequentialthinking` tool call, potentially triggering revisions or branches as suggested.

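The delegate-and-synthesize loop in steps 5 through 8 can be sketched in plain Python. This is a toy approximation only: `Coordinator` and `Specialist` below are hypothetical stand-ins for Agno's `Team` and `Agent` objects, not the package's actual classes.

```python
# Toy sketch of coordinate-mode delegation (hypothetical classes, not Agno's API).
from dataclasses import dataclass, field

@dataclass
class Specialist:
    name: str

    def run(self, subtask: str) -> str:
        # A real specialist would call its own LLM with role-specific instructions.
        return f"[{self.name}] result for: {subtask}"

@dataclass
class Coordinator:
    members: list = field(default_factory=list)

    def arun(self, thought: str) -> str:
        # 1. Break the thought into sub-tasks (trivially one per member here).
        subtasks = [f"{m.name.lower()} view of '{thought}'" for m in self.members]
        # 2. Delegate and collect specialist responses.
        results = [m.run(t) for m, t in zip(self.members, subtasks)]
        # 3. Synthesize a single cohesive answer plus next-step guidance.
        return "\n".join(results) + "\nGuidance: formulate the next thought."

team = Coordinator(members=[Specialist("Planner"), Specialist("Critic")])
print(team.arun("Plan the analysis"))
```

The real Coordinator delegates selectively rather than fanning out to every member, which is what drives the token costs described below.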
## Token Consumption Warning

⚠️ **High Token Usage:** Due to the Multi-Agent System architecture, this tool consumes significantly **more tokens** than single-agent alternatives or the previous TypeScript version. Each `sequentialthinking` call invokes:

* The Coordinator agent (the `Team` itself).
* Multiple specialist agents (potentially Planner, Researcher, Analyzer, Critic, Synthesizer, depending on the Coordinator's delegation).

This parallel processing leads to substantially higher token usage (potentially 3-6x or more per thought step) compared to single-agent or state-tracking approaches. Budget and plan accordingly. This tool prioritizes **analysis depth and quality** over token efficiency.

## Prerequisites

* Python 3.10+
* Access to a compatible LLM API (configured for `agno`, e.g., DeepSeek)
    * `DEEPSEEK_API_KEY` environment variable.
* Exa API key (if using the Researcher agent's capabilities)
    * `EXA_API_KEY` environment variable.
* `uv` package manager (recommended) or `pip`.

## MCP Server Configuration (Client-Side)

This server runs as a standard executable script that communicates via stdio, as expected by MCP. The exact configuration method depends on your specific MCP client implementation; consult your client's documentation for details.

```json
{
  "mcpServers": {
    "mas-sequential-thinking": {
      "command": "uvx",
      "args": [
        "mcp-server-mas-sequential-thinking"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your_deepseek_api_key",
        "DEEPSEEK_BASE_URL": "your_base_url_if_needed",
        "EXA_API_KEY": "your_exa_api_key"
      }
    }
  }
}
```

`DEEPSEEK_BASE_URL` is optional and only needed if you use a custom endpoint.

## Installation & Setup

1. **Clone the repository:**
   ```bash
   git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
   cd mcp-server-mas-sequential-thinking
   ```

2. **Set Environment Variables:**
   Create a `.env` file in the root directory or export the variables:
   ```dotenv
   # Required for the LLM used by Agno agents/team
   DEEPSEEK_API_KEY="your_deepseek_api_key"
   # DEEPSEEK_BASE_URL="your_base_url_if_needed" # Optional: If using a custom endpoint

   # Required ONLY if the Researcher agent is used and needs Exa
   EXA_API_KEY="your_exa_api_key"
   ```

3. **Install Dependencies:**

   * **Using `uv` (Recommended):**
     ```bash
     # Install uv if you don't have it:
     # curl -LsSf https://astral.sh/uv/install.sh | sh
     # source $HOME/.cargo/env # Or restart your shell

     uv pip install -r requirements.txt
     # Or if a pyproject.toml exists with dependencies:
     # uv pip install .
     ```
   * **Using `pip`:**
     ```bash
     pip install -r requirements.txt
     # Or if a pyproject.toml exists with dependencies:
     # pip install .
     ```

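The server loads these variables via `python-dotenv`; a minimal stdlib check along the same lines (the `check_env` helper is illustrative, not part of the package) can verify the environment before launch:

```python
# Illustrative startup check for the environment variables described above.
import os

def check_env(require_exa: bool = False) -> list[str]:
    """Return the names of required environment variables that are missing."""
    required = ["DEEPSEEK_API_KEY"]
    if require_exa:  # EXA_API_KEY is only needed when the Researcher agent uses Exa
        required.append("EXA_API_KEY")
    return [name for name in required if not os.environ.get(name)]

missing = check_env(require_exa=True)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```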
## Usage

Run the server script (assuming the main script is named `main.py` or similar, based on your file structure):

```bash
python your_main_script_name.py
```

The server will start and listen for requests via stdio, making the `sequentialthinking` tool available to compatible MCP clients (like certain LLMs or testing frameworks).

### `sequentialthinking` Tool Parameters

The tool expects arguments matching the `ThoughtData` Pydantic model:

```python
# Simplified representation
{
    "thought": str,                          # Content of the current thought/step
    "thoughtNumber": int,                    # Sequence number (>=1)
    "totalThoughts": int,                    # Estimated total steps (>=1, suggest >=5)
    "nextThoughtNeeded": bool,               # Is another step required after this?
    "isRevision": bool = False,              # Is this revising a previous thought?
    "revisesThought": Optional[int] = None,  # If isRevision, which thought number?
    "branchFromThought": Optional[int] = None, # If branching, from which thought?
    "branchId": Optional[str] = None,        # Unique ID for the branch
    "needsMoreThoughts": bool = False        # Signal if estimate is too low before last step
}
```

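The validation rules implied by the parameter list above can be approximated with a stdlib dataclass (the package's actual `ThoughtData` is a Pydantic model; the cross-field rules below are inferred, not copied from the source):

```python
# Stdlib approximation of the ThoughtData validation described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThoughtData:
    thought: str
    thoughtNumber: int
    totalThoughts: int
    nextThoughtNeeded: bool
    isRevision: bool = False
    revisesThought: Optional[int] = None
    branchFromThought: Optional[int] = None
    branchId: Optional[str] = None
    needsMoreThoughts: bool = False

    def __post_init__(self) -> None:
        if not self.thought.strip():
            raise ValueError("thought must be non-empty")
        if self.thoughtNumber < 1 or self.totalThoughts < 1:
            raise ValueError("thoughtNumber and totalThoughts must be >= 1")
        if self.isRevision and self.revisesThought is None:
            raise ValueError("isRevision requires revisesThought")
        if self.branchId is not None and self.branchFromThought is None:
            raise ValueError("branchId requires branchFromThought")
```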
### Interacting with the Tool (Conceptual Example)

An LLM would interact with this tool iteratively:

1. **LLM:** Uses the `sequential-thinking-starter` prompt with the problem.
2. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 1`, an initial `thought` (e.g., "Plan the analysis..."), a `totalThoughts` estimate, and `nextThoughtNeeded: True`.
3. **Server:** The MAS processes the thought -> the Coordinator synthesizes a response and provides guidance (e.g., "Analysis plan complete. Suggest researching X next. No revisions recommended yet.").
4. **LLM:** Receives the JSON response containing `coordinatorResponse`.
5. **LLM:** Formulates the next thought (e.g., "Research X using Exa...") based on the `coordinatorResponse`.
6. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 2`, the new `thought`, an updated `totalThoughts` (if needed), and `nextThoughtNeeded: True`.
7. **Server:** The MAS processes -> the Coordinator synthesizes (e.g., "Research complete. Findings suggest a flaw in thought #1's assumption. RECOMMENDATION: Revise thought #1...").
8. **LLM:** Receives the response and sees the recommendation.
9. **LLM:** Formulates a revision thought.
10. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 3`, the revision `thought`, `isRevision: True`, `revisesThought: 1`, and `nextThoughtNeeded: True`.
11. **... and so on, potentially branching or extending as needed.**

### Tool Response Format

The tool returns a JSON string containing:

```json
{
    "processedThoughtNumber": int,
    "estimatedTotalThoughts": int,
    "nextThoughtNeeded": bool,
    "coordinatorResponse": "Synthesized output from the agent team, including analysis, findings, and guidance for the next step...",
    "branches": ["list", "of", "branch", "ids"],
    "thoughtHistoryLength": int,
    "branchDetails": {
        "currentBranchId": "main | branchId",
        "branchOriginThought": null | int,
        "allBranches": {"main": count, "branchId": count, ...}
    },
    "isRevision": bool,
    "revisesThought": null | int,
    "isBranch": bool,
    "status": "success | validation_error | failed",
    "error": "Error message if status is not success" // Optional
}
```

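A client can consume this return value with a few lines of Python (the `handle_result` helper is illustrative; the field names match the schema above):

```python
# Sketch of consuming the tool's JSON return value described above.
import json

def handle_result(raw: str) -> str:
    """Return the coordinator's text, or raise on a non-success status."""
    data = json.loads(raw)
    if data.get("status") != "success":
        raise RuntimeError(data.get("error", "unknown error"))
    # The coordinator's synthesized text drives the next thought.
    return data["coordinatorResponse"]

raw = json.dumps({
    "status": "success",
    "coordinatorResponse": "Plan complete. Research X next.",
    "nextThoughtNeeded": True,
})
print(handle_result(raw))
```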
## Logging

* Logs are written to `~/.sequential_thinking/logs/sequential_thinking.log`.
* Uses Python's standard `logging` module.
* Includes a rotating file handler (10 MB limit, 5 backups) and a console handler (INFO level).
* Logs include timestamps, levels, logger names, and messages, including formatted thought representations.

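A setup along the lines described above looks roughly like this (the package configures logging itself on startup; the format string and logger name here are assumptions):

```python
# Illustrative logging setup matching the description above.
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

def setup_logging(log_dir: Path) -> logging.Logger:
    log_dir.mkdir(parents=True, exist_ok=True)
    logger = logging.getLogger("sequential_thinking")
    logger.setLevel(logging.DEBUG)
    # Rotating file handler: 10 MB per file, 5 backups kept.
    file_handler = RotatingFileHandler(
        log_dir / "sequential_thinking.log",
        maxBytes=10 * 1024 * 1024,
        backupCount=5,
    )
    file_handler.setFormatter(
        logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    )
    # Console handler at INFO level.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    logger.addHandler(file_handler)
    logger.addHandler(console)
    return logger
```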
## Development

(Add development guidelines here if applicable, e.g., setting up dev environments, running tests, linting.)

1. Clone the repository.
2. Set up a virtual environment.
3. Install dependencies, potentially including development extras:
   ```bash
   # Using uv
   uv pip install -e ".[dev]"
   # Using pip
   pip install -e ".[dev]"
   ```
4. Run linters/formatters/tests.

## License

MIT
@@ -0,0 +1,225 @@
(File contents identical to the Markdown body of the package metadata file above.)