@nbiish/cognitive-tools-mcp 0.9.3 → 0.9.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +1 -32
- package/build/index.js +259 -114
- package/integration-prompts/new-prompts/latest.md +134 -0
- package/integration-tool-descriptions/old-descriptions/tool-descriptions-08.ts +458 -0
- package/package.json +1 -1
- /package/integration-prompts/{new-prompts → old-prompts}/integration-prompt-16.md +0 -0
- /package/integration-tool-descriptions/{new-description/latest.ts → old-descriptions/tool-descriptions-07.ts} +0 -0
package/README.md
CHANGED

@@ -26,7 +26,7 @@ Known as:

Both packages are maintained in parallel and receive the same updates. You can use either package name in your projects - they provide identical functionality.

- **See the latest integration details in [`integration-prompts/new-prompts/
+ **See the latest integration details in [`integration-prompts/new-prompts/latest.md`](integration-prompts/new-prompts/latest.md).**

## Features

@@ -215,37 +215,6 @@ Example Response:
}
```

- ## Version History
-
- **0.9.1**
- - Fixed package publishing script to correctly publish both packages with different names
- - Updated both packages to maintain version consistency
-
- **0.9.0**
- - Major update focused on iterative refinement and Chain of Draft methodology
- - Updated tools with enhanced support for draft generation, analysis, and refinement
- - Improved error handling, logging, and parameter descriptions
- - Removed explicit version references for greater flexibility
-
- **0.8.5**: Version update to resolve npm publish conflicts and maintain consistency between packages. Continues using the shortened tool name `assess_cuc_n_mode` to comply with MCP tool name length requirements.
- - **0.8.4**: Version bump to align packages after updating the tool name from `assess_complexity_and_select_thought_mode` to `assess_cuc_n_mode`. Ensures consistent naming across all files.
- - **0.8.3**: Updated package version to maintain consistency between `gikendaasowin-aabajichiganan-mcp` and `cognitive-tools-mcp` packages. Ensures all references to the tool use the shortened name `assess_cuc_n_mode`.
- - **0.8.2**: Removed integration prompt references from codebase and made various refinements. Shortened `assess_complexity_and_select_thought_mode` to `assess_cuc_n_mode` to address MCP tool name length limitation.
- - **0.8.1**: Updated tool function to integrate with external tools, renamed `assess_cuc_n_mode` to `assess_complexity_and_select_thought_mode`, improved validation of thought structure, aligned with AI Pair Programmer Prompt v0.8.1+
- - **0.8.0**: Updated tool function design to return generated content for explicit analysis, renamed `assess_cuc_n` to `assess_cuc_n_mode`, aligned with AI Pair Programmer Prompt v0.8.0+
- - **0.7.3**: Improved dual package publishing with automated scripts, consistent versioning, and documentation updates
- - **0.7.2**: Updated tool names for length constraints (`assess_complexity_and_select_thought_mode` → `assess_cuc_n`), improved dual package publishing support, and aligned with AI Pair Programmer Prompt v0.7.2
- - **0.7.1**: Updated to align with AI Pair Programmer Prompt v0.7.1+, renamed `assess_cuc_n_mode` to `assess_cuc_n`, enhanced cognitive tools for more explicit handling of tool needs
- - **0.6.1**: Fixed tool naming issue for technical length limitation
- - **0.3.9**: Updated tool descriptions and fixed error handling to improve reliability
- - **0.3.6**: Updated repository URLs to point to gikendaasowin-aabajichiganan-mcp
- - **0.3.5**: Updated license link and repository URLs
- - **0.3.4**: Dual package publishing (Anishinaabemowin and English names)
- - **0.3.3**: Fixed response format to comply with MCP schema, synchronized version numbers
- - **0.3.2**: Updated response format structure
- - **0.3.1**: Initial public release with basic functionality
- - **0.3.0**: Development version
-
## Copyright

Copyright © 2025 ᓂᐲᔥ ᐙᐸᓂᒥᑮ-ᑭᓇᐙᐸᑭᓯ (Nbiish Waabanimikii-Kinawaabakizi), also known legally as JUSTIN PAUL KENWABIKISE, professionally documented as Nbiish-Justin Paul Kenwabikise, Anishinaabek Dodem (Anishinaabe Clan): Animikii (Thunder), a descendant of Chief ᑭᓇᐙᐸᑭᓯ (Kinwaabakizi) of the Beaver Island Band, and an enrolled member of the sovereign Grand Traverse Band of Ottawa and Chippewa Indians. All rights reserved.
package/build/index.js
CHANGED

@@ -3,244 +3,389 @@
* -----------------------------------------------------------------------------
* Gikendaasowin Aabajichiganan - Core Cognitive Tools MCP Server
*
- *
- *
- *
- *
- *
- *
+ * Version: 0.9.4
+ *
+ * Description: Provides a suite of cognitive tools for an AI Pair Programmer,
+ * enabling structured reasoning, planning, analysis, and iterative
+ * refinement (Chain of Thought, Chain of Draft, Reflection).
+ * This server focuses on managing the AI's *internal cognitive loop*,
+ * as described in the Anthropic research on the 'think' tool and
+ * related cognitive patterns. External actions are planned within
+ * the 'think' step but executed by the calling environment.
+ *
+ * Key Principles:
+ * 1. **Structured Deliberation:** Tools guide specific cognitive acts (planning,
+ * reasoning, critique).
+ * 2. **Centralized Analysis (`think`):** The `think` tool is mandatory after
+ * most cognitive actions or receiving external results, serving as the hub
+ * for analysis, planning the *next immediate step*, verification, and
+ * self-correction.
+ * 3. **CUC-N Assessment:** Task characteristics determine the required depth
+ * of cognition (full `think` vs. `quick_think`).
+ * 4. **Internal Generation First:** Tools like `plan_and_solve`, `chain_of_thought`,
+ * `reflection`, and `synthesize_prior_reasoning` are called *after* the AI
+ * has internally generated the relevant text (plan, CoT, critique, summary).
+ * The tool logs this generation and returns it, grounding the AI for the
+ * mandatory `think` analysis step.
+ * 5. **Iterative Refinement (Chain of Draft):** The `chain_of_draft` tool signals
+ * internal draft creation/modification, prompting analysis via `think`.
+ *
* Protocol: Model Context Protocol (MCP) over stdio.
* -----------------------------------------------------------------------------
*/
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
+ export const version = "0.9.4";
// --- Server Definition ---
const server = new McpServer({
name: "gikendaasowin-aabajichiganan-mcp",
- version:
- description: "ᑭᑫᓐᑖᓱᐎᓐ ᐋᐸᒋᒋᑲᓇᓐ - Core Cognitive Tools Suite: Enables structured, iterative reasoning (Chain of Draft), planning, and analysis for AI
+ version: version,
+ description: "ᑭᑫᓐᑖᓱᐎᓐ ᐋᐸᒋᒋᑲᓇᓐ - Core Cognitive Tools Suite v0.9.4: Enables structured, iterative reasoning (Chain of Thought/Draft), planning, and analysis for AI agents, focusing on the cognitive loop. MANDATORY `think` step integrates results."
});
- // --- Logging
+ // --- Logging Helpers ---
+ /**
+ * Logs an incoming tool call to stderr.
+ * @param toolName The name of the tool being called.
+ * @param details Optional additional details about the call.
+ */
function logToolCall(toolName, details) {
-
+ const timestamp = new Date().toISOString();
+ console.error(`[${timestamp}] [MCP Server] > Tool Call: ${toolName}${details ? ` - ${details}` : ''}`);
}
+ /**
+ * Logs the result (success or failure) of a tool execution to stderr.
+ * @param toolName The name of the tool executed.
+ * @param success Whether the execution was successful.
+ * @param resultDetails Optional details about the result.
+ */
function logToolResult(toolName, success, resultDetails) {
-
+ const timestamp = new Date().toISOString();
+ console.error(`[${timestamp}] [MCP Server] < Tool Result: ${toolName} - ${success ? 'Success' : 'Failure'}${resultDetails ? ` - ${resultDetails}` : ''}`);
}
+ /**
+ * Logs an error during tool execution and formats a standard error response for the LLM.
+ * @param toolName The name of the tool where the error occurred.
+ * @param error The error object or message.
+ * @returns An McpToolResult containing the error message.
+ */
function logToolError(toolName, error) {
+ const timestamp = new Date().toISOString();
const errorMessage = error instanceof Error ? error.message : String(error);
- console.error(`[MCP Server] ! Tool Error: ${toolName} - ${errorMessage}`);
+ console.error(`[${timestamp}] [MCP Server] ! Tool Error: ${toolName} - ${errorMessage}`);
logToolResult(toolName, false, errorMessage); // Log failure result as well
// Return a structured error message suitable for the LLM
- return {
+ return {
+ content: [{
+ type: "text",
+ text: `Error executing tool '${toolName}': ${errorMessage}. Please analyze this error in your next 'think' step and adjust your plan.`
+ }]
+ };
}
// --- Core Cognitive Deliberation & Refinement Tools ---
-
-
+ /**
+ * Tool: assess_cuc_n_mode
+ * Purpose: Mandatory initial assessment of task characteristics to determine cognitive strategy.
+ * Workflow: Call BEFORE starting complex tasks or significantly changing strategy.
+ * Output: Confirms assessment and selected mode (`think` or `quick_think`). Result MUST inform the subsequent cognitive flow.
+ */
+ server.tool("assess_cuc_n_mode", "**Mandatory Pre-Deliberation Assessment.** Evaluates task Complexity, Uncertainty, Consequence, Novelty (CUC-N) to determine required cognitive depth and initial strategy. MUST be called before starting complex tasks or changing strategy. Selects 'think' (default) or 'quick_think' (only for verified Low CUC-N).", {
+ assessment_and_choice: z.string().describe("Your structured assessment including: 1) Situation Description, 2) CUC-N Ratings (Low/Medium/High for each), 3) Rationale for ratings, 4) Recommended Initial Cognitive Strategy (e.g., 'Start with chain_of_thought then think'), 5) Explicit Mode Selection ('Selected Mode: think' or 'Selected Mode: quick_think').")
}, async ({ assessment_and_choice }) => {
-
+ const toolName = 'assess_cuc_n_mode';
+ logToolCall(toolName);
try {
- //
-
-
+ // Enhanced validation using regex for robustness
+ const modeRegex = /Selected Mode: (think|quick_think)/i;
+ const cucnRegex = /CUC-N Ratings:/i;
+ const strategyRegex = /Recommended Initial Strategy:/i;
+ if (!assessment_and_choice || typeof assessment_and_choice !== 'string') {
+ throw new Error('Input must be a non-empty string.');
+ }
+ if (!cucnRegex.test(assessment_and_choice)) {
+ throw new Error('Invalid assessment: String must include "CUC-N Ratings:".');
}
-
-
-
+ if (!strategyRegex.test(assessment_and_choice)) {
+ throw new Error('Invalid assessment: String must include "Recommended Initial Strategy:".');
+ }
+ const modeMatch = assessment_and_choice.match(modeRegex);
+ if (!modeMatch || !modeMatch[1]) {
+ throw new Error('Invalid assessment: String must include explicit "Selected Mode: think" or "Selected Mode: quick_think".');
+ }
+ const selectedMode = modeMatch[1].toLowerCase();
+ const resultText = `Cognitive Assessment Completed. CUC-N analysis indicates ${selectedMode === 'think' ? 'detailed deliberation' : 'quick check'} is appropriate. Proceeding with selected mode: ${selectedMode}. Full Assessment logged. Ensure subsequent actions align with this assessment.`;
+ logToolResult(toolName, true, `Selected mode: ${selectedMode}`);
+ // Log the full assessment server-side for traceability
+ console.error(`[${new Date().toISOString()}] [MCP Server] - ${toolName} Assessment Details:\n${assessment_and_choice}`);
return { content: [{ type: "text", text: resultText }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
+ /**
+ * Tool: think
+ * Purpose: The **CENTRAL HUB** for the cognitive loop. Mandatory after assessment, other cognitive tools, internal drafts, or external action results.
+ * Workflow: Analyze previous step -> Plan immediate next step -> Verify -> Assess Risk -> Self-Correct.
+ * Output: Returns the structured thought text itself, grounding the AI's reasoning process in the context.
+ */
+ server.tool("think", "**MANDATORY Central Hub for Analysis, Planning, and Refinement.** Called after assessment, other cognitive tools (`plan_and_solve`, `chain_of_thought`, etc.), internal drafts (`chain_of_draft`), or external action results. Analyzes previous step's outcome/draft, plans the *immediate* next action (cognitive or planning external action), verifies plan, assesses risk/challenges, looks ahead, and self-corrects. Follow the MANDATORY structure in the `thought` parameter.", {
+ thought: z.string().describe("Your **detailed** internal monologue following the MANDATORY structure: ## Analysis: (Critically evaluate last result/draft/observation. What worked? What didn't? What are the implications?), ## Plan: (Define the *single, immediate* next action and its specific purpose. Is it calling another cognitive tool, generating a draft, planning an external action, or concluding?), ## Verification: (How will you confirm the next step is correct or successful?), ## Anticipated Challenges & Contingency: (What could go wrong with the next step? How will you handle it?), ## Risk Assessment: (Briefly assess risk of the planned step - Low/Medium/High), ## Lookahead: (How does this step fit into the overall goal?), ## Self-Correction & Learning: (Any adjustments needed based on the analysis? What was learned?).")
}, async ({ thought }) => {
-
+ const toolName = 'think';
+ logToolCall(toolName);
try {
if (!thought || typeof thought !== 'string' || thought.trim().length === 0) {
- throw new Error('Invalid thought: Must be a non-empty string.');
+ throw new Error('Invalid thought: Must be a non-empty string containing the structured analysis and plan.');
}
- // Basic check
+ // Basic structural check (case-insensitive) - Warning, not strict failure
const requiredSections = ["## Analysis:", "## Plan:", "## Verification:", "## Anticipated Challenges & Contingency:", "## Risk Assessment:", "## Lookahead:", "## Self-Correction & Learning:"];
- const missingSections = requiredSections.filter(section => !thought.includes(section));
+ const missingSections = requiredSections.filter(section => !thought.toLowerCase().includes(section.toLowerCase()));
if (missingSections.length > 0) {
- console.warn(`[MCP Server] Warning: '
+ console.warn(`[${new Date().toISOString()}] [MCP Server] Warning: '${toolName}' input might be missing sections: ${missingSections.join(', ')}. Ensure full structure is followed for optimal reasoning.`);
}
- logToolResult(
- // Returns the same thought text received.
+ logToolResult(toolName, true, `Thought logged (length: ${thought.length})`);
+ // Returns the same thought text received. This grounds the reasoning in the context.
+ // The AI uses this output implicitly as the starting point for its *next* internal step or external action.
return { content: [{ type: "text", text: thought }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
+ /**
+ * Tool: quick_think
+ * Purpose: A lightweight cognitive checkpoint for **strictly Low CUC-N situations** or trivial confirmations.
+ * Workflow: Use ONLY when `assess_cuc_n_mode` explicitly selected 'quick_think'. Use sparingly.
+ * Output: Logs the brief thought.
+ */
+ server.tool("quick_think", "Cognitive Checkpoint ONLY for situations explicitly assessed as strictly Low CUC-N (via `assess_cuc_n_mode`) or for trivial confirmations/acknowledgements where detailed analysis via `think` is unnecessary. Use SPARINGLY.", {
+ brief_thought: z.string().describe("Your **concise** thought or confirmation for this simple, low CUC-N step. Briefly state the observation/action and confirm it's trivial.")
}, async ({ brief_thought }) => {
-
+ const toolName = 'quick_think';
+ logToolCall(toolName);
try {
if (!brief_thought || typeof brief_thought !== 'string' || brief_thought.trim().length === 0) {
- throw new Error('Invalid brief_thought: Must be non-empty.');
+ throw new Error('Invalid brief_thought: Must be a non-empty string.');
}
- logToolResult(
-
+ logToolResult(toolName, true, `Logged: ${brief_thought.substring(0, 80)}...`);
+ // Returns the brief thought, similar to 'think', for grounding.
+ return { content: [{ type: "text", text: brief_thought }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
+ /**
+ * Tool: gauge_confidence
+ * Purpose: Meta-Cognitive Checkpoint to explicitly state confidence in a preceding analysis, plan, or draft.
+ * Workflow: Generate assessment -> Call this tool with assessment text -> MANDATORY `think` step follows to analyze the confidence level.
+ * Output: Confirms confidence gauging and level. Emphasizes need for `think` analysis, especially if not High.
+ */
+ server.tool("gauge_confidence", "Meta-Cognitive Checkpoint. Guides *internal stating* of **confidence (High/Medium/Low) and justification** regarding a specific plan, analysis, or draft you just formulated. Call this tool *with* the text containing your confidence assessment. Output MUST be analyzed in the mandatory `think` step immediately following.", {
+ assessment_and_confidence: z.string().describe("The text containing the item being assessed AND your explicit internal assessment: 1) Confidence Level: (High/Medium/Low). 2) Justification for this level.")
}, async ({ assessment_and_confidence }) => {
-
+ const toolName = 'gauge_confidence';
+ logToolCall(toolName);
try {
const confidenceRegex = /Confidence Level: (High|Medium|Low)/i;
- if (!assessment_and_confidence || typeof assessment_and_confidence !== 'string'
- throw new Error('
+ if (!assessment_and_confidence || typeof assessment_and_confidence !== 'string') {
+ throw new Error('Input must be a non-empty string.');
}
const match = assessment_and_confidence.match(confidenceRegex);
-
-
-
+ if (!match || !match[1]) {
+ throw new Error('Invalid confidence assessment: String must include "Confidence Level: High/Medium/Low" and justification.');
+ }
+ const level = match[1];
+ const emphasis = (level.toLowerCase() !== 'high') ? "CRITICAL: Analyze implications of non-High confidence." : "Proceed with analysis.";
+ const resultText = `Confidence Gauge Completed. Stated Level: ${level}. Assessment Text Logged. MANDATORY: Analyze this confidence level and justification in your next 'think' step. ${emphasis}`;
+ logToolResult(toolName, true, `Level: ${level}`);
+ console.error(`[${new Date().toISOString()}] [MCP Server] - ${toolName} Confidence Details:\n${assessment_and_confidence}`);
return { content: [{ type: "text", text: resultText }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
+ /**
+ * Tool: plan_and_solve
+ * Purpose: Guides the *internal generation* of a structured plan draft.
+ * Workflow: Internally generate plan -> Call this tool *with* the plan text -> MANDATORY `think` step follows to analyze/refine the plan.
+ * Output: Returns the provided plan text for grounding and analysis.
+ */
+ server.tool("plan_and_solve", "Guides *internal generation* of a **structured plan draft**. Call this tool *with* the generated plan text you created internally. Returns the plan text. MANDATORY: Use the next `think` step to critically evaluate this plan's feasibility, refine it, and confirm the *first actionable step*.", {
+ generated_plan_text: z.string().describe("The **full, structured plan draft** you generated internally, including goals, steps, potential external tool needs, assumptions, and risks."),
task_objective: z.string().describe("The original high-level task objective this plan addresses.")
}, async ({ generated_plan_text, task_objective }) => {
-
+ const toolName = 'plan_and_solve';
+ logToolCall(toolName, `Objective: ${task_objective.substring(0, 80)}...`);
try {
if (!generated_plan_text || typeof generated_plan_text !== 'string' || generated_plan_text.trim().length === 0) {
- throw new Error('Invalid generated_plan_text: Must be non-empty.');
+ throw new Error('Invalid generated_plan_text: Must be a non-empty string containing the plan.');
}
if (!task_objective || typeof task_objective !== 'string' || task_objective.trim().length === 0) {
- throw new Error('Invalid task_objective.');
+ throw new Error('Invalid task_objective: Must provide the original objective.');
}
- logToolResult(
- // Returns the actual plan text received
+ logToolResult(toolName, true, `Returned plan draft for analysis (length: ${generated_plan_text.length})`);
+ // Returns the actual plan text received. The AI must analyze this in the next 'think' step.
return { content: [{ type: "text", text: generated_plan_text }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
-
+ /**
+ * Tool: chain_of_thought
+ * Purpose: Guides the *internal generation* of a detailed, step-by-step reasoning draft (CoT).
+ * Workflow: Internally generate CoT -> Call this tool *with* the CoT text -> MANDATORY `think` step follows to analyze the reasoning.
+ * Output: Returns the provided CoT text for grounding and analysis.
+ */
+ server.tool("chain_of_thought", "Guides *internal generation* of **detailed, step-by-step reasoning draft (CoT)**. Call this tool *with* the generated CoT text you created internally. Returns the CoT text. MANDATORY: Use the next `think` step to analyze this reasoning, extract insights, identify flaws/gaps, and plan the next concrete action based on the CoT.", {
+ generated_cot_text: z.string().describe("The **full, step-by-step Chain of Thought draft** you generated internally to solve or analyze the problem."),
+ problem_statement: z.string().describe("The original problem statement or question this CoT addresses.")
}, async ({ generated_cot_text, problem_statement }) => {
-
+ const toolName = 'chain_of_thought';
+ logToolCall(toolName, `Problem: ${problem_statement.substring(0, 80)}...`);
try {
if (!generated_cot_text || typeof generated_cot_text !== 'string' || generated_cot_text.trim().length === 0) {
- throw new Error('Invalid generated_cot_text: Must be non-empty.');
+ throw new Error('Invalid generated_cot_text: Must be a non-empty string containing the CoT.');
}
if (!problem_statement || typeof problem_statement !== 'string' || problem_statement.trim().length === 0) {
- throw new Error('Invalid problem_statement.');
+ throw new Error('Invalid problem_statement: Must provide the original problem.');
}
- logToolResult(
- // Returns the actual CoT text received
+ logToolResult(toolName, true, `Returned CoT draft for analysis (length: ${generated_cot_text.length})`);
+ // Returns the actual CoT text received. The AI must analyze this in the next 'think' step.
return { content: [{ type: "text", text: generated_cot_text }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
+ /**
+ * Tool: chain_of_draft
+ * Purpose: Signals that internal drafts (code, text, plan fragments) have been generated or refined.
+ * Workflow: Internally generate/refine draft(s) -> Call this tool -> MANDATORY `think` step follows to analyze the draft(s).
+ * Output: Confirms readiness for analysis.
+ */
+ server.tool("chain_of_draft", "Signals that one or more **internal drafts** (e.g., code snippets, documentation sections, refined plan steps) have been generated or refined and are ready for analysis. Call this tool *after* generating/refining draft(s) internally. Response confirms readiness. MANDATORY: Analyze these draft(s) in your next `think` step.", {
+ draft_description: z.string().describe("Brief but specific description of the draft(s) generated/refined internally (e.g., 'Initial Python function for API call', 'Refined error handling in plan step 3', 'Drafted README introduction').")
}, async ({ draft_description }) => {
-
+ const toolName = 'chain_of_draft';
+ logToolCall(toolName, `Description: ${draft_description}`);
try {
if (!draft_description || typeof draft_description !== 'string' || draft_description.trim().length === 0) {
- throw new Error('Invalid draft_description.');
+ throw new Error('Invalid draft_description: Must provide a description.');
}
- const resultText = `Internal draft(s) ready for analysis: ${draft_description}. MANDATORY: Analyze these draft(s) now in your next 'think' step.`;
- logToolResult(
+ const resultText = `Internal draft(s) ready for analysis: \"${draft_description}\". MANDATORY: Analyze these draft(s) now using the structured format in your next 'think' step. Evaluate correctness, completeness, and alignment with goals.`;
+ logToolResult(toolName, true);
return { content: [{ type: "text", text: resultText }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
-
+ /**
+ * Tool: reflection
+ * Purpose: Guides the *internal generation* of a critical self-evaluation (critique) of a prior step, draft, or outcome.
+ * Workflow: Internally generate critique -> Call this tool *with* the critique text -> MANDATORY `think` step follows to act on the critique.
+ * Output: Returns the provided critique text for grounding and analysis.
+ */
+ server.tool("reflection", "Guides *internal generation* of a critical self-evaluation (critique) on a prior step, draft, plan, or outcome. Call this tool *with* the **generated critique text** you created internally. Returns the critique text. MANDATORY: Use the next `think` step to analyze this critique and plan specific corrective actions or refinements based on it.", {
+ generated_critique_text: z.string().describe("The **full critique text** you generated internally, identifying specific flaws, strengths, assumptions, alternative approaches, and concrete suggestions for improvement."),
+ input_subject_description: z.string().describe("A brief description of the original reasoning, plan, code draft, or action result that was critiqued (e.g., 'Critique of the plan generated via plan_and_solve', 'Reflection on the CoT for problem X').")
}, async ({ generated_critique_text, input_subject_description }) => {
-
+ const toolName = 'reflection';
+ logToolCall(toolName, `Subject: ${input_subject_description}`);
try {
if (!generated_critique_text || typeof generated_critique_text !== 'string' || generated_critique_text.trim().length === 0) {
- throw new Error('Invalid generated_critique_text: Must be non-empty.');
+ throw new Error('Invalid generated_critique_text: Must be a non-empty string containing the critique.');
}
if (!input_subject_description || typeof input_subject_description !== 'string' || input_subject_description.trim().length === 0) {
- throw new Error('Invalid input_subject_description.');
+ throw new Error('Invalid input_subject_description: Must describe what was critiqued.');
}
- logToolResult(
- // Returns the actual critique text received
+ logToolResult(toolName, true, `Returned critique for analysis (length: ${generated_critique_text.length})`);
+ // Returns the actual critique text received. The AI must analyze this in the next 'think' step.
return { content: [{ type: "text", text: generated_critique_text }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
-
-
-
+ /**
+ * Tool: synthesize_prior_reasoning
+ * Purpose: Context Management Tool. Guides the *internal generation* of a structured summary of preceding context.
+ * Workflow: Internally generate summary -> Call this tool *with* the summary text -> MANDATORY `think` step follows to use the summary.
+ * Output: Returns the provided summary text for grounding and analysis.
+ */
+ server.tool("synthesize_prior_reasoning", "Context Management Tool. Guides *internal generation* of a **structured summary** of preceding steps, decisions, key findings, or relevant context to consolidate understanding before proceeding. Call this tool *with* the generated summary text you created internally. Returns the summary. MANDATORY: Use the next `think` step to leverage this summary and inform the next action.", {
+ generated_summary_text: z.string().describe("The **full, structured summary text** you generated internally (e.g., key decisions made, open questions, current state of implementation, relevant facts gathered)."),
+ context_to_summarize_description: z.string().describe("Description of the reasoning span or context that was summarized (e.g., 'Summary of the last 5 steps', 'Consolidated findings from tool results A and B').")
}, async ({ generated_summary_text, context_to_summarize_description }) => {
-
+ const toolName = 'synthesize_prior_reasoning';
+ logToolCall(toolName, `Context: ${context_to_summarize_description}`);
try {
if (!generated_summary_text || typeof generated_summary_text !== 'string' || generated_summary_text.trim().length === 0) {
- throw new Error('Invalid generated_summary_text: Must be non-empty.');
+ throw new Error('Invalid generated_summary_text: Must be a non-empty string containing the summary.');
}
if (!context_to_summarize_description || typeof context_to_summarize_description !== 'string' || context_to_summarize_description.trim().length === 0) {
- throw new Error('Invalid context_to_summarize_description.');
+ throw new Error('Invalid context_to_summarize_description: Must describe what was summarized.');
}
- logToolResult(
- // Returns the actual summary text received
+ logToolResult(toolName, true, `Returned summary for analysis (length: ${generated_summary_text.length})`);
+ // Returns the actual summary text received. The AI must analyze/use this in the next 'think' step.
return { content: [{ type: "text", text: generated_summary_text }] };
}
catch (error) {
- return logToolError(
+ return logToolError(toolName, error);
}
});
// --- Server Lifecycle and Error Handling ---
-
-
-
-
-
-
-
-
-
- }
-
-
- server.close().catch(err => console.error('[MCP Server] Error during shutdown on uncaughtException:', err)).finally(() => {
+ /**
+ * Gracefully shuts down the server.
+ */
+ async function shutdown() {
+ console.error('\n[MCP Server] Shutting down gracefully...');
+ try {
+ await server.close();
+ console.error('[MCP Server] Server closed.');
+ process.exit(0);
+ }
+ catch (err) {
+ console.error('[MCP Server] Error during shutdown:', err);
process.exit(1);
- }
+ }
+ }
+ process.on('SIGINT', shutdown);
+ process.on('SIGTERM', shutdown);
+ process.on('uncaughtException', (error, origin) => {
+ const timestamp = new Date().toISOString();
+ console.error(`[${timestamp}] [MCP Server] FATAL: Uncaught Exception at: ${origin}`, error);
+ // Attempt graceful shutdown, but exit quickly if it fails
+ shutdown().catch(() => process.exit(1));
});
process.on('unhandledRejection', (reason, promise) => {
-
-
-
-
+ const timestamp = new Date().toISOString();
+ console.error(`[${timestamp}] [MCP Server] FATAL: Unhandled Promise Rejection:`, reason);
+ // Attempt graceful shutdown, but exit quickly if it fails
+ shutdown().catch(() => process.exit(1));
});
// --- Start the Server ---
+ /**
+ * Initializes and starts the MCP server.
+ */
async function main() {
try {
const transport = new StdioServerTransport();
await server.connect(transport);
-
- console.error(
- console.error(
- console.error(
+ const border = '-----------------------------------------------------';
+ console.error(border);
+ console.error(` ᑭᑫᓐᑖᓱᐎᓐ ᐋᐸᒋᒋᑲᓇᓐ - Core Cognitive Tools Suite v0.9.4: Enables structured, iterative reasoning (Chain of Thought/Draft), planning, and analysis for AI agents, focusing on the cognitive loop. MANDATORY \`think\` step integrates results.`);
+ console.error(` Version: ${version}`);
+ console.error(' Status: Running on stdio, awaiting MCP requests...');
+ console.error(border);
}
catch (error) {
-
+ const timestamp = new Date().toISOString();
+ console.error(`[${timestamp}] [MCP Server] Fatal error during startup:`, error);
process.exit(1);
}
}
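For orientation, the sketch below shows how a client might drive the two mandatory first steps of this server's loop (`assess_cuc_n_mode`, then `think`). It is illustrative only: it assumes the client side of the `@modelcontextprotocol/sdk` (`Client`, `StdioClientTransport`, `callTool`), and the argument strings are hypothetical examples written to satisfy the validation regexes visible in the diff above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the published server over stdio (package name from this diff).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@nbiish/cognitive-tools-mcp"],
});
const client = new Client({ name: "example-client", version: "0.0.1" }, { capabilities: {} });
await client.connect(transport);

// 1) Mandatory pre-deliberation assessment. The string contains the markers the
//    server checks for: "CUC-N Ratings:", "Recommended Initial Strategy:", "Selected Mode:".
await client.callTool({
  name: "assess_cuc_n_mode",
  arguments: {
    assessment_and_choice: [
      "Situation: add input validation to a small utility module.",
      "CUC-N Ratings: Complexity Low, Uncertainty Low, Consequence Medium, Novelty Low.",
      "Recommended Initial Strategy: draft a plan with plan_and_solve, then think.",
      "Selected Mode: think",
    ].join("\n"),
  },
});

// 2) Mandatory `think` step analyzing the assessment and planning the next action.
await client.callTool({
  name: "think",
  arguments: {
    thought:
      "## Analysis: ... ## Plan: ... ## Verification: ... ## Anticipated Challenges & Contingency: ... " +
      "## Risk Assessment: Low ## Lookahead: ... ## Self-Correction & Learning: ...",
  },
});
```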
package/integration-prompts/new-prompts/latest.md
ADDED

@@ -0,0 +1,134 @@
# SYSTEM PROMPT: Gikendaasowin Cognitive Agent

## ROLE AND GOAL

You are **Gikendaasowin**, an expert AI Pair Programmer and Cognitive Agent. Your primary function is to solve complex programming, reasoning, and knowledge-work tasks with exceptional clarity, structure, and robustness. You achieve this by meticulously applying the **Gikendaasowin Aabajichiganan (Core Cognitive Tools) MCP suite** (`gikendaasowin-aabajichiganan-mcp` v1.0.0). Your goal is not just to find an answer, but to demonstrate a traceable, verifiable, and self-correcting reasoning process using these tools. You operate within a **cognitive loop**, focusing on internal deliberation before planning external actions.

## GUIDING PRINCIPLES

1. **Structured Deliberation:** Use the provided tools for their specific cognitive functions (assessing, planning, reasoning, drafting, reflecting, summarizing, gauging confidence). Do not perform these actions implicitly; use the designated tool.
2. **Mandatory Centralized Analysis (`think`):** The `think` tool is the **absolute core** of your process. It is MANDATORY after initial assessment, after using *any* other cognitive tool, after generating internal drafts (`chain_of_draft`), and after receiving results from any external action (executed by the environment based on your plan in `think`). It's where you analyze, synthesize, plan the *immediate next step*, verify, and self-correct.
3. **Iterative Refinement:** Embrace a cycle of generation (thought, plan, draft, critique) followed by analysis (`think`). Use `chain_of_thought`, `plan_and_solve`, `chain_of_draft`, and `reflection` to structure these iterations.
4. **Context-Driven Cognitive Depth:** Use `assess_cuc_n_mode` at the start and when context shifts significantly to determine if deep deliberation (`think`) or a quick check (`quick_think`) is appropriate. Default to `think` unless CUC-N is demonstrably Low.
5. **Internal Focus First:** These tools manage your *internal* cognitive state and reasoning. Generate content (plans, CoTs, critiques, summaries, drafts) *internally first*, then call the corresponding tool (`plan_and_solve`, `chain_of_thought`, `reflection`, `synthesize_prior_reasoning`, `chain_of_draft`) *with* that generated content. The tool logs it and returns it, grounding you for the mandatory `think` analysis step. Planning for *external* actions (like running code, searching the web, asking the user) occurs within the `## Plan:` section of the `think` tool, but execution is handled by the environment.
6. **Traceability and Verification:** Your use of tools, especially the structured `think` output, must create a clear, step-by-step trail of your reasoning process.

## MANDATORY RULES (Non-Negotiable)

1. **ALWAYS Start with Assessment:** Your *very first action* for any non-trivial task MUST be to call `assess_cuc_n_mode`.
2. **ALWAYS Use `think` After:**
    * `assess_cuc_n_mode` result.
    * *Any* result from `plan_and_solve`, `chain_of_thought`, `reflection`, `synthesize_prior_reasoning`, `gauge_confidence`.
    * *Any* result from `chain_of_draft`.
    * *Any* result/observation from an external action (provided by the environment).
    * The *only* exception is if `assess_cuc_n_mode` explicitly resulted in selecting `quick_think` for a strictly Low CUC-N step.
3. **`quick_think` Restriction:** ONLY use `quick_think` if `assess_cuc_n_mode` explicitly selected it for a confirmed Low CUC-N situation or for truly trivial confirmations. Be conservative; default to `think`.
4. **Generate Content BEFORE Tool Call:** For `plan_and_solve`, `chain_of_thought`, `reflection`, `synthesize_prior_reasoning`, and `chain_of_draft`, you MUST generate the relevant text (plan, CoT, critique, summary, draft description) *internally first* and pass it as the argument to the tool. The tool's purpose is to log this internal cognitive act and return the content to ground your subsequent `think` step.
5. **Strict `think` Structure:** ALWAYS adhere to the full, mandatory structure within the `think` tool's `thought` parameter (## Analysis:, ## Plan:, ## Verification:, ## Anticipated Challenges & Contingency:, ## Risk Assessment:, ## Lookahead:, ## Self-Correction & Learning:). Be detailed and specific in each section.
6. **Plan Only the IMMEDIATE Next Step:** The `## Plan:` section in `think` defines only the *single, next immediate action* (calling another cognitive tool, planning an external action, concluding). Do not outline multiple future steps here; use `plan_and_solve` for multi-step planning drafts.
7. **Analyze Errors:** If a tool returns an error, treat the error message as an observation. Your next step MUST be to call `think` and analyze the error in the `## Analysis:` section, then plan corrective action in the `## Plan:` section.

## CORE COGNITIVE WORKFLOW INSTRUCTIONS

1. **Receive Task:** Understand the user's request.
2. **Assess:** Call `assess_cuc_n_mode` with your detailed CUC-N analysis and mode selection (`think` or `quick_think`).
3. **Initial Think:** Call `think` (or `quick_think` if explicitly selected and appropriate).
    * `## Analysis:` Analyze the task and the CUC-N assessment result.
    * `## Plan:` Decide the first *cognitive* action (e.g., "Generate a plan using `plan_and_solve`", "Generate a CoT using `chain_of_thought`").
    * Complete other `think` sections.
4. **Internal Generation:** *Internally* generate the content required for the planned cognitive tool (e.g., write the plan draft, write the CoT).
5. **Call Cognitive Tool:** Call the chosen tool (`plan_and_solve`, `chain_of_thought`, etc.) *with* the content you just generated.
6. **MANDATORY Think Analysis:** Call `think`.
    * `## Analysis:` Critically analyze the tool's output (which is the plan/CoT/critique/summary you provided it, now logged). Is it complete? Correct? Any flaws? What are the implications?
    * `## Plan:` Decide the *next immediate step*. This could be:
        * Refining the previous step (e.g., "Generate reflection on the CoT using `reflection`").
        * Generating a draft (e.g., "Generate code draft based on plan step 2, then call `chain_of_draft`").
        * Planning an external action (e.g., "Plan to execute code snippet X", "Plan to search for Y"). The environment executes this.
        * Gauging confidence (e.g., "Assess confidence in this plan using `gauge_confidence`").
        * Synthesizing context (e.g., "Summarize key findings using `synthesize_prior_reasoning`").
        * Concluding the task.
    * Complete other `think` sections.
7. **Handle External Actions:** If the plan in `think` was for an external action, the environment will execute it and provide results. Upon receiving results, **immediately go back to Step 6 (MANDATORY Think Analysis)** to analyze the outcome.
8. **Iterate:** Repeat steps 4-7 (or variations involving `chain_of_draft`, `reflection`, `gauge_confidence`, `synthesize_prior_reasoning` followed by `think`) until the task is fully resolved.
9. **Conclude:** Formulate your final answer or conclusion within the `## Plan:` section of your final `think` step.
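For illustration, one pass through the loop above for a small, hypothetical task could be expressed as the ordered MCP tool calls below. The tool names and argument keys are the ones the server defines; the task and argument text are invented for the example.

```typescript
// Hypothetical task: "add input validation to parseConfig()".
// Each entry is one MCP tool call; every cognitive tool is followed by `think`.
const onePassThroughTheLoop = [
  { name: "assess_cuc_n_mode", arguments: { assessment_and_choice: "Situation: ...\nCUC-N Ratings: ...\nRecommended Initial Strategy: plan_and_solve then think.\nSelected Mode: think" } },
  { name: "think", arguments: { thought: "## Analysis: CUC-N is moderate ... ## Plan: generate a plan draft and call plan_and_solve ..." } },
  { name: "plan_and_solve", arguments: { generated_plan_text: "1. Identify inputs 2. Add checks 3. Add tests", task_objective: "Add input validation to parseConfig()" } },
  { name: "think", arguments: { thought: "## Analysis: plan step 1 is actionable ... ## Plan: draft the validation code, then call chain_of_draft ..." } },
  { name: "chain_of_draft", arguments: { draft_description: "Initial draft of validation guard clauses for parseConfig()" } },
  { name: "think", arguments: { thought: "## Analysis: draft covers the known edge cases ... ## Plan: plan external action: run the test suite ..." } },
];
```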
## TOOL-SPECIFIC INSTRUCTIONS

* **`assess_cuc_n_mode` (MANDATORY START):**
    * **When:** Before starting any non-trivial task or significantly changing strategy.
    * **Input (`assessment_and_choice`):** Provide a structured string containing: 1) Situation Description, 2) CUC-N Ratings (L/M/H for each + rationale), 3) Recommended Initial Strategy, 4) Explicit Mode Selection (`Selected Mode: think` or `Selected Mode: quick_think`).
    * **Follow-up:** MANDATORY `think` (or `quick_think` if selected).

* **`think` (MANDATORY HUB):**
    * **When:** After assessment, other tools, drafts, external results. See Rule #2.
    * **Input (`thought`):** Provide your detailed internal monologue STRICTLY following the structure (See "Think Tool Deep Dive" below).
    * **Follow-up:** Execute the *immediate next action* defined in your `## Plan:` section (call another tool, wait for external action result, or output final answer).

* **`quick_think` (Restricted Use):**
    * **When:** ONLY if `assess_cuc_n_mode` selected it for a verified Low CUC-N situation or trivial confirmation.
    * **Input (`brief_thought`):** Concise thought or confirmation.
    * **Follow-up:** Execute the simple next step.

* **`gauge_confidence` (Meta-Cognition):**
    * **When:** After formulating a plan, analysis, or draft where confidence needs explicit assessment.
    * **Workflow:** 1. Internally determine confidence (H/M/L) and justification. 2. Call this tool.
    * **Input (`assessment_and_confidence`):** The text describing what's being assessed PLUS your stated "Confidence Level: [H/M/L]" and "Justification: ...".
    * **Follow-up:** MANDATORY `think` to analyze the stated confidence level and its implications.

* **`plan_and_solve` (Plan Generation):**
    * **When:** When you need to create a structured, multi-step plan draft.
    * **Workflow:** 1. Internally generate the full plan draft. 2. Call this tool.
    * **Input (`generated_plan_text`, `task_objective`):** Your generated plan text; the original task goal.
    * **Follow-up:** MANDATORY `think` to analyze, refine, and confirm the *first* step of the plan.

* **`chain_of_thought` (Reasoning Generation):**
    * **When:** When you need to generate a detailed, step-by-step reasoning process to solve a problem or analyze a situation.
    * **Workflow:** 1. Internally generate the full CoT text. 2. Call this tool.
    * **Input (`generated_cot_text`, `problem_statement`):** Your generated CoT text; the original problem.
    * **Follow-up:** MANDATORY `think` to analyze the CoT, extract insights, identify flaws, and plan the next action based on it.

* **`chain_of_draft` (Draft Management):**
    * **When:** After internally generating or refining any draft (code, text, plan fragment, etc.).
    * **Workflow:** 1. Internally generate/refine draft. 2. Call this tool.
    * **Input (`draft_description`):** Brief, specific description of the draft(s).
    * **Follow-up:** MANDATORY `think` to analyze the draft(s) described.

* **`reflection` (Critique Generation):**
    * **When:** When you need to critically evaluate a previous step, plan, draft, or outcome.
    * **Workflow:** 1. Internally generate the full critique text. 2. Call this tool.
    * **Input (`generated_critique_text`, `input_subject_description`):** Your generated critique; description of what was critiqued.
    * **Follow-up:** MANDATORY `think` to analyze the critique and plan specific corrective actions.

* **`synthesize_prior_reasoning` (Context Management):**
    * **When:** When you need to consolidate understanding of previous steps or context before proceeding.
    * **Workflow:** 1. Internally generate the structured summary. 2. Call this tool.
    * **Input (`generated_summary_text`, `context_to_summarize_description`):** Your generated summary; description of the context summarized.
    * **Follow-up:** MANDATORY `think` to leverage the summary and inform the next action.
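To make the expected input formats concrete, the constant below is a hypothetical `assessment_and_confidence` value; the wording is invented, but it contains the "Confidence Level:" marker that the server's validation regex in the diff above looks for.

```typescript
// Must contain "Confidence Level: High|Medium|Low" to satisfy the server's check.
const assessment_and_confidence = `Item assessed: the three-step plan for refactoring parseConfig().
Confidence Level: Medium
Justification: the plan covers the main cases, but test coverage of legacy inputs is unknown.`;
```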
## `think` TOOL DEEP DIVE (MANDATORY STRUCTURE)

Your `thought` input to the `think` tool MUST contain ALL of these sections, clearly marked:

* **`## Analysis:`** Critically evaluate the *immediately preceding* step's result, observation, or generated content (plan, CoT, draft, critique, summary). What are the key takeaways? What worked? What didn't? Are there inconsistencies? What are the implications for the overall goal? If analyzing an error, diagnose the cause.
* **`## Plan:`** Define the *single, immediate next action* you will take. Be specific. Examples: "Call `chain_of_thought` with the problem statement X.", "Call `chain_of_draft` describing the generated function Y.", "Plan external action: Execute the Python code snippet Z.", "Call `reflection` with critique of the previous plan.", "Call `think` to conclude the task and formulate the final response."
* **`## Verification:`** How will you check if the *planned next step* is successful or correct? (e.g., "Check tool output for expected format", "Analyze the code execution result for expected values", "Review the generated CoT for logical flow").
* **`## Anticipated Challenges & Contingency:`** What potential problems might arise with the *planned next step*? How will you handle them if they occur? (e.g., "Challenge: Tool might error if input is malformed. Contingency: Reformat input and retry.", "Challenge: Code might timeout. Contingency: Analyze logs in next `think` step and simplify code if needed.").
* **`## Risk Assessment:`** Briefly assess the risk of the *planned next step* failing or causing issues (Low/Medium/High). Justify briefly.
* **`## Lookahead:`** How does the *planned next step* contribute to the overall task objective? Does it move significantly closer to the goal?
* **`## Self-Correction & Learning:`** Based on the `## Analysis:`, what adjustments are needed to your overall approach or understanding? What did you learn from the previous step? Are there any refinements to the plan needed beyond the immediate next step (note them here, but implement planning via `plan_and_solve` if significant)?
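A minimal, hypothetical `thought` value containing all seven required headings (and therefore passing the server's structural check shown in the diff above) might look like this:

```typescript
const thought = `## Analysis: The plan returned by plan_and_solve covers parsing and validation, but omits error reporting.
## Plan: Generate a reflection on the plan and call \`reflection\` with it.
## Verification: The critique should name at least one concrete gap and a fix for it.
## Anticipated Challenges & Contingency: The critique may be too vague; if so, regenerate it with specific examples.
## Risk Assessment: Low - this is an internal step with no external side effects.
## Lookahead: A corrected plan unblocks drafting the actual code changes.
## Self-Correction & Learning: Plans should state their error-handling strategy explicitly from the start.`;
```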
## ERROR HANDLING

Tool errors are opportunities for learning and correction. If a tool call returns an error:
1. Do NOT stop.
2. Your immediate next step MUST be to call `think`.
3. In the `## Analysis:` section, analyze the error message provided by the tool.
4. In the `## Plan:` section, decide how to proceed (e.g., retry with corrected input, try an alternative approach, ask for clarification).

## OUTPUT FORMAT

Ensure your outputs correctly format the tool calls as expected by the MCP protocol (handled by the environment, but be aware you are triggering these structured calls). Your internal monologue happens *before* the tool call, especially for tools requiring generated content. The `think` tool's output *is* your structured monologue.

---

Adhere strictly to these rules and instructions. Your ability to follow this structured cognitive process using the provided tools is paramount to successfully fulfilling your role as Gikendaasowin. Produce high-quality, well-reasoned, and traceable results.
@@ -0,0 +1,458 @@
#!/usr/bin/env node

/**
 * -----------------------------------------------------------------------------
 * Gikendaasowin Aabajichiganan - Core Cognitive Tools MCP Server
 *
 * Version: 1.0.0
 *
 * Description: Provides a suite of cognitive tools for an AI Pair Programmer,
 * enabling structured reasoning, planning, analysis, and iterative
 * refinement (Chain of Thought, Chain of Draft, Reflection).
 * This server focuses on managing the AI's *internal cognitive loop*,
 * as described in the Anthropic research on the 'think' tool and
 * related cognitive patterns. External actions are planned within
 * the 'think' step but executed by the calling environment.
 *
 * Key Principles:
 * 1. **Structured Deliberation:** Tools guide specific cognitive acts (planning,
 *    reasoning, critique).
 * 2. **Centralized Analysis (`think`):** The `think` tool is mandatory after
 *    most cognitive actions or receiving external results, serving as the hub
 *    for analysis, planning the *next immediate step*, verification, and
 *    self-correction.
 * 3. **CUC-N Assessment:** Task characteristics determine the required depth
 *    of cognition (full `think` vs. `quick_think`).
 * 4. **Internal Generation First:** Tools like `plan_and_solve`, `chain_of_thought`,
 *    `reflection`, and `synthesize_prior_reasoning` are called *after* the AI
 *    has internally generated the relevant text (plan, CoT, critique, summary).
 *    The tool logs this generation and returns it, grounding the AI for the
 *    mandatory `think` analysis step.
 * 5. **Iterative Refinement (Chain of Draft):** The `chain_of_draft` tool signals
 *    internal draft creation/modification, prompting analysis via `think`.
 *
 * Protocol: Model Context Protocol (MCP) over stdio.
 * -----------------------------------------------------------------------------
 */

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// --- Server Definition ---

const server = new McpServer({
  name: "gikendaasowin-aabajichiganan-mcp",
  version: "1.0.0", // Updated version
  description: "ᑭᑫᓐᑖᓱᐎᓐ ᐋᐸᒋᒋᑲᓇᓐ - Core Cognitive Tools Suite v1.0.0: Enables structured, iterative reasoning (Chain of Thought/Draft), planning, and analysis for AI agents, focusing on the cognitive loop. MANDATORY `think` step integrates results."
});

// --- Logging Helpers ---

/**
 * Logs an incoming tool call to stderr.
 * @param toolName The name of the tool being called.
 * @param details Optional additional details about the call.
 */
function logToolCall(toolName: string, details?: string): void {
  const timestamp = new Date().toISOString();
  console.error(`[${timestamp}] [MCP Server] > Tool Call: ${toolName}${details ? ` - ${details}` : ''}`);
}

/**
 * Logs the result (success or failure) of a tool execution to stderr.
 * @param toolName The name of the tool executed.
 * @param success Whether the execution was successful.
 * @param resultDetails Optional details about the result.
 */
function logToolResult(toolName: string, success: boolean, resultDetails?: string): void {
  const timestamp = new Date().toISOString();
  console.error(`[${timestamp}] [MCP Server] < Tool Result: ${toolName} - ${success ? 'Success' : 'Failure'}${resultDetails ? ` - ${resultDetails}` : ''}`);
}

/**
 * Logs an error during tool execution and formats a standard error response for the LLM.
 * @param toolName The name of the tool where the error occurred.
 * @param error The error object or message.
 * @returns An McpToolResult containing the error message.
 */
function logToolError(toolName: string, error: unknown) {
  const timestamp = new Date().toISOString();
  const errorMessage = error instanceof Error ? error.message : String(error);
  console.error(`[${timestamp}] [MCP Server] ! Tool Error: ${toolName} - ${errorMessage}`);
  logToolResult(toolName, false, errorMessage); // Log failure result as well
  // Return a structured error message suitable for the LLM
  return {
    content: [{
      type: "text" as const,
      text: `Error executing tool '${toolName}': ${errorMessage}. Please analyze this error in your next 'think' step and adjust your plan.`
    }]
  };
}

// --- Core Cognitive Deliberation & Refinement Tools ---

/**
 * Tool: assess_cuc_n_mode
 * Purpose: Mandatory initial assessment of task characteristics to determine cognitive strategy.
 * Workflow: Call BEFORE starting complex tasks or significantly changing strategy.
 * Output: Confirms assessment and selected mode (`think` or `quick_think`). Result MUST inform the subsequent cognitive flow.
 */
server.tool(
  "assess_cuc_n_mode",
  "**Mandatory Pre-Deliberation Assessment.** Evaluates task Complexity, Uncertainty, Consequence, Novelty (CUC-N) to determine required cognitive depth and initial strategy. MUST be called before starting complex tasks or changing strategy. Selects 'think' (default) or 'quick_think' (only for verified Low CUC-N).",
  {
    assessment_and_choice: z.string().describe("Your structured assessment including: 1) Situation Description, 2) CUC-N Ratings (Low/Medium/High for each), 3) Rationale for ratings, 4) Recommended Initial Cognitive Strategy (e.g., 'Start with chain_of_thought then think'), 5) Explicit Mode Selection ('Selected Mode: think' or 'Selected Mode: quick_think').")
  },
  async ({ assessment_and_choice }) => {
    const toolName = 'assess_cuc_n_mode';
    logToolCall(toolName);
    try {
      // Enhanced validation using regex for robustness
      const modeRegex = /Selected Mode: (think|quick_think)/i;
      const cucnRegex = /CUC-N Ratings:/i;
      const strategyRegex = /Recommended Initial Strategy:/i;

      if (!assessment_and_choice || typeof assessment_and_choice !== 'string') {
        throw new Error('Input must be a non-empty string.');
      }
      if (!cucnRegex.test(assessment_and_choice)) {
        throw new Error('Invalid assessment: String must include "CUC-N Ratings:".');
      }
      if (!strategyRegex.test(assessment_and_choice)) {
        throw new Error('Invalid assessment: String must include "Recommended Initial Strategy:".');
      }
      const modeMatch = assessment_and_choice.match(modeRegex);
      if (!modeMatch || !modeMatch[1]) {
        throw new Error('Invalid assessment: String must include explicit "Selected Mode: think" or "Selected Mode: quick_think".');
      }

      const selectedMode = modeMatch[1].toLowerCase();
      const resultText = `Cognitive Assessment Completed. CUC-N analysis indicates ${selectedMode === 'think' ? 'detailed deliberation' : 'quick check'} is appropriate. Proceeding with selected mode: ${selectedMode}. Full Assessment logged. Ensure subsequent actions align with this assessment.`;
      logToolResult(toolName, true, `Selected mode: ${selectedMode}`);
      // Log the full assessment server-side for traceability
      console.error(`[${new Date().toISOString()}] [MCP Server] - ${toolName} Assessment Details:\n${assessment_and_choice}`);
      return { content: [{ type: "text" as const, text: resultText }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);
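
For illustration, a hypothetical `assessment_and_choice` argument that would pass all three checks above could look like the sketch below (the scenario text is invented). Note that the validation regex expects the literal phrase "Recommended Initial Strategy:", which is slightly terser than the "Recommended Initial Cognitive Strategy" wording used in the parameter description.

```typescript
// Hypothetical input for assess_cuc_n_mode (illustrative only).
// The literal phrases "CUC-N Ratings:", "Recommended Initial Strategy:" and
// "Selected Mode: ..." are what the server-side regexes check for.
const assessment_and_choice = [
  "Situation Description: Refactor retry logic shared by three payment services.",
  "CUC-N Ratings: Complexity: High, Uncertainty: Medium, Consequence: High, Novelty: Low.",
  "Rationale: Cross-service change with production impact, but the patterns involved are familiar.",
  "Recommended Initial Strategy: Start with plan_and_solve, then think.",
  "Selected Mode: think"
].join("\n");
```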

/**
 * Tool: think
 * Purpose: The **CENTRAL HUB** for the cognitive loop. Mandatory after assessment, other cognitive tools, internal drafts, or external action results.
 * Workflow: Analyze previous step -> Plan immediate next step -> Verify -> Assess Risk -> Self-Correct.
 * Output: Returns the structured thought text itself, grounding the AI's reasoning process in the context.
 */
server.tool(
  "think",
  "**MANDATORY Central Hub for Analysis, Planning, and Refinement.** Called after assessment, other cognitive tools (`plan_and_solve`, `chain_of_thought`, etc.), internal drafts (`chain_of_draft`), or external action results. Analyzes previous step's outcome/draft, plans the *immediate* next action (cognitive or planning external action), verifies plan, assesses risk/challenges, looks ahead, and self-corrects. Follow the MANDATORY structure in the `thought` parameter.",
  {
    thought: z.string().describe("Your **detailed** internal monologue following the MANDATORY structure: ## Analysis: (Critically evaluate last result/draft/observation. What worked? What didn't? What are the implications?), ## Plan: (Define the *single, immediate* next action and its specific purpose. Is it calling another cognitive tool, generating a draft, planning an external action, or concluding?), ## Verification: (How will you confirm the next step is correct or successful?), ## Anticipated Challenges & Contingency: (What could go wrong with the next step? How will you handle it?), ## Risk Assessment: (Briefly assess risk of the planned step - Low/Medium/High), ## Lookahead: (How does this step fit into the overall goal?), ## Self-Correction & Learning: (Any adjustments needed based on the analysis? What was learned?).")
  },
  async ({ thought }) => {
    const toolName = 'think';
    logToolCall(toolName);
    try {
      if (!thought || typeof thought !== 'string' || thought.trim().length === 0) {
        throw new Error('Invalid thought: Must be a non-empty string containing the structured analysis and plan.');
      }
      // Basic structural check (case-insensitive) - Warning, not strict failure
      const requiredSections = ["## Analysis:", "## Plan:", "## Verification:", "## Anticipated Challenges & Contingency:", "## Risk Assessment:", "## Lookahead:", "## Self-Correction & Learning:"];
      const missingSections = requiredSections.filter(section => !thought.toLowerCase().includes(section.toLowerCase()));
      if (missingSections.length > 0) {
        console.warn(`[${new Date().toISOString()}] [MCP Server] Warning: '${toolName}' input might be missing sections: ${missingSections.join(', ')}. Ensure full structure is followed for optimal reasoning.`);
      }

      logToolResult(toolName, true, `Thought logged (length: ${thought.length})`);
      // Returns the same thought text received. This grounds the reasoning in the context.
      // The AI uses this output implicitly as the starting point for its *next* internal step or external action.
      return { content: [{ type: "text" as const, text: thought }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);
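
A minimal, invented `thought` argument containing all seven sections that the structural check above looks for might be shaped like this:

```typescript
// Illustrative only; the section headers match the requiredSections array above.
const thought = `## Analysis: The plan draft covers retry logic but omits idempotency keys.
## Plan: Add an idempotency-key step to the plan, then signal the update via chain_of_draft.
## Verification: Re-check the revised plan against the original requirements list.
## Anticipated Challenges & Contingency: Key collisions are possible; fall back to one UUID per request.
## Risk Assessment: Low
## Lookahead: This unblocks drafting the per-service code changes in the next iteration.
## Self-Correction & Learning: Earlier drafts skipped failure modes; check them explicitly from now on.`;
```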

/**
 * Tool: quick_think
 * Purpose: A lightweight cognitive checkpoint for **strictly Low CUC-N situations** or trivial confirmations.
 * Workflow: Use ONLY when `assess_cuc_n_mode` explicitly selected 'quick_think'. Use sparingly.
 * Output: Logs the brief thought.
 */
server.tool(
  "quick_think",
  "Cognitive Checkpoint ONLY for situations explicitly assessed as strictly Low CUC-N (via `assess_cuc_n_mode`) or for trivial confirmations/acknowledgements where detailed analysis via `think` is unnecessary. Use SPARINGLY.",
  {
    brief_thought: z.string().describe("Your **concise** thought or confirmation for this simple, low CUC-N step. Briefly state the observation/action and confirm it's trivial.")
  },
  async ({ brief_thought }) => {
    const toolName = 'quick_think';
    logToolCall(toolName);
    try {
      if (!brief_thought || typeof brief_thought !== 'string' || brief_thought.trim().length === 0) {
        throw new Error('Invalid brief_thought: Must be a non-empty string.');
      }
      logToolResult(toolName, true, `Logged: ${brief_thought.substring(0, 80)}...`);
      // Returns the brief thought, similar to 'think', for grounding.
      return { content: [{ type: "text" as const, text: brief_thought }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);

/**
 * Tool: gauge_confidence
 * Purpose: Meta-Cognitive Checkpoint to explicitly state confidence in a preceding analysis, plan, or draft.
 * Workflow: Generate assessment -> Call this tool with assessment text -> MANDATORY `think` step follows to analyze the confidence level.
 * Output: Confirms confidence gauging and level. Emphasizes need for `think` analysis, especially if not High.
 */
server.tool(
  "gauge_confidence",
  "Meta-Cognitive Checkpoint. Guides *internal stating* of **confidence (High/Medium/Low) and justification** regarding a specific plan, analysis, or draft you just formulated. Call this tool *with* the text containing your confidence assessment. Output MUST be analyzed in the mandatory `think` step immediately following.",
  {
    assessment_and_confidence: z.string().describe("The text containing the item being assessed AND your explicit internal assessment: 1) Confidence Level: (High/Medium/Low). 2) Justification for this level.")
  },
  async ({ assessment_and_confidence }) => {
    const toolName = 'gauge_confidence';
    logToolCall(toolName);
    try {
      const confidenceRegex = /Confidence Level: (High|Medium|Low)/i;
      if (!assessment_and_confidence || typeof assessment_and_confidence !== 'string') {
        throw new Error('Input must be a non-empty string.');
      }
      const match = assessment_and_confidence.match(confidenceRegex);
      if (!match || !match[1]) {
        throw new Error('Invalid confidence assessment: String must include "Confidence Level: High/Medium/Low" and justification.');
      }

      const level = match[1];
      const emphasis = (level.toLowerCase() !== 'high') ? "CRITICAL: Analyze implications of non-High confidence." : "Proceed with analysis.";
      const resultText = `Confidence Gauge Completed. Stated Level: ${level}. Assessment Text Logged. MANDATORY: Analyze this confidence level and justification in your next 'think' step. ${emphasis}`;
      logToolResult(toolName, true, `Level: ${level}`);
      console.error(`[${new Date().toISOString()}] [MCP Server] - ${toolName} Confidence Details:\n${assessment_and_confidence}`);
      return { content: [{ type: "text" as const, text: resultText }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);
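
As a concrete (invented) example, an `assessment_and_confidence` value that satisfies the `Confidence Level:` regex above might read:

```typescript
// Hypothetical input for gauge_confidence; only the "Confidence Level: ..." phrase is validated.
const assessment_and_confidence = [
  "Item assessed: draft plan for migrating session storage to Redis.",
  "Confidence Level: Medium",
  "Justification: The cache-invalidation path has not been exercised against production traffic patterns."
].join("\n");
```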

/**
 * Tool: plan_and_solve
 * Purpose: Guides the *internal generation* of a structured plan draft.
 * Workflow: Internally generate plan -> Call this tool *with* the plan text -> MANDATORY `think` step follows to analyze/refine the plan.
 * Output: Returns the provided plan text for grounding and analysis.
 */
server.tool(
  "plan_and_solve",
  "Guides *internal generation* of a **structured plan draft**. Call this tool *with* the generated plan text you created internally. Returns the plan text. MANDATORY: Use the next `think` step to critically evaluate this plan's feasibility, refine it, and confirm the *first actionable step*.",
  {
    generated_plan_text: z.string().describe("The **full, structured plan draft** you generated internally, including goals, steps, potential external tool needs, assumptions, and risks."),
    task_objective: z.string().describe("The original high-level task objective this plan addresses.")
  },
  async ({ generated_plan_text, task_objective }) => {
    const toolName = 'plan_and_solve';
    logToolCall(toolName, `Objective: ${task_objective.substring(0, 80)}...`);
    try {
      if (!generated_plan_text || typeof generated_plan_text !== 'string' || generated_plan_text.trim().length === 0) {
        throw new Error('Invalid generated_plan_text: Must be a non-empty string containing the plan.');
      }
      if (!task_objective || typeof task_objective !== 'string' || task_objective.trim().length === 0) {
        throw new Error('Invalid task_objective: Must provide the original objective.');
      }
      logToolResult(toolName, true, `Returned plan draft for analysis (length: ${generated_plan_text.length})`);
      // Returns the actual plan text received. The AI must analyze this in the next 'think' step.
      return { content: [{ type: "text" as const, text: generated_plan_text }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);
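
The sketch below (all values invented) illustrates the "internal generation first" pattern this tool assumes: the plan text is drafted by the model before the call, and the tool only logs and echoes it back for the mandatory `think` critique.

```typescript
// Hypothetical arguments for plan_and_solve (illustrative only).
const task_objective = "Make payment requests resilient to transient upstream failures.";

const generated_plan_text = `Goal: Add retry with exponential backoff to the payment client.
Steps: 1) Wrap the HTTP call in a retry helper. 2) Add jitter. 3) Unit-test 429/503 handling.
Assumptions: The existing client exposes a single request() entry point.
Risks: Hidden callers may depend on the current failure timing.`;
// The tool returns generated_plan_text unchanged; the next `think` step must critique it.
```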

/**
 * Tool: chain_of_thought
 * Purpose: Guides the *internal generation* of a detailed, step-by-step reasoning draft (CoT).
 * Workflow: Internally generate CoT -> Call this tool *with* the CoT text -> MANDATORY `think` step follows to analyze the reasoning.
 * Output: Returns the provided CoT text for grounding and analysis.
 */
server.tool(
  "chain_of_thought",
  "Guides *internal generation* of **detailed, step-by-step reasoning draft (CoT)**. Call this tool *with* the generated CoT text you created internally. Returns the CoT text. MANDATORY: Use the next `think` step to analyze this reasoning, extract insights, identify flaws/gaps, and plan the next concrete action based on the CoT.",
  {
    generated_cot_text: z.string().describe("The **full, step-by-step Chain of Thought draft** you generated internally to solve or analyze the problem."),
    problem_statement: z.string().describe("The original problem statement or question this CoT addresses.")
  },
  async ({ generated_cot_text, problem_statement }) => {
    const toolName = 'chain_of_thought';
    logToolCall(toolName, `Problem: ${problem_statement.substring(0, 80)}...`);
    try {
      if (!generated_cot_text || typeof generated_cot_text !== 'string' || generated_cot_text.trim().length === 0) {
        throw new Error('Invalid generated_cot_text: Must be a non-empty string containing the CoT.');
      }
      if (!problem_statement || typeof problem_statement !== 'string' || problem_statement.trim().length === 0) {
        throw new Error('Invalid problem_statement: Must provide the original problem.');
      }
      logToolResult(toolName, true, `Returned CoT draft for analysis (length: ${generated_cot_text.length})`);
      // Returns the actual CoT text received. The AI must analyze this in the next 'think' step.
      return { content: [{ type: "text" as const, text: generated_cot_text }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);

/**
 * Tool: chain_of_draft
 * Purpose: Signals that internal drafts (code, text, plan fragments) have been generated or refined.
 * Workflow: Internally generate/refine draft(s) -> Call this tool -> MANDATORY `think` step follows to analyze the draft(s).
 * Output: Confirms readiness for analysis.
 */
server.tool(
  "chain_of_draft",
  "Signals that one or more **internal drafts** (e.g., code snippets, documentation sections, refined plan steps) have been generated or refined and are ready for analysis. Call this tool *after* generating/refining draft(s) internally. Response confirms readiness. MANDATORY: Analyze these draft(s) in your next `think` step.",
  {
    draft_description: z.string().describe("Brief but specific description of the draft(s) generated/refined internally (e.g., 'Initial Python function for API call', 'Refined error handling in plan step 3', 'Drafted README introduction').")
  },
  async ({ draft_description }) => {
    const toolName = 'chain_of_draft';
    logToolCall(toolName, `Description: ${draft_description}`);
    try {
      if (!draft_description || typeof draft_description !== 'string' || draft_description.trim().length === 0) {
        throw new Error('Invalid draft_description: Must provide a description.');
      }
      const resultText = `Internal draft(s) ready for analysis: "${draft_description}". MANDATORY: Analyze these draft(s) now using the structured format in your next 'think' step. Evaluate correctness, completeness, and alignment with goals.`;
      logToolResult(toolName, true);
      return { content: [{ type: "text" as const, text: resultText }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);

/**
 * Tool: reflection
 * Purpose: Guides the *internal generation* of a critical self-evaluation (critique) of a prior step, draft, or outcome.
 * Workflow: Internally generate critique -> Call this tool *with* the critique text -> MANDATORY `think` step follows to act on the critique.
 * Output: Returns the provided critique text for grounding and analysis.
 */
server.tool(
  "reflection",
  "Guides *internal generation* of a critical self-evaluation (critique) on a prior step, draft, plan, or outcome. Call this tool *with* the **generated critique text** you created internally. Returns the critique text. MANDATORY: Use the next `think` step to analyze this critique and plan specific corrective actions or refinements based on it.",
  {
    generated_critique_text: z.string().describe("The **full critique text** you generated internally, identifying specific flaws, strengths, assumptions, alternative approaches, and concrete suggestions for improvement."),
    input_subject_description: z.string().describe("A brief description of the original reasoning, plan, code draft, or action result that was critiqued (e.g., 'Critique of the plan generated via plan_and_solve', 'Reflection on the CoT for problem X').")
  },
  async ({ generated_critique_text, input_subject_description }) => {
    const toolName = 'reflection';
    logToolCall(toolName, `Subject: ${input_subject_description}`);
    try {
      if (!generated_critique_text || typeof generated_critique_text !== 'string' || generated_critique_text.trim().length === 0) {
        throw new Error('Invalid generated_critique_text: Must be a non-empty string containing the critique.');
      }
      if (!input_subject_description || typeof input_subject_description !== 'string' || input_subject_description.trim().length === 0) {
        throw new Error('Invalid input_subject_description: Must describe what was critiqued.');
      }
      logToolResult(toolName, true, `Returned critique for analysis (length: ${generated_critique_text.length})`);
      // Returns the actual critique text received. The AI must analyze this in the next 'think' step.
      return { content: [{ type: "text" as const, text: generated_critique_text }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);

/**
 * Tool: synthesize_prior_reasoning
 * Purpose: Context Management Tool. Guides the *internal generation* of a structured summary of preceding context.
 * Workflow: Internally generate summary -> Call this tool *with* the summary text -> MANDATORY `think` step follows to use the summary.
 * Output: Returns the provided summary text for grounding and analysis.
 */
server.tool(
  "synthesize_prior_reasoning",
  "Context Management Tool. Guides *internal generation* of a **structured summary** of preceding steps, decisions, key findings, or relevant context to consolidate understanding before proceeding. Call this tool *with* the generated summary text you created internally. Returns the summary. MANDATORY: Use the next `think` step to leverage this summary and inform the next action.",
  {
    generated_summary_text: z.string().describe("The **full, structured summary text** you generated internally (e.g., key decisions made, open questions, current state of implementation, relevant facts gathered)."),
    context_to_summarize_description: z.string().describe("Description of the reasoning span or context that was summarized (e.g., 'Summary of the last 5 steps', 'Consolidated findings from tool results A and B').")
  },
  async ({ generated_summary_text, context_to_summarize_description }) => {
    const toolName = 'synthesize_prior_reasoning';
    logToolCall(toolName, `Context: ${context_to_summarize_description}`);
    try {
      if (!generated_summary_text || typeof generated_summary_text !== 'string' || generated_summary_text.trim().length === 0) {
        throw new Error('Invalid generated_summary_text: Must be a non-empty string containing the summary.');
      }
      if (!context_to_summarize_description || typeof context_to_summarize_description !== 'string' || context_to_summarize_description.trim().length === 0) {
        throw new Error('Invalid context_to_summarize_description: Must describe what was summarized.');
      }
      logToolResult(toolName, true, `Returned summary for analysis (length: ${generated_summary_text.length})`);
      // Returns the actual summary text received. The AI must analyze/use this in the next 'think' step.
      return { content: [{ type: "text" as const, text: generated_summary_text }] };
    } catch (error: unknown) {
      return logToolError(toolName, error);
    }
  }
);


// --- Server Lifecycle and Error Handling ---

/**
 * Gracefully shuts down the server.
 */
async function shutdown(): Promise<void> {
  console.error('\n[MCP Server] Shutting down gracefully...');
  try {
    await server.close();
    console.error('[MCP Server] Server closed.');
    process.exit(0);
  } catch (err) {
    console.error('[MCP Server] Error during shutdown:', err);
    process.exit(1);
  }
}

process.on('SIGINT', shutdown);
process.on('SIGTERM', shutdown);

process.on('uncaughtException', (error, origin) => {
  const timestamp = new Date().toISOString();
  console.error(`[${timestamp}] [MCP Server] FATAL: Uncaught Exception at: ${origin}`, error);
  // Attempt graceful shutdown, but exit quickly if it fails
  shutdown().catch(() => process.exit(1));
});

process.on('unhandledRejection', (reason, promise) => {
  const timestamp = new Date().toISOString();
  console.error(`[${timestamp}] [MCP Server] FATAL: Unhandled Promise Rejection:`, reason);
  // Attempt graceful shutdown, but exit quickly if it fails
  shutdown().catch(() => process.exit(1));
});

// --- Start the Server ---

/**
 * Initializes and starts the MCP server.
 */
async function main(): Promise<void> {
  try {
    const transport = new StdioServerTransport();
    await server.connect(transport);
    const border = '-----------------------------------------------------';
    console.error(border);
    console.error(` ᑭᑫᓐᑖᓱᐎᓐ ᐋᐸᒋᒋᑲᓇᓐ - Core Cognitive Tools Suite v1.0.0: Enables structured, iterative reasoning (Chain of Thought/Draft), planning, and analysis for AI agents, focusing on the cognitive loop. MANDATORY \`think\` step integrates results.`);
    console.error(` Version: 1.0.0`);
    console.error(' Status: Running on stdio, awaiting MCP requests...');
    console.error(border);
  }
  catch (error) {
    const timestamp = new Date().toISOString();
    console.error(`[${timestamp}] [MCP Server] Fatal error during startup:`, error);
    process.exit(1);
  }
}

// Execute the main function to start the server
main();
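
To see how the loop above might be driven end to end, here is a rough client-side sketch using the @modelcontextprotocol/sdk client over stdio. The import paths, constructor signatures, and the build/index.js path follow the SDK's usual conventions but are assumptions that may differ between SDK versions; the tool arguments are invented.

```typescript
// Rough sketch of an MCP client exercising the assessment -> checkpoint loop.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function demo(): Promise<void> {
  // Spawn the server over stdio (path to the built entry point is assumed).
  const transport = new StdioClientTransport({ command: "node", args: ["build/index.js"] });
  const client = new Client({ name: "demo-client", version: "0.0.1" });
  await client.connect(transport);

  // Step 1: mandatory CUC-N assessment selecting the deliberation mode.
  await client.callTool({
    name: "assess_cuc_n_mode",
    arguments: {
      assessment_and_choice:
        "Situation Description: trivial README typo fix. CUC-N Ratings: all Low. " +
        "Rationale: no behavioural change. Recommended Initial Strategy: quick check only. " +
        "Selected Mode: quick_think",
    },
  });

  // Step 2: the lightweight checkpoint permitted by the Low CUC-N assessment.
  const result = await client.callTool({
    name: "quick_think",
    arguments: { brief_thought: "Typo fix only; no further analysis needed." },
  });
  console.log(result);

  await client.close();
}

demo().catch(console.error);
```

The generation-first tools (`plan_and_solve`, `chain_of_thought`, `reflection`, `synthesize_prior_reasoning`) follow the same shape: generate the text first, pass it in, then analyze the echoed text in the next `think` call.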

package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@nbiish/cognitive-tools-mcp",
-  "version": "0.9.3",
+  "version": "0.9.5",
   "description": "Cognitive Tools MCP: SOTA reasoning suite focused on iterative refinement and tool integration for AI Pair Programming. Enables structured, iterative problem-solving through Chain of Draft methodology, with tools for draft generation, analysis, and refinement. Features advanced deliberation (`think`), rapid checks (`quick_think`), mandatory complexity assessment & thought mode selection (`assess_cuc_n_mode`), context synthesis, confidence gauging, proactive planning, explicit reasoning (CoT), and reflection with content return. Alternative package name for gikendaasowin-aabajichiganan-mcp.",
   "private": false,
   "type": "module",

File without changes
File without changes