sequential-thinking-slim 2025.12.18-slim.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 MCPSlim
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,180 @@
+ # sequential-thinking-slim
+
+ > **Sequential Thinking MCP server optimized for AI assistants** — Reduce context window tokens by 55.0% while keeping full functionality. Compatible with Claude, ChatGPT, Gemini, Cursor, and all MCP clients.
+
+ [![npm version](https://img.shields.io/npm/v/sequential-thinking-slim.svg)](https://www.npmjs.com/package/sequential-thinking-slim)
+ [![Test Status](https://img.shields.io/badge/tests-passing-brightgreen)](https://github.com/mcpslim/mcpslim)
+ [![MCP Compatible](https://img.shields.io/badge/MCP-compatible-blue)](https://modelcontextprotocol.io)
+
+ ## What is sequential-thinking-slim?
+
+ A **token-optimized** version of the Sequential Thinking [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server.
+
+ ### The Problem
+
+ MCP tool schemas consume significant **context window tokens**. When AI assistants like Claude or ChatGPT load MCP tools, each tool definition takes up valuable context space.
+
+ The original `@modelcontextprotocol/server-sequential-thinking` loads **1 tool** consuming approximately **1,529 tokens** — that's space you could use for actual conversation.
+
+ ### The Solution
+
+ `sequential-thinking-slim` intelligently **groups 1 tool into 1 semantic operation**, reducing token usage by **55.0%** — with **zero functionality loss**.
+
+ Your AI assistant sees fewer, smarter tools. Every original capability remains available.
+
+ ## Performance
+
+ | Metric | Original | Slim | Reduction |
+ |--------|----------|------|-----------|
+ | Tools | 1 | 1 | **0%** |
+ | Schema Tokens | 959 | 118 | **87.7%** |
+ | Claude Code (est.) | ~1,529 | ~688 | **~55.0%** |
+
+ > **Benchmark Info**
+ > - Original: `@modelcontextprotocol/server-sequential-thinking@2025.12.18`
+ > - Schema tokens measured with [tiktoken](https://github.com/openai/tiktoken) (cl100k_base)
+ > - Claude Code estimate includes ~570 tokens/tool overhead
+
+ ## Quick Start
+
+ ### One-Command Setup (Recommended)
+
+ ```bash
+ # Claude Desktop - auto-configure
+ npx sequential-thinking-slim --setup claude
+
+ # Cursor - auto-configure
+ npx sequential-thinking-slim --setup cursor
+
+ # Interactive mode (choose your client)
+ npx sequential-thinking-slim --setup
+ ```
+
+ Done! Restart your app to use sequential-thinking.
+
+ ### CLI Tools (already have a CLI?)
+
+ ```bash
+ # Claude Code (creates .mcp.json in project root)
+ claude mcp add sequential-thinking -s project -- npx -y sequential-thinking-slim@latest
+
+ # Windows: use a cmd /c wrapper
+ claude mcp add sequential-thinking -s project -- cmd /c npx -y sequential-thinking-slim@latest
+
+ # VS Code (Copilot, Cline, Roo Code)
+ code --add-mcp '{"name":"sequential-thinking","command":"npx","args":["-y","sequential-thinking-slim@latest"]}'
+ ```
+
+ ## Manual Setup
+
+ <details>
+ <summary>Click to expand manual configuration options</summary>
+
+ ### Claude Desktop
+
+ Add to your `claude_desktop_config.json`:
+
+ | OS | Path |
+ |----|------|
+ | Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
+ | macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
+
+ ```json
+ {
+   "mcpServers": {
+     "sequential-thinking": {
+       "command": "npx",
+       "args": ["-y", "sequential-thinking-slim@latest"]
+     }
+   }
+ }
+ ```
+
+ ### Cursor
+
+ Add to `~/.cursor/mcp.json` (global) or `<project>/.cursor/mcp.json` (project):
+
+ ```json
+ {
+   "mcpServers": {
+     "sequential-thinking": {
+       "command": "npx",
+       "args": ["-y", "sequential-thinking-slim@latest"]
+     }
+   }
+ }
+ ```
+
+ </details>
+
+ ## How It Works
+
+ MCPSlim acts as a **transparent bridge** between AI models and the original MCP server:
+
+ ```
+ Without MCPSlim:
+   [AI Model] ── reads 1 tool schema ──→ [Original MCP]
+   (~1,529 tokens loaded into context)
+
+ With MCPSlim:
+   [AI Model] ──→ [MCPSlim Bridge] ──→ [Original MCP]
+   The AI sees 1 slim tool (~688 tokens); the bridge translates each call to the
+   original tool, and the original server executes it and returns the result.
+ ```
+
+ ### How Translation Works
+
+ 1. **AI reads slim schema** — only the 1 slimmed tool definition is loaded, instead of the original schema
+ 2. **AI calls grouped tool** — e.g., `interaction({ action: "click", ... })`
+ 3. **MCPSlim translates** — converts to the original call: `browser_click({ ... })`
+ 4. **Original MCP executes** — the real server processes the request
+ 5. **Response returned** — the result passes back unchanged
+
+ **Zero functionality loss. 55.0% token savings.**
+
+ ## Available Tool Groups
+
+ No tool groups are defined for this package. The original server's single tool is exposed as **1 passthrough tool** — tools that don't group well are kept as-is with optimized descriptions.
+
+ ## Compatibility
+
+ - ✅ **Full functionality** — All original `@modelcontextprotocol/server-sequential-thinking` features preserved
+ - ✅ **All AI assistants** — Works with Claude, ChatGPT, Gemini, Copilot, and any MCP client
+ - ✅ **Drop-in replacement** — Same capabilities, just use grouped action names
+ - ✅ **Tested** — Schema compatibility verified via automated tests
+
+ ## FAQ
+
+ ### Does this reduce functionality?
+
+ **No.** Every original tool is accessible. Tools are grouped semantically (e.g., `click`, `hover`, `drag` → `interaction`), but all actions remain available via the `action` parameter.
+
+ ### Why do AI assistants need token optimization?
+
+ AI models have limited context windows. MCP tool schemas consume tokens that could be used for conversation, code, or documents. Reducing tool schema size means more room for actual work.
+
+ ### Is this officially supported?
+
+ MCPSlim is a community project. It wraps official MCP servers transparently — the original server does all the real work.
+
+ ## License
+
+ MIT
+
+ ---
+
+ <p align="center">
+   Powered by <a href="https://github.com/mcpslim/mcpslim"><b>MCPSlim</b></a> — MCP Token Optimizer
+   <br>
+   <sub>Reduce AI context usage. Keep full functionality.</sub>
+ </p>
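
The grouping example in the README (`interaction` → `browser_click`) is the optimizer's generic template; for this particular package the recipe defines no groups, so the single `sequentialthinking` tool is forwarded unchanged. Below is a minimal sketch of that passthrough call flow, assuming the standard MCP `tools/call` request shape; the argument values and argument names are illustrative rather than taken from this package.

```js
// Illustrative sketch only; not part of the published package.
// What an AI client sends to the slim server for the passthrough tool:
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'sequentialthinking',        // same name the original server exposes
    arguments: {                       // argument names shown for illustration
      thought: 'Break the task into smaller steps',
      thoughtNumber: 1,
      totalThoughts: 3,
      nextThoughtNeeded: true
    }
  }
};

// The bridge relays this request to @modelcontextprotocol/server-sequential-thinking
// verbatim and returns the response unchanged; only the schema the AI reads when
// listing tools is slimmed (118 tokens instead of 959, per the README table).
console.log(JSON.stringify(request, null, 2));
```
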
package/bin/ ADDED: 4 binary files (the platform-specific mcpslim bridge executables referenced by index.js; contents not shown)
package/index.js ADDED
@@ -0,0 +1,271 @@
+ #!/usr/bin/env node
+ /**
+  * sequential-thinking-slim - Slimmed sequential-thinking MCP for Claude
+  * Reduces token usage by grouping similar tools
+  *
+  * Usage:
+  *   npx sequential-thinking-slim                      # Run MCP server
+  *   npx sequential-thinking-slim --setup              # Auto-configure (interactive)
+  *   npx sequential-thinking-slim --setup claude       # Auto-configure for Claude Desktop
+  *   npx sequential-thinking-slim --setup cursor       # Auto-configure for Cursor
+  *   npx sequential-thinking-slim --setup claude-code  # Auto-configure for Claude Code CLI
+  */
+
+ const { spawn, execSync } = require('child_process');
+ const path = require('path');
+ const os = require('os');
+ const fs = require('fs');
+ const readline = require('readline');
+
+ const MCP_NAME = 'sequential-thinking-slim';
+ const PACKAGE_NAME = 'sequential-thinking-slim';
+
+ // Required environment variables (injected at build time)
+ const REQUIRED_ENV_VARS = [];
+
+ // ============================================
+ // Setup Mode: Auto-configure MCP clients
+ // ============================================
+
+ const CONFIG_PATHS = {
+   'claude': getClaudeDesktopConfigPath(),
+   'claude-desktop': getClaudeDesktopConfigPath(),
+   'cursor': getCursorConfigPath(),
+ };
+
+ function getClaudeDesktopConfigPath() {
+   if (os.platform() === 'win32') {
+     return path.join(process.env.APPDATA || '', 'Claude', 'claude_desktop_config.json');
+   } else if (os.platform() === 'darwin') {
+     return path.join(os.homedir(), 'Library', 'Application Support', 'Claude', 'claude_desktop_config.json');
+   } else {
+     return path.join(os.homedir(), '.config', 'claude', 'claude_desktop_config.json');
+   }
+ }
+
+ function getCursorConfigPath() {
+   // Global config
+   if (os.platform() === 'win32') {
+     return path.join(process.env.APPDATA || '', 'Cursor', 'User', 'globalStorage', 'mcp.json');
+   } else {
+     return path.join(os.homedir(), '.cursor', 'mcp.json');
+   }
+ }
+
+ function addToConfig(configPath, mcpName, mcpConfig) {
+   let config = { mcpServers: {} };
+
+   // Create the config directory if needed
+   const dir = path.dirname(configPath);
+   if (!fs.existsSync(dir)) {
+     fs.mkdirSync(dir, { recursive: true });
+   }
+
+   // Load the existing config
+   if (fs.existsSync(configPath)) {
+     try {
+       config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
+       if (!config.mcpServers) config.mcpServers = {};
+     } catch (e) {
+       console.error('⚠️ Failed to parse existing config, creating new one');
+     }
+   }
+
+   // Check whether the entry already exists
+   if (config.mcpServers[mcpName]) {
+     console.log(`ℹ️ ${mcpName} already configured in ${configPath}`);
+     return false;
+   }
+
+   // Add the new MCP entry
+   config.mcpServers[mcpName] = mcpConfig;
+   fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
+   console.log(`✅ Added ${mcpName} to ${configPath}`);
+   return true;
+ }
+
+ async function interactiveSetup() {
+   const rl = readline.createInterface({
+     input: process.stdin,
+     output: process.stdout
+   });
+
+   const question = (q) => new Promise(resolve => rl.question(q, resolve));
+
+   console.log('\n🔧 ' + PACKAGE_NAME + ' Setup\n');
+   console.log('Select your MCP client:\n');
+   console.log('  1. Claude Desktop');
+   console.log('  2. Cursor');
+   console.log('  3. Claude Code (CLI)');
+   console.log('  4. VS Code (Copilot)');
+   console.log('  5. Cancel\n');
+
+   const choice = await question('Enter choice (1-5): ');
+   rl.close();
+
+   // Build environment-variable flags for the printed CLI commands
+   const envFlags = REQUIRED_ENV_VARS.map(v => `--env ${v}=<YOUR_${v.split('_').pop()}>`).join(' ');
+   const envJson = REQUIRED_ENV_VARS.length > 0
+     ? `,"env":{${REQUIRED_ENV_VARS.map(v => `"${v}":"<YOUR_${v.split('_').pop()}>"`).join(',')}}`
+     : '';
+
+   switch (choice.trim()) {
+     case '1':
+       return setupClient('claude');
+     case '2':
+       return setupClient('cursor');
+     case '3':
+       console.log('\nRun this command:\n');
+       if (REQUIRED_ENV_VARS.length > 0) {
+         console.log(`  claude mcp add ${MCP_NAME} -s project ${envFlags} -- npx -y ${PACKAGE_NAME}@latest\n`);
+       } else {
+         console.log(`  claude mcp add ${MCP_NAME} -s project -- npx -y ${PACKAGE_NAME}@latest\n`);
+       }
+       console.log('  (Windows: use "cmd /c npx" instead of "npx")\n');
+       return true;
+     case '4':
+       console.log('\nRun this command:\n');
+       console.log(`  code --add-mcp '{"name":"${MCP_NAME}","command":"npx","args":["-y","${PACKAGE_NAME}@latest"]${envJson}}'\n`);
+       return true;
+     case '5':
+     default:
+       console.log('Cancelled.');
+       return false;
+   }
+ }
+
+ function setupClaudeCode() {
+   // Build environment-variable flags
+   const envFlags = REQUIRED_ENV_VARS.map(v => `--env ${v}=<YOUR_${v.split('_').pop()}>`).join(' ');
+
+   // Windows needs a cmd /c wrapper
+   const npxCmd = os.platform() === 'win32' ? 'cmd /c npx' : 'npx';
+
+   let cmd = `claude mcp add ${MCP_NAME} -s project`;
+   if (REQUIRED_ENV_VARS.length > 0) {
+     cmd += ` ${envFlags}`;
+   }
+   cmd += ` -- ${npxCmd} -y ${PACKAGE_NAME}@latest`;
+
+   console.log(`\n🔧 Adding ${MCP_NAME} to Claude Code...\n`);
+   console.log(`Running: ${cmd}\n`);
+
+   try {
+     execSync(cmd, { stdio: 'inherit', shell: true });
+     console.log(`\n🎉 Setup complete! ${MCP_NAME} is now available in Claude Code.\n`);
+     if (REQUIRED_ENV_VARS.length > 0) {
+       console.log(`⚠️ Don't forget to set environment variables:`);
+       for (const envVar of REQUIRED_ENV_VARS) {
+         console.log(`  - ${envVar}`);
+       }
+       console.log('');
+     }
+     return true;
+   } catch (err) {
+     console.error(`\n❌ Failed to add MCP. Is Claude Code CLI installed?\n`);
+     console.log(`Try running manually:\n  ${cmd}\n`);
+     return false;
+   }
+ }
+
+ function setupClient(client) {
+   // Claude Code CLI is handled separately
+   if (client === 'claude-code') {
+     return setupClaudeCode();
+   }
+
+   const configPath = CONFIG_PATHS[client];
+   if (!configPath) {
+     console.error(`❌ Unknown client: ${client}`);
+     console.log('Supported: claude, claude-desktop, cursor, claude-code');
+     return false;
+   }
+
+   const mcpConfig = {
+     command: 'npx',
+     args: ['-y', PACKAGE_NAME + '@latest']
+   };
+
+   // If environment variables are required, add an env block with placeholders
+   if (REQUIRED_ENV_VARS.length > 0) {
+     mcpConfig.env = {};
+     for (const envVar of REQUIRED_ENV_VARS) {
+       mcpConfig.env[envVar] = `<YOUR_${envVar.split('_').pop()}>`;
+     }
+   }
+
+   const success = addToConfig(configPath, MCP_NAME, mcpConfig);
+
+   if (success) {
+     if (REQUIRED_ENV_VARS.length > 0) {
+       console.log(`\n⚠️ This MCP requires environment variables!`);
+       console.log(`  Edit ${configPath} and update:`);
+       for (const envVar of REQUIRED_ENV_VARS) {
+         console.log(`  - ${envVar}`);
+       }
+       console.log(`\n🎉 Setup complete! Update env vars, then restart ${client}.\n`);
+     } else {
+       console.log(`\n🎉 Setup complete! Restart ${client} to use ${MCP_NAME}.\n`);
+     }
+   }
+
+   return success;
+ }
+
+ // ============================================
+ // Main: Check for --setup flag
+ // ============================================
+
+ const args = process.argv.slice(2);
+ const setupIndex = args.indexOf('--setup');
+
+ if (setupIndex !== -1) {
+   const client = args[setupIndex + 1];
+
+   if (client && !client.startsWith('-')) {
+     // Specific client: npx xxx-slim --setup claude
+     setupClient(client);
+   } else {
+     // Interactive: npx xxx-slim --setup
+     interactiveSetup().then(() => process.exit(0));
+   }
+ } else {
+   // ============================================
+   // Normal Mode: Run MCP server
+   // ============================================
+
+   // Determine the platform-specific binary name
+   function getBinaryName() {
+     const platform = os.platform();
+     const arch = os.arch();
+     if (platform === 'win32') {
+       return 'mcpslim-windows-x64.exe';
+     } else if (platform === 'darwin') {
+       return arch === 'arm64' ? 'mcpslim-darwin-arm64' : 'mcpslim-darwin-x64';
+     } else {
+       return 'mcpslim-linux-x64';
+     }
+   }
+
+   const binName = getBinaryName();
+   const mcpslimBin = path.join(__dirname, 'bin', binName);
+   const recipePath = path.join(__dirname, 'recipes', 'sequential-thinking.json');
+
+   // Command for the original MCP server (overridable via MCPSLIM_ORIGINAL_MCP)
+   const originalMcp = process.env.MCPSLIM_ORIGINAL_MCP?.split(' ')
+     || ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"];
+
+   const bridgeArgs = ['bridge', '--recipe', recipePath, '--', ...originalMcp];
+
+   const child = spawn(mcpslimBin, bridgeArgs, {
+     stdio: 'inherit',
+     windowsHide: true
+   });
+
+   child.on('error', (err) => {
+     console.error('Failed to start MCPSlim:', err.message);
+     process.exit(1);
+   });
+
+   child.on('exit', (code) => process.exit(code || 0));
+ }
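
For reference, here is a condensed sketch of what the non-setup branch above reduces to on an Apple Silicon Mac when `MCPSLIM_ORIGINAL_MCP` is not set. It mirrors the logic already in `index.js` and adds nothing to it.

```js
// Illustrative sketch; mirrors index.js on macOS/arm64 (not part of the package).
const { spawn } = require('child_process');
const path = require('path');

const bin = path.join(__dirname, 'bin', 'mcpslim-darwin-arm64');   // chosen by getBinaryName()
const args = [
  'bridge',
  '--recipe', path.join(__dirname, 'recipes', 'sequential-thinking.json'),
  '--',                                                            // everything after -- is the original server command
  'npx', '-y', '@modelcontextprotocol/server-sequential-thinking'
];

// stdio is inherited so the MCP client talks to the bridge over stdin/stdout.
spawn(bin, args, { stdio: 'inherit', windowsHide: true });
```

Setting `MCPSLIM_ORIGINAL_MCP` to a space-separated command swaps out the trailing original-server command while keeping the same recipe and bridge binary.
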
package/package.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "name": "sequential-thinking-slim",
+   "version": "2025.12.18-slim.1.9",
+   "description": "sequential-thinking MCP (0% less tokens). Quick setup: npx sequential-thinking-slim --setup",
+   "bin": {
+     "sequential-thinking-slim": "./index.js"
+   },
+   "keywords": [
+     "mcp",
+     "claude",
+     "gemini",
+     "chatgpt",
+     "sequential-thinking",
+     "slim",
+     "token-reduction"
+   ],
+   "author": "",
+   "license": "MIT",
+   "files": [
+     "bin/",
+     "recipes/",
+     "index.js",
+     "README.md"
+   ],
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/mcpslim/sequential-thinking-slim.git"
+   },
+   "mcpslim": {
+     "originalPackage": "@modelcontextprotocol/server-sequential-thinking",
+     "originalVersion": "2025.12.18",
+     "originalTools": 1,
+     "slimTools": 1,
+     "tokenReduction": "0%"
+   },
+   "homepage": "https://mcpslim.github.io/sequential-thinking-slim",
+   "bugs": {
+     "url": "https://github.com/mcpslim/sequential-thinking-slim/issues"
+   }
+ }
package/recipes/sequential-thinking.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "mcp_name": "sequential-thinking",
+   "auto_generated": true,
+   "algorithm_version": "v2.1",
+   "rules": {
+     "default_minifier": "first_sentence",
+     "remove_params_description": false
+   },
+   "groups": [],
+   "passthrough": [
+     "sequentialthinking"
+   ]
+ }
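
A minimal sketch of how a bridge could consume this recipe, assuming Node and the recipe path used by `index.js`; the real decision logic lives in the bundled `mcpslim` binary, which is not shown in this diff.

```js
// Illustrative sketch only; the bundled mcpslim binary implements the real logic.
const recipe = require('./recipes/sequential-thinking.json');

// With "groups": [] there is nothing to translate; every tool name listed in
// "passthrough" is exposed as-is and forwarded to the original server.
function isPassthrough(toolName) {
  return recipe.passthrough.includes(toolName);
}

console.log(isPassthrough('sequentialthinking')); // true: forwarded unchanged
console.log(recipe.rules.default_minifier);       // "first_sentence": how tool descriptions are shortened
```
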