serena-slim 0.0.1

package/README.md ADDED
@@ -0,0 +1,173 @@
+ # serena-slim
+
+ > **Serena MCP server optimized for AI assistants** — Reduce context window tokens by 53.4% while keeping full functionality. Compatible with Claude, ChatGPT, Gemini, Cursor, and all MCP clients.
+
+ [![npm version](https://img.shields.io/npm/v/serena-slim.svg)](https://www.npmjs.com/package/serena-slim)
+ [![Test Status](https://img.shields.io/badge/tests-passing-brightgreen)](https://github.com/mcpslim/mcpslim)
+ [![MCP Compatible](https://img.shields.io/badge/MCP-compatible-blue)](https://modelcontextprotocol.io)
+
+ ## What is serena-slim?
+
+ A **token-optimized** version of the Serena [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server.
+
+ ### The Problem
+
+ MCP tool schemas consume significant **context window tokens**. When AI assistants like Claude or ChatGPT load MCP tools, each tool definition takes up valuable context space.
+
+ The original `serena` loads **29 tools** consuming approximately **23,878 tokens** — space you could otherwise use for actual conversation.
+
+ ### The Solution
+
+ `serena-slim` intelligently **groups 29 tools into 18 semantic operations**, reducing token usage by **53.4%** — with **zero functionality loss**.
+
+ Your AI assistant sees fewer, smarter tools. Every original capability remains available.
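+
+ For example, the three `think_*` tools (`think_about_collected_information`, `think_about_task_adherence`, `think_about_whether_you_are_done`) collapse into a single `think` tool whose `action` parameter selects the original operation.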
+
+ ## Performance
+
+ | Metric | Original | Slim | Reduction |
+ |--------|----------|------|-----------|
+ | Tools | 29 | 18 | **38%** |
+ | Schema Tokens | 7,348 | 873 | **88.1%** |
+ | Claude Code (est.) | ~23,878 | ~11,133 | **~53.4%** |
+
+ > **Benchmark Info**
+ > - Original: `serena@0.0.1`
+ > - Schema tokens measured with [tiktoken](https://github.com/openai/tiktoken) (cl100k_base)
+ > - Claude Code estimate includes ~570 tokens/tool overhead
+
+ ## Installation
+
+ ```bash
+ npx serena-slim
+ ```
+
+ No additional setup required. The slim server wraps the original MCP transparently.
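+
+ By default the bridge launches the upstream server with `uvx --from git+https://github.com/oraios/serena serena start-mcp-server`; set the `MCPSLIM_ORIGINAL_MCP` environment variable (a space-separated command) to wrap a different serena invocation (see `index.js`).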
+
+ ## Usage
+
+ ### Claude Desktop
+
+ Add to your `claude_desktop_config.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "serena": {
+       "command": "npx",
+       "args": ["-y", "serena-slim"]
+     }
+   }
+ }
+ ```
+
+ ### Claude Code (CLI)
+
+ ```bash
+ claude mcp add serena -- npx -y serena-slim
+ ```
+
+ ### Gemini CLI
+
+ ```bash
+ gemini mcp add serena -- npx -y serena-slim
+ ```
+
+ ### VS Code (Copilot, Cline, Roo Code)
+
+ ```bash
+ code --add-mcp '{"name":"serena","command":"npx","args":["-y","serena-slim"]}'
+ ```
+
+ ### Cursor
+
+ Add to `.cursor/mcp.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "serena": {
+       "command": "npx",
+       "args": ["-y", "serena-slim"]
+     }
+   }
+ }
+ ```
+
+ ## How It Works
+
+ MCPSlim acts as a **transparent bridge** between AI models and the original MCP server:
+
+ ```
+ ┌─────────────────────────────────────────────────────────────────┐
+ │                         Without MCPSlim                         │
+ │                                                                 │
+ │   [AI Model] ──── reads 29 tool schemas ────→ [Original MCP]    │
+ │              (~23,878 tokens loaded into context)               │
+ ├─────────────────────────────────────────────────────────────────┤
+ │                          With MCPSlim                           │
+ │                                                                 │
+ │   [AI Model] ───→ [MCPSlim Bridge] ───→ [Original MCP]          │
+ │        │                  │                   │                 │
+ │ Sees 18 grouped     Translates to      Executes actual          │
+ │   tools only        original call      tool & returns           │
+ │ (~11,133 tokens)                                                │
+ └─────────────────────────────────────────────────────────────────┘
+ ```
+
+ ### How Translation Works
+
+ 1. **AI reads slim schema** — Only 18 grouped tools instead of 29
+ 2. **AI calls grouped tool** — e.g., `read({ action: "file", ... })`
+ 3. **MCPSlim translates** — Converts to the original call: `read_file({ ... })`
+ 4. **Original MCP executes** — Real server processes the request
+ 5. **Response returned** — Result passes back unchanged
+
+ **Zero functionality loss. 53.4% token savings.**
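+
+ In code terms, step 3 is a recipe-driven name rewrite. Here is a minimal sketch — not the actual MCPSlim implementation, and the `relative_path` argument is illustrative — assuming a recipe shaped like the bundled `recipes/serena.json`:
+
+ ```js
+ // Sketch of the translation step: grouped call in, original call out.
+ const recipe = require("./recipes/serena.json");
+
+ function translate(toolName, args) {
+   const group = recipe.groups.find((g) => g.name === toolName);
+   if (!group) return { name: toolName, arguments: args }; // passthrough tools forward unchanged
+
+   const { action, ...rest } = args;
+   const original = group.mapping[action];
+   if (!original) throw new Error(`Unknown action "${action}" for group "${toolName}"`);
+   return { name: original, arguments: rest };
+ }
+
+ // read({ action: "file", ... })  →  read_file({ ... })
+ console.log(translate("read", { action: "file", relative_path: "src/app.py" }));
+ // { name: "read_file", arguments: { relative_path: "src/app.py" } }
+ ```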
+
+ ## Available Tool Groups
+
+ | Group | Actions |
+ |-------|---------|
+ | `read` | 2 |
+ | `list` | 2 |
+ | `find` | 3 |
+ | `replace` | 2 |
+ | `get` | 2 |
+ | `insert` | 2 |
+ | `think` | 3 |
+ | `memory` | 3 |
+
+ Plus **10 passthrough tools** — tools that don't group well are kept as-is with optimized descriptions.
+
+ ## Compatibility
+
+ - ✅ **Full functionality** — All original `serena` features preserved
+ - ✅ **All AI assistants** — Works with Claude, ChatGPT, Gemini, Copilot, and any MCP client
+ - ✅ **Drop-in replacement** — Same capabilities, just use grouped action names
+ - ✅ **Tested** — Schema compatibility verified via automated tests
+
+ ## FAQ
+
+ ### Does this reduce functionality?
+
+ **No.** Every original tool is accessible. Tools are grouped semantically (e.g., `write_memory`, `delete_memory`, `edit_memory` → `memory`), but all actions remain available via the `action` parameter.
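+
+ For instance, `memory({ action: "write", ... })` routes to the original `write_memory`, and `memory({ action: "edit", ... })` to `edit_memory`.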
+
+ ### Why do AI assistants need token optimization?
+
+ AI models have limited context windows. MCP tool schemas consume tokens that could be used for conversation, code, or documents. Reducing tool schema size means more room for actual work.
+
+ ### Is this officially supported?
+
+ MCPSlim is a community project. It wraps official MCP servers transparently — the original server does all the real work.
+
+ ## License
+
+ MIT
+
+ ---
+
+ <p align="center">
+   Powered by <a href="https://github.com/mcpslim/mcpslim"><b>MCPSlim</b></a> — MCP Token Optimizer
+   <br>
+   <sub>Reduce AI context usage. Keep full functionality.</sub>
+ </p>
Binary file
Binary file
Binary file
Binary file
Binary file
package/index.js ADDED
@@ -0,0 +1,31 @@
+ #!/usr/bin/env node
+ /**
+  * serena-slim - Slimmed serena MCP for Claude
+  * Reduces token usage by grouping similar tools
+  */
+
+ const { spawn } = require('child_process');
+ const path = require('path');
+ const os = require('os');
+
+ const binName = os.platform() === 'win32' ? 'mcpslim.exe' : 'mcpslim';
+ const mcpslimBin = path.join(__dirname, 'bin', binName);
+ const recipePath = path.join(__dirname, 'recipes', 'serena.json');
+
+ // Original MCP command (override via the MCPSLIM_ORIGINAL_MCP env var)
+ const originalMcp = process.env.MCPSLIM_ORIGINAL_MCP?.split(' ')
+   || ["uvx", "--from", "git+https://github.com/oraios/serena", "serena", "start-mcp-server"];
+
+ const args = ['bridge', '--recipe', recipePath, '--', ...originalMcp];
+
+ // Spawn the MCPSlim bridge binary; inherited stdio carries the MCP traffic
+ const child = spawn(mcpslimBin, args, {
+   stdio: 'inherit',
+   windowsHide: true
+ });
+
+ child.on('error', (err) => {
+   console.error('Failed to start MCPSlim:', err.message);
+   process.exit(1);
+ });
+
+ child.on('exit', (code) => process.exit(code || 0));
package/package.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "name": "serena-slim",
+   "version": "0.0.1",
+   "description": "Slimmed serena MCP - 53.4% token reduction for AI models",
+   "bin": {
+     "serena-slim": "./index.js"
+   },
+   "keywords": [
+     "mcp",
+     "claude",
+     "gemini",
+     "chatgpt",
+     "serena",
+     "slim",
+     "token-reduction"
+   ],
+   "author": "",
+   "license": "MIT",
+   "files": [
+     "bin/",
+     "recipes/",
+     "index.js",
+     "README.md"
+   ],
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/palan-k/mcpslim.git",
+     "directory": "packages/serena-slim"
+   },
+   "mcpslim": {
+     "originalPackage": "serena",
+     "originalVersion": "0.0.1",
+     "originalTools": 29,
+     "slimTools": 18,
+     "tokenReduction": "53.4%"
+   }
+ }
package/recipes/serena.json ADDED
@@ -0,0 +1,213 @@
+ {
+   "mcp_name": "serena",
+   "auto_generated": true,
+   "algorithm_version": "v2.1",
+   "rules": {
+     "default_minifier": "first_sentence",
+     "remove_params_description": true
+   },
+   "groups": [
+     {
+       "name": "read",
+       "description": "read operations",
+       "mapping": {
+         "file": "read_file",
+         "memory": "read_memory"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "file",
+               "memory"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "list",
+       "description": "list operations",
+       "mapping": {
+         "dir": "list_dir",
+         "memories": "list_memories"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "dir",
+               "memories"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "find",
+       "description": "find operations",
+       "mapping": {
+         "file": "find_file",
+         "symbol": "find_symbol",
+         "referencing_symbols": "find_referencing_symbols"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "file",
+               "symbol",
+               "referencing_symbols"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "replace",
+       "description": "replace operations",
+       "mapping": {
+         "content": "replace_content",
+         "symbol_body": "replace_symbol_body"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "content",
+               "symbol_body"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "get",
+       "description": "get operations",
+       "mapping": {
+         "symbols_overview": "get_symbols_overview",
+         "current_config": "get_current_config"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "symbols_overview",
+               "current_config"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "insert",
+       "description": "insert operations",
+       "mapping": {
+         "after_symbol": "insert_after_symbol",
+         "before_symbol": "insert_before_symbol"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "after_symbol",
+               "before_symbol"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "think",
+       "description": "think operations",
+       "mapping": {
+         "about_collected_information": "think_about_collected_information",
+         "about_task_adherence": "think_about_task_adherence",
+         "about_whether_you_are_done": "think_about_whether_you_are_done"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "about_collected_information",
+               "about_task_adherence",
+               "about_whether_you_are_done"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "memory",
+       "description": "memory operations",
+       "mapping": {
+         "write": "write_memory",
+         "delete": "delete_memory",
+         "edit": "edit_memory"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "write",
+               "delete",
+               "edit"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     }
+   ],
+   "passthrough": [
+     "create_text_file",
+     "search_for_pattern",
+     "rename_symbol",
+     "execute_shell_command",
+     "activate_project",
+     "switch_modes",
+     "check_onboarding_performed",
+     "onboarding",
+     "prepare_for_new_conversation",
+     "initial_instructions"
+   ]
+ }