serena-slim 0.0.1-slim.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,184 @@
+ # serena-slim
+
+ > **Serena MCP server optimized for AI assistants** — Reduce context window tokens by 51.2% while keeping full functionality. Compatible with Claude, ChatGPT, Gemini, Cursor, and all MCP clients.
+
+ [![npm version](https://img.shields.io/npm/v/serena-slim.svg)](https://www.npmjs.com/package/serena-slim)
+ [![Test Status](https://img.shields.io/badge/tests-passing-brightgreen)](https://github.com/mcpslim/mcpslim)
+ [![MCP Compatible](https://img.shields.io/badge/MCP-compatible-blue)](https://modelcontextprotocol.io)
+
+ ## What is serena-slim?
+
+ A **token-optimized** version of the Serena [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server.
+
+ ### The Problem
+
+ MCP tool schemas consume significant **context window tokens**. When AI assistants like Claude or ChatGPT load MCP tools, each tool definition takes up valuable context space.
+
+ The original `serena` loads **29 tools** consuming approximately **23,878 tokens** — that's space you could use for actual conversation.
+
+ ### The Solution
+
+ `serena-slim` intelligently **groups 29 tools into 18 semantic operations**, reducing token usage by **51.2%** — with **zero functionality loss**.
+
+ Your AI assistant sees fewer, smarter tools. Every original capability remains available.
+
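+ For a concrete sense of what grouping means, here is roughly how the `read` group defined in this package's `recipes/serena.json` consolidates two original tools (the file path and memory name below are made-up values):
+
+ ```js
+ // Original serena: two separate tools, two schemas loaded into the context window.
+ const before = [
+   { tool: "read_file",   arguments: { relative_path: "src/index.js" } },
+   { tool: "read_memory", arguments: { memory_file_name: "project_overview.md" } },
+ ];
+
+ // serena-slim: one grouped `read` tool; `action` selects the original behavior.
+ const after = [
+   { tool: "read", arguments: { action: "file",   relative_path: "src/index.js" } },
+   { tool: "read", arguments: { action: "memory", memory_file_name: "project_overview.md" } },
+ ];
+ ```
+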
+ ## Performance
+
+ | Metric | Original | Slim | Reduction |
+ |--------|----------|------|-----------|
+ | Tools | 29 | 18 | **38%** |
+ | Schema Tokens | 7,348 | 1,395 | **81.0%** |
+ | Claude Code (est.) | ~23,878 | ~11,655 | **~51.2%** |
+
+ > **Benchmark Info**
+ > - Original: `serena@0.0.1`
+ > - Schema tokens measured with [tiktoken](https://github.com/openai/tiktoken) (cl100k_base)
+ > - Claude Code estimate includes ~570 tokens/tool overhead
+
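+ The Claude Code estimate is simple arithmetic on top of the measured schema tokens: 7,348 + 29 × 570 ≈ 23,878 for the original and 1,395 + 18 × 570 ≈ 11,655 for the slim build. If you want to reproduce the schema-token figures yourself, here is a rough sketch; it assumes the `js-tiktoken` npm package as a stand-in for OpenAI's tiktoken and a `schemas` array holding the tool definitions returned by each server:
+
+ ```js
+ const { getEncoding } = require("js-tiktoken");
+
+ const enc = getEncoding("cl100k_base");
+ const schemas = [/* tool schemas as listed by the MCP server */];
+
+ // Sum the token counts of each serialized tool schema.
+ const schemaTokens = schemas
+   .map((s) => enc.encode(JSON.stringify(s)).length)
+   .reduce((a, b) => a + b, 0);
+
+ console.log(schemaTokens);
+ ```
+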
+ ## Quick Start
+
+ ### One-Command Setup (Recommended)
+
+ ```bash
+ # Claude Desktop - auto-configure
+ npx serena-slim --setup claude
+
+ # Cursor - auto-configure
+ npx serena-slim --setup cursor
+
+ # Interactive mode (choose your client)
+ npx serena-slim --setup
+ ```
+
+ Done! Restart your app to use serena.
+
+ ### CLI Tools (if you already have a CLI)
+
+ ```bash
+ # Claude Code
+ claude mcp add serena -- npx -y serena-slim
+
+ # VS Code (Copilot, Cline, Roo Code)
+ code --add-mcp '{"name":"serena","command":"npx","args":["-y","serena-slim"]}'
+ ```
+
+ ## Manual Setup
+
+ <details>
+ <summary>Click to expand manual configuration options</summary>
+
+ ### Claude Desktop
+
+ Add to your `claude_desktop_config.json`:
+
+ | OS | Path |
+ |----|------|
+ | Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
+ | macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
+
+ ```json
+ {
+   "mcpServers": {
+     "serena": {
+       "command": "npx",
+       "args": ["-y", "serena-slim"]
+     }
+   }
+ }
+ ```
+
+ ### Cursor
+
+ Add to `~/.cursor/mcp.json` (global) or `<project>/.cursor/mcp.json` (project):
+
+ ```json
+ {
+   "mcpServers": {
+     "serena": {
+       "command": "npx",
+       "args": ["-y", "serena-slim"]
+     }
+   }
+ }
+ ```
+
+ </details>
+
+ ## How It Works
+
+ MCPSlim acts as a **transparent bridge** between AI models and the original MCP server:
+
+ ```
+ ┌────────────────────────────────────────────────────────────────┐
+ │  Without MCPSlim                                               │
+ │                                                                │
+ │  [AI Model] ──── reads 29 tool schemas ────→ [Original MCP]    │
+ │              (~23,878 tokens loaded into context)              │
+ ├────────────────────────────────────────────────────────────────┤
+ │  With MCPSlim                                                  │
+ │                                                                │
+ │  [AI Model] ───→ [MCPSlim Bridge] ───→ [Original MCP]          │
+ │      │                  │                    │                 │
+ │  Sees 18 grouped    Translates to       Executes actual        │
+ │  tools only         original call       tool & returns         │
+ │  (~11,655 tokens)                                              │
+ └────────────────────────────────────────────────────────────────┘
+ ```
+
+ ### How Translation Works
+
+ 1. **AI reads slim schema** — Only 18 grouped tools instead of 29
+ 2. **AI calls grouped tool** — e.g., `read({ action: "file", ... })`
+ 3. **MCPSlim translates** — Converts to the original call: `read_file({ ... })`
+ 4. **Original MCP executes** — Real server processes the request
+ 5. **Response returned** — Result passes back unchanged
+
+ **Zero functionality loss. 51.2% token savings.**
+
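+ The grouped-to-original mapping lives in this package's `recipes/serena.json`. Below is a simplified sketch of the translation step based on that recipe's `groups[].mapping` structure; the real logic lives in the bundled `mcpslim` binary, so treat this as illustrative only:
+
+ ```js
+ const recipe = require("./recipes/serena.json");
+
+ // Rewrite a grouped call into the original serena tool call.
+ function translate(toolName, args) {
+   const group = recipe.groups.find((g) => g.name === toolName);
+   if (!group) return { tool: toolName, args };        // passthrough tools go through unchanged
+   const { action, ...rest } = args;
+   return { tool: group.mapping[action], args: rest }; // e.g. "file" -> "read_file"
+ }
+
+ // translate("read", { action: "file", relative_path: "src/index.js" })
+ //   -> { tool: "read_file", args: { relative_path: "src/index.js" } }
+ ```
+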
+ ## Available Tool Groups
+
+ | Group | Actions |
+ |-------|---------|
+ | `read` | `file`, `memory` |
+ | `list` | `dir`, `memories` |
+ | `find` | `file`, `symbol`, `referencing_symbols` |
+ | `replace` | `content`, `symbol_body` |
+ | `get` | `symbols_overview`, `current_config` |
+ | `insert` | `after_symbol`, `before_symbol` |
+ | `think` | `about_collected_information`, `about_task_adherence`, `about_whether_you_are_done` |
+ | `memory` | `write`, `delete`, `edit` |
+
+ Plus **10 passthrough tools** — tools that don't group well are kept as-is with optimized descriptions.
+
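+ For example, a call to the grouped `find` tool selects the original behavior through `action` (illustrative values; the parameter names come from the `find` group's schema in `recipes/serena.json`):
+
+ ```js
+ // "action": "symbol" routes to the original `find_symbol` tool.
+ const call = {
+   tool: "find",
+   arguments: {
+     action: "symbol",
+     name_path: "MyClass/my_method",
+     relative_path: "src/",
+     include_body: true,
+   },
+ };
+ ```
+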
+ ## Compatibility
+
+ - ✅ **Full functionality** — All original `serena` features preserved
+ - ✅ **All AI assistants** — Works with Claude, ChatGPT, Gemini, Copilot, and any MCP client
+ - ✅ **Drop-in replacement** — Same capabilities, just use grouped action names
+ - ✅ **Tested** — Schema compatibility verified via automated tests
+
+ ## FAQ
+
+ ### Does this reduce functionality?
+
+ **No.** Every original tool is accessible. Tools are grouped semantically (e.g., `read_file` and `read_memory` → `read`), but all actions remain available via the `action` parameter.
+
+ ### Why do AI assistants need token optimization?
+
+ AI models have limited context windows. MCP tool schemas consume tokens that could be used for conversation, code, or documents. Reducing tool schema size means more room for actual work.
+
+ ### Is this officially supported?
+
+ MCPSlim is a community project. It wraps official MCP servers transparently — the original server does all the real work.
+
+ ## License
+
+ MIT
+
+ ---
+
+ <p align="center">
+   Powered by <a href="https://github.com/mcpslim/mcpslim"><b>MCPSlim</b></a> — MCP Token Optimizer
+   <br>
+   <sub>Reduce AI context usage. Keep full functionality.</sub>
+ </p>
Binary file
Binary file
Binary file
Binary file
Binary file
package/index.js ADDED
@@ -0,0 +1,217 @@
+ #!/usr/bin/env node
+ /**
+  * serena-slim - Slimmed serena MCP for Claude
+  * Reduces token usage by grouping similar tools
+  *
+  * Usage:
+  *   npx serena-slim                 # Run MCP server
+  *   npx serena-slim --setup         # Auto-configure (interactive)
+  *   npx serena-slim --setup claude  # Auto-configure for Claude Desktop
+  *   npx serena-slim --setup cursor  # Auto-configure for Cursor
+  */
+
+ const { spawn, execSync } = require('child_process');
+ const path = require('path');
+ const os = require('os');
+ const fs = require('fs');
+ const readline = require('readline');
+
+ const MCP_NAME = 'serena';
+ const PACKAGE_NAME = 'serena-slim';
+
+ // Required environment variables (injected at build time)
+ const REQUIRED_ENV_VARS = [];
+
+ // ============================================
+ // Setup Mode: Auto-configure MCP clients
+ // ============================================
+
+ const CONFIG_PATHS = {
+   'claude': getClaudeDesktopConfigPath(),
+   'claude-desktop': getClaudeDesktopConfigPath(),
+   'cursor': getCursorConfigPath(),
+ };
+
+ function getClaudeDesktopConfigPath() {
+   if (os.platform() === 'win32') {
+     return path.join(process.env.APPDATA || '', 'Claude', 'claude_desktop_config.json');
+   } else if (os.platform() === 'darwin') {
+     return path.join(os.homedir(), 'Library', 'Application Support', 'Claude', 'claude_desktop_config.json');
+   } else {
+     return path.join(os.homedir(), '.config', 'claude', 'claude_desktop_config.json');
+   }
+ }
+
+ function getCursorConfigPath() {
+   // Global config
+   if (os.platform() === 'win32') {
+     return path.join(process.env.APPDATA || '', 'Cursor', 'User', 'globalStorage', 'mcp.json');
+   } else {
+     return path.join(os.homedir(), '.cursor', 'mcp.json');
+   }
+ }
+
+ function addToConfig(configPath, mcpName, mcpConfig) {
+   let config = { mcpServers: {} };
+
+   // Create the config directory if needed
+   const dir = path.dirname(configPath);
+   if (!fs.existsSync(dir)) {
+     fs.mkdirSync(dir, { recursive: true });
+   }
+
+   // Load the existing config
+   if (fs.existsSync(configPath)) {
+     try {
+       config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
+       if (!config.mcpServers) config.mcpServers = {};
+     } catch (e) {
+       console.error('⚠️ Failed to parse existing config, creating new one');
+     }
+   }
+
+   // Skip if the entry already exists
+   if (config.mcpServers[mcpName]) {
+     console.log(`ℹ️ ${mcpName} already configured in ${configPath}`);
+     return false;
+   }
+
+   // Add the new MCP entry
+   config.mcpServers[mcpName] = mcpConfig;
+   fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
+   console.log(`✅ Added ${mcpName} to ${configPath}`);
+   return true;
+ }
+
+ async function interactiveSetup() {
+   const rl = readline.createInterface({
+     input: process.stdin,
+     output: process.stdout
+   });
+
+   const question = (q) => new Promise(resolve => rl.question(q, resolve));
+
+   console.log('\n🔧 ' + PACKAGE_NAME + ' Setup\n');
+   console.log('Select your MCP client:\n');
+   console.log(' 1. Claude Desktop');
+   console.log(' 2. Cursor');
+   console.log(' 3. Claude Code (CLI)');
+   console.log(' 4. VS Code (Copilot)');
+   console.log(' 5. Cancel\n');
+
+   const choice = await question('Enter choice (1-5): ');
+   rl.close();
+
+   // Build environment-variable flags for the printed commands
+   const envFlags = REQUIRED_ENV_VARS.map(v => `--env ${v}=<YOUR_${v.split('_').pop()}>`).join(' ');
+   const envJson = REQUIRED_ENV_VARS.length > 0
+     ? `,"env":{${REQUIRED_ENV_VARS.map(v => `"${v}":"<YOUR_${v.split('_').pop()}>"`).join(',')}}`
+     : '';
+
+   switch (choice.trim()) {
+     case '1':
+       return setupClient('claude');
+     case '2':
+       return setupClient('cursor');
+     case '3':
+       console.log('\nRun this command:\n');
+       if (REQUIRED_ENV_VARS.length > 0) {
+         console.log(` claude mcp add ${MCP_NAME} ${envFlags} -- npx -y ${PACKAGE_NAME}\n`);
+       } else {
+         console.log(` claude mcp add ${MCP_NAME} -- npx -y ${PACKAGE_NAME}\n`);
+       }
+       return true;
+     case '4':
+       console.log('\nRun this command:\n');
+       console.log(` code --add-mcp '{"name":"${MCP_NAME}","command":"npx","args":["-y","${PACKAGE_NAME}"]${envJson}}'\n`);
+       return true;
+     case '5':
+     default:
+       console.log('Cancelled.');
+       return false;
+   }
+ }
+
+ function setupClient(client) {
+   const configPath = CONFIG_PATHS[client];
+   if (!configPath) {
+     console.error(`❌ Unknown client: ${client}`);
+     console.log('Supported: claude, claude-desktop, cursor');
+     return false;
+   }
+
+   const mcpConfig = {
+     command: 'npx',
+     args: ['-y', PACKAGE_NAME]
+   };
+
+   // If environment variables are required, add an env block with placeholders
+   if (REQUIRED_ENV_VARS.length > 0) {
+     mcpConfig.env = {};
+     for (const envVar of REQUIRED_ENV_VARS) {
+       mcpConfig.env[envVar] = `<YOUR_${envVar.split('_').pop()}>`;
+     }
+   }
+
+   const success = addToConfig(configPath, MCP_NAME, mcpConfig);
+
+   if (success) {
+     if (REQUIRED_ENV_VARS.length > 0) {
+       console.log(`\n⚠️ This MCP requires environment variables!`);
+       console.log(` Edit ${configPath} and update:`);
+       for (const envVar of REQUIRED_ENV_VARS) {
+         console.log(` - ${envVar}`);
+       }
+       console.log(`\n🎉 Setup complete! Update env vars, then restart ${client}.\n`);
+     } else {
+       console.log(`\n🎉 Setup complete! Restart ${client} to use ${MCP_NAME}.\n`);
+     }
+   }
+
+   return success;
+ }
+
+ // ============================================
+ // Main: Check for --setup flag
+ // ============================================
+
+ const args = process.argv.slice(2);
+ const setupIndex = args.indexOf('--setup');
+
+ if (setupIndex !== -1) {
+   const client = args[setupIndex + 1];
+
+   if (client && !client.startsWith('-')) {
+     // Specific client: npx xxx-slim --setup claude
+     setupClient(client);
+   } else {
+     // Interactive: npx xxx-slim --setup
+     interactiveSetup().then(() => process.exit(0));
+   }
+ } else {
+   // ============================================
+   // Normal Mode: Run MCP server
+   // ============================================
+
+   const binName = os.platform() === 'win32' ? 'mcpslim.exe' : 'mcpslim';
+   const mcpslimBin = path.join(__dirname, 'bin', binName);
+   const recipePath = path.join(__dirname, 'recipes', 'serena.json');
+
+   // Original MCP command (can be overridden via MCPSLIM_ORIGINAL_MCP)
+   const originalMcp = process.env.MCPSLIM_ORIGINAL_MCP?.split(' ')
+     || ["uvx","--from","git+https://github.com/oraios/serena","serena","start-mcp-server"];
+
+   const bridgeArgs = ['bridge', '--recipe', recipePath, '--', ...originalMcp];
+
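+   // Note: with the defaults above this is equivalent to running
+   //   mcpslim bridge --recipe recipes/serena.json -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server
+   // i.e. the bundled mcpslim binary proxies the upstream serena MCP server.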
+   const child = spawn(mcpslimBin, bridgeArgs, {
+     stdio: 'inherit',
+     windowsHide: true
+   });
+
+   child.on('error', (err) => {
+     console.error('Failed to start MCPSlim:', err.message);
+     process.exit(1);
+   });
+
+   child.on('exit', (code) => process.exit(code || 0));
+ }
package/package.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "name": "serena-slim",
+   "version": "0.0.1-slim.1",
+   "description": "Slimmed serena MCP - 38% token reduction for AI models",
+   "bin": {
+     "serena-slim": "./index.js"
+   },
+   "keywords": [
+     "mcp",
+     "claude",
+     "gemini",
+     "chatgpt",
+     "serena",
+     "slim",
+     "token-reduction"
+   ],
+   "author": "",
+   "license": "MIT",
+   "files": [
+     "bin/",
+     "recipes/",
+     "index.js",
+     "README.md"
+   ],
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/palan-k/mcpslim.git",
+     "directory": "packages/serena-slim"
+   },
+   "mcpslim": {
+     "originalPackage": "serena",
+     "originalVersion": "0.0.1",
+     "originalTools": 29,
+     "slimTools": 18,
+     "tokenReduction": "38%"
+   }
+ }
package/recipes/serena.json ADDED
@@ -0,0 +1,380 @@
+ {
+   "mcp_name": "serena",
+   "auto_generated": true,
+   "algorithm_version": "v2.1",
+   "rules": {
+     "default_minifier": "first_sentence",
+     "remove_params_description": true
+   },
+   "groups": [
+     {
+       "name": "read",
+       "description": "Access file contents or relevant memory data to provide contextual information for the current task.",
+       "mapping": {
+         "file": "read_file",
+         "memory": "read_memory"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "file",
+               "memory"
+             ]
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "start_line": {
+             "type": "integer",
+             "default": 0
+           },
+           "end_line": {
+             "default": null
+           },
+           "max_answer_chars": {
+             "type": "integer",
+             "default": -1
+           },
+           "memory_file_name": {
+             "type": "string"
+           }
+         },
+         "required": [
+           "action",
+           "relative_path",
+           "memory_file_name"
+         ]
+       }
+     },
+     {
+       "name": "list",
+       "description": "Explore filesystem contents and available memories to locate specific files or data storage.",
+       "mapping": {
+         "dir": "list_dir",
+         "memories": "list_memories"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "dir",
+               "memories"
+             ]
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "recursive": {
+             "type": "boolean"
+           },
+           "skip_ignored_files": {
+             "type": "boolean",
+             "default": false
+           },
+           "max_answer_chars": {
+             "type": "integer",
+             "default": -1
+           }
+         },
+         "required": [
+           "action",
+           "relative_path",
+           "recursive"
+         ]
+       }
+     },
+     {
+       "name": "find",
+       "description": "Locate files, discover code symbols, and pinpoint references to specific symbols within a codebase.",
+       "mapping": {
+         "file": "find_file",
+         "symbol": "find_symbol",
+         "referencing_symbols": "find_referencing_symbols"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "file",
+               "symbol",
+               "referencing_symbols"
+             ]
+           },
+           "file_mask": {
+             "type": "string"
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "name_path_pattern": {
+             "type": "string"
+           },
+           "depth": {
+             "type": "integer",
+             "default": 0
+           },
+           "include_body": {
+             "type": "boolean",
+             "default": false
+           },
+           "include_kinds": {
+             "type": "array",
+             "default": [],
+             "items": {
+               "type": "integer"
+             }
+           },
+           "exclude_kinds": {
+             "type": "array",
+             "default": [],
+             "items": {
+               "type": "integer"
+             }
+           },
+           "substring_matching": {
+             "type": "boolean",
+             "default": false
+           },
+           "max_answer_chars": {
+             "type": "integer",
+             "default": -1
+           },
+           "name_path": {
+             "type": "string"
+           }
+         },
+         "required": [
+           "action",
+           "file_mask",
+           "relative_path",
+           "name_path_pattern",
+           "name_path"
+         ]
+       }
+     },
+     {
+       "name": "replace",
+       "description": "Replace file content via pattern matching or directly substitute the body of identified code symbols.",
+       "mapping": {
+         "content": "replace_content",
+         "symbol_body": "replace_symbol_body"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "content",
+               "symbol_body"
+             ]
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "needle": {
+             "type": "string"
+           },
+           "repl": {
+             "type": "string"
+           },
+           "mode": {
+             "type": "string",
+             "enum": [
+               "literal",
+               "regex"
+             ]
+           },
+           "allow_multiple_occurrences": {
+             "type": "boolean",
+             "default": false
+           },
+           "name_path": {
+             "type": "string"
+           },
+           "body": {
+             "type": "string"
+           }
+         },
+         "required": [
+           "action",
+           "relative_path",
+           "needle",
+           "repl",
+           "mode",
+           "name_path",
+           "body"
+         ]
+       }
+     },
+     {
+       "name": "get",
+       "description": "Analyze code symbols within files and display the agent’s current working configuration details.",
+       "mapping": {
+         "symbols_overview": "get_symbols_overview",
+         "current_config": "get_current_config"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "symbols_overview",
+               "current_config"
+             ]
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "depth": {
+             "type": "integer",
+             "default": 0
+           },
+           "max_answer_chars": {
+             "type": "integer",
+             "default": -1
+           }
+         },
+         "required": [
+           "action",
+           "relative_path"
+         ]
+       }
+     },
+     {
+       "name": "insert",
+       "description": "Modify code by precisely inserting new content either before or after existing symbol definitions.",
+       "mapping": {
+         "after_symbol": "insert_after_symbol",
+         "before_symbol": "insert_before_symbol"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "after_symbol",
+               "before_symbol"
+             ]
+           },
+           "name_path": {
+             "type": "string"
+           },
+           "relative_path": {
+             "type": "string"
+           },
+           "body": {
+             "type": "string"
+           }
+         },
+         "required": [
+           "action",
+           "name_path",
+           "relative_path",
+           "body"
+         ]
+       }
+     },
+     {
+       "name": "think",
+       "description": "Evaluate gathered information, task progress, and completion status to guide effective AI assistant reasoning.",
+       "mapping": {
+         "about_collected_information": "think_about_collected_information",
+         "about_task_adherence": "think_about_task_adherence",
+         "about_whether_you_are_done": "think_about_whether_you_are_done"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "about_collected_information",
+               "about_task_adherence",
+               "about_whether_you_are_done"
+             ]
+           }
+         },
+         "required": [
+           "action"
+         ]
+       }
+     },
+     {
+       "name": "memory",
+       "description": "Manage persistent project knowledge by storing, deleting, and regex-based updating of markdown files.",
+       "mapping": {
+         "write": "write_memory",
+         "delete": "delete_memory",
+         "edit": "edit_memory"
+       },
+       "common_schema": {
+         "type": "object",
+         "properties": {
+           "action": {
+             "type": "string",
+             "enum": [
+               "write",
+               "delete",
+               "edit"
+             ]
+           },
+           "memory_file_name": {
+             "type": "string"
+           },
+           "content": {
+             "type": "string"
+           },
+           "max_answer_chars": {
+             "type": "integer",
+             "default": -1
+           },
+           "needle": {
+             "type": "string"
+           },
+           "repl": {
+             "type": "string"
+           },
+           "mode": {
+             "type": "string",
+             "enum": [
+               "literal",
+               "regex"
+             ]
+           }
+         },
+         "required": [
+           "action",
+           "memory_file_name",
+           "content",
+           "needle",
+           "repl",
+           "mode"
+         ]
+       }
+     }
+   ],
+   "passthrough": [
+     "create_text_file",
+     "search_for_pattern",
+     "rename_symbol",
+     "execute_shell_command",
+     "activate_project",
+     "switch_modes",
+     "check_onboarding_performed",
+     "onboarding",
+     "prepare_for_new_conversation",
+     "initial_instructions"
+   ],
+   "ai_enhanced": true,
+   "enhanced_at": "2026-01-05T07:32:33.116Z"
+ }