toonify-mcp 0.1.0 → 0.1.1

This diff shows the changes between publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -1,123 +1,60 @@
1
- # 🎯 Claude Code Toonify
1
+ # 🎯 Toonify MCP
2
2
 
3
3
  **[English](README.md) | [繁體中文](README.zh-TW.md)**
4
4
 
5
- **Reduce Claude API token usage by 60%+ using TOON format optimization**
5
+ An MCP server that provides token optimization tools for converting structured data to TOON (Token-Oriented Object Notation) format.
6
+ Reduces Claude API token usage by 60%+ on average.
6
7
 
7
- An MCP (Model Context Protocol) server that provides **on-demand** token optimization tools for converting structured data to TOON (Token-Oriented Object Notation) format. Works with any MCP-compatible LLM client (Claude Code, ChatGPT, etc.).
8
+ ## Features
8
9
 
9
- ⚠️ **Important**: This MCP server provides **tools** that must be explicitly called - it does NOT automatically intercept content. See [Usage](#-usage) for details.
10
+ - **60%+ Token Reduction** for JSON, CSV, and YAML data
11
+ - **MCP Integration** - Works with Claude Code, Claude Desktop
12
+ - **Built-in Metrics** - Track token savings locally
13
+ - **Silent Fallback** - Never breaks your workflow
10
14
 
11
- ## 🌟 Features
12
-
13
- - **🎯 60%+ Token Reduction**: Average 63.9% reduction for structured data
14
- - **💰 Significant Cost Savings**: ~$1,380 per million API calls
15
- - **⚡ Fast**: <10ms overhead for typical payloads
16
- - **🔄 Silent Fallback**: Never breaks your workflow
17
- - **📊 Built-in Metrics**: Track savings locally
18
- - **🔌 MCP Integration**: Works seamlessly with Claude Code
19
- - **🌍 Universal Compatibility**: Works with ANY LLM (GPT, Claude, Gemini, etc.)
20
-
21
- ## 📊 Performance
22
-
23
- | Format | Before | After | Savings |
24
- |--------|--------|-------|---------|
25
- | JSON | 247 bytes | 98 bytes | **60%** |
26
- | CSV | 180 bytes | 65 bytes | **64%** |
27
- | YAML | 215 bytes | 89 bytes | **59%** |
28
-
29
- **Token reduction example:**
30
- ```
31
- JSON (142 tokens):
32
- {
33
- "products": [
34
- {"id": 101, "name": "Laptop Pro", "price": 1299},
35
- {"id": 102, "name": "Magic Mouse", "price": 79}
36
- ]
37
- }
38
-
39
- TOON (57 tokens, 60% reduction):
40
- products[2]{id,name,price}:
41
- 101,Laptop Pro,1299
42
- 102,Magic Mouse,79
43
- ```
44
-
45
- ## 🚀 Installation
15
+ ## Installation
46
16
 
47
17
  ### 1. Install the package
48
18
 
49
19
  ```bash
50
- npm install -g @ktseng/toonify-mcp
20
+ npm install -g toonify-mcp
51
21
  ```
52
22
 
53
23
  ### 2. Register with Claude Code
54
24
 
55
25
  ```bash
56
- # Register the MCP server (user scope - available in all projects)
26
+ # User scope (available in all projects)
57
27
  claude mcp add --scope user --transport stdio toonify -- /opt/homebrew/bin/toonify-mcp
58
28
 
59
- # For project-specific registration
29
+ # Or project scope
60
30
  claude mcp add --scope project --transport stdio toonify -- /opt/homebrew/bin/toonify-mcp
61
31
  ```
62
32
 
63
33
  ### 3. Verify installation
64
34
 
65
35
  ```bash
66
- # Check if MCP server is registered and connected
67
36
  claude mcp list
68
-
69
37
  # Should show: toonify: /opt/homebrew/bin/toonify-mcp - ✓ Connected
70
38
  ```
71
39
 
72
- ## 📖 Usage
40
+ ## Usage
73
41
 
74
- ### How It Works: Tool-Based Optimization
42
+ ### Option A: MCP Tools (Manual)
75
43
 
76
- This MCP server provides **two tools** that the LLM can call:
77
- 1. `optimize_content` - Optimizes structured data to TOON format
78
- 2. `get_stats` - Returns optimization statistics
44
+ Call tools explicitly when needed:
79
45
 
80
- ⚠️ **Key Limitation**: The LLM must **explicitly decide** to call these tools. There is NO automatic interception of content.
81
-
82
- ### Usage Patterns
83
-
84
- **Pattern 1: Explicit User Request**
85
- ```
86
- User: "Optimize this JSON data for me"
87
- LLM: [Calls optimize_content tool] → Returns optimized TOON format
88
- ```
89
-
90
- **Pattern 2: LLM Decides to Optimize**
91
- ```
92
- LLM reads large JSON → Recognizes it's large → Calls optimize_content → Uses optimized version
93
- ```
46
+ ```bash
47
+ # Optimize content
48
+ claude mcp call toonify optimize_content '{"content": "..."}'
94
49
 
95
- **Pattern 3: Custom Instructions** (ChatGPT/Claude)
96
- ```
97
- "Before analyzing large JSON/CSV/YAML data, always call the optimize_content tool to reduce tokens"
50
+ # View stats
51
+ claude mcp call toonify get_stats '{}'
98
52
  ```
99
53
 
100
- ### For ChatGPT Users
54
+ ### Option B: Claude Code Hook (Automatic) ⭐ RECOMMENDED
101
55
 
102
- ChatGPT can use this MCP server, but:
103
- - ❌ **NOT automatic** - ChatGPT must decide to call the tool
104
- - ✅ **Works when prompted** - "Use toonify to optimize this data"
105
- - ✅ **Custom instructions** - Add to ChatGPT custom instructions to always optimize large content
56
+ Automatic interception for Claude Code users:
106
57
 
107
- ### For Claude Code Users
108
-
109
- #### Option A: MCP Server (Manual)
110
- - ❌ **NOT automatic** - Claude must decide to call the tool
111
- - ✅ **Works when prompted** - "Use toonify to optimize this data"
112
- - ✅ **Universal compatibility** - Works with any MCP client
113
-
114
- #### Option B: Claude Code Hook (Automatic) ⭐ RECOMMENDED
115
- - ✅ **Fully automatic** - Intercepts tool results transparently
116
- - ✅ **Zero overhead** - No manual calls needed
117
- - ✅ **Seamless integration** - Works with Read, Grep, and other file tools
118
- - ⚠️ **Claude Code only** - Doesn't work with other MCP clients
119
-
120
- **Installation**:
121
58
  ```bash
122
59
  cd hooks/
123
60
  npm install
@@ -128,11 +65,9 @@ npm run install-hook
128
65
  claude hooks list # Should show: PostToolUse
129
66
  ```
130
67
 
131
- See `hooks/README.md` for detailed setup and configuration.
68
+ See `hooks/README.md` for details.
132
69
 
133
- ## 🔧 Configuration
134
-
135
- Configure via environment variables or config file:
70
+ ## Configuration
136
71
 
137
72
  ```bash
138
73
  # Environment variables
@@ -141,7 +76,7 @@ export TOONIFY_MIN_TOKENS=50
141
76
  export TOONIFY_MIN_SAVINGS=30
142
77
  export TOONIFY_SKIP_TOOLS="Bash,Write"
143
78
 
144
- # Or via ~/.claude/toonify-config.json
79
+ # Or ~/.claude/toonify-config.json
145
80
  {
146
81
  "enabled": true,
147
82
  "minTokensThreshold": 50,
@@ -150,195 +85,12 @@ export TOONIFY_SKIP_TOOLS="Bash,Write"
150
85
  }
151
86
  ```
152
87
 
153
- ## 📊 Metrics Dashboard
154
-
155
- View your savings:
156
-
157
- ```bash
158
- claude mcp call toonify get_stats '{}'
159
- ```
160
-
161
- Output:
162
- ```
163
- 📊 Token Optimization Stats
164
- ━━━━━━━━━━━━━━━━━━━━━━━━
165
- Total Requests: 1,234
166
- Optimized: 856 (69.4%)
167
-
168
- Tokens Before: 1,456,789
169
- Tokens After: 567,234
170
- Total Savings: 889,555 (61.1%)
171
-
172
- 💰 Cost Savings (at $3/1M input tokens):
173
- $2.67 saved
174
- ```
175
-
176
- ## 🌍 Compatibility
177
-
178
- ### ✅ **This MCP Server Works With:**
179
- - **Claude Code CLI** (primary target)
180
- - **Claude Desktop App**
181
- - **Custom MCP clients**
182
- - **Any tool implementing MCP protocol**
183
-
184
- **Important**: MCP (Model Context Protocol) is an Anthropic protocol. This MCP server only works with MCP-compatible clients in the Claude ecosystem.
185
-
186
- ### 🔧 **Using TOON Format with Other LLMs**
187
-
188
- While this **MCP server** is Claude-specific, the **TOON format itself** reduces tokens for ANY LLM (GPT, Gemini, Llama, etc.). To use TOON optimization with non-MCP LLMs:
189
-
190
- **TypeScript/JavaScript:**
191
- ```typescript
192
- import { encode, decode } from '@toon-format/toon';
193
-
194
- // Optimize data before sending to any LLM API
195
- const data = {
196
- products: [
197
- { id: 101, name: 'Laptop Pro', price: 1299 },
198
- { id: 102, name: 'Magic Mouse', price: 79 }
199
- ]
200
- };
201
-
202
- const optimizedContent = encode(data); // 60% token reduction
203
-
204
- // Use with OpenAI
205
- await openai.chat.completions.create({
206
- model: 'gpt-4',
207
- messages: [{ role: 'user', content: `Analyze: ${optimizedContent}` }]
208
- });
209
-
210
- // Use with Gemini
211
- await gemini.generateContent({
212
- contents: [{ text: `Analyze: ${optimizedContent}` }]
213
- });
214
- ```
215
-
216
- **Python:**
217
- ```python
218
- # Install: pip install toonify
219
- from toonify import encode
220
- import openai
221
-
222
- data = {"products": [...]}
223
- optimized = encode(data)
224
-
225
- # Works with any LLM API
226
- openai.chat.completions.create(
227
- model="gpt-4",
228
- messages=[{"role": "user", "content": f"Analyze: {optimized}"}]
229
- )
230
- ```
231
-
232
- ### 📊 **MCP Server vs TOON Library**
233
-
234
- | Feature | This MCP Server | TOON Library Direct |
235
- |---------|----------------|---------------------|
236
- | **Target** | Claude Code/Desktop | Any LLM |
237
- | **Integration** | Automatic (via MCP) | Manual (code integration) |
238
- | **Setup** | Configure once | Import in each project |
239
- | **Compatibility** | MCP clients only | Universal |
240
-
241
- ## 🏗️ Architecture
242
-
243
- ```
244
- ┌──────────────────┐
245
- │ Claude Code │
246
- │ CLI │
247
- └─────────┬────────┘
248
-
249
-
250
- ┌──────────────────┐
251
- │ Toonify MCP │
252
- │ Server │
253
- ├──────────────────┤
254
- │ • optimize_content│ ← Compress structured data
255
- │ • get_stats │ ← View metrics
256
- └─────────┬────────┘
257
-
258
-
259
- ┌──────────────────┐
260
- │ TokenOptimizer │ ← TOON encoding
261
- └──────────────────┘
262
-
263
-
264
- ┌──────────────────┐
265
- │ MetricsCollector │ ← Track savings
266
- └──────────────────┘
267
- ```
268
-
269
- ## 🧪 Development
270
-
271
- ```bash
272
- # Clone repository
273
- git clone https://github.com/ktseng/toonify-mcp.git
274
- cd toonify-mcp
275
-
276
- # Install dependencies
277
- npm install
278
-
279
- # Build
280
- npm run build
281
-
282
- # Run tests
283
- npm test
284
-
285
- # Development mode (watch)
286
- npm run dev
287
- ```
288
-
289
- ## 🎯 When to Use
290
-
291
- **✅ Ideal for:**
292
- - Large JSON responses from tools
293
- - CSV/tabular data
294
- - Structured API responses
295
- - Database query results
296
- - File contents with structured data
297
-
298
- **⚠️ Not recommended for:**
299
- - Plain text content
300
- - Code files
301
- - Highly irregular data structures
302
- - Content < 50 tokens
303
-
304
- ## 📚 Technical Details
305
-
306
- ### How It Works
307
-
308
- 1. **Interception**: MCP server intercepts tool results via `optimize_content` tool
309
- 2. **Detection**: Analyzes content to identify JSON, CSV, or YAML
310
- 3. **Optimization**: Converts to TOON format using `@scrapegraph/toonify`
311
- 4. **Validation**: Ensures savings > 30% threshold
312
- 5. **Fallback**: Returns original if optimization fails or savings too low
313
-
314
- ### Dependencies
315
-
316
- - `@modelcontextprotocol/sdk` - MCP server framework
317
- - `@scrapegraph/toonify` - TOON format encoder/decoder
318
- - `tiktoken` - Token counting (Claude/GPT compatible)
319
- - `yaml` - YAML parsing support
320
-
321
- ## 🤝 Contributing
322
-
323
- Contributions welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
324
-
325
- ## 📄 License
326
-
327
- MIT License - see [LICENSE](LICENSE) file
328
-
329
- ## 🙏 Credits
330
-
331
- - [Toonify](https://github.com/ScrapeGraphAI/toonify) by ScrapeGraphAI team
332
- - [Claude Code](https://github.com/anthropics/claude-code) by Anthropic
333
- - Original inspiration: [awesome-llm-apps](https://github.com/Shubhamsaboo/awesome-llm-apps)
334
-
335
- ## 🔗 Links
88
+ ## Links
336
89
 
337
- - **NPM Package**: Coming soon
338
- - **GitHub**: https://github.com/ktseng/toonify-mcp
339
- - **Issues**: https://github.com/ktseng/toonify-mcp/issues
340
- - **MCP Documentation**: https://code.claude.com/docs/mcp
90
+ - **GitHub**: https://github.com/kevintseng/toonify-mcp
91
+ - **Issues**: https://github.com/kevintseng/toonify-mcp/issues
92
+ - **MCP Docs**: https://code.claude.com/docs/mcp
341
93
 
342
- ---
94
+ ## License
343
95
 
344
- **⭐ If this tool saves you money on API costs, consider starring the repo!**
96
+ MIT License - see [LICENSE](LICENSE)
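The tabular TOON form shown in the README's deleted example can be reproduced with a short sketch. The real encoder is `@toon-format/toon`'s `encode()`, which also handles nesting, quoting, and non-uniform arrays; `toonEncodeTable` below is a hypothetical name covering only the simple uniform-array case:

```typescript
// Illustrative re-implementation of the tabular TOON form from the README.
// Handles only a uniform array of flat objects; the real library does more.
type Row = Record<string, string | number>;

function toonEncodeTable(key: string, rows: Row[]): string {
  const fields = Object.keys(rows[0]); // field list taken from the first row
  const header = `${key}[${rows.length}]{${fields.join(',')}}:`;
  const body = rows.map(r => '  ' + fields.map(f => String(r[f])).join(','));
  return [header, ...body].join('\n');
}

const toon = toonEncodeTable('products', [
  { id: 101, name: 'Laptop Pro', price: 1299 },
  { id: 102, name: 'Magic Mouse', price: 79 },
]);
console.log(toon);
// products[2]{id,name,price}:
//   101,Laptop Pro,1299
//   102,Magic Mouse,79
```

The header encodes the row count and field names once, which is where the token savings over repeated JSON keys come from.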
package/README.zh-TW.md CHANGED
@@ -1,90 +1,60 @@
1
- # 🎯 Claude Code Toonify
1
+ # 🎯 Toonify MCP
2
2
 
3
3
  **[English](README.md) | [繁體中文](README.zh-TW.md)**
4
4
 
5
- **使用 TOON 格式優化,降低 Claude API Token 使用量達 60% 以上**
5
+ MCP 伺服器提供 Token 優化工具,將結構化資料轉換為 TOON 格式。
6
+ 平均降低 Claude API Token 使用量達 60% 以上。
6
7
 
7
- 一個 MCP (Model Context Protocol) 伺服器,提供**按需**的 Token 優化工具,將結構化資料轉換為 TOON (Token-Oriented Object Notation) 格式。支援任何 MCP 相容的 LLM 客戶端(Claude Code、ChatGPT 等)。
8
+ ## 功能特色
8
9
 
9
- ⚠️ **重要**:此 MCP 伺服器提供必須明確呼叫的**工具** - 不會自動攔截內容。詳見[使用方式](#-使用方式)。
10
+ - **60%+ Token 削減** - 支援 JSON、CSV、YAML 資料
11
+ - **MCP 整合** - 適用於 Claude Code、Claude Desktop
12
+ - **內建指標** - 本地追蹤 Token 節省量
13
+ - **靜默降級** - 絕不中斷工作流程
10
14
 
11
- ## 🌟 功能特色
12
-
13
- - **🎯 60%+ Token 削減**:結構化資料平均減少 63.9%
14
- - **💰 大幅節省成本**:每百萬次 API 呼叫節省約 $1,380
15
- - **⚡ 極速處理**:典型負載僅 <10ms 額外開銷
16
- - **🔄 靜默降級**:絕不中斷您的工作流程
17
- - **📊 內建指標**:本地追蹤節省數據
18
- - **🔌 MCP 整合**:與 Claude Code 無縫協作
19
- - **🌍 通用相容**:支援任何 LLM(GPT、Claude、Gemini 等)
20
-
21
- ## 📊 效能表現
22
-
23
- | 格式 | 優化前 | 優化後 | 節省 |
24
- |------|--------|--------|------|
25
- | JSON | 247 bytes | 98 bytes | **60%** |
26
- | CSV | 180 bytes | 65 bytes | **64%** |
27
- | YAML | 215 bytes | 89 bytes | **59%** |
28
-
29
- **Token 削減範例:**
30
- ```
31
- JSON (142 tokens):
32
- {
33
- "products": [
34
- {"id": 101, "name": "Laptop Pro", "price": 1299},
35
- {"id": 102, "name": "Magic Mouse", "price": 79}
36
- ]
37
- }
38
-
39
- TOON (57 tokens,削減 60%):
40
- products[2]{id,name,price}:
41
- 101,Laptop Pro,1299
42
- 102,Magic Mouse,79
43
- ```
44
-
45
- ## 🚀 安裝步驟
15
+ ## 安裝步驟
46
16
 
47
17
  ### 1. 安裝套件
48
18
 
49
19
  ```bash
50
- npm install -g @ktseng/toonify-mcp
20
+ npm install -g toonify-mcp
51
21
  ```
52
22
 
53
23
  ### 2. 註冊至 Claude Code
54
24
 
55
25
  ```bash
56
- # 註冊 MCP 伺服器(user scope - 所有專案可用)
26
+ # User scope(所有專案可用)
57
27
  claude mcp add --scope user --transport stdio toonify -- /opt/homebrew/bin/toonify-mcp
58
28
 
59
- # 專案專用註冊
29
+ # 或 Project scope
60
30
  claude mcp add --scope project --transport stdio toonify -- /opt/homebrew/bin/toonify-mcp
61
31
  ```
62
32
 
63
33
  ### 3. 驗證安裝
64
34
 
65
35
  ```bash
66
- # 檢查 MCP 伺服器是否已註冊並連線
67
36
  claude mcp list
68
-
69
37
  # 應顯示:toonify: /opt/homebrew/bin/toonify-mcp - ✓ Connected
70
38
  ```
71
39
 
72
- ## 📖 使用方式
40
+ ## 使用方式
73
41
 
74
- ### Claude Code 使用者
42
+ ### 選項 A:MCP 工具(手動)
75
43
 
76
- #### 選項 A:MCP 伺服器(手動)
77
- - ❌ **非自動** - Claude 必須主動呼叫工具
78
- - ✅ **提示時有效** - "使用 toonify 優化這個資料"
79
- - ✅ **通用相容性** - 適用於任何 MCP 客戶端
44
+ 需要時明確呼叫工具:
80
45
 
81
- #### 選項 B:Claude Code Hook(自動)⭐ 推薦
82
- - ✅ **完全自動** - 透明攔截工具結果
83
- - **零開銷** - 無需手動呼叫
84
- - ✅ **無縫整合** - 適用於 Read、Grep 等檔案工具
85
- - ⚠️ **僅限 Claude Code** - 不適用於其他 MCP 客戶端
46
+ ```bash
47
+ # 優化內容
48
+ claude mcp call toonify optimize_content '{"content": "..."}'
49
+
50
+ # 查看統計
51
+ claude mcp call toonify get_stats '{}'
52
+ ```
53
+
54
+ ### 選項 B:Claude Code Hook(自動)⭐ 推薦
55
+
56
+ Claude Code 使用者適用的自動攔截:
86
57
 
87
- **安裝方式**:
88
58
  ```bash
89
59
  cd hooks/
90
60
  npm install
@@ -95,24 +65,9 @@ npm run install-hook
95
65
  claude hooks list # 應顯示:PostToolUse
96
66
  ```
97
67
 
98
- 詳細設定請參閱 `hooks/README.md`。
68
+ 詳見 `hooks/README.md`。
99
69
 
100
- ### 手動使用 MCP 工具
101
-
102
- ```bash
103
- # 優化內容
104
- claude mcp call toonify optimize_content '{
105
- "content": "{\"products\": [{\"id\": 1, \"name\": \"Test\"}]}",
106
- "toolName": "Read"
107
- }'
108
-
109
- # 查看統計資料
110
- claude mcp call toonify get_stats '{}'
111
- ```
112
-
113
- ## 🔧 設定選項
114
-
115
- 透過環境變數或設定檔進行配置:
70
+ ## 設定選項
116
71
 
117
72
  ```bash
118
73
  # 環境變數
@@ -121,7 +76,7 @@ export TOONIFY_MIN_TOKENS=50
121
76
  export TOONIFY_MIN_SAVINGS=30
122
77
  export TOONIFY_SKIP_TOOLS="Bash,Write"
123
78
 
124
- # 或使用 ~/.claude/toonify-config.json
79
+ # ~/.claude/toonify-config.json
125
80
  {
126
81
  "enabled": true,
127
82
  "minTokensThreshold": 50,
@@ -130,195 +85,12 @@ export TOONIFY_SKIP_TOOLS="Bash,Write"
130
85
  }
131
86
  ```
132
87
 
133
- ## 📊 統計儀表板
134
-
135
- 查看您的節省成果:
136
-
137
- ```bash
138
- claude mcp call toonify get_stats '{}'
139
- ```
140
-
141
- 輸出範例:
142
- ```
143
- 📊 Token Optimization Stats
144
- ━━━━━━━━━━━━━━━━━━━━━━━━
145
- Total Requests: 1,234
146
- Optimized: 856 (69.4%)
147
-
148
- Tokens Before: 1,456,789
149
- Tokens After: 567,234
150
- Total Savings: 889,555 (61.1%)
151
-
152
- 💰 Cost Savings (at $3/1M input tokens):
153
- $2.67 saved
154
- ```
155
-
156
- ## 🏗️ 架構圖
157
-
158
- ```
159
- ┌──────────────────┐
160
- │ Claude Code │
161
- │ CLI │
162
- └─────────┬────────┘
163
-
164
-
165
- ┌──────────────────┐
166
- │ Toonify MCP │
167
- │ Server │
168
- ├──────────────────┤
169
- │ • optimize_content│ ← 壓縮結構化資料
170
- │ • get_stats │ ← 查看指標
171
- └─────────┬────────┘
172
-
173
-
174
- ┌──────────────────┐
175
- │ TokenOptimizer │ ← TOON 編碼
176
- └──────────────────┘
177
-
178
-
179
- ┌──────────────────┐
180
- │ MetricsCollector │ ← 追蹤節省量
181
- └──────────────────┘
182
- ```
183
-
184
- ## 🌍 相容性
185
-
186
- ### ✅ **此 MCP 伺服器支援:**
187
- - **Claude Code CLI**(主要目標)
188
- - **Claude Desktop App**
189
- - **自訂 MCP 客戶端**
190
- - **任何實作 MCP 協定的工具**
191
-
192
- **重要**:MCP(Model Context Protocol)是 Anthropic 的協定。此 MCP 伺服器僅適用於 Claude 生態系統中的 MCP 相容客戶端。
193
-
194
- ### 🔧 **在其他 LLM 使用 TOON 格式**
195
-
196
- 雖然此 **MCP 伺服器**僅限 Claude 使用,但 **TOON 格式本身**可為任何 LLM(GPT、Gemini、Llama 等)減少 Token 使用量。若要在非 MCP LLM 使用 TOON 優化:
197
-
198
- **TypeScript/JavaScript:**
199
- ```typescript
200
- import { encode, decode } from '@toon-format/toon';
201
-
202
- // 在傳送至任何 LLM API 前優化資料
203
- const data = {
204
- products: [
205
- { id: 101, name: 'Laptop Pro', price: 1299 },
206
- { id: 102, name: 'Magic Mouse', price: 79 }
207
- ]
208
- };
209
-
210
- const optimizedContent = encode(data); // 減少 60% tokens
211
-
212
- // 用於 OpenAI
213
- await openai.chat.completions.create({
214
- model: 'gpt-4',
215
- messages: [{ role: 'user', content: `分析:${optimizedContent}` }]
216
- });
217
-
218
- // 用於 Gemini
219
- await gemini.generateContent({
220
- contents: [{ text: `分析:${optimizedContent}` }]
221
- });
222
- ```
223
-
224
- **Python:**
225
- ```python
226
- # 安裝:pip install toonify
227
- from toonify import encode
228
- import openai
229
-
230
- data = {"products": [...]}
231
- optimized = encode(data)
232
-
233
- # 適用於任何 LLM API
234
- openai.chat.completions.create(
235
- model="gpt-4",
236
- messages=[{"role": "user", "content": f"分析:{optimized}"}]
237
- )
238
- ```
239
-
240
- ### 📊 **MCP 伺服器 vs TOON 函式庫**
241
-
242
- | 功能 | 此 MCP 伺服器 | 直接使用 TOON 函式庫 |
243
- |------|-------------|-------------------|
244
- | **目標** | Claude Code/Desktop | 任何 LLM |
245
- | **整合方式** | 自動(透過 MCP) | 手動(程式碼整合) |
246
- | **設定** | 配置一次 | 每個專案都要導入 |
247
- | **相容性** | 僅限 MCP 客戶端 | 通用 |
248
-
249
- ## 🧪 開發指南
250
-
251
- ```bash
252
- # 複製儲存庫
253
- git clone https://github.com/kevintseng/toonify-mcp.git
254
- cd toonify-mcp
255
-
256
- # 安裝依賴
257
- npm install
258
-
259
- # 建置
260
- npm run build
261
-
262
- # 執行測試
263
- npm test
264
-
265
- # 開發模式(監視變更)
266
- npm run dev
267
- ```
268
-
269
- ## 🎯 適用時機
270
-
271
- **✅ 最適合用於:**
272
- - 工具回傳的大型 JSON 回應
273
- - CSV/表格資料
274
- - 結構化 API 回應
275
- - 資料庫查詢結果
276
- - 包含結構化資料的檔案內容
277
-
278
- **⚠️ 不建議用於:**
279
- - 純文字內容
280
- - 程式碼檔案
281
- - 高度不規則的資料結構
282
- - 內容 < 50 tokens
283
-
284
- ## 📚 技術細節
285
-
286
- ### 運作原理
287
-
288
- 1. **攔截**:MCP 伺服器透過 `optimize_content` 工具攔截工具結果
289
- 2. **偵測**:分析內容以識別 JSON、CSV 或 YAML
290
- 3. **優化**:使用 `@toon-format/toon` 轉換為 TOON 格式
291
- 4. **驗證**:確保節省量 > 30% 閾值
292
- 5. **降級**:若優化失敗或節省量過低則返回原始內容
293
-
294
- ### 依賴套件
295
-
296
- - `@modelcontextprotocol/sdk` - MCP 伺服器框架
297
- - `@toon-format/toon` - TOON 格式編碼/解碼
298
- - `tiktoken` - Token 計數(相容 Claude/GPT)
299
- - `yaml` - YAML 解析支援
300
-
301
- ## 🤝 貢獻
302
-
303
- 歡迎貢獻!請參閱 [CONTRIBUTING.md](CONTRIBUTING.md) 了解指南。
304
-
305
- ## 📄 授權
306
-
307
- MIT License - 詳見 [LICENSE](LICENSE) 檔案
308
-
309
- ## 🙏 致謝
310
-
311
- - [Toonify](https://github.com/ScrapeGraphAI/toonify) by ScrapeGraphAI 團隊
312
- - [Claude Code](https://github.com/anthropics/claude-code) by Anthropic
313
- - 原始靈感來源:[awesome-llm-apps](https://github.com/Shubhamsaboo/awesome-llm-apps)
314
-
315
- ## 🔗 連結
88
+ ## 連結
316
89
 
317
- - **NPM 套件**:即將推出
318
90
  - **GitHub**:https://github.com/kevintseng/toonify-mcp
319
91
  - **問題回報**:https://github.com/kevintseng/toonify-mcp/issues
320
92
  - **MCP 文件**:https://code.claude.com/docs/mcp
321
93
 
322
- ---
94
+ ## 授權
323
95
 
324
- **⭐ 如果這個工具幫您節省了 API 成本,歡迎給個星星!**
96
+ MIT License - 詳見 [LICENSE](LICENSE)
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "toonify-mcp",
3
- "version": "0.1.0",
3
+ "version": "0.1.1",
4
4
  "description": "MCP server for TOON format token optimization - works with any MCP client (Claude Code, ChatGPT, etc.)",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
package/jest.config.js DELETED
@@ -1,23 +0,0 @@
1
- /** @type {import('jest').Config} */
2
- export default {
3
- preset: 'ts-jest/presets/default-esm',
4
- testEnvironment: 'node',
5
- extensionsToTreatAsEsm: ['.ts'],
6
- moduleNameMapper: {
7
- '^(\\.{1,2}/.*)\\.js$': '$1',
8
- },
9
- transform: {
10
- '^.+\\.tsx?$': [
11
- 'ts-jest',
12
- {
13
- useESM: true,
14
- },
15
- ],
16
- },
17
- transformIgnorePatterns: [
18
- 'node_modules/(?!(@toon-format|tiktoken)/)',
19
- ],
20
- testMatch: ['**/tests/**/*.test.ts'],
21
- collectCoverageFrom: ['src/**/*.ts'],
22
- coveragePathIgnorePatterns: ['/node_modules/', '/dist/'],
23
- };
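The deleted jest config's `moduleNameMapper` entry (`'^(\\.{1,2}/.*)\\.js$': '$1'`) is worth unpacking: ESM TypeScript source imports relative paths with a `.js` extension, and the mapping strips it so ts-jest can resolve the `.ts` source. A minimal sketch of that mapping (`mapSpecifier` is a hypothetical helper for illustration):

```typescript
// Strip the ".js" extension from relative specifiers, as the deleted jest
// moduleNameMapper entry does; bare package specifiers pass through untouched.
function mapSpecifier(spec: string): string {
  const m = spec.match(/^(\.{1,2}\/.*)\.js$/);
  return m ? m[1] : spec;
}

console.log(mapSpecifier('./server/mcp-server.js')); // → ./server/mcp-server
console.log(mapSpecifier('@toon-format/toon'));      // → @toon-format/toon
```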
package/src/index.ts DELETED
@@ -1,23 +0,0 @@
1
- #!/usr/bin/env node
2
-
3
- /**
4
- * Claude Code Toonify MCP Server
5
- *
6
- * Optimizes token usage by converting structured data to TOON format
7
- * before sending to Claude API, achieving 60%+ token reduction.
8
- */
9
-
10
- import { ToonifyMCPServer } from './server/mcp-server.js';
11
-
12
- async function main() {
13
- const server = new ToonifyMCPServer();
14
-
15
- try {
16
- await server.start();
17
- } catch (error) {
18
- console.error('Failed to start Toonify MCP server:', error);
19
- process.exit(1);
20
- }
21
- }
22
-
23
- main();
package/src/metrics/metrics-collector.ts DELETED
@@ -1,125 +0,0 @@
1
- /**
2
- * MetricsCollector: Local-only metrics tracking
3
- */
4
-
5
- import { promises as fs } from 'fs';
6
- import path from 'path';
7
- import os from 'os';
8
- import type { TokenStats } from '../optimizer/types.js';
9
-
10
- export interface OptimizationMetric {
11
- timestamp: string;
12
- toolName: string;
13
- originalTokens: number;
14
- optimizedTokens: number;
15
- savings: number;
16
- savingsPercentage: number;
17
- wasOptimized: boolean;
18
- format?: string;
19
- reason?: string;
20
- }
21
-
22
- export class MetricsCollector {
23
- private statsPath: string;
24
-
25
- constructor() {
26
- // Store in user's home directory, not in project
27
- this.statsPath = path.join(
28
- os.homedir(),
29
- '.claude',
30
- 'token_stats.json'
31
- );
32
- }
33
-
34
- /**
35
- * Record an optimization attempt
36
- */
37
- async record(metric: OptimizationMetric): Promise<void> {
38
- try {
39
- const stats = await this.loadStats();
40
-
41
- // Update aggregated stats
42
- stats.totalRequests++;
43
- if (metric.wasOptimized) {
44
- stats.optimizedRequests++;
45
- }
46
-
47
- stats.tokensBeforeOptimization += metric.originalTokens;
48
- stats.tokensAfterOptimization += metric.optimizedTokens;
49
- stats.totalSavings += metric.savings;
50
-
51
- // Recalculate average
52
- stats.averageSavingsPercentage =
53
- stats.optimizedRequests > 0
54
- ? (stats.totalSavings / stats.tokensBeforeOptimization) * 100
55
- : 0;
56
-
57
- await this.saveStats(stats);
58
- } catch (error) {
59
- // Silent failure - metrics should never break functionality
60
- console.error('Failed to record metrics:', error);
61
- }
62
- }
63
-
64
- /**
65
- * Get current statistics
66
- */
67
- async getStats(): Promise<TokenStats> {
68
- return await this.loadStats();
69
- }
70
-
71
- /**
72
- * Load stats from disk
73
- */
74
- private async loadStats(): Promise<TokenStats> {
75
- try {
76
- // Ensure directory exists
77
- await fs.mkdir(path.dirname(this.statsPath), { recursive: true });
78
-
79
- const data = await fs.readFile(this.statsPath, 'utf-8');
80
- return JSON.parse(data);
81
- } catch {
82
- // Return empty stats if file doesn't exist
83
- return {
84
- totalRequests: 0,
85
- optimizedRequests: 0,
86
- tokensBeforeOptimization: 0,
87
- tokensAfterOptimization: 0,
88
- totalSavings: 0,
89
- averageSavingsPercentage: 0,
90
- };
91
- }
92
- }
93
-
94
- /**
95
- * Save stats to disk
96
- */
97
- private async saveStats(stats: TokenStats): Promise<void> {
98
- await fs.writeFile(
99
- this.statsPath,
100
- JSON.stringify(stats, null, 2),
101
- 'utf-8'
102
- );
103
- }
104
-
105
- /**
106
- * Format stats as dashboard
107
- */
108
- async formatDashboard(): Promise<string> {
109
- const stats = await this.getStats();
110
-
111
- return `
112
- 📊 Token Optimization Stats
113
- ━━━━━━━━━━━━━━━━━━━━━━━━
114
- Total Requests: ${stats.totalRequests}
115
- Optimized: ${stats.optimizedRequests} (${((stats.optimizedRequests / stats.totalRequests) * 100).toFixed(1)}%)
116
-
117
- Tokens Before: ${stats.tokensBeforeOptimization.toLocaleString()}
118
- Tokens After: ${stats.tokensAfterOptimization.toLocaleString()}
119
- Total Savings: ${stats.totalSavings.toLocaleString()} (${stats.averageSavingsPercentage.toFixed(1)}%)
120
-
121
- 💰 Cost Savings (at $3/1M input tokens):
122
- $${((stats.totalSavings / 1_000_000) * 3).toFixed(2)} saved
123
- `.trim();
124
- }
125
- }
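Note how the deleted `record()` recomputes `averageSavingsPercentage` from cumulative token totals rather than averaging per-request percentages, so large payloads dominate the figure. A standalone sketch of that arithmetic (`Stats`, `addSample`, and `savingsPct` are illustrative names; the second sample below is a made-up payload):

```typescript
// Cumulative-savings arithmetic as in MetricsCollector.record():
// the percentage is recomputed over token totals, not averaged per request.
interface Stats { before: number; after: number; }

function addSample(s: Stats, originalTokens: number, optimizedTokens: number): Stats {
  return { before: s.before + originalTokens, after: s.after + optimizedTokens };
}

function savingsPct(s: Stats): number {
  return s.before > 0 ? ((s.before - s.after) / s.before) * 100 : 0;
}

let s: Stats = { before: 0, after: 0 };
s = addSample(s, 142, 57);    // the README's products example
s = addSample(s, 1000, 400);  // hypothetical larger payload
console.log(savingsPct(s).toFixed(1)); // → 60.0
```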
package/src/optimizer/token-optimizer.ts DELETED
@@ -1,214 +0,0 @@
1
- /**
2
- * TokenOptimizer: Core optimization logic using Toonify
3
- */
4
-
5
- import { encode as toonEncode, decode as toonDecode } from '@toon-format/toon';
6
- import { encoding_for_model } from 'tiktoken';
7
- import yaml from 'yaml';
8
- import type {
9
- OptimizationResult,
10
- ToolMetadata,
11
- StructuredData,
12
- OptimizationConfig
13
- } from './types.js';
14
-
15
- export class TokenOptimizer {
16
- private config: OptimizationConfig;
17
- private tokenEncoder;
18
-
19
- constructor(config: Partial<OptimizationConfig> = {}) {
20
- this.config = {
21
- enabled: true,
22
- minTokensThreshold: 50,
23
- minSavingsThreshold: 30,
24
- maxProcessingTime: 50,
25
- skipToolPatterns: [],
26
- ...config
27
- };
28
-
29
- // Use Claude tokenizer
30
- this.tokenEncoder = encoding_for_model('gpt-4');
31
- }
32
-
33
- /**
34
- * Main optimization method
35
- */
36
- async optimize(
37
- content: string,
38
- metadata?: ToolMetadata
39
- ): Promise<OptimizationResult> {
40
- const startTime = Date.now();
41
-
42
- // Quick path: skip if disabled or content too small
43
- if (!this.config.enabled || content.length < 200) {
44
- return {
45
- optimized: false,
46
- originalContent: content,
47
- originalTokens: this.countTokens(content),
48
- reason: 'Content too small'
49
- };
50
- }
51
-
52
- // Skip if tool matches skip patterns
53
- if (metadata?.toolName && this.shouldSkipTool(metadata.toolName)) {
54
- return {
55
- optimized: false,
56
- originalContent: content,
57
- originalTokens: this.countTokens(content),
58
- reason: `Tool ${metadata.toolName} in skip list`
59
- };
60
- }
61
-
62
- // Detect structured data
63
- const structuredData = this.detectStructuredData(content);
64
- if (!structuredData) {
65
- return {
66
- optimized: false,
67
- originalContent: content,
68
- originalTokens: this.countTokens(content),
69
- reason: 'Not structured data'
70
- };
71
- }
72
-
73
- try {
74
- // Convert to TOON format
75
- const toonContent = toonEncode(structuredData.data);
76
-
77
- // Count tokens
78
- const originalTokens = this.countTokens(content);
79
- const optimizedTokens = this.countTokens(toonContent);
80
-
81
- // Calculate savings
82
- const tokenSavings = originalTokens - optimizedTokens;
83
- const savingsPercentage = (tokenSavings / originalTokens) * 100;
84
-
85
- // Check if worth using
86
- if (savingsPercentage < this.config.minSavingsThreshold) {
87
- return {
88
- optimized: false,
89
- originalContent: content,
90
- originalTokens,
91
- reason: `Savings too low: ${savingsPercentage.toFixed(1)}%`
92
- };
93
- }
94
-
95
- // Check processing time
96
- const elapsed = Date.now() - startTime;
97
- if (elapsed > this.config.maxProcessingTime) {
98
- return {
99
- optimized: false,
100
- originalContent: content,
101
- originalTokens,
102
- reason: `Processing timeout: ${elapsed}ms`
103
- };
104
- }
105
-
106
- return {
107
- optimized: true,
108
- originalContent: content,
109
- optimizedContent: toonContent,
110
- originalTokens,
111
- optimizedTokens,
112
- savings: {
113
- tokens: tokenSavings,
114
- percentage: savingsPercentage
115
- },
116
- format: structuredData.type
117
- };
118
-
119
- } catch (error) {
120
- // Silent fallback on error
121
- return {
122
- optimized: false,
123
- originalContent: content,
124
- originalTokens: this.countTokens(content),
125
- reason: `Error: ${error instanceof Error ? error.message : 'Unknown'}`
126
- };
127
- }
128
- }
129
-
130
- /**
131
- * Detect if content is structured data (JSON/CSV/YAML)
132
- */
133
- private detectStructuredData(content: string): StructuredData | null {
134
- // Try JSON first
135
- try {
136
- const data = JSON.parse(content);
137
- if (typeof data === 'object' && data !== null) {
138
- return { type: 'json', data, confidence: 1.0 };
139
- }
140
- } catch {}
141
-
142
- // Try YAML
143
- try {
144
- const data = yaml.parse(content);
145
- if (typeof data === 'object' && data !== null) {
146
- return { type: 'yaml', data, confidence: 0.9 };
147
- }
148
- } catch {}
149
-
150
- // Try CSV (simple heuristic)
151
- if (this.looksLikeCSV(content)) {
152
- try {
153
- const data = this.parseSimpleCSV(content);
154
- return { type: 'csv', data, confidence: 0.8 };
155
- } catch {}
156
- }
157
-
158
- return null;
159
- }
160
-
161
- /**
162
- * Simple CSV detection heuristic
163
- */
164
- private looksLikeCSV(content: string): boolean {
165
- const lines = content.split('\n').filter(l => l.trim());
166
- if (lines.length < 2) return false;
167
-
168
- const firstLineCommas = (lines[0].match(/,/g) || []).length;
169
- if (firstLineCommas === 0) return false;
170
-
171
- // Check if most lines have similar comma count
172
- let matchingLines = 0;
173
- for (let i = 1; i < Math.min(lines.length, 10); i++) {
174
- const commas = (lines[i].match(/,/g) || []).length;
175
- if (commas === firstLineCommas) matchingLines++;
176
- }
177
-
178
- return matchingLines >= Math.min(lines.length - 1, 7);
179
- }
180
-
181
- /**
182
- * Parse simple CSV to array of objects
183
- */
184
- private parseSimpleCSV(content: string): any[] {
185
- const lines = content.split('\n').filter(l => l.trim());
186
- const headers = lines[0].split(',').map(h => h.trim());
187
-
188
- return lines.slice(1).map(line => {
189
- const values = line.split(',').map(v => v.trim());
190
- const obj: any = {};
191
- headers.forEach((header, i) => {
192
- obj[header] = values[i] || '';
193
- });
194
- return obj;
195
- });
196
- }
197
-
198
- /**
199
- * Count tokens in text
200
- */
201
- private countTokens(text: string): number {
202
- return this.tokenEncoder.encode(text).length;
203
- }
204
-
205
- /**
206
- * Check if tool should be skipped
207
- */
208
- private shouldSkipTool(toolName: string): boolean {
209
- return this.config.skipToolPatterns?.some(pattern => {
210
- const regex = new RegExp(pattern);
211
- return regex.test(toolName);
212
- }) ?? false;
213
- }
214
- }
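The deleted `looksLikeCSV()` heuristic above decides when untyped text gets the CSV path: the first line must contain commas, and at least `min(lines - 1, 7)` of the next (up to nine) lines must match its comma count. Extracted here as a self-contained, runnable sketch:

```typescript
// The deleted looksLikeCSV() heuristic, verbatim in logic: CSV is assumed
// when enough of the leading lines share the first line's comma count.
function looksLikeCSV(content: string): boolean {
  const lines = content.split('\n').filter(l => l.trim());
  if (lines.length < 2) return false;

  const firstLineCommas = (lines[0].match(/,/g) || []).length;
  if (firstLineCommas === 0) return false;

  let matchingLines = 0;
  for (let i = 1; i < Math.min(lines.length, 10); i++) {
    const commas = (lines[i].match(/,/g) || []).length;
    if (commas === firstLineCommas) matchingLines++;
  }
  return matchingLines >= Math.min(lines.length - 1, 7);
}

console.log(looksLikeCSV('id,name\n1,Ada\n2,Linus'));       // → true
console.log(looksLikeCSV('plain prose\nwith, one comma'));  // → false
```

Only the first ten lines are sampled, so a long file is accepted or rejected in constant time before the full parse is attempted.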
package/src/optimizer/types.ts DELETED
@@ -1,46 +0,0 @@
1
- /**
2
- * Type definitions for token optimization
3
- */
4
-
5
- export interface OptimizationResult {
6
- optimized: boolean;
7
- originalContent: string;
8
- optimizedContent?: string;
9
- originalTokens: number;
10
- optimizedTokens?: number;
11
- savings?: {
12
- tokens: number;
13
- percentage: number;
14
- };
15
- format?: 'json' | 'csv' | 'yaml' | 'unknown';
16
- reason?: string; // Why optimization was skipped
17
- }
18
-
19
- export interface ToolMetadata {
20
- toolName: string;
21
- contentType?: string;
22
- size: number;
23
- }
24
-
25
- export interface StructuredData {
26
- type: 'json' | 'csv' | 'yaml';
27
- data: any;
28
- confidence: number;
29
- }
30
-
31
- export interface OptimizationConfig {
32
- enabled: boolean;
33
- minTokensThreshold: number; // Only optimize if content > N tokens
34
- minSavingsThreshold: number; // Only use if savings > N%
35
- maxProcessingTime: number; // Max ms to spend optimizing
36
- skipToolPatterns?: string[]; // Tool names to skip
37
- }
38
-
39
- export interface TokenStats {
40
- totalRequests: number;
41
- optimizedRequests: number;
42
- tokensBeforeOptimization: number;
43
- tokensAfterOptimization: number;
44
- totalSavings: number;
45
- averageSavingsPercentage: number;
46
- }
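The `savings` object in `OptimizationResult` above is presumably derived from the two token counts. A minimal sketch of that arithmetic (the helper name is hypothetical, not part of the package):

```typescript
// Hypothetical helper showing how the `savings` field of OptimizationResult
// relates to originalTokens and optimizedTokens.
function computeSavings(originalTokens: number, optimizedTokens: number) {
  const tokens = originalTokens - optimizedTokens;
  // Guard against division by zero for empty content
  const percentage = originalTokens > 0 ? (tokens / originalTokens) * 100 : 0;
  return { tokens, percentage };
}

console.log(computeSavings(200, 50)); // { tokens: 150, percentage: 75 }
```

This is also the figure that `minSavingsThreshold` in `OptimizationConfig` would be compared against: if the percentage falls below the threshold, the optimizer keeps the original content and records a `reason`.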
@@ -1,134 +0,0 @@
-/**
- * ToonifyMCPServer: MCP server that wraps tool results with token optimization
- */
-
-import { Server } from '@modelcontextprotocol/sdk/server/index.js';
-import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
-import {
-  CallToolRequestSchema,
-  ListToolsRequestSchema,
-} from '@modelcontextprotocol/sdk/types.js';
-import { TokenOptimizer } from '../optimizer/token-optimizer.js';
-import { MetricsCollector } from '../metrics/metrics-collector.js';
-
-export class ToonifyMCPServer {
-  private server: Server;
-  private optimizer: TokenOptimizer;
-  private metrics: MetricsCollector;
-
-  constructor() {
-    this.server = new Server(
-      {
-        name: 'claude-code-toonify',
-        version: '0.1.0',
-      },
-      {
-        capabilities: {
-          tools: {},
-        },
-      }
-    );
-
-    this.optimizer = new TokenOptimizer();
-    this.metrics = new MetricsCollector();
-
-    this.setupHandlers();
-  }
-
-  private setupHandlers() {
-    // List available tools
-    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
-      tools: [
-        {
-          name: 'optimize_content',
-          description: 'Optimize structured data content for token efficiency using TOON format',
-          inputSchema: {
-            type: 'object',
-            properties: {
-              content: {
-                type: 'string',
-                description: 'The content to optimize (JSON, CSV, or YAML)',
-              },
-              toolName: {
-                type: 'string',
-                description: 'Name of the tool that generated this content',
-              },
-            },
-            required: ['content'],
-          },
-        },
-        {
-          name: 'get_stats',
-          description: 'Get token optimization statistics',
-          inputSchema: {
-            type: 'object',
-            properties: {},
-          },
-        },
-      ],
-    }));
-
-    // Handle tool calls
-    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
-      const { name, arguments: args } = request.params;
-
-      switch (name) {
-        case 'optimize_content': {
-          const { content, toolName } = args as {
-            content: string;
-            toolName?: string;
-          };
-
-          const result = await this.optimizer.optimize(content, {
-            toolName: toolName || 'unknown',
-            size: content.length,
-          });
-
-          // Record metrics
-          await this.metrics.record({
-            timestamp: new Date().toISOString(),
-            toolName: toolName || 'unknown',
-            originalTokens: result.originalTokens,
-            optimizedTokens: result.optimizedTokens || result.originalTokens,
-            savings: result.savings?.tokens || 0,
-            savingsPercentage: result.savings?.percentage || 0,
-            wasOptimized: result.optimized,
-            format: result.format,
-            reason: result.reason,
-          });
-
-          return {
-            content: [
-              {
-                type: 'text',
-                text: JSON.stringify(result, null, 2),
-              },
-            ],
-          };
-        }
-
-        case 'get_stats': {
-          const stats = await this.metrics.getStats();
-          return {
-            content: [
-              {
-                type: 'text',
-                text: JSON.stringify(stats, null, 2),
-              },
-            ],
-          };
-        }
-
-        default:
-          throw new Error(`Unknown tool: ${name}`);
-      }
-    });
-  }
-
-  async start() {
-    const transport = new StdioServerTransport();
-    await this.server.connect(transport);
-
-    console.error('Toonify MCP Server running on stdio');
-  }
-}
package/test-mcp.json DELETED
@@ -1,5 +0,0 @@
-{
-  "jsonrpc": "2.0",
-  "id": 1,
-  "method": "tools/list"
-}
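The deleted `test-mcp.json` was a manual smoke-test fixture: a single JSON-RPC 2.0 `tools/list` request, presumably meant to be piped into the stdio server's stdin. The same request can be built programmatically:

```typescript
// The JSON-RPC 2.0 request the deleted test-mcp.json fixture contained,
// built programmatically instead of kept as a checked-in file.
const listToolsRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
};

// Serialized form, suitable for writing to the server's stdin
const payload = JSON.stringify(listToolsRequest);
console.log(payload); // {"jsonrpc":"2.0","id":1,"method":"tools/list"}
```

In practice the payload would be piped into the running server process over stdio; the exact entrypoint path depends on the package's build output and is not shown in this diff.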