mobile-device-mcp 0.2.2 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +322 -225
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,225 +1,322 @@
- # mobile-device-mcp
-
- MCP server that gives AI coding assistants (Claude Code, Cursor, Windsurf) the ability to **see and interact with mobile devices**. 34 tools for screenshots, UI inspection, touch interaction, AI-powered visual analysis, and Flutter widget tree inspection.
-
- > AI assistants can read your code but can't see your phone. This fixes that.
-
- ## The Problem
-
- Web developers have browser DevTools, Playwright, and Puppeteer — AI assistants can click around, take screenshots, and verify fixes. Mobile developers? They're stuck manually screenshotting, copying logs, and describing what's on screen. They're **human middleware** between the AI and the device.
-
- ## What This Does
-
- ```
- Developer: "The login button doesn't work"
-
- Without this tool: With this tool:
- 1. Manually screenshot 1. AI calls take_screenshot → sees the screen
- 2. Paste into AI chat 2. AI calls smart_tap("login button") → taps it
- 3. AI guesses what's wrong 3. AI calls verify_screen("error message shown") → sees result
- 4. Apply fix, rebuild 4. AI calls visual_diff → confirms fix worked
- 5. Repeat 4-5 times 5. Done.
- ```
-
- ## Quick Start
-
- ### Prerequisites
- - Node.js 18+
- - Android device/emulator connected via ADB
- - ADB installed (Android SDK Platform Tools)
-
- ### Setup (One-time, 30 seconds)
-
- 1. **Get a Google AI key** (free tier available): [aistudio.google.com/apikey](https://aistudio.google.com/apikey)
-
- 2. **Add `.mcp.json` to your project root:**
-
- ```json
- {
-   "mcpServers": {
-     "mobile-device": {
-       "type": "stdio",
-       "command": "npx",
-       "args": ["-y", "mobile-device-mcp"],
-       "env": {
-         "GOOGLE_API_KEY": "your-google-api-key"
-       }
-     }
-   }
- }
- ```
-
- 3. **Open your AI coding assistant** from that directory. That's it.
-
- The server starts and stops automatically — you never run it manually. Your AI assistant manages it as a background process via the MCP protocol.
-
- ### Verify It Works
-
- **Claude Code:** type `/mcp` — you should see `mobile-device: Connected`
-
- **Cursor:** check MCP panel in settings
-
- Then just talk to your phone:
-
- ```
- You: "Open my app, tap the login button, type test@email.com in the email field"
- AI: [takes screenshot → sees the screen → smart_tap("login button") → smart_type("email field", "test@email.com")]
-
- You: "Find all the bugs on this screen"
- AI: [analyze_screen → inspects layout, checks for overflow, missing labels, broken states]
-
- You: "Navigate to settings and verify dark mode works"
- AI: [smart_tap("settings") → take_screenshot → smart_tap("dark mode toggle") → visual_diff → reports result]
- ```
-
- No test scripts. No manual screenshots. Just describe what you want in plain English.
-
- ### Works with Any AI Coding Assistant
-
- | Tool | Config file | Docs |
- |------|------------|------|
- | **Claude Code** | `.mcp.json` in project root | [claude.ai/docs](https://claude.ai/docs) |
- | **Cursor** | `.cursor/mcp.json` | [cursor.com/docs](https://cursor.com/docs) |
- | **VS Code + Copilot** | MCP settings | [code.visualstudio.com](https://code.visualstudio.com) |
- | **Windsurf** | MCP settings | [windsurf.com](https://windsurf.com) |
-
- All use the same JSON config — just put it in the right file for your editor.
-
- ### Drop Into Any Project
-
- Copy `.mcp.json` into any mobile project — Flutter, React Native, Kotlin, Swift — and your AI assistant gets device superpowers in that directory. No global install needed.
-
- ## Tools (34 total)
-
- ### Phase 1 — Device Control (18 tools)
-
- | Tool | What it does |
- |------|-------------|
- | `list_devices` | List all connected Android devices/emulators |
- | `get_device_info` | Model, manufacturer, Android version, SDK level |
- | `get_screen_size` | Screen resolution in pixels |
- | `take_screenshot` | Capture screenshot (PNG or JPEG, configurable quality & resize) |
- | `get_ui_elements` | Get the accessibility/UI element tree as structured JSON |
- | `tap` | Tap at coordinates |
- | `double_tap` | Double tap at coordinates |
- | `long_press` | Long press at coordinates |
- | `swipe` | Swipe between two points |
- | `type_text` | Type text into the focused field |
- | `press_key` | Press a key (home, back, enter, volume, etc.) |
- | `list_apps` | List installed apps |
- | `get_current_app` | Get the foreground app |
- | `launch_app` | Launch an app by package name |
- | `stop_app` | Force stop an app |
- | `install_app` | Install an APK |
- | `uninstall_app` | Uninstall an app |
- | `get_logs` | Get logcat entries with filtering |
-
- ### Phase 2 — AI Visual Analysis (8 tools)
-
- These tools use AI vision (Claude or Gemini) to understand what's on screen. Requires `ANTHROPIC_API_KEY` or `GOOGLE_API_KEY`.
-
- | Tool | What it does |
- |------|-------------|
- | `analyze_screen` | AI describes the screen: app name, screen type, interactive elements, visible text, suggestions |
- | `find_element` | Find a UI element by description: *"the login button"*, *"email input field"* |
- | `smart_tap` | Find an element by description and tap it in one step |
- | `smart_type` | Find an input field by description, focus it, and type text |
- | `suggest_actions` | Plan actions to achieve a goal: *"log into the app"*, *"add item to cart"* |
- | `visual_diff` | Compare current screen with a previous screenshot — what changed? |
- | `extract_text` | Extract all visible text from the screen (AI-powered OCR) |
- | `verify_screen` | Verify an assertion: *"the login was successful"*, *"error message is showing"* |
-
- ### Phase 3 — Flutter Widget Tree (8 tools)
-
- These tools connect to a running Flutter app in debug/profile mode via the Dart VM Service Protocol and map every widget to its source code location (`file:line`).
-
- | Tool | What it does |
- |------|-------------|
- | `flutter_connect` | Discover and connect to a running Flutter app on the device |
- | `flutter_disconnect` | Disconnect from the Flutter app and clean up resources |
- | `flutter_get_widget_tree` | Get the full widget tree (summary or detailed) |
- | `flutter_get_widget_details` | Get detailed properties of a specific widget by ID |
- | `flutter_find_widget` | Search the widget tree by type, text, or description |
- | `flutter_get_source_map` | Map every widget to its source code location (file:line:column) |
- | `flutter_screenshot_widget` | Screenshot a specific widget in isolation |
- | `flutter_debug_paint` | Toggle debug paint overlay (shows widget boundaries & padding) |
-
- ## Performance
-
- The server is optimized to minimize latency and AI token costs:
-
- - **3-tier element search**: local text match (<1ms) → cached AI → fresh AI. `smart_tap` is 37x faster than naive AI calls.
- - **Screenshot compression**: AI tools auto-compress to JPEG q=80, 720w — **65% smaller** (251KB → 88KB) with no loss of AI analysis quality. Saves ~55K tokens per screenshot.
- - **Parallel capture**: Screenshot + UI tree fetched simultaneously via `Promise.all()`.
- - **TTL caching**: 3-second cache avoids redundant ADB calls for rapid-fire tool usage.
-
- ## Environment Variables
-
- | Variable | Description | Default |
- |----------|-------------|---------|
- | `ANTHROPIC_API_KEY` | Anthropic API key for Claude vision | — |
- | `GOOGLE_API_KEY` or `GEMINI_API_KEY` | Google API key for Gemini vision (recommended — cheapest) | — |
- | `MCP_AI_PROVIDER` | Force AI provider: `"anthropic"` or `"google"` | Auto-detected |
- | `MCP_AI_MODEL` | Override AI model | `gemini-2.5-flash` / `claude-sonnet-4-20250514` |
- | `MCP_ADB_PATH` | Custom ADB binary path | Auto-discovered |
- | `MCP_DEFAULT_DEVICE` | Default device serial | Auto-discovered |
- | `MCP_SCREENSHOT_FORMAT` | `"png"` or `"jpeg"` | `jpeg` |
- | `MCP_SCREENSHOT_QUALITY` | JPEG quality (1-100) | `80` |
- | `MCP_SCREENSHOT_MAX_WIDTH` | Resize screenshots to this max width | `720` |
- | `MCP_AI_SCREENSHOT` | Send screenshots to AI (`"true"`/`"false"`) | `true` |
- | `MCP_AI_UITREE` | Send UI tree to AI (`"true"`/`"false"`) | `true` |
-
- ## Architecture
-
- ```
- src/
- ├── index.ts                 # CLI entry point (auto-discovery, env config)
- ├── server.ts                # MCP server factory
- ├── types.ts                 # Shared interfaces
- ├── drivers/android/         # ADB driver (DeviceDriver implementation)
- │   ├── adb.ts               # Low-level ADB command wrapper
- │   └── index.ts             # AndroidDriver class
- ├── tools/                   # MCP tool registrations
- │   ├── device-tools.ts      # Device management
- │   ├── screen-tools.ts      # Screenshots & UI inspection
- │   ├── interaction-tools.ts # Touch, type, keys
- │   ├── app-tools.ts         # App management
- │   ├── log-tools.ts         # Logcat
- │   ├── ai-tools.ts          # AI-powered tools
- │   └── flutter-tools.ts     # Flutter widget inspection tools
- ├── drivers/flutter/         # Dart VM Service driver
- │   ├── index.ts             # FlutterDriver (discovery, inspection, source mapping)
- │   └── vm-service.ts        # JSON-RPC 2.0 WebSocket client (DDS redirect handling)
- ├── ai/                      # AI visual analysis engine
- │   ├── client.ts            # Multi-provider client (Anthropic + Google)
- │   ├── prompts.ts           # System prompts & UI element summarizer
- │   ├── analyzer.ts          # ScreenAnalyzer orchestrator
- │   └── element-search.ts    # Local element search (no AI needed)
- └── utils/
-     ├── discovery.ts         # ADB auto-discovery
-     └── image.ts             # PNG parsing, JPEG compression, bilinear resize
- ```
-
- ## Roadmap
-
- - [x] Phase 1: Android ADB device control (18 tools)
- - [x] Phase 2: AI visual analysis layer (8 tools)
- - [x] Multi-provider AI (Anthropic Claude + Google Gemini)
- - [x] Performance optimization (3-tier search, caching, parallel capture)
- - [x] Screenshot compression pipeline (JPEG, resize, configurable quality)
- - [x] npm publish (`npx mobile-device-mcp`)
- - [x] Phase 3: Flutter widget tree integration (8 tools, Dart VM Service Protocol)
- - [ ] Phase 4: iOS support (simulators via xcrun simctl, devices via idevice)
- - [ ] Phase 5: Monetization (license keys, usage analytics)
- - [ ] Multi-device orchestration
-
- ## Tested On
-
- - Pixel 8 (Android 16, SDK 36): 44/44 tests passed (22 device + 10 AI + 12 Flutter)
- - Flutter 3.41.3, metroping app (debug mode)
- - Google Gemini 2.5 Flash
- - Windows 11 + wireless ADB
-
- ## License
-
- MIT
+ # mobile-device-mcp
+
+ [![npm version](https://img.shields.io/npm/v/mobile-device-mcp)](https://www.npmjs.com/package/mobile-device-mcp)
+ [![npm downloads](https://img.shields.io/npm/dm/mobile-device-mcp)](https://www.npmjs.com/package/mobile-device-mcp)
+ [![GitHub stars](https://img.shields.io/github/stars/saranshbamania/mobile-device-mcp)](https://github.com/saranshbamania/mobile-device-mcp)
+ [![License: BSL 1.1](https://img.shields.io/badge/License-BSL%201.1-blue.svg)](LICENSE)
+
+ MCP server that gives AI coding assistants (Claude Code, Cursor, Windsurf) the ability to **see and interact with mobile devices**. 49 tools for screenshots, UI inspection, touch interaction, AI-powered visual analysis, Flutter widget tree inspection, video recording, and test generation.
+
+ > AI assistants can read your code but can't see your phone. This fixes that.
+
+ ## Why This One?
+
+ | Feature | mobile-device-mcp | mobile-next/mobile-mcp | appium/appium-mcp |
+ |---------|:-:|:-:|:-:|
+ | Total tools | **49** | 20 | ~15 |
+ | Setup | `npx` (30 sec) | `npx` | Requires Appium server |
+ | AI visual analysis | **12 tools** (Claude + Gemini) | None | Vision-based finding |
+ | Flutter widget tree | **10 tools** (Dart VM Service) | None | None |
+ | Smart element finding | **4-tier** (<1ms local search) | Accessibility tree only | XPath/selectors |
+ | Companion app (23x faster UI tree) | Yes | No | No |
+ | Video recording | Yes | No | No |
+ | Test script generation | **TS, Python, JSON** | No | Java/TestNG only |
+ | iOS simulator support | Yes | Yes | Yes |
+ | iOS real device | Planned | Yes | Yes |
+ | Screenshot compression | **89%** (251KB -> 28KB) | None | 50-80% |
+ | Multi-provider AI | Claude + Gemini | N/A | Single provider |
+ | Price | Free tier + Pro | Free | Free |
+
+ ## The Problem
+
+ Web developers have browser DevTools, Playwright, and Puppeteer -- AI assistants can click around, take screenshots, and verify fixes. Mobile developers? They're stuck manually screenshotting, copying logs, and describing what's on screen. They're **human middleware** between the AI and the device.
+
+ ## What This Does
+
+ ```
+ Developer: "The login button doesn't work"
+
+ Without this tool: With this tool:
+ 1. Manually screenshot 1. AI calls take_screenshot -> sees the screen
+ 2. Paste into AI chat 2. AI calls smart_tap("login button") -> taps it
+ 3. AI guesses what's wrong 3. AI calls verify_screen("error message shown") -> sees result
+ 4. Apply fix, rebuild 4. AI calls visual_diff -> confirms fix worked
+ 5. Repeat 4-5 times 5. Done.
+ ```
+
+ ## Quick Start
+
+ ### Prerequisites
+ - Node.js 18+
+ - Android device/emulator connected via ADB
+ - ADB installed (Android SDK Platform Tools)
+
+ ### Setup (One-time, 30 seconds)
+
+ 1. **Get a Google AI key** (free tier available): [aistudio.google.com/apikey](https://aistudio.google.com/apikey)
+
+ 2. **Add `.mcp.json` to your project root:**
+
+ ```json
+ {
+   "mcpServers": {
+     "mobile-device": {
+       "type": "stdio",
+       "command": "npx",
+       "args": ["-y", "mobile-device-mcp"],
+       "env": {
+         "GOOGLE_API_KEY": "your-google-api-key"
+       }
+     }
+   }
+ }
+ ```
+
+ 3. **Open your AI coding assistant** from that directory. That's it.
+
+ The server starts and stops automatically -- you never run it manually. Your AI assistant manages it as a background process via the MCP protocol.
+
+ ### Verify It Works
+
+ **Claude Code:** type `/mcp` -- you should see `mobile-device: Connected`
+
+ **Cursor:** check MCP panel in settings
+
+ Then just talk to your phone:
+
+ ```
+ You: "Open my app, tap the login button, type test@email.com in the email field"
+ AI: [takes screenshot -> sees the screen -> smart_tap("login button") -> smart_type("email field", "test@email.com")]
+
+ You: "Find all the bugs on this screen"
+ AI: [analyze_screen -> inspects layout, checks for overflow, missing labels, broken states]
+
+ You: "Navigate to settings and verify dark mode works"
+ AI: [smart_tap("settings") -> take_screenshot -> smart_tap("dark mode toggle") -> visual_diff -> reports result]
+ ```
+
+ No test scripts. No manual screenshots. Just describe what you want in plain English.
+
+ ### Works with Any AI Coding Assistant
+
+ | Tool | Config file | Docs |
+ |------|------------|------|
+ | **Claude Code** | `.mcp.json` in project root | [claude.ai/docs](https://claude.ai/docs) |
+ | **Cursor** | `.cursor/mcp.json` | [cursor.com/docs](https://cursor.com/docs) |
+ | **VS Code + Copilot** | MCP settings | [code.visualstudio.com](https://code.visualstudio.com) |
+ | **Windsurf** | MCP settings | [windsurf.com](https://windsurf.com) |
+
+ All use the same JSON config -- just put it in the right file for your editor.
+
+ ### Drop Into Any Project
+
+ Copy `.mcp.json` into any mobile project -- Flutter, React Native, Kotlin, Swift -- and your AI assistant gets device superpowers in that directory. No global install needed.
+
+ ## Free vs Pro
+
+ <a name="pro"></a>
+
+ ### Free (14 tools) -- no license key needed
+
+ | Tool | What it does |
+ |------|-------------|
+ | `list_devices` | List all connected Android devices/emulators |
+ | `get_device_info` | Model, manufacturer, Android version, SDK level |
+ | `get_screen_size` | Screen resolution in pixels |
+ | `take_screenshot` | Capture screenshot (PNG or JPEG, configurable quality & resize) |
+ | `get_ui_elements` | Get the accessibility/UI element tree as structured JSON |
+ | `tap` | Tap at coordinates |
+ | `double_tap` | Double tap at coordinates |
+ | `long_press` | Long press at coordinates |
+ | `swipe` | Swipe between two points |
+ | `type_text` | Type text into the focused field |
+ | `press_key` | Press a key (home, back, enter, volume, etc.) |
+ | `list_apps` | List installed apps |
+ | `get_current_app` | Get the foreground app |
+ | `get_logs` | Get logcat entries with filtering |
+
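The coordinate-level tools above are thin wrappers over `adb shell input`. A minimal sketch of that argument mapping, assuming a hypothetical wrapper function (the `input` subcommands themselves are standard Android tooling; the real wrapper lives in `src/drivers/android/adb.ts` and may differ):

```typescript
// Hypothetical sketch: how tap/swipe/type_text could map onto `adb shell input`
// argument lists. Not the actual implementation.
type InputAction =
  | { kind: "tap"; x: number; y: number }
  | { kind: "swipe"; x1: number; y1: number; x2: number; y2: number; durationMs: number }
  | { kind: "text"; text: string };

function adbInputArgs(action: InputAction): string[] {
  switch (action.kind) {
    case "tap":
      return ["shell", "input", "tap", String(action.x), String(action.y)];
    case "swipe":
      return [
        "shell", "input", "swipe",
        ...[action.x1, action.y1, action.x2, action.y2, action.durationMs].map(String),
      ];
    case "text":
      // `input text` cannot take literal spaces; Android accepts %s in their place.
      return ["shell", "input", "text", action.text.replace(/ /g, "%s")];
  }
}

console.log(adbInputArgs({ kind: "tap", x: 540, y: 1200 }).join(" "));
// -> shell input tap 540 1200
```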
+ ### Pro (35 additional tools) -- requires license key
+
+ Unlock all 49 tools by setting `MOBILE_MCP_LICENSE_KEY` in your `.mcp.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "mobile-device": {
+       "type": "stdio",
+       "command": "npx",
+       "args": ["-y", "mobile-device-mcp"],
+       "env": {
+         "GOOGLE_API_KEY": "your-google-api-key",
+         "MOBILE_MCP_LICENSE_KEY": "your-license-key"
+       }
+     }
+   }
+ }
+ ```
+
+ #### AI Visual Analysis (12 tools)
+
+ Use AI vision (Claude or Gemini) to understand what's on screen.
+
+ | Tool | What it does |
+ |------|-------------|
+ | `analyze_screen` | AI describes the screen: app name, screen type, interactive elements, visible text, suggestions |
+ | `find_element` | Find a UI element by description: *"the login button"*, *"email input field"* |
+ | `smart_tap` | Find an element by description and tap it in one step |
+ | `smart_type` | Find an input field by description, focus it, and type text |
+ | `suggest_actions` | Plan actions to achieve a goal: *"log into the app"*, *"add item to cart"* |
+ | `visual_diff` | Compare current screen with a previous screenshot -- what changed? |
+ | `extract_text` | Extract all visible text from the screen (AI-powered OCR) |
+ | `verify_screen` | Verify an assertion: *"the login was successful"*, *"error message is showing"* |
+ | `wait_for_settle` | Wait until the screen stops changing |
+ | `wait_for_element` | Wait for a specific element to appear on screen |
+ | `handle_popup` | Detect and dismiss popups, dialogs, permission prompts |
+ | `fill_form` | Fill multiple form fields in one step |
+
+ #### Flutter Widget Tree (10 tools)
+
+ Connect to a running Flutter app via the Dart VM Service Protocol; every widget is mapped to its source code location (`file:line`).
+
+ | Tool | What it does |
+ |------|-------------|
+ | `flutter_connect` | Discover and connect to a running Flutter app on the device |
+ | `flutter_disconnect` | Disconnect from the Flutter app and clean up resources |
+ | `flutter_get_widget_tree` | Get the full widget tree (summary or detailed) |
+ | `flutter_get_widget_details` | Get detailed properties of a specific widget by ID |
+ | `flutter_find_widget` | Search the widget tree by type, text, or description |
+ | `flutter_get_source_map` | Map every widget to its source code location (file:line:column) |
+ | `flutter_screenshot_widget` | Screenshot a specific widget in isolation |
+ | `flutter_debug_paint` | Toggle debug paint overlay (shows widget boundaries & padding) |
+ | `flutter_hot_reload` | Hot reload Flutter app (preserves state) |
+ | `flutter_hot_restart` | Hot restart Flutter app (resets state) |
+
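`flutter_get_source_map` works because, in debug mode, the widget inspector attaches a creation location to each widget. A rough sketch of the mapping step under assumed field shapes (the real logic lives in `src/drivers/flutter/index.ts` and may differ):

```typescript
// Sketch with assumed shapes: turn an inspector node's creation location into
// the file:line:column reference that flutter_get_source_map reports.
interface CreationLocation {
  file: string;   // e.g. "file:///app/lib/main.dart" as the VM service reports it
  line: number;
  column: number;
}

function sourceRef(loc: CreationLocation): string {
  const path = loc.file.replace(/^file:\/\//, ""); // drop the URI scheme if present
  return `${path}:${loc.line}:${loc.column}`;
}

console.log(sourceRef({ file: "file:///app/lib/main.dart", line: 42, column: 7 }));
// -> /app/lib/main.dart:42:7
```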
+ #### iOS Simulator (4 tools)
+
+ macOS only. Control iOS simulators via `xcrun simctl`.
+
+ | Tool | What it does |
+ |------|-------------|
+ | `ios_list_simulators` | List available iOS simulators |
+ | `ios_boot_simulator` | Boot a simulator by name or UDID |
+ | `ios_shutdown_simulator` | Shut down a running simulator |
+ | `ios_screenshot` | Take a screenshot of a simulator |
+
+ #### Video Recording (2 tools)
+
+ | Tool | What it does |
+ |------|-------------|
+ | `record_screen` | Start recording the device screen |
+ | `stop_recording` | Stop recording and save the video |
+
+ #### Test Generation (3 tools)
+
+ | Tool | What it does |
+ |------|-------------|
+ | `start_test_recording` | Start recording your MCP tool calls |
+ | `stop_test_recording` | Stop recording and generate a test script |
+ | `get_recorded_actions` | Get recorded actions as TypeScript, Python, or JSON |
+
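To illustrate the record-then-generate step above, here is a sketch of turning a recorded action list into a TypeScript test. The action shape, helper names (`smartTap`, `smartType`), and output format are illustrative assumptions, not the exact shapes `src/recording/generator.ts` emits:

```typescript
// Illustrative sketch of the record -> generate step; real formats may differ.
type RecordedAction =
  | { tool: "smart_tap"; description: string }
  | { tool: "smart_type"; description: string; text: string };

function toTypeScriptTest(actions: RecordedAction[]): string {
  const body = actions.map((a) =>
    a.tool === "smart_tap"
      ? `  await smartTap(${JSON.stringify(a.description)});`   // hypothetical helper
      : `  await smartType(${JSON.stringify(a.description)}, ${JSON.stringify(a.text)});`
  );
  return ["test(\"recorded flow\", async () => {", ...body, "});"].join("\n");
}

console.log(toTypeScriptTest([
  { tool: "smart_tap", description: "login button" },
  { tool: "smart_type", description: "email field", text: "test@email.com" },
]));
```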
+ #### App Management (4 tools)
+
+ | Tool | What it does |
+ |------|-------------|
+ | `launch_app` | Launch an app by package name |
+ | `stop_app` | Force stop an app |
+ | `install_app` | Install an APK |
+ | `uninstall_app` | Uninstall an app |
+
+ ## Performance
+
+ The server is optimized to minimize latency and AI token costs:
+
+ - **4-tier element search**: companion app (instant) -> local text match (<1ms) -> cached AI -> fresh AI. `smart_tap` is **37x faster** than naive AI calls (205ms vs 7.6s).
+ - **Companion app**: AccessibilityService-based Android app provides the UI tree in 105ms (23x faster than UIAutomator's 2448ms). Auto-installs on first use.
+ - **Screenshot compression**: AI tools auto-compress to JPEG q=60, 400w -- **89% smaller** (251KB -> 28KB) with no loss of AI analysis quality.
+ - **Parallel capture**: Screenshot + UI tree fetched simultaneously via `Promise.all()`.
+ - **TTL caching**: 5-second cache avoids redundant ADB calls for rapid-fire tool usage.
+
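The tier ordering and TTL cache described above can be sketched roughly like this. All names here are hypothetical; the real logic lives in `src/ai/element-search.ts` and `src/ai/analyzer.ts`:

```typescript
// Rough sketch of the tiered lookup: tiers are tried in order and the first
// hit wins, so the expensive AI call only runs when the cheap tiers miss.
type Finder = (description: string) => { x: number; y: number } | undefined;

// TTL cache of the kind used to avoid redundant ADB round-trips.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expires: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}
  get(key: string): T | undefined {
    const e = this.entries.get(key);
    return e && e.expires > this.now() ? e.value : undefined;
  }
  set(key: string, value: T): void {
    this.entries.set(key, { value, expires: this.now() + this.ttlMs });
  }
}

function tieredFind(tiers: Finder[], description: string) {
  for (const tier of tiers) {
    const hit = tier(description);
    if (hit) return hit; // stop at the first (cheapest) tier that resolves
  }
  return undefined;
}

// Usage: the local-match tier resolves, so the stubbed "AI" tier is never called.
let aiCalls = 0;
const found = tieredFind(
  [
    (d) => (d === "login button" ? { x: 540, y: 1200 } : undefined), // local match
    () => { aiCalls++; return { x: 0, y: 0 }; },                     // fresh AI (stub)
  ],
  "login button",
);
console.log(found, aiCalls); // -> { x: 540, y: 1200 } 0
```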
+ ## Environment Variables
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `GOOGLE_API_KEY` or `GEMINI_API_KEY` | Google API key for Gemini vision (recommended) | -- |
+ | `ANTHROPIC_API_KEY` | Anthropic API key for Claude vision | -- |
+ | `MOBILE_MCP_LICENSE_KEY` | License key to unlock Pro tools | -- |
+ | `MCP_AI_PROVIDER` | Force AI provider: `"anthropic"` or `"google"` | Auto-detected |
+ | `MCP_AI_MODEL` | Override AI model | `gemini-2.5-flash` / `claude-sonnet-4-20250514` |
+ | `MCP_ADB_PATH` | Custom ADB binary path | Auto-discovered |
+ | `MCP_DEFAULT_DEVICE` | Default device serial | Auto-discovered |
+ | `MCP_SCREENSHOT_FORMAT` | `"png"` or `"jpeg"` | `jpeg` |
+ | `MCP_SCREENSHOT_QUALITY` | JPEG quality (1-100) | `80` |
+ | `MCP_SCREENSHOT_MAX_WIDTH` | Resize screenshots to this max width | `720` |
+
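A sketch of how the screenshot-related variables in the table above could resolve to their documented defaults (the helper is hypothetical; the actual parsing lives in `src/index.ts`):

```typescript
// Hypothetical helper showing the documented defaults for the screenshot
// variables; clamps quality to the 1-100 range the table specifies.
function screenshotConfig(env: Record<string, string | undefined>) {
  const quality = Number(env.MCP_SCREENSHOT_QUALITY ?? "80");
  return {
    format: env.MCP_SCREENSHOT_FORMAT === "png" ? "png" : "jpeg", // default jpeg
    quality: Math.min(100, Math.max(1, Number.isNaN(quality) ? 80 : quality)),
    maxWidth: Number(env.MCP_SCREENSHOT_MAX_WIDTH ?? "720"),
  };
}

console.log(screenshotConfig({})); // -> { format: 'jpeg', quality: 80, maxWidth: 720 }
```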
+ ## Architecture
+
+ ```
+ src/
+ |-- index.ts                 # CLI entry point (auto-discovery, env config)
+ |-- server.ts                # MCP server factory
+ |-- license.ts               # License validation and tier gating
+ |-- types.ts                 # Shared interfaces
+ |-- drivers/android/         # ADB driver (DeviceDriver implementation)
+ |   |-- adb.ts               # Low-level ADB command wrapper
+ |   |-- companion-client.ts  # TCP client for companion app
+ |   +-- index.ts             # AndroidDriver class (4-strategy UI element retrieval)
+ |-- drivers/flutter/         # Dart VM Service driver
+ |   |-- index.ts             # FlutterDriver (discovery, inspection, source mapping, hot reload)
+ |   +-- vm-service.ts        # JSON-RPC 2.0 WebSocket client (DDS redirect handling)
+ |-- drivers/ios/             # iOS Simulator driver (macOS only)
+ |   |-- index.ts             # IOSSimulatorDriver via xcrun simctl
+ |   +-- simctl.ts            # Low-level simctl command wrapper
+ |-- tools/                   # MCP tool registrations (free + pro gating)
+ |   |-- device-tools.ts      # Device management
+ |   |-- screen-tools.ts      # Screenshots & UI inspection
+ |   |-- interaction-tools.ts # Touch, type, keys
+ |   |-- app-tools.ts         # App management
+ |   |-- log-tools.ts         # Logcat
+ |   |-- ai-tools.ts          # AI-powered tools
+ |   |-- flutter-tools.ts     # Flutter widget inspection
+ |   |-- ios-tools.ts         # iOS simulator tools
+ |   |-- video-tools.ts       # Screen recording
+ |   +-- recording-tools.ts   # Test generation
+ |-- recording/               # Test script generation
+ |   |-- recorder.ts          # ActionRecorder (records MCP tool calls)
+ |   +-- generator.ts         # TestGenerator (TypeScript/Python/JSON output)
+ |-- ai/                      # AI visual analysis engine
+ |   |-- client.ts            # Multi-provider client (Anthropic + Google)
+ |   |-- prompts.ts           # System prompts & UI element summarizer
+ |   |-- analyzer.ts          # ScreenAnalyzer orchestrator (caching, parallel capture)
+ |   +-- element-search.ts    # Local element search (text/alias matching, no AI needed)
+ +-- utils/
+     |-- discovery.ts         # ADB auto-discovery
+     +-- image.ts             # PNG parsing, JPEG compression, bilinear resize
+
+ companion-app/               # Android companion app (Kotlin)
+                              # AccessibilityService + TCP JSON-RPC for fast UI tree
+ ```
+
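The companion channel is described above as TCP JSON-RPC. One plausible framing is newline-delimited JSON-RPC 2.0; the method name and framing here are assumptions for illustration, not the actual protocol of `companion-client.ts`:

```typescript
// Assumed newline-delimited JSON-RPC 2.0 framing for the companion channel.
// The real client (src/drivers/android/companion-client.ts) may frame differently.
interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

function encodeRequest(id: number, method: string, params?: unknown): string {
  const req: RpcRequest = { jsonrpc: "2.0", id, method };
  if (params !== undefined) req.params = params;
  return JSON.stringify(req) + "\n"; // one request per line
}

function decodeResponse(frame: string): { id: number; result?: unknown; error?: unknown } {
  return JSON.parse(frame.trim());
}

const wire = encodeRequest(1, "getUiTree"); // "getUiTree" is a hypothetical method name
console.log(wire.trim());
// -> {"jsonrpc":"2.0","id":1,"method":"getUiTree"}
```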
+ ## Roadmap
+
+ - [ ] iOS physical device support
+ - [ ] Multi-device orchestration
+ - [ ] CI/CD integration
+ - [ ] Cloud device farm support
+
+ ## Tested On
+
+ - **Devices**: Pixel 8 (Android 16), Samsung Galaxy series, Android emulators
+ - **Apps**: Telegram, Instagram, Spotify, WhatsApp, YouTube, Chrome, Settings, and Flutter apps
+ - **AI Providers**: Google Gemini 2.5 Flash, Anthropic Claude
+ - **Platforms**: Windows 11, macOS (iOS simulators)
+ - **Connection**: USB and wireless ADB
+
+ ## License
+
+ [Business Source License 1.1](LICENSE)
+
+ - **Free for individuals and non-commercial use**
+ - **Commercial use requires a paid license**
+ - Converts to Apache 2.0 on March 23, 2030
+
+ See [LICENSE](LICENSE) for full terms.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "mobile-device-mcp",
- "version": "0.2.2",
+ "version": "0.2.3",
  "description": "MCP server that gives AI coding assistants (Claude, Cursor, Windsurf) the ability to see and interact with Android mobile devices via ADB — AI-powered visual inspection, element finding, and device automation",
  "type": "module",
  "main": "dist/index.js",