@elvatis_com/elvatis-mcp 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.env.example +61 -0
- package/LICENSE +190 -0
- package/README.md +471 -0
- package/dist/config.d.ts +39 -0
- package/dist/config.d.ts.map +1 -0
- package/dist/config.js +45 -0
- package/dist/config.js.map +1 -0
- package/dist/index.d.ts +18 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +146 -0
- package/dist/index.js.map +1 -0
- package/dist/spawn.d.ts +12 -0
- package/dist/spawn.d.ts.map +1 -0
- package/dist/spawn.js +55 -0
- package/dist/spawn.js.map +1 -0
- package/dist/ssh.d.ts +28 -0
- package/dist/ssh.d.ts.map +1 -0
- package/dist/ssh.js +171 -0
- package/dist/ssh.js.map +1 -0
- package/dist/tools/codex.d.ts +61 -0
- package/dist/tools/codex.d.ts.map +1 -0
- package/dist/tools/codex.js +63 -0
- package/dist/tools/codex.js.map +1 -0
- package/dist/tools/cron.d.ts +58 -0
- package/dist/tools/cron.d.ts.map +1 -0
- package/dist/tools/cron.js +88 -0
- package/dist/tools/cron.js.map +1 -0
- package/dist/tools/gemini.d.ts +75 -0
- package/dist/tools/gemini.d.ts.map +1 -0
- package/dist/tools/gemini.js +72 -0
- package/dist/tools/gemini.js.map +1 -0
- package/dist/tools/help.d.ts +31 -0
- package/dist/tools/help.d.ts.map +1 -0
- package/dist/tools/help.js +39 -0
- package/dist/tools/help.js.map +1 -0
- package/dist/tools/home.d.ts +108 -0
- package/dist/tools/home.d.ts.map +1 -0
- package/dist/tools/home.js +141 -0
- package/dist/tools/home.js.map +1 -0
- package/dist/tools/local-llm.d.ts +115 -0
- package/dist/tools/local-llm.d.ts.map +1 -0
- package/dist/tools/local-llm.js +123 -0
- package/dist/tools/local-llm.js.map +1 -0
- package/dist/tools/memory.d.ts +52 -0
- package/dist/tools/memory.d.ts.map +1 -0
- package/dist/tools/memory.js +92 -0
- package/dist/tools/memory.js.map +1 -0
- package/dist/tools/openclaw.d.ts +88 -0
- package/dist/tools/openclaw.d.ts.map +1 -0
- package/dist/tools/openclaw.js +119 -0
- package/dist/tools/openclaw.js.map +1 -0
- package/dist/tools/routing-rules.d.ts +21 -0
- package/dist/tools/routing-rules.d.ts.map +1 -0
- package/dist/tools/routing-rules.js +149 -0
- package/dist/tools/routing-rules.js.map +1 -0
- package/dist/tools/splitter.d.ts +49 -0
- package/dist/tools/splitter.d.ts.map +1 -0
- package/dist/tools/splitter.js +375 -0
- package/dist/tools/splitter.js.map +1 -0
- package/package.json +54 -0
package/README.md
ADDED
@@ -0,0 +1,471 @@

# elvatis-mcp

**MCP server for OpenClaw** -- expose your smart home, memory, cron automation, and AI sub-agent orchestration to Claude Desktop, Cursor, Windsurf, and any MCP-compatible AI client.

[](https://www.npmjs.com/package/@elvatis_com/elvatis-mcp)
[](LICENSE)
[](#test-results)
## What is this?

elvatis-mcp connects Claude (or any MCP client) to your infrastructure:

- **Smart home** control via Home Assistant (lights, thermostats, vacuum, sensors)
- **Memory** system with daily logs stored on your OpenClaw server
- **Cron** job management and triggering
- **Multi-LLM orchestration** through 4 AI backends: OpenClaw, Google Gemini, OpenAI Codex, and local LLMs
- **Smart prompt splitting** that analyzes complex requests and routes sub-tasks to the right AI

The key idea: Claude is the orchestrator, but it can delegate specialized work to other AI models. Coding tasks go to Codex. Research goes to Gemini. Simple formatting goes to your local LLM (free, private). Trading and automation go to OpenClaw. And `prompt_split` figures out the routing automatically.

## What is MCP?

[Model Context Protocol](https://modelcontextprotocol.io) is an open standard by Anthropic that lets AI clients connect to external tool servers. Once configured, Claude can directly call your tools without copy-pasting.

---
## Multi-LLM Architecture

```
           You (Claude Desktop / Code / Cursor)
                           |
                MCP Protocol (stdio/HTTP)
                           |
                   elvatis-mcp server
                           |
    +-----------+-----------+-----------+-----------+
    |           |           |           |           |
OpenClaw      Gemini      Codex     Local LLM   Home Asst.
  (SSH)       (CLI)       (CLI)    (HTTP API)  (REST API)
    |           |           |           |           |
 Plugins    1M context    Coding    LM Studio    Lights
 Trading    Multimodal    Files      Ollama      Climate
Automation   Analysis     Debug     llama.cpp    Vacuum
Workflows    Research     Shell     (free!)      Sensors
```

### Sub-Agent Comparison

| Tool | Backend | Transport | Auth | Best for | Cost |
|---|---|---|---|---|---|
| `openclaw_run` | OpenClaw (plugins) | SSH | SSH key | Trading, automations, multi-step workflows | Self-hosted |
| `gemini_run` | Google Gemini | Local CLI | Google login | Long context (1M tokens), multimodal, research | API usage |
| `codex_run` | OpenAI Codex | Local CLI | OpenAI login | Coding, debugging, file editing, shell scripts | API usage |
| `local_llm_run` | LM Studio / Ollama / llama.cpp | HTTP | None | Classification, formatting, extraction, rewriting | **Free** |
### Smart Prompt Splitting

The `prompt_split` tool analyzes complex prompts and breaks them into sub-tasks:

```
User: "Search my memory for TurboQuant notes, summarize with Gemini,
       reformat as JSON locally, then save a summary to memory"

prompt_split returns:
  t1: openclaw_memory_search -- "Search memory for TurboQuant"      (parallel)
  t3: local_llm_run          -- "Reformat raw notes as clean JSON"  (parallel)
  t2: gemini_run             -- "Summarize the key findings"        (after t1)
  t4: openclaw_memory_write  -- "Save summary to today's log"       (after t2, t3)
```

Claude then executes the plan, calling tools in the right order and running parallel tasks concurrently. Three analysis strategies are available, plus an `auto` mode that picks among them:
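The plan shape above can be executed with a small loop: tasks inside a parallel group are independent and run concurrently, while the groups themselves run in order. A minimal sketch (the `Task` and `executePlan` names are illustrative, not the package's API):

```typescript
// Illustrative executor for a split plan: groups run one after another;
// tasks within a group run concurrently. `run` stands in for a real tool call.
type Task = { id: string; run: () => Promise<string> };

async function executePlan(groups: Task[][]): Promise<string[]> {
  const results: string[] = [];
  for (const group of groups) {
    // All tasks in a group are independent, so they can run in parallel.
    const groupResults = await Promise.all(group.map((t) => t.run()));
    results.push(...groupResults);
  }
  return results;
}
```

This mirrors the `Parallel groups: [["t1","t3"],["t2"]]` output shown in the test results below: `t1` and `t3` start together, and `t2` only starts once the first group has finished.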
| Strategy | Speed | Quality | How it works |
|---|---|---|---|
| `heuristic` | Instant | Good for clear prompts | Keyword matching, no LLM call |
| `local` | 5-30s | Better reasoning | Your local LLM analyzes the prompt |
| `gemini` | 5-15s | Best quality | Gemini Flash analyzes the prompt |
| `auto` (default) | Varies | Best available | Short-circuits simple prompts, then tries gemini -> local -> heuristic |
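To make the `heuristic` strategy concrete, here is a minimal sketch of keyword-based routing (the keyword lists and the `routeTask` helper are hypothetical illustrations; the package's actual rules live in `routing-rules.ts`):

```typescript
// Hypothetical sketch of heuristic routing: keyword lists per tool,
// first match wins, with a general-purpose fallback agent.
type AgentTool = "codex_run" | "gemini_run" | "local_llm_run" | "home_light" | "openclaw_run";

const KEYWORD_RULES: Array<[AgentTool, string[]]> = [
  ["codex_run", ["bug", "refactor", "code", "debug", "function"]],
  ["home_light", ["light", "lights", "lamp"]],
  ["gemini_run", ["research", "summarize", "analyze"]],
  ["local_llm_run", ["format", "reformat", "classify", "extract"]],
];

function routeTask(task: string): AgentTool {
  const lower = task.toLowerCase();
  for (const [tool, keywords] of KEYWORD_RULES) {
    if (keywords.some((k) => lower.includes(k))) return tool;
  }
  return "openclaw_run"; // fallback: the general-purpose agent
}
```

The appeal of this strategy is the "Instant" column above: it is pure string matching, so there is no LLM round-trip at all.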
---

## Available Tools (20 total)

### Home Assistant (6 tools)
| Tool | Description |
|---|---|
| `home_get_state` | Read any Home Assistant entity state |
| `home_light` | Control lights: on/off/toggle, brightness, color temperature, RGB |
| `home_climate` | Control Tado thermostats: temperature, HVAC mode |
| `home_scene` | Activate Hue scenes by room |
| `home_vacuum` | Control Roborock vacuum: start, stop, dock, status |
| `home_sensors` | Read all temperature, humidity, and CO2 sensors |
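These tools sit on top of Home Assistant's documented REST API; reading an entity state, for example, is a `GET /api/states/<entity_id>` with a Bearer token. A sketch of the request construction (the `haStateRequest` helper is illustrative, not the package's code):

```typescript
// Build the request for Home Assistant's states endpoint.
// haUrl and token come from HA_URL / HA_TOKEN; entityId is e.g. "light.kitchen".
function haStateRequest(haUrl: string, token: string, entityId: string) {
  return {
    url: `${haUrl.replace(/\/$/, "")}/api/states/${entityId}`,
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
  };
}

// Usage with fetch (requires a reachable Home Assistant instance):
// const { url, headers } = haStateRequest(process.env.HA_URL!, process.env.HA_TOKEN!, "light.kitchen");
// const state = await fetch(url, { headers }).then((r) => r.json());
```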
### Memory (3 tools)
| Tool | Description |
|---|---|
| `openclaw_memory_write` | Write a note to today's daily log |
| `openclaw_memory_read_today` | Read today's memory log |
| `openclaw_memory_search` | Search memory files across the last N days |
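A search across "the last N days" boils down to enumerating date-stamped log names and searching each one. A sketch of the date-key part, assuming a hypothetical `YYYY-MM-DD` naming scheme (the real file layout on the OpenClaw server may differ):

```typescript
// Hypothetical sketch: daily logs keyed by UTC date. Builds the keys
// for the last N days, newest first.
function lastNDayKeys(n: number, today: Date = new Date()): string[] {
  const keys: string[] = [];
  for (let i = 0; i < n; i++) {
    const d = new Date(today.getTime() - i * 24 * 60 * 60 * 1000);
    keys.push(d.toISOString().slice(0, 10)); // "YYYY-MM-DD"
  }
  return keys;
}
```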
### Cron Automation (3 tools)
| Tool | Description |
|---|---|
| `openclaw_cron_list` | List all scheduled OpenClaw cron jobs |
| `openclaw_cron_run` | Trigger a cron job immediately by ID |
| `openclaw_cron_status` | Get scheduler status and recent run history |

### OpenClaw Agent (3 tools)
| Tool | Description |
|---|---|
| `openclaw_run` | Send a prompt to the OpenClaw AI agent (all plugins available) |
| `openclaw_status` | Check if the OpenClaw daemon is running |
| `openclaw_plugins` | List all installed plugins |

### AI Sub-Agents (3 tools)
| Tool | Description |
|---|---|
| `gemini_run` | Send a prompt to Google Gemini via the local CLI. 1M token context. |
| `codex_run` | Send a coding task to OpenAI Codex via the local CLI. |
| `local_llm_run` | Send a prompt to a local LLM (LM Studio, Ollama, llama.cpp). Free, private. |

### Routing and Orchestration (2 tools)
| Tool | Description |
|---|---|
| `mcp_help` | Show routing guide. Pass a task to get a specific tool recommendation. |
| `prompt_split` | Analyze a complex prompt, split into sub-tasks with agent assignments. |

---
## Test Results

All tests run against live services (LM Studio with Deepseek R1 Qwen3 8B, OpenClaw server via SSH).

```
elvatis-mcp integration tests

Local LLM (local_llm_run)

  Model: deepseek/deepseek-r1-0528-qwen3-8b
  Response: "negative"
  Tokens: 401 (prompt: 39, completion: 362)
  PASS local_llm_run: simple classification (21000ms)
  Extracted: {"name":"John Smith","age":34}
  PASS local_llm_run: JSON extraction (24879ms)
  Error: Could not connect to local LLM at http://localhost:19999/v1/chat/completions
  PASS local_llm_run: connection error handling (4ms)

Prompt Splitter (prompt_split)

  Strategy: heuristic
  Agent: codex_run
  Summary: Fix the authentication bug in the login handler
  PASS prompt_split: single-domain coding prompt routes to codex (1ms)
  Strategy: heuristic
  Subtasks: 3
    t1: codex_run -- "Refactor the auth module"
    t2: openclaw_run -- "check my portfolio performance and"
    t3: home_light -- "turn on the living room lights"
  Parallel groups: [["t1","t3"],["t2"]]
  Estimated time: 90s
  PASS prompt_split: heuristic multi-agent splitting (0ms)
  Subtasks: 4, Agents: openclaw_memory_write, gemini_run, local_llm_run
  Parallel groups: [["t1","t3","t4"],["t2"]]
  PASS prompt_split: cross-domain with dependencies (1ms)
  Strategy: local->heuristic (fallback)
  Subtasks: 1
  PASS prompt_split: local LLM strategy (with fallback) (60007ms)

Routing Guide (mcp_help)

  Guide length: 2418 chars
  PASS mcp_help: returns guide without task (0ms)
  Recommendation: local_llm_run (formatting task)
  PASS mcp_help: routes formatting task to local_llm_run (0ms)
  Recommendation: codex_run (coding task)
  PASS mcp_help: routes coding task to codex_run (0ms)

Memory Search via SSH (openclaw_memory_search)

  Query: "trading", Results: 5
  PASS openclaw_memory_search: finds existing notes (208ms)

-----------------------------------------------------------
11 passed, 0 failed, 0 skipped
-----------------------------------------------------------
```

Run the tests yourself:
```bash
npx tsx tests/integration.test.ts
```

Prerequisites: `.env` configured, local LLM server running, OpenClaw server reachable via SSH.

---
## Requirements

- Node.js 18 or later
- OpenSSH client (built-in on Windows 10+, macOS, Linux)
- A running [OpenClaw](https://openclaw.ai) instance accessible via SSH
- A [Home Assistant](https://www.home-assistant.io) instance with a long-lived access token

**Optional (for sub-agents):**
- `gemini_run`: `npm install -g @google/gemini-cli` and `gemini auth login`
- `codex_run`: `npm install -g @openai/codex` and `codex login`
- `local_llm_run`: any OpenAI-compatible local server:
  - [LM Studio](https://lmstudio.ai) (recommended, GUI, default port 1234)
  - [Ollama](https://ollama.ai) (`ollama serve`, port 11434)
  - [llama.cpp](https://github.com/ggml-org/llama.cpp) (`llama-server`, any port)

---
## Installation

Install globally:
```bash
npm install -g @elvatis_com/elvatis-mcp
```

Or use directly via npx (no install required):
```bash
npx @elvatis_com/elvatis-mcp
```

---
## Where Can I Use It?

elvatis-mcp works in every MCP-compatible client. Each client uses its own config file.

| Client | Transport | Config file |
|--------|-----------|-------------|
| **Claude Desktop / Cowork** (Windows MSIX) | stdio | `%LOCALAPPDATA%\Packages\Claude_pzs8sxrjxfjjc\LocalCache\Roaming\Claude\claude_desktop_config.json` |
| **Claude Desktop / Cowork** (macOS) | stdio | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| **Claude Code** (global, all projects) | stdio | `~/.claude.json` |
| **Claude Code** (this project only) | stdio | `.mcp.json` in repo root (already included) |
| **Cursor / Windsurf / other** | stdio or HTTP | See app documentation |

> Claude Desktop and Cowork share the same config file. Claude Code is a separate system.

---
## Configuration

### 1. Create your `.env` file

```bash
cp .env.example .env
```

```env
# Required
HA_URL=http://your-home-assistant:8123
HA_TOKEN=your_long_lived_ha_token
SSH_HOST=your-openclaw-server-ip
SSH_USER=your-ssh-username
SSH_KEY_PATH=~/.ssh/your_key

# Optional: Local LLM
LOCAL_LLM_ENDPOINT=http://localhost:1234/v1   # LM Studio default
LOCAL_LLM_MODEL=deepseek-r1-0528-qwen3-8b     # or omit to use loaded model

# Optional: Sub-agent models
GEMINI_MODEL=gemini-2.5-flash
CODEX_MODEL=o3
```

### 2. Configure your MCP client

#### Claude Desktop (macOS)
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "elvatis-mcp": {
      "command": "npx",
      "args": ["-y", "@elvatis_com/elvatis-mcp"],
      "env": {
        "HA_URL": "http://your-home-assistant:8123",
        "HA_TOKEN": "your_token",
        "SSH_HOST": "your-openclaw-server-ip",
        "SSH_USER": "your-username",
        "SSH_KEY_PATH": "/Users/your-username/.ssh/your_key"
      }
    }
  }
}
```

#### Claude Desktop (Windows MSIX)
Open this file (create it if needed):
```
%LOCALAPPDATA%\Packages\Claude_pzs8sxrjxfjjc\LocalCache\Roaming\Claude\claude_desktop_config.json
```

```json
{
  "mcpServers": {
    "elvatis-mcp": {
      "command": "C:\\Program Files\\nodejs\\node.exe",
      "args": ["C:\\path\\to\\elvatis-mcp\\dist\\index.js"],
      "env": {
        "HA_URL": "http://your-home-assistant:8123",
        "HA_TOKEN": "your_token",
        "SSH_HOST": "your-openclaw-server-ip",
        "SSH_USER": "your-username",
        "SSH_KEY_PATH": "C:\\Users\\your-username\\.ssh\\your_key"
      }
    }
  }
}
```

> On Windows, always use full absolute paths. The MSIX sandbox does not resolve `~` or relative paths.

#### Claude Code (this project)
`.mcp.json` is already included. Copy `.env.example` to `.env` and fill in your values.

#### Claude Code (global)
```bash
claude mcp add --scope user elvatis-mcp -- node /path/to/elvatis-mcp/dist/index.js
```

#### HTTP Transport (remote clients)
```bash
MCP_TRANSPORT=http MCP_HTTP_PORT=3333 npx @elvatis_com/elvatis-mcp
```
Connect your client to `http://your-server:3333/mcp`.

---
## Environment Variables

### Required
| Variable | Description |
|---|---|
| `HA_URL` | Home Assistant base URL, e.g. `http://192.168.x.x:8123` |
| `SSH_HOST` | OpenClaw server hostname or IP |

### Optional
| Variable | Default | Description |
|---|---|---|
| `HA_TOKEN` | -- | Home Assistant long-lived access token |
| `SSH_PORT` | `22` | SSH port |
| `SSH_USER` | `chef-linux` | SSH username |
| `SSH_KEY_PATH` | `~/.ssh/openclaw_tunnel` | Path to SSH private key |
| `OPENCLAW_GATEWAY_URL` | `http://localhost:18789` | OpenClaw Gateway URL |
| `OPENCLAW_GATEWAY_TOKEN` | -- | Optional Gateway API token |
| `OPENCLAW_DEFAULT_AGENT` | -- | Named agent for `openclaw_run` |
| `GEMINI_MODEL` | `gemini-2.5-flash` | Default model for `gemini_run` |
| `CODEX_MODEL` | -- | Default model for `codex_run` |
| `LOCAL_LLM_ENDPOINT` | `http://localhost:1234/v1` | Local LLM server URL (LM Studio default) |
| `LOCAL_LLM_MODEL` | -- | Default local model (omit to use server's loaded model) |
| `MCP_TRANSPORT` | `stdio` | Transport mode: `stdio` or `http` |
| `MCP_HTTP_PORT` | `3333` | HTTP port |
| `SSH_DEBUG` | -- | Set to `1` for verbose SSH output |

---
## Local LLM Setup

elvatis-mcp works with any OpenAI-compatible local server. Three popular options:

### LM Studio (recommended for desktop)
1. Download from [lmstudio.ai](https://lmstudio.ai)
2. Load a model (e.g. Deepseek R1 Qwen3 8B, Phi 4 Mini)
3. Click "Local Server" in the sidebar and enable it
4. Server runs at `http://localhost:1234/v1` (the default)

### Ollama
```bash
ollama serve          # starts server on port 11434
ollama run llama3.2   # downloads and loads model
```
Set `LOCAL_LLM_ENDPOINT=http://localhost:11434/v1` in your `.env`.

### llama.cpp
```bash
llama-server -m model.gguf --port 8080
```
Set `LOCAL_LLM_ENDPOINT=http://localhost:8080/v1` in your `.env`.
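Because all three servers expose the OpenAI chat-completions wire format, a single request shape covers them. A sketch of the payload (`chatCompletionBody` is an illustrative helper, and the model name is just an example):

```typescript
// Minimal chat-completions payload for any OpenAI-compatible server
// (LM Studio, Ollama, llama.cpp). The endpoint comes from LOCAL_LLM_ENDPOINT.
function chatCompletionBody(model: string, prompt: string) {
  return {
    model, // e.g. "llama3.2" for Ollama
    messages: [{ role: "user", content: prompt }],
    temperature: 0.2, // low temperature for predictable tool-style output
  };
}

// Usage (requires a running local server):
// const res = await fetch(`${process.env.LOCAL_LLM_ENDPOINT}/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(chatCompletionBody("llama3.2", "Say hi")),
// });
```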
### Recommended models by task

| Model | Size | Best for |
|---|---|---|
| Phi 4 Mini | 3B | Fast classification, formatting, extraction |
| Deepseek R1 Qwen3 | 8B | Reasoning, analysis, prompt splitting |
| Phi 4 Reasoning Plus | 15B | Complex reasoning with quality |
| GPT-OSS | 20B | General purpose, longer responses |

> Reasoning models (Deepseek R1, Phi 4 Reasoning) wrap their chain-of-thought in `<think>` tags. elvatis-mcp strips these automatically to give you clean responses.
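The stripping step can be as simple as one regex pass over the response; a sketch of the idea (not necessarily the package's exact implementation):

```typescript
// Remove <think>...</think> chain-of-thought blocks emitted by reasoning
// models, then trim leftover whitespace around the visible answer.
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}
```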
---

## SSH Setup

The cron, memory, and OpenClaw tools communicate with your server via SSH.

```bash
# Verify connectivity
ssh -i ~/.ssh/your_key your-username@your-server "openclaw --version"

# Optional: SSH tunnel for OpenClaw WebSocket gateway
ssh -i ~/.ssh/your_key -L 18789:127.0.0.1:18789 -N your-username@your-server
```

On Windows, elvatis-mcp automatically resolves the SSH binary to `C:\Windows\System32\OpenSSH\ssh.exe` and retries on transient connection failures. Set `SSH_DEBUG=1` for verbose output.
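Under the hood the SSH tools shell out to the system `ssh` binary with flags like the ones shown above. A sketch of the argument construction (the `buildSshArgv` helper and the `BatchMode` choice are illustrative, not the package's exact code):

```typescript
// Build an argv for the system ssh client: -i key, -p port, and
// BatchMode so a missing key fails fast instead of prompting.
// Values would come from the SSH_* environment variables.
function buildSshArgv(
  opts: { host: string; user: string; keyPath: string; port?: number },
  command: string,
): string[] {
  return [
    "-i", opts.keyPath,
    "-p", String(opts.port ?? 22),
    "-o", "BatchMode=yes", // never prompt for a password
    `${opts.user}@${opts.host}`,
    command,
  ];
}

// Usage (requires a reachable server):
// spawnSync("ssh", buildSshArgv({ host, user, keyPath }, "openclaw --version"));
```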
---

## `/mcp-help` Slash Command

In Claude Code, the `/project:mcp-help` slash command is available:

```
/project:mcp-help
/project:mcp-help analyze this trading strategy for risk
```

---
## Development

```bash
git clone https://github.com/elvatis/elvatis-mcp
cd elvatis-mcp
npm install            # builds automatically via prepare script
cp .env.example .env   # fill in your values
node dist/index.js     # starts in stdio mode, waits for MCP client
```

Build watch mode:
```bash
npm run dev
```

Run integration tests:
```bash
npx tsx tests/integration.test.ts
```

### Project layout
```
src/
  index.ts            MCP server entry, tool registration, transport
  config.ts           Environment variable configuration
  ssh.ts              SSH exec helper (Windows/macOS/Linux)
  spawn.ts            Local process spawner for CLI sub-agents
  tools/
    home.ts           Home Assistant: light, climate, scene, vacuum, sensors
    memory.ts         Daily memory log: write, read, search (SSH)
    cron.ts           OpenClaw cron: list, run, status (SSH)
    openclaw.ts       OpenClaw agent orchestration (SSH)
    gemini.ts         Google Gemini sub-agent (local CLI)
    codex.ts          OpenAI Codex sub-agent (local CLI)
    local-llm.ts      Local LLM sub-agent (OpenAI-compatible HTTP)
    splitter.ts       Smart prompt splitter (multi-strategy)
    help.ts           Routing guide and task recommender
    routing-rules.ts  Shared routing rules and keyword matching
tests/
  integration.test.ts Live integration tests
```

---
## License

Apache-2.0 -- Copyright 2026 [Elvatis](https://elvatis.com)
package/dist/config.d.ts
ADDED
@@ -0,0 +1,39 @@

/**
 * Configuration for elvatis-mcp.
 * All values are loaded from environment variables.
 * Copy .env.example to .env and fill in your values.
 */
export interface Config {
    /** OpenClaw Gateway URL (tunneled locally, e.g. http://localhost:18789) */
    gatewayUrl: string;
    /** Optional API key / Bearer token for the Gateway */
    gatewayToken?: string;
    /** Home Assistant base URL */
    haUrl: string;
    /** Home Assistant long-lived access token */
    haToken?: string;
    /** Transport: "stdio" (default, for Claude Desktop) or "http" */
    transport: 'stdio' | 'http';
    /** HTTP port when transport=http */
    httpPort: number;
    /** OpenClaw server host for SSH */
    sshHost: string;
    /** SSH port */
    sshPort: number;
    /** SSH username on the OpenClaw server */
    sshUser: string;
    /** Path to SSH private key (~/ is expanded) */
    sshKeyPath: string;
    /** Optional: override the default OpenClaw agent name for openclaw_run */
    openclawDefaultAgent?: string;
    /** Default Gemini model, e.g. "gemini-2.5-flash" or "gemini-2.5-pro" */
    geminiModel?: string;
    /** Default Codex model, e.g. "o3" or "gpt-5-codex" */
    codexModel?: string;
    /** Base URL for the local LLM server, e.g. "http://localhost:1234/v1" */
    localLlmEndpoint?: string;
    /** Default model identifier for the local LLM (as shown in LM Studio / Ollama) */
    localLlmModel?: string;
}
export declare function loadConfig(): Config;
//# sourceMappingURL=config.d.ts.map

package/dist/config.d.ts.map
ADDED
@@ -0,0 +1 @@
{"version":3,"file":"config.d.ts","sourceRoot":"","sources":["../src/config.ts"],"names":[],"mappings":"AAAA;;;;GAIG;AAYH,MAAM,WAAW,MAAM;IACrB,2EAA2E;IAC3E,UAAU,EAAE,MAAM,CAAC;IACnB,sDAAsD;IACtD,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,8BAA8B;IAC9B,KAAK,EAAE,MAAM,CAAC;IACd,6CAA6C;IAC7C,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,iEAAiE;IACjE,SAAS,EAAE,OAAO,GAAG,MAAM,CAAC;IAC5B,oCAAoC;IACpC,QAAQ,EAAE,MAAM,CAAC;IAEjB,mCAAmC;IACnC,OAAO,EAAE,MAAM,CAAC;IAChB,eAAe;IACf,OAAO,EAAE,MAAM,CAAC;IAChB,0CAA0C;IAC1C,OAAO,EAAE,MAAM,CAAC;IAChB,+CAA+C;IAC/C,UAAU,EAAE,MAAM,CAAC;IACnB,0EAA0E;IAC1E,oBAAoB,CAAC,EAAE,MAAM,CAAC;IAE9B,wEAAwE;IACxE,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB,sDAAsD;IACtD,UAAU,CAAC,EAAE,MAAM,CAAC;IAEpB,yEAAyE;IACzE,gBAAgB,CAAC,EAAE,MAAM,CAAC;IAC1B,kFAAkF;IAClF,aAAa,CAAC,EAAE,MAAM,CAAC;CACxB;AAED,wBAAgB,UAAU,IAAI,MAAM,CA0BnC"}
package/dist/config.js
ADDED
@@ -0,0 +1,45 @@

"use strict";
/**
 * Configuration for elvatis-mcp.
 * All values are loaded from environment variables.
 * Copy .env.example to .env and fill in your values.
 */
Object.defineProperty(exports, "__esModule", { value: true });
exports.loadConfig = loadConfig;
function required(key) {
    const val = process.env[key];
    if (!val)
        throw new Error(`Missing required env var: ${key} (copy .env.example to .env)`);
    return val;
}
function optional(key, fallback) {
    return process.env[key] ?? fallback;
}
function loadConfig() {
    return {
        // Home Assistant (required)
        haUrl: required('HA_URL'),
        haToken: optional('HA_TOKEN'),
        // OpenClaw gateway (optional, only needed for WebSocket features)
        gatewayUrl: optional('OPENCLAW_GATEWAY_URL', 'http://localhost:18789'),
        gatewayToken: optional('OPENCLAW_GATEWAY_TOKEN'),
        // Transport
        transport: process.env['MCP_TRANSPORT'] ?? 'stdio',
        httpPort: parseInt(process.env['MCP_HTTP_PORT'] ?? '3333', 10),
        // SSH (required for cron, memory, openclaw tools)
        sshHost: required('SSH_HOST'),
        sshPort: parseInt(process.env['SSH_PORT'] ?? '22', 10),
        sshUser: optional('SSH_USER', 'chef-linux'),
        sshKeyPath: optional('SSH_KEY_PATH', '~/.ssh/openclaw_tunnel'),
        // Optional: specify a named agent for openclaw_run (default: uses OpenClaw's default agent)
        openclawDefaultAgent: optional('OPENCLAW_DEFAULT_AGENT'),
        // Gemini CLI
        geminiModel: optional('GEMINI_MODEL'),
        // Codex CLI
        codexModel: optional('CODEX_MODEL'),
        // Local LLM (LM Studio default port: 1234, Ollama: 11434, llama.cpp: 8080)
        localLlmEndpoint: optional('LOCAL_LLM_ENDPOINT'),
        localLlmModel: optional('LOCAL_LLM_MODEL'),
    };
}
//# sourceMappingURL=config.js.map

package/dist/config.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"file":"config.js","sourceRoot":"","sources":["../src/config.ts"],"names":[],"mappings":";AAAA;;;;GAIG;;AAiDH,gCA0BC;AAzED,SAAS,QAAQ,CAAC,GAAW;IAC3B,MAAM,GAAG,GAAG,OAAO,CAAC,GAAG,CAAC,GAAG,CAAC,CAAC;IAC7B,IAAI,CAAC,GAAG;QAAE,MAAM,IAAI,KAAK,CAAC,6BAA6B,GAAG,8BAA8B,CAAC,CAAC;IAC1F,OAAO,GAAG,CAAC;AACb,CAAC;AAED,SAAS,QAAQ,CAAC,GAAW,EAAE,QAAiB;IAC9C,OAAO,OAAO,CAAC,GAAG,CAAC,GAAG,CAAC,IAAI,QAAQ,CAAC;AACtC,CAAC;AAuCD,SAAgB,UAAU;IACxB,OAAO;QACL,4BAA4B;QAC5B,KAAK,EAAE,QAAQ,CAAC,QAAQ,CAAC;QACzB,OAAO,EAAE,QAAQ,CAAC,UAAU,CAAC;QAC7B,kEAAkE;QAClE,UAAU,EAAE,QAAQ,CAAC,sBAAsB,EAAE,wBAAwB,CAAE;QACvE,YAAY,EAAE,QAAQ,CAAC,wBAAwB,CAAC;QAChD,YAAY;QACZ,SAAS,EAAG,OAAO,CAAC,GAAG,CAAC,eAAe,CAAsB,IAAI,OAAO;QACxE,QAAQ,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,eAAe,CAAC,IAAI,MAAM,EAAE,EAAE,CAAC;QAC9D,kDAAkD;QAClD,OAAO,EAAE,QAAQ,CAAC,UAAU,CAAC;QAC7B,OAAO,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,UAAU,CAAC,IAAI,IAAI,EAAE,EAAE,CAAC;QACtD,OAAO,EAAE,QAAQ,CAAC,UAAU,EAAE,YAAY,CAAE;QAC5C,UAAU,EAAE,QAAQ,CAAC,cAAc,EAAE,wBAAwB,CAAE;QAC/D,4FAA4F;QAC5F,oBAAoB,EAAE,QAAQ,CAAC,wBAAwB,CAAC;QACxD,aAAa;QACb,WAAW,EAAE,QAAQ,CAAC,cAAc,CAAC;QACrC,YAAY;QACZ,UAAU,EAAE,QAAQ,CAAC,aAAa,CAAC;QACnC,2EAA2E;QAC3E,gBAAgB,EAAE,QAAQ,CAAC,oBAAoB,CAAC;QAChD,aAAa,EAAE,QAAQ,CAAC,iBAAiB,CAAC;KAC3C,CAAC;AACJ,CAAC"}
package/dist/index.d.ts
ADDED
@@ -0,0 +1,18 @@

#!/usr/bin/env node
/**
 * elvatis-mcp — MCP server exposing OpenClaw tools to Claude Desktop, Cursor, Windsurf, and any MCP client.
 *
 * Transports:
 *   stdio (default) — for Claude Desktop / local clients
 *   http — for remote clients (set MCP_TRANSPORT=http)
 *
 * Configuration:
 *   Copy .env.example to .env and fill in your values.
 *   Or set env vars directly in claude_desktop_config.json.
 *
 * Usage:
 *   npx @elvatis_com/elvatis-mcp
 *   MCP_TRANSPORT=http MCP_HTTP_PORT=3333 npx @elvatis_com/elvatis-mcp
 */
export {};
//# sourceMappingURL=index.d.ts.map

package/dist/index.d.ts.map
ADDED
@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":";AACA;;;;;;;;;;;;;;GAcG"}