@houtini/lm 2.0.0 → 2.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +67 -237
- package/dist/index.js +12 -5
- package/dist/index.js.map +1 -1
- package/package.json +17 -3
- package/server.json +44 -0
package/README.md
CHANGED

@@ -1,293 +1,123 @@
-#
+# @houtini/lm
 
-
+[](https://www.npmjs.com/package/@houtini/lm)
+[](https://opensource.org/licenses/MIT)
 
-
+MCP server that connects Claude to **any OpenAI-compatible LLM endpoint** — LM Studio, Ollama, vLLM, llama.cpp, or any remote API.
 
-
+Offload routine work to a local model. Keep your Claude context window for the hard stuff.
 
-
+## Why
 
-
-- 🔍 **Code analysis** - Deep insights into quality, security, and architecture
-- 📝 **Documentation generation** - Professional docs from code analysis
-- 🏗️ **Project scaffolding** - Complete applications, themes, and components
-- 🎮 **Creative projects** - Games, CSS art, and interactive experiences
-- 🛡️ **Security audits** - OWASP compliance and vulnerability detection
+Claude is great at orchestration and reasoning. Local models are great at bulk analysis, classification, extraction, and summarisation. This server lets Claude delegate to a local model on the fly — no API keys, no cloud round-trips, no context wasted.
 
-
+**Common use cases:**
 
-
+- Classify or tag hundreds of items without burning Claude tokens
+- Extract structured data from long documents
+- Run a second opinion on generated code
+- Summarise research before Claude synthesises it
 
-
-Use houtini-lm to analyse the code quality in C:/my-project/src/UserAuth.js
-```
-
-```
-Generate comprehensive unit tests using houtini-lm for my React component at C:/components/Dashboard.jsx
-```
-
-```
-Use houtini-lm to create a WordPress plugin called "Event Manager" with custom post types and admin interface
-```
-
-```
-Audit the security of my WordPress theme using houtini-lm at C:/themes/my-theme
-```
-
-```
-Create a CSS art generator project using houtini-lm with space theme and neon colours
-```
-
-```
-Use houtini-lm to convert my JavaScript file to TypeScript with strict mode enabled
-```
-
-```
-Generate responsive HTML components using houtini-lm for a pricing card with dark mode support
-```
-
-## Prerequisites
-
-**Essential Requirements:**
-
-1. **LM Studio** - Download from [lmstudio.ai](https://lmstudio.ai)
-   - Must be running at `ws://127.0.0.1:1234`
-   - Model loaded and ready (13B+ parameters recommended)
-
-2. **Desktop Commander MCP** - Essential for file operations
-   - Repository: [DesktopCommanderMCP](https://github.com/wonderwhy-er/DesktopCommanderMCP)
-   - Required for reading files and writing generated code
+## Install
 
-
-   - Download from [nodejs.org](https://nodejs.org)
-
-4. **Claude Desktop** - For the best experience
-   - Download from [claude.ai/download](https://claude.ai/download)
-
-## Installation
-
-### 1. Install the Package
+### Claude Code (recommended)
 
 ```bash
-
-npm install -g @houtini/lm
-
-# Or use npx (no installation required)
-npx @houtini/lm
+claude mcp add houtini-lm -e LM_STUDIO_URL=http://localhost:1234 -- npx -y @houtini/lm
 ```
 
-###
-
-Add to your Claude Desktop configuration file:
+### Claude Desktop
 
-
-**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
+Add to `claude_desktop_config.json`:
 
 ```json
 {
   "mcpServers": {
     "houtini-lm": {
       "command": "npx",
-      "args": ["@houtini/lm"],
+      "args": ["-y", "@houtini/lm"],
       "env": {
-        "
+        "LM_STUDIO_URL": "http://localhost:1234"
       }
     }
   }
 }
 ```
 
-###
-
-1. Launch LM Studio
-2. Load a model (13B+ parameters recommended for best results)
-3. Start the server at `ws://127.0.0.1:1234`
-4. Verify the model is ready and responding
-
-### 4. Verify Installation
+### npx (standalone)
 
-
-
-```
-Use houtini-lm health check to verify everything is working
+```bash
+npx @houtini/lm
 ```
 
-##
-
-### 🔍 Analysis Functions (17 functions)
-- **`analyze_single_file`** - Deep code analysis and quality assessment
-- **`count_files`** - Project structure with beautiful markdown trees
-- **`find_unused_files`** - Dead code detection with risk assessment
-- **`security_audit`** - OWASP compliance and vulnerability scanning
-- **`analyze_dependencies`** - Circular dependencies and unused imports
-- And 12 more specialized analysis tools...
-
-### 🛠️ Generation Functions (10 functions)
-- **`generate_unit_tests`** - Comprehensive test suites with framework patterns
-- **`generate_documentation`** - Professional docs from code analysis
-- **`convert_to_typescript`** - JavaScript to TypeScript with type safety
-- **`generate_wordpress_plugin`** - Complete WordPress plugin creation
-- **`generate_responsive_component`** - Accessible HTML/CSS components
-- And 5 more generation tools...
-
-### 🎮 Creative Functions (3 functions)
-- **`css_art_generator`** - Pure CSS art and animations
-- **`arcade_game`** - Complete playable HTML5 games
-- **`create_text_adventure`** - Interactive fiction with branching stories
-
-### ⚙️ System Functions (5 functions)
-- **`health_check`** - Verify LM Studio connection
-- **`list_functions`** - Discover all available functions
-- **`resolve_path`** - Path analysis and suggestions
-- And 2 more system utilities...
+## Configuration
 
-
+Set via environment variables or in your MCP client config:
 
-
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `LM_STUDIO_URL` | `http://localhost:1234` | Base URL of the OpenAI-compatible API |
+| `LM_STUDIO_MODEL` | *(auto-detect)* | Model identifier — leave blank to use whatever's loaded |
+| `LM_STUDIO_PASSWORD` | *(none)* | Bearer token for authenticated endpoints |
 
-
+## Tools
 
-
+### `chat`
 
-
-// Context detection from your loaded model
-const contextLength = await model.getContextLength(); // e.g., 16,384 tokens
+Send a message, get a response. The workhorse.
 
-// Dynamic allocation - 95% utilization
-const responseTokens = Math.floor(contextLength * 0.95); // 15,565 tokens available
 ```
-
-
-
-
-- ✅ **Future-proof** - Automatically adapts to larger models
-
-### Three-Stage Prompt System
-
-Houtini LM uses a sophisticated prompt architecture that separates concerns for optimal token management:
-
-**Stage 1: System Context** - Expert persona and analysis methodology
-**Stage 2: Data Payload** - Your code, files, or project content
-**Stage 3: Output Instructions** - Structured response requirements
-
-```
-┌─────────────────────┐
-│ System Context │ ← Expert role, methodologies
-├─────────────────────┤
-│ Data Payload │ ← Your files/code (chunked if needed)
-├─────────────────────┤
-│ Output Instructions │ ← Response format, requirements
-└─────────────────────┘
+message (required) — what to send
+system — system prompt
+temperature — 0–2, default 0.3
+max_tokens — default 4096
 ```
 
-
-- **Small files** → Single-stage execution for speed
-- **Large files** → Automatic chunking with coherent aggregation
-- **Multi-file projects** → Optimized batch processing
-
-### Automatic Chunking Capability
-
-When files exceed available context space, Houtini LM automatically chunks content while maintaining analysis quality:
+### `custom_prompt`
 
-
-- 🔍 **Natural boundaries** - Splits at logical sections, not arbitrary points
-- 🔄 **Context preservation** - Maintains analysis continuity across chunks
-- 📊 **Intelligent aggregation** - Combines chunk results into coherent reports
-- ⚡ **Performance optimization** - Parallel processing where possible
+Structured prompt with separate system, context, and instruction fields. Better for analysis tasks where you're passing data + instructions.
 
-**Example Chunking Process:**
 ```
-
-
-
-
-
+instruction (required) — what to do with the context
+system — system prompt / persona
+context — data or background to analyse
+temperature — default 0.3
+max_tokens — default 4096
 ```
 
-###
+### `list_models`
 
-
+Returns the models currently loaded on the LLM server.
 
-
-- 🔍 **Complex analysis** - Security audits, architecture analysis, and comprehensive code reviews take time
-- 💻 **System compatibility** - Works reliably on older hardware and resource-constrained environments
-- 🧠 **Model processing** - Larger local models (13B-33B parameters) require more inference time
-- 📊 **Quality over speed** - Comprehensive reports are worth the wait
+### `health_check`
 
-
-- **Simple analysis** (100 lines): 15-30 seconds
-- **Medium files** (500 lines): 30-60 seconds
-- **Large files** (1000+ lines): 60-120 seconds
-- **Multi-file projects**: 90-180 seconds
+Checks connectivity. Returns response time, auth status, and loaded model count.
 
-
-- Use faster models (13B vs 33B) for quicker responses
-- Enable GPU acceleration in LM Studio for better performance
-- Consider using `analysisDepth="basic"` for faster results when appropriate
+## Compatible endpoints
 
-
+| Provider | URL | Notes |
+|----------|-----|-------|
+| [LM Studio](https://lmstudio.ai) | `http://localhost:1234` | Default, zero config |
+| [Ollama](https://ollama.com) | `http://localhost:11434` | Use OpenAI-compatible mode |
+| [vLLM](https://docs.vllm.ai) | `http://localhost:8000` | Native OpenAI API |
+| [llama.cpp](https://github.com/ggml-org/llama.cpp) | `http://localhost:8080` | Server mode |
+| Remote / cloud APIs | Any URL | Set `LM_STUDIO_URL` + `LM_STUDIO_PASSWORD` |
 
-
-**Resource Management**: Automatic cleanup of large contexts after processing
-**Streaming Responses**: Progressive output delivery for better user experience
+## Development
 
-
-
-
-
-
-
-- [Generation Functions Guide](docs/generation-functions-md.md) - All 10 creation tools
-- [Creative Functions Guide](docs/creative-functions-md.md) - Games and art tools
-- [System Functions Guide](docs/system-functions-md.md) - Utilities and diagnostics
-- [Complete User Guide](docs/user-guide-md.md) - Comprehensive usage manual
-
-## Recommended Setup
-
-**For Professional Development:**
-- **CPU**: 8-core or better (for local LLM processing)
-- **RAM**: 32GB (24GB for model, 8GB for development)
-- **Storage**: SSD with 100GB+ free space
-- **Model**: Qwen2.5-Coder-14B-Instruct or similar
-
-**Performance Tips:**
-- Use 13B+ parameter models for professional-quality results
-- Configure `LLM_MCP_ALLOWED_DIRS` to include your project directories
-- Install Desktop Commander MCP for complete file operation support
-- Keep LM Studio running and model loaded for instant responses
+```bash
+git clone https://github.com/houtini-ai/lm.git
+cd lm
+npm install
+npm run build
+```
 
-
+Run the test suite against a live LLM server:
 
-
-
-
-- ✅ WordPress-specific tools and auditing
-- ✅ Creative project generators
-- ✅ Comprehensive security analysis
-- ✅ TypeScript conversion and test generation
-- ✅ Cross-file integration analysis
+```bash
+node test.mjs
+```
 
 ## License
 
-
-
-## Contributing
-
-We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details on:
-- Code standards and patterns
-- Testing requirements
-- Documentation updates
-- Issue reporting
-
-## Support
-
-- **Issues**: [GitHub Issues](https://github.com/houtini-ai/lm/issues)
-- **Discussions**: [GitHub Discussions](https://github.com/houtini-ai/lm/discussions)
-- **Documentation**: Complete guides in the `docs/` directory
-
----
-
-**Ready to supercharge your development workflow?** Install Houtini LM and start building amazing things with unlimited local AI assistance.
-
-*Built for developers who think clearly but can't afford to think expensively.*
+MIT
package/dist/index.js
CHANGED

@@ -8,10 +8,17 @@
 import { Server } from '@modelcontextprotocol/sdk/server/index.js';
 import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
 import { CallToolRequestSchema, ListToolsRequestSchema, } from '@modelcontextprotocol/sdk/types.js';
-const LM_BASE_URL = process.env.LM_STUDIO_URL || 'http://
+const LM_BASE_URL = process.env.LM_STUDIO_URL || 'http://localhost:1234';
 const LM_MODEL = process.env.LM_STUDIO_MODEL || '';
+const LM_PASSWORD = process.env.LM_STUDIO_PASSWORD || '';
 const DEFAULT_MAX_TOKENS = 4096;
 const DEFAULT_TEMPERATURE = 0.3;
+function apiHeaders() {
+    const h = { 'Content-Type': 'application/json' };
+    if (LM_PASSWORD)
+        h['Authorization'] = `Bearer ${LM_PASSWORD}`;
+    return h;
+}
 async function chatCompletion(messages, options = {}) {
     const body = {
         messages,
@@ -24,7 +31,7 @@ async function chatCompletion(messages, options = {}) {
     }
     const res = await fetch(`${LM_BASE_URL}/v1/chat/completions`, {
         method: 'POST',
-        headers:
+        headers: apiHeaders(),
         body: JSON.stringify(body),
     });
     if (!res.ok) {
@@ -34,7 +41,7 @@ async function chatCompletion(messages, options = {}) {
     return res.json();
 }
 async function listModels() {
-    const res = await fetch(`${LM_BASE_URL}/v1/models
+    const res = await fetch(`${LM_BASE_URL}/v1/models`, { headers: apiHeaders() });
     if (!res.ok)
         throw new Error(`Failed to list models: ${res.status}`);
     const data = (await res.json());
@@ -83,7 +90,7 @@ const TOOLS = [
     },
 ];
 // ── MCP Server ───────────────────────────────────────────────────────
-const server = new Server({ name: 'houtini-lm', version: '2.0.
+const server = new Server({ name: 'houtini-lm', version: '2.0.1' }, { capabilities: { tools: {} } });
 server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));
 server.setRequestHandler(CallToolRequestSchema, async (request) => {
     const { name, arguments: args } = request.params;
@@ -143,7 +150,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
                 content: [
                     {
                         type: 'text',
-                        text: `Connected to ${LM_BASE_URL} (${ms}ms)\nModels loaded: ${models.length}${models.length ? '\n' + models.join(', ') : ''}`,
+                        text: `Connected to ${LM_BASE_URL} (${ms}ms)\nAuth: ${LM_PASSWORD ? 'enabled' : 'none'}\nModels loaded: ${models.length}${models.length ? '\n' + models.join(', ') : ''}`,
                    },
                ],
            };
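The main functional change in `dist/index.js` is the new `apiHeaders()` helper, which attaches a Bearer token when `LM_STUDIO_PASSWORD` is set. A standalone re-statement of that behaviour (parameterised on the password instead of reading the environment, so it can be exercised directly):

```javascript
// Mirror of the apiHeaders() helper added in 2.0.1: JSON content type always,
// Authorization header only when a password/token is configured.
function apiHeaders(password) {
  const h = { 'Content-Type': 'application/json' };
  if (password)
    h['Authorization'] = `Bearer ${password}`;
  return h;
}

// Without a password: no Authorization header is sent (open local endpoint).
console.log(apiHeaders(''));
// With a password: the token is passed as a standard Bearer credential.
console.log(apiHeaders('secret'));
```

Because an empty string is falsy, leaving `LM_STUDIO_PASSWORD` unset keeps requests identical to the previous release against unauthenticated local servers.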
package/dist/index.js.map
CHANGED

@@ -1 +1 @@
(single-line source map for src/index.ts regenerated to match the new dist/index.js; machine-generated mappings omitted, and the previous map is truncated in the diff viewer)
package/package.json
CHANGED

@@ -1,24 +1,33 @@
 {
   "name": "@houtini/lm",
-  "version": "2.0.
+  "version": "2.0.1",
   "type": "module",
   "description": "MCP server for local LLMs — connects to LM Studio or any OpenAI-compatible endpoint",
+  "mcpName": "io.github.houtini-ai/lm",
   "main": "dist/index.js",
   "bin": {
     "houtini-lm": "dist/index.js"
   },
   "scripts": {
     "build": "tsc && node add-shebang.mjs",
-    "dev": "tsc --watch"
+    "dev": "tsc --watch",
+    "prepublishOnly": "npm run build"
   },
   "keywords": [
     "mcp",
     "model-context-protocol",
+    "mcp-server",
     "lm-studio",
+    "ollama",
+    "vllm",
     "openai",
+    "openai-compatible",
     "local-llm",
     "claude",
-    "ai-tools"
+    "ai-tools",
+    "llama-cpp",
+    "ai",
+    "llm"
   ],
   "author": "Richard Baxter <richard@richardbaxter.co> (https://richardbaxter.co)",
   "license": "MIT",
@@ -30,6 +39,10 @@
   "bugs": {
     "url": "https://github.com/houtini-ai/lm/issues"
   },
+  "funding": {
+    "type": "github",
+    "url": "https://github.com/sponsors/houtini-ai"
+  },
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.26.0"
   },
@@ -42,6 +55,7 @@
   },
   "files": [
     "dist/**/*",
+    "server.json",
     "README.md",
     "LICENSE"
   ],
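The `bin` entry requires `dist/index.js` to start with a node shebang, which is why the build script runs `node add-shebang.mjs` after `tsc` (TypeScript output has no shebang). The package's actual script is not shown in this diff; a typical, hypothetical implementation of such a step looks like this:

```javascript
// Hypothetical sketch of an add-shebang step (not the package's actual
// add-shebang.mjs): prepend the node shebang if it is not already present,
// so repeated builds stay idempotent.
function addShebang(source) {
  const shebang = '#!/usr/bin/env node\n';
  return source.startsWith(shebang) ? source : shebang + source;
}

const built = addShebang("console.log('hi');");
console.log(built.split('\n')[0]); // first line is the shebang
```

The new `prepublishOnly` script ensures this build (and therefore the shebang) runs before every `npm publish`, so a stale `dist/` can no longer ship.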
package/server.json
ADDED

@@ -0,0 +1,44 @@
+{
+  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
+  "name": "Houtini LM",
+  "description": "MCP server that connects Claude to any OpenAI-compatible LLM endpoint. Offload routine analysis to a local model and preserve your Claude context window.",
+  "icon": "https://houtini.ai/favicon.ico",
+  "repository": {
+    "url": "https://github.com/houtini-ai/lm",
+    "source": "github"
+  },
+  "version": "2.0.1",
+  "packages": [
+    {
+      "registryType": "npm",
+      "identifier": "@houtini/lm",
+      "version": "2.0.1",
+      "transport": [
+        {
+          "type": "stdio"
+        }
+      ],
+      "environmentVariables": [
+        {
+          "name": "LM_STUDIO_URL",
+          "description": "Base URL of the OpenAI-compatible API endpoint",
+          "isRequired": false,
+          "format": "url"
+        },
+        {
+          "name": "LM_STUDIO_MODEL",
+          "description": "Model identifier to use for requests (auto-detected if not set)",
+          "isRequired": false,
+          "format": "string"
+        },
+        {
+          "name": "LM_STUDIO_PASSWORD",
+          "description": "Bearer token for API authentication (no auth if blank)",
+          "isRequired": false,
+          "isSecret": true,
+          "format": "string"
+        }
+      ]
+    }
+  ]
+}
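The new `server.json` manifest duplicates version and package information that already lives in `package.json` (both declare 2.0.1 in this release), so the two files can drift. A quick consistency check one could run before publishing; the objects are inlined here from the diff rather than read from disk:

```javascript
// Cross-check the registry manifest against package.json metadata.
// Values are copied from the 2.0.1 release shown above.
const pkg = { name: '@houtini/lm', version: '2.0.1' };
const server = {
  version: '2.0.1',
  packages: [{ registryType: 'npm', identifier: '@houtini/lm', version: '2.0.1' }],
};

// Top-level version and every listed package must match package.json exactly.
const consistent =
  server.version === pkg.version &&
  server.packages.every(p => p.identifier === pkg.name && p.version === pkg.version);

console.log(consistent); // → true
```

In a real project the two objects would be loaded with `JSON.parse(fs.readFileSync(...))` and the check wired into the `prepublishOnly` script added in this release.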