@houtini/lm 2.1.0 → 2.3.0

package/README.md CHANGED
@@ -3,39 +3,23 @@
  [![npm version](https://img.shields.io/npm/v/@houtini/lm)](https://www.npmjs.com/package/@houtini/lm)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- MCP server that connects Claude to **any OpenAI-compatible LLM endpoint** — LM Studio, Ollama, vLLM, llama.cpp, or any remote API.
+ An MCP server that connects Claude to any OpenAI-compatible LLM - LM Studio, Ollama, vLLM, llama.cpp, whatever you've got running locally.

- Offload routine work to a local model. Keep your Claude context window for the hard stuff.
+ The idea's simple. Claude's brilliant at orchestration and reasoning, but you're burning tokens on stuff a local model handles just fine. Boilerplate, code review, summarisation, classification - hand it off. Claude keeps working on the hard stuff while your local model chews through the grunt work. Free, parallel, no API keys.

- ## Why
+ ## Quick start

- Claude is great at orchestration and reasoning. Local models are great at bulk analysis, classification, extraction, and summarisation. This server lets Claude delegate to a local model on the fly — no API keys, no cloud round-trips, no context wasted.
-
- **Common use cases:**
-
- - Classify or tag hundreds of items without burning Claude tokens
- - Extract structured data from long documents
- - Run a second opinion on generated code
- - Summarise research before Claude synthesises it
- - Delegate code review to a local model while Claude handles other work
-
- ## What's new in v2.1.0
-
- - **Smarter tool descriptions** — tool descriptions now encode prompting best practices for local LLMs, so Claude automatically sends well-structured prompts (complete code, capped output tokens, explicit format instructions)
- - **New `code_task` tool** — purpose-built for code analysis with an optimised system prompt and sensible defaults (temp 0.2, 500 token cap)
- - **Delegation guidance** — each tool description tells Claude when to use it, what output to expect, and what to avoid (e.g. never send truncated code to a local model)
-
- ## Install
-
- ### Claude Code (recommended)
+ ### Claude Code

  ```bash
- claude mcp add houtini-lm -e LM_STUDIO_URL=http://localhost:1234 -- npx -y @houtini/lm
+ claude mcp add houtini-lm -- npx -y @houtini/lm
  ```

+ That's it. If LM Studio's running on `localhost:1234` (the default), Claude can start delegating straight away.
+
  ### Claude Desktop

- Add to `claude_desktop_config.json`:
+ Drop this into your `claude_desktop_config.json`:

  ```json
  {
@@ -51,93 +35,198 @@ Add to `claude_desktop_config.json`:
  }
  ```

- ### npx (standalone)
+ ### LLM on a different machine
+
+ If you've got a GPU box on your network (I run mine on a separate machine called hopper), point the URL at it:

  ```bash
- npx @houtini/lm
+ claude mcp add houtini-lm -e LM_STUDIO_URL=http://192.168.1.50:1234 -- npx -y @houtini/lm
  ```

- ## Configuration
+ ## What's it good for?

- Set via environment variables or in your MCP client config:
+ Real examples you can throw at it right now.

- | Variable | Default | Description |
- |----------|---------|-------------|
- | `LM_STUDIO_URL` | `http://localhost:1234` | Base URL of the OpenAI-compatible API |
- | `LM_STUDIO_MODEL` | *(auto-detect)* | Model identifier — leave blank to use whatever's loaded |
- | `LM_STUDIO_PASSWORD` | *(none)* | Bearer token for authenticated endpoints |
+ **Explain something you just read**
+ ```
+ "Explain what this function does in 2-3 sentences."
+ + paste the function
+ ```
+
+ **Second opinion on generated code**
+ ```
+ "Find bugs in this TypeScript module. Return a JSON array of {line, issue, fix}."
+ + paste the module
+ ```
+
+ **Draft a commit message**
+ ```
+ "Write a concise commit message for this diff. One line summary, then bullet points."
+ + paste the diff
+ ```
+
+ **Generate boilerplate**
+ ```
+ "Write a Jest test file for this React component. Cover the happy path and one error case."
+ + paste the component
+ ```
+
+ **Extract structured data**
+ ```
+ "Extract all API endpoints from this Express router. Return as JSON: {method, path, handler}."
+ + paste the router file
+ ```
+
+ **Translate formats**
+ ```
+ "Convert this JSON config to YAML. Return only the YAML, no explanation."
+ + paste the JSON
+ ```
+
+ **Brainstorm before committing to an approach**
+ ```
+ "I need to add caching to this API client. List 3 approaches with trade-offs. Be brief."
+ + paste the client code
+ ```

  ## Tools

  ### `chat`

- Delegate a bounded task to the local LLM. The workhorse for quick questions, code explanation, and pattern recognition.
+ The workhorse. Send a task, get an answer. Optional system persona if you want to steer the model's perspective.

- ```
- message (required) — the task, with explicit output format instructions
- system — persona (be specific: "Senior TypeScript dev", not "helpful assistant")
- temperature — 0.1 for code, 0.3 for analysis (default), 0.5 for suggestions
- max_tokens — match to expected output: 150 for quick answers, 300 for explanations, 500 for code gen
+ | Parameter | Required | Default | What it does |
+ |-----------|----------|---------|-------------|
+ | `message` | yes | - | The task. Be specific about output format. |
+ | `system` | no | - | Persona - "Senior TypeScript dev", not "helpful assistant" |
+ | `temperature` | no | 0.3 | 0.1 for code, 0.3 for analysis, 0.7 for creative |
+ | `max_tokens` | no | 2048 | Lower for quick answers, higher for generation |
+
+ **Quick factual question:**
+ ```json
+ {
+ "message": "What HTTP status code means 'too many requests'? Just the number and name.",
+ "max_tokens": 50
+ }
  ```

- **Tip:** Always send complete code — local models hallucinate details for truncated input.
+ **Code explanation with persona:**
+ ```json
+ {
+ "message": "Explain this function. What does it do, what are the edge cases?\n\n```ts\nfunction debounce(fn, ms) { ... }\n```",
+ "system": "Senior TypeScript developer"
+ }
+ ```

  ### `custom_prompt`

- Structured 3-part prompt with separate system, context, and instruction fields. The separation prevents context bleed in local models — better results than stuffing everything into a single message.
+ Three-part prompt: system, context, instruction. Keeping them separate stops context bleed - you'll get better results than stuffing everything into one message, especially with smaller models.
+
+ | Parameter | Required | Default | What it does |
+ |-----------|----------|---------|-------------|
+ | `instruction` | yes | - | What to produce. Under 50 words works best. |
+ | `system` | no | - | Persona, specific and under 30 words |
+ | `context` | no | - | Complete data to analyse. Never truncate. |
+ | `temperature` | no | 0.3 | 0.1 for review, 0.3 for analysis |
+ | `max_tokens` | no | 2048 | Match to expected output length |

+ **Code review:**
+ ```json
+ {
+ "system": "Expert Node.js developer focused on error handling and edge cases.",
+ "context": "< full source code here >",
+ "instruction": "List the top 3 bugs as bullet points. For each: line number, what's wrong, how to fix it."
+ }
  ```
- instruction (required) — what to produce (under 50 words works best)
- system — persona, specific and under 30 words
- context — COMPLETE data to analyse (never truncated)
- temperature — 0.1 for review, 0.3 for analysis (default)
- max_tokens — 200 for bullets, 400 for detailed review, 600 for code gen
+
+ **Compare two implementations:**
+ ```json
+ {
+ "system": "Performance-focused Python developer.",
+ "context": "Implementation A:\n...\n\nImplementation B:\n...",
+ "instruction": "Which is faster for 10k+ items? Why? One paragraph."
+ }
  ```

 
  ### `code_task`

- Purpose-built for code analysis. Wraps the local LLM with an optimised code-review system prompt and low temperature (0.2).
+ Built specifically for code analysis. Wraps your request with an optimised code-review system prompt and drops the temperature to 0.2 so the model stays focused.

+ | Parameter | Required | Default | What it does |
+ |-----------|----------|---------|-------------|
+ | `code` | yes | - | Complete source code. Never truncate. |
+ | `task` | yes | - | "Find bugs", "Explain this", "Write tests" |
+ | `language` | no | - | "typescript", "python", "rust", etc. |
+ | `max_tokens` | no | 2048 | Match to expected output length |
+
+ ```json
+ {
+ "code": "< full source file >",
+ "task": "Find bugs and suggest improvements. Reference line numbers.",
+ "language": "typescript"
+ }
  ```
- code (required) — complete source code (never truncate)
- task (required) — what to do: "Find bugs", "Explain this function", "Add error handling"
- language — "typescript", "python", "rust", etc.
- max_tokens — default 500 (200 for quick answers, 800 for code generation)
- ```

- **The local LLM excels at:** explaining code, finding common bugs, suggesting improvements, comparing patterns, generating boilerplate.
+ ### `discover`
+
+ Checks if the local LLM's online. Returns the model name, context window size, and response latency. Typically under a second, or an offline status within 5 seconds if the host isn't reachable.

- **It struggles with:** subtle/adversarial bugs, multi-file reasoning, design tasks requiring integration.
+ No parameters. Call it before delegating if you're not sure the LLM's available.

  ### `list_models`

- Returns the models currently loaded on the LLM server.
+ Lists everything loaded on the LLM server with context window sizes.

- ### `health_check`
+ ## How it works

- Checks connectivity. Returns response time, auth status, and loaded model count.
+ ```
+ Claude ──MCP──> houtini-lm ──HTTP/SSE──> LM Studio (or any OpenAI-compatible API)
+
+ ├─ Streaming: tokens arrive incrementally via SSE
+ ├─ Soft timeout: returns partial results at 55s
+ └─ Graceful failure: returns "offline" if host unreachable
+ ```
+
+ All inference calls use Server-Sent Events streaming (since v2.3.0). In practice, this means:
+
+ - Tokens arrive as they're generated, keeping the connection alive
+ - If generation takes longer than 55 seconds, you get a partial result instead of a timeout error - the footer shows `⚠ TRUNCATED` when this happens
+ - If the host is off or unreachable, you get a clean "offline" message within 5 seconds instead of hanging
+
+ The 55-second soft timeout exists because the MCP SDK has a hard ~60s client-side timeout. Without streaming, any response that took longer than 60 seconds just vanished. Now you get whatever the model managed to generate before the deadline.
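
A minimal sketch of the SSE parsing this mechanism relies on. This is illustrative only: `parseSSE` is not an export of this package, it just mirrors the `data:` line handling found in `dist/index.js` (skip blank lines and `[DONE]`, accumulate `delta.content`, ignore unparseable chunks):

```javascript
// Sketch of OpenAI-style SSE chunk parsing (hypothetical helper, not the package API).
function parseSSE(raw) {
  let content = '';
  let finishReason = '';
  for (const line of raw.split('\n')) {
    const trimmed = line.trim();
    // Skip blanks, the terminator, and anything that isn't a data line
    if (!trimmed || trimmed === 'data: [DONE]' || !trimmed.startsWith('data: ')) continue;
    try {
      const json = JSON.parse(trimmed.slice(6)); // strip the "data: " prefix
      const choice = json.choices?.[0];
      if (choice?.delta?.content) content += choice.delta.content;
      if (choice?.finish_reason) finishReason = choice.finish_reason;
    } catch {
      // Ignore partial or unparseable chunks, as the real implementation does
    }
  }
  return { content, finishReason };
}

const sample = [
  'data: {"choices":[{"delta":{"content":"Hello"}}]}',
  'data: {"choices":[{"delta":{"content":", world"}}]}',
  'data: {"choices":[{"delta":{},"finish_reason":"stop"}]}',
  'data: [DONE]',
].join('\n');
console.log(parseSSE(sample)); // { content: 'Hello, world', finishReason: 'stop' }
```

The real server does the same parsing incrementally, keeping any incomplete trailing line in a buffer between reads.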
+
+ ## Configuration
+
+ | Variable | Default | What it does |
+ |----------|---------|-------------|
+ | `LM_STUDIO_URL` | `http://localhost:1234` | Base URL of the OpenAI-compatible API |
+ | `LM_STUDIO_MODEL` | *(auto-detect)* | Model identifier - leave blank to use whatever's loaded |
+ | `LM_STUDIO_PASSWORD` | *(none)* | Bearer token for authenticated endpoints |
+ | `LM_CONTEXT_WINDOW` | `100000` | Fallback context window if the API doesn't report it |
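
For illustration, here's how the password setting translates into request headers, mirroring the `apiHeaders` helper in `dist/index.js`. The env-var names above are the real ones; the standalone `buildHeaders` function here is a hypothetical sketch, not part of the package's API:

```javascript
// Mirrors apiHeaders() from dist/index.js: JSON content type always,
// Authorization only when a bearer token (LM_STUDIO_PASSWORD) is configured.
function buildHeaders(password) {
  const headers = { 'Content-Type': 'application/json' };
  if (password) headers['Authorization'] = `Bearer ${password}`;
  return headers;
}

console.log(buildHeaders(''));       // no Authorization header
console.log(buildHeaders('s3cret')); // adds "Authorization: Bearer s3cret"
```

So against a plain local LM Studio you can leave `LM_STUDIO_PASSWORD` unset; only authenticated remote endpoints need it.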
+
+ ## Getting good results

- ## Performance guide
+ **Send complete code.** Local models hallucinate details when you give them truncated input. If a file's too large, send the relevant function - not a snippet with `...` in the middle.

- At typical local LLM speeds (~3-4 tokens/second on consumer hardware):
+ **Be explicit about output format.** "Return a JSON array" or "respond in bullet points" - don't leave it open-ended. Smaller models especially need this.

- | max_tokens | Response time | Best for |
- |------------|--------------|----------|
- | 150 | ~45 seconds | Quick questions, classifications |
- | 300 | ~100 seconds | Code explanations, summaries |
- | 500 | ~170 seconds | Code review, generation |
+ **One call at a time.** If your LLM server runs a single model, parallel calls queue up and stack timeouts. Send them sequentially.

- Set `max_tokens` to match your expected output — lower values mean faster responses.
+ **Match max_tokens to expected output.** 200 for quick answers, 500 for explanations, 2048 for code generation. Lower values mean faster responses.
+
+ **Set a specific persona.** "Expert Rust developer who cares about memory safety" gets noticeably better results than "helpful assistant" (or no persona at all).

  ## Compatible endpoints

- | Provider | URL | Notes |
- |----------|-----|-------|
+ Works with anything that speaks the OpenAI `/v1/chat/completions` API:
+
+ | What | URL | Notes |
+ |------|-----|-------|
  | [LM Studio](https://lmstudio.ai) | `http://localhost:1234` | Default, zero config |
- | [Ollama](https://ollama.com) | `http://localhost:11434` | Use OpenAI-compatible mode |
+ | [Ollama](https://ollama.com) | `http://localhost:11434` | Set `LM_STUDIO_URL` |
  | [vLLM](https://docs.vllm.ai) | `http://localhost:8000` | Native OpenAI API |
  | [llama.cpp](https://github.com/ggml-org/llama.cpp) | `http://localhost:8080` | Server mode |
- | Remote / cloud APIs | Any URL | Set `LM_STUDIO_URL` + `LM_STUDIO_PASSWORD` |
+ | Any OpenAI-compatible API | Any URL | Set URL + password |

  ## Development

@@ -148,12 +237,6 @@ npm install
  npm run build
  ```

- Run the test suite against a live LLM server:
-
- ```bash
- node test.mjs
- ```
-
- ## License
+ ## Licence

  MIT
package/dist/index.d.ts CHANGED
@@ -3,6 +3,6 @@
  * Houtini LM — MCP Server for Local LLMs via OpenAI-compatible API
  *
  * Connects to LM Studio (or any OpenAI-compatible endpoint) and exposes
- * chat, custom prompts, and model info as MCP tools.
+ * chat, custom prompts, code tasks, and model discovery as MCP tools.
  */
  export {};
package/dist/index.js CHANGED
@@ -3,7 +3,7 @@
  * Houtini LM — MCP Server for Local LLMs via OpenAI-compatible API
  *
  * Connects to LM Studio (or any OpenAI-compatible endpoint) and exposes
- * chat, custom prompts, and model info as MCP tools.
+ * chat, custom prompts, code tasks, and model discovery as MCP tools.
  */
  import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
@@ -11,117 +11,296 @@ import { CallToolRequestSchema, ListToolsRequestSchema, } from '@modelcontextpro
  const LM_BASE_URL = process.env.LM_STUDIO_URL || 'http://localhost:1234';
  const LM_MODEL = process.env.LM_STUDIO_MODEL || '';
  const LM_PASSWORD = process.env.LM_STUDIO_PASSWORD || '';
- const DEFAULT_MAX_TOKENS = 4096;
+ const DEFAULT_MAX_TOKENS = 2048;
  const DEFAULT_TEMPERATURE = 0.3;
+ const CONNECT_TIMEOUT_MS = 5000;
+ const INFERENCE_CONNECT_TIMEOUT_MS = 30_000; // generous connect timeout for inference
+ const SOFT_TIMEOUT_MS = 55_000; // return partial results before MCP SDK ~60s timeout
+ const READ_CHUNK_TIMEOUT_MS = 30_000; // max wait for a single SSE chunk
+ const FALLBACK_CONTEXT_LENGTH = parseInt(process.env.LM_CONTEXT_WINDOW || '100000', 10);
  function apiHeaders() {
  const h = { 'Content-Type': 'application/json' };
  if (LM_PASSWORD)
  h['Authorization'] = `Bearer ${LM_PASSWORD}`;
  return h;
  }
- async function chatCompletion(messages, options = {}) {
+ /**
+ * Fetch with a connect timeout so Claude doesn't hang when the host is offline.
+ */
+ async function fetchWithTimeout(url, options, timeoutMs = CONNECT_TIMEOUT_MS) {
+ const controller = new AbortController();
+ const timer = setTimeout(() => controller.abort(), timeoutMs);
+ try {
+ return await fetch(url, { ...options, signal: controller.signal });
+ }
+ finally {
+ clearTimeout(timer);
+ }
+ }
+ /**
+ * Read from a stream with a per-chunk timeout.
+ * Prevents hanging forever if the LLM stalls mid-generation.
+ */
+ async function timedRead(reader, timeoutMs) {
+ let timer;
+ const timeout = new Promise((resolve) => {
+ timer = setTimeout(() => resolve('timeout'), timeoutMs);
+ });
+ try {
+ return await Promise.race([reader.read(), timeout]);
+ }
+ finally {
+ clearTimeout(timer);
+ }
+ }
+ /**
+ * Streaming chat completion with soft timeout.
+ *
+ * Uses SSE streaming (`stream: true`) so tokens arrive incrementally.
+ * If we approach the MCP SDK's ~60s timeout (soft limit at 55s), we
+ * return whatever content we have so far with `truncated: true`.
+ * This means large code reviews return partial results instead of nothing.
+ */
+ async function chatCompletionStreaming(messages, options = {}) {
  const body = {
  messages,
  temperature: options.temperature ?? DEFAULT_TEMPERATURE,
  max_tokens: options.maxTokens ?? DEFAULT_MAX_TOKENS,
- stream: false,
+ stream: true,
  };
  if (options.model || LM_MODEL) {
  body.model = options.model || LM_MODEL;
  }
- const res = await fetch(`${LM_BASE_URL}/v1/chat/completions`, {
- method: 'POST',
- headers: apiHeaders(),
- body: JSON.stringify(body),
- });
+ const startTime = Date.now();
+ const res = await fetchWithTimeout(`${LM_BASE_URL}/v1/chat/completions`, { method: 'POST', headers: apiHeaders(), body: JSON.stringify(body) }, INFERENCE_CONNECT_TIMEOUT_MS);
  if (!res.ok) {
  const text = await res.text().catch(() => '');
  throw new Error(`LM Studio API error ${res.status}: ${text}`);
  }
- return res.json();
+ if (!res.body) {
+ throw new Error('Response body is null — streaming not supported by endpoint');
+ }
+ const reader = res.body.getReader();
+ const decoder = new TextDecoder();
+ let content = '';
+ let model = '';
+ let usage;
+ let finishReason = '';
+ let truncated = false;
+ let buffer = '';
+ try {
+ while (true) {
+ // Check soft timeout before each read
+ const elapsed = Date.now() - startTime;
+ if (elapsed > SOFT_TIMEOUT_MS) {
+ truncated = true;
+ process.stderr.write(`[houtini-lm] Soft timeout at ${elapsed}ms, returning ${content.length} chars of partial content\n`);
+ break;
+ }
+ // Read with per-chunk timeout (handles stalled generation)
+ const remaining = SOFT_TIMEOUT_MS - elapsed;
+ const chunkTimeout = Math.min(READ_CHUNK_TIMEOUT_MS, remaining);
+ const result = await timedRead(reader, chunkTimeout);
+ if (result === 'timeout') {
+ truncated = true;
+ process.stderr.write(`[houtini-lm] Chunk read timeout, returning ${content.length} chars of partial content\n`);
+ break;
+ }
+ if (result.done)
+ break;
+ buffer += decoder.decode(result.value, { stream: true });
+ // Parse SSE lines
+ const lines = buffer.split('\n');
+ buffer = lines.pop() || ''; // Keep incomplete line in buffer
+ for (const line of lines) {
+ const trimmed = line.trim();
+ if (!trimmed || trimmed === 'data: [DONE]')
+ continue;
+ if (!trimmed.startsWith('data: '))
+ continue;
+ try {
+ const json = JSON.parse(trimmed.slice(6));
+ if (json.model)
+ model = json.model;
+ const delta = json.choices?.[0]?.delta;
+ if (delta?.content)
+ content += delta.content;
+ const reason = json.choices?.[0]?.finish_reason;
+ if (reason)
+ finishReason = reason;
+ // Some endpoints include usage in the final streaming chunk
+ if (json.usage)
+ usage = json.usage;
+ }
+ catch {
+ // Skip unparseable chunks (partial JSON, comments, etc.)
+ }
+ }
+ }
+ }
+ finally {
+ // Release the reader — don't await cancel() as it can hang
+ reader.releaseLock();
+ }
+ return { content, model, usage, finishReason, truncated };
  }
- async function listModels() {
- const res = await fetch(`${LM_BASE_URL}/v1/models`, { headers: apiHeaders() });
+ async function listModelsRaw() {
+ const res = await fetchWithTimeout(`${LM_BASE_URL}/v1/models`, { headers: apiHeaders() });
  if (!res.ok)
  throw new Error(`Failed to list models: ${res.status}`);
  const data = (await res.json());
- return data.data.map((m) => m.id);
+ return data.data;
+ }
+ function getContextLength(model) {
+ // LM Studio uses context_length, vLLM uses max_model_len, fall back to env/100k
+ return model.context_length ?? model.max_model_len ?? FALLBACK_CONTEXT_LENGTH;
+ }
+ /**
+ * Format a footer line for streaming results showing model, usage, and truncation status.
+ */
+ function formatFooter(resp, extra) {
+ const parts = [];
+ if (resp.model)
+ parts.push(`Model: ${resp.model}`);
+ if (resp.usage)
+ parts.push(`Tokens: ${resp.usage.prompt_tokens}→${resp.usage.completion_tokens}`);
+ if (extra)
+ parts.push(extra);
+ if (resp.truncated)
+ parts.push('⚠ TRUNCATED (soft timeout — partial result)');
+ return parts.length > 0 ? `\n\n---\n${parts.join(' | ')}` : '';
  }
  // ── MCP Tool definitions ─────────────────────────────────────────────
  const TOOLS = [
  {
  name: 'chat',
- description: 'Delegate a bounded task to the local LLM (Qwen3-Coder, ~3-4 tok/s). ' +
- 'Best for: quick code explanation, pattern recognition, boilerplate generation, knowledge questions. ' +
- 'Use when you can work in parallel — fire this off and continue your own work. ' +
- 'RULES: (1) Always send COMPLETE code, never truncated — the local LLM WILL hallucinate details for missing code. ' +
- '(2) Set max_tokens to match expected output: 150 for quick answers (~45s), 300 for explanations (~100s), 500 for code generation (~170s). ' +
- '(3) Be explicit about output format in your message ("respond in 3 bullets", "return only the function"). ' +
- 'DO NOT use for: multi-step reasoning, creative writing, tasks needing >500 token output, or anything requiring tool use (use lm-taskrunner for tool-augmented tasks via mcpo).',
+ description: 'Send a task to a local LLM running on a separate machine. This is a FREE, parallel worker — ' +
+ 'use it to offload bounded work while you continue doing other things. The local LLM runs independently ' +
+ 'and does not consume your tokens or rate limits.\n\n' +
+ 'WHEN TO USE (delegate generously — it costs nothing):\n' +
+ '• Explain or summarise code/docs you just read\n' +
+ '• Generate boilerplate, test stubs, type definitions, mock data\n' +
+ '• Answer factual questions about languages, frameworks, APIs\n' +
+ '• Draft commit messages, PR descriptions, comments\n' +
+ '• Translate or reformat content (JSON↔YAML, snake_case↔camelCase)\n' +
+ '• Brainstorm approaches before you commit to one\n' +
+ '• Any self-contained subtask that does not need tool access\n\n' +
+ 'RULES:\n' +
+ '(1) Always send COMPLETE code/context — never truncate, the local LLM cannot access files.\n' +
+ '(2) Be explicit about output format ("respond as a JSON array", "return only the function").\n' +
+ '(3) Call discover first if you are unsure whether the local LLM is online.\n\n' +
+ 'The local model, context window, and speed vary — call the discover tool to check what is loaded.',
  inputSchema: {
  type: 'object',
  properties: {
- message: { type: 'string', description: 'The task. Be specific about expected output format and length. Include COMPLETE code — never truncate.' },
- system: { type: 'string', description: 'Persona for the local LLM. Be specific: "Senior TypeScript dev" not "helpful assistant". Short personas (under 30 words) get best results.' },
- temperature: { type: 'number', description: '0.1 for factual/code tasks, 0.3 for analysis (default), 0.7 for creative suggestions. Stay under 0.5 for code.' },
- max_tokens: { type: 'number', description: 'Cap this to match expected output. 150=quick answer, 300=explanation, 500=code generation. Lower = faster. Default 4096 is almost always too high.' },
+ message: {
+ type: 'string',
+ description: 'The task. Be specific about expected output format. Include COMPLETE code/context — never truncate.',
+ },
+ system: {
+ type: 'string',
+ description: 'Persona for the local LLM. Be specific: "Senior TypeScript dev" not "helpful assistant".',
+ },
+ temperature: {
+ type: 'number',
+ description: '0.1 for factual/code, 0.3 for analysis (default), 0.7 for creative. Stay under 0.5 for code.',
+ },
+ max_tokens: {
+ type: 'number',
+ description: 'Max response tokens. Default 2048. Use higher for code generation, lower for quick answers.',
+ },
  },
  required: ['message'],
  },
  },
  {
  name: 'custom_prompt',
- description: 'Structured analysis on the local LLM with explicit system/context/instruction separation. ' +
- 'This 3-part format gets the best results from local models — the separation prevents context bleed. ' +
- 'System sets persona (be specific: "Senior TypeScript dev reviewing for security bugs"). ' +
- 'Context provides COMPLETE data (full source file, full error log — never truncated). ' +
- 'Instruction states exactly what to produce (under 50 words for best results). ' +
- 'Best for: code review, comparison, refactoring suggestions, structured analysis. ' +
- 'Expect 30-180s response time depending on max_tokens.',
+ description: 'Structured analysis via the local LLM with explicit system/context/instruction separation. ' +
+ 'This 3-part format prevents context bleed and gets the best results from local models.\n\n' +
+ 'WHEN TO USE:\n' +
+ '• Code review — paste full source, ask for bugs/improvements\n' +
+ '• Comparison — paste two implementations, ask which is better and why\n' +
+ '• Refactoring suggestions — paste code, ask for a cleaner version\n' +
+ '• Content analysis — paste text, ask for structure/tone/issues\n' +
+ '• Any task where separating context from instruction improves clarity\n\n' +
+ 'System sets persona. Context provides COMPLETE data (never truncate). ' +
+ 'Instruction states exactly what to produce.',
  inputSchema: {
  type: 'object',
  properties: {
- system: { type: 'string', description: 'Persona. Be specific and under 30 words. Example: "Expert Node.js developer focused on error handling and edge cases."' },
- context: { type: 'string', description: 'The COMPLETE data to analyse. Full source code, full logs, full text. NEVER truncate — the local LLM fills gaps with plausible hallucinations.' },
- instruction: { type: 'string', description: 'What to produce. Under 50 words. Specify format: "List 3 bugs as bullet points" or "Return a JSON array of {line, issue, fix}".' },
- temperature: { type: 'number', description: '0.1 for bugs/review, 0.3 for analysis (default), 0.5 for suggestions.' },
- max_tokens: { type: 'number', description: 'Match to expected output. 200 for bullets, 400 for detailed review, 600 for code generation.' },
+ system: {
+ type: 'string',
+ description: 'Persona. Be specific: "Expert Node.js developer focused on error handling and edge cases."',
+ },
+ context: {
+ type: 'string',
+ description: 'The COMPLETE data to analyse. Full source code, full logs, full text. NEVER truncate.',
+ },
+ instruction: {
+ type: 'string',
+ description: 'What to produce. Specify format: "List 3 bugs as bullet points" or "Return a JSON array of {line, issue, fix}".',
+ },
+ temperature: {
+ type: 'number',
+ description: '0.1 for bugs/review, 0.3 for analysis (default), 0.5 for suggestions.',
+ },
+ max_tokens: {
+ type: 'number',
+ description: 'Max response tokens. Default 2048.',
+ },
  },
  required: ['instruction'],
  },
  },
94
256
  name: 'code_task',
95
- description: 'Purpose-built for code analysis tasks. Wraps the local LLM with an optimised code-review system prompt. ' +
96
- 'Provide COMPLETE source code and a specific task — returns analysis in ~30-180s. ' +
97
- 'Ideal for parallel execution: delegate to local LLM while you handle other work. ' +
98
- 'The local LLM excels at: explaining code, finding common bugs, suggesting improvements, comparing patterns, generating boilerplate. ' +
99
- 'It struggles with: subtle/adversarial bugs, multi-file reasoning, design tasks requiring integration. ' +
100
- 'Output capped at 500 tokens by default (override with max_tokens).',
257
+ description: 'Send a code analysis task to the local LLM. Wraps the request with an optimised code-review system prompt.\n\n' +
258
+ 'WHEN TO USE:\n' +
259
+ ' Explain what a function/class does\n' +
260
+ ' Find bugs or suggest improvements\n' +
261
+ ' Generate unit tests or type definitions for existing code\n' +
262
+ ' Add error handling, logging, or validation\n' +
263
+ '• Convert between languages or patterns\n\n' +
264
+ 'Provide COMPLETE source code (the local LLM cannot read files) and a specific task.',
101
265
  inputSchema: {
102
266
  type: 'object',
103
267
  properties: {
104
- code: { type: 'string', description: 'COMPLETE source code. Never truncate. Include imports and full function bodies.' },
105
- task: { type: 'string', description: 'What to do. Be specific and concise: "Find bugs", "Explain this function", "Add error handling to fetchData", "Compare these two approaches".' },
106
- language: { type: 'string', description: 'Programming language for context: "typescript", "python", "rust", etc.' },
107
- max_tokens: { type: 'number', description: 'Expected output size. Default 500. Use 200 for quick answers, 800 for code generation.' },
268
+ code: {
269
+ type: 'string',
270
+ description: 'COMPLETE source code. Never truncate. Include imports and full function bodies.',
271
+ },
272
+ task: {
273
+ type: 'string',
274
+ description: 'What to do: "Find bugs", "Explain this", "Add error handling to fetchData", "Write tests".',
275
+ },
276
+ language: {
277
+ type: 'string',
278
+ description: 'Programming language: "typescript", "python", "rust", etc.',
279
+ },
280
+ max_tokens: {
281
+ type: 'number',
282
+ description: 'Max response tokens. Default 2048.',
283
+ },
108
284
  },
109
285
  required: ['code', 'task'],
110
286
  },
111
287
  },
112
288
  {
113
- name: 'list_models',
114
- description: 'List models currently loaded in LM Studio. Use to verify which model will handle delegated tasks.',
289
+ name: 'discover',
290
+ description: 'Check whether the local LLM is online and what model is loaded. Returns model name, context window size, ' +
291
+ 'and response latency. Call this if you are unsure whether the local LLM is available before delegating work. ' +
292
+ 'Fast — typically responds in under 1 second, or returns an offline status within 5 seconds if the host is unreachable.',
115
293
  inputSchema: { type: 'object', properties: {} },
116
294
  },
117
295
  {
118
- name: 'health_check',
119
- description: 'Check connectivity and latency to the local LM Studio instance. Run before delegating time-sensitive tasks.',
296
+ name: 'list_models',
297
+ description: 'List all models currently loaded in the local LLM server, with context window sizes. ' +
298
+ 'Use discover instead for a quick availability check.',
120
299
  inputSchema: { type: 'object', properties: {} },
121
300
  },
122
301
  ];
123
302
  // ── MCP Server ───────────────────────────────────────────────────────
124
- const server = new Server({ name: 'houtini-lm', version: '2.1.0' }, { capabilities: { tools: {} } });
303
+ const server = new Server({ name: 'houtini-lm', version: '2.3.0' }, { capabilities: { tools: {} } });
125
304
  server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));
126
305
  server.setRequestHandler(CallToolRequestSchema, async (request) => {
127
306
  const { name, arguments: args } = request.params;
@@ -133,15 +312,12 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
       if (system)
         messages.push({ role: 'system', content: system });
       messages.push({ role: 'user', content: message });
-      const resp = await chatCompletion(messages, {
+      const resp = await chatCompletionStreaming(messages, {
         temperature,
         maxTokens: max_tokens,
       });
-      const reply = resp.choices[0]?.message?.content ?? '';
-      const usage = resp.usage
-        ? `\n\n---\nModel: ${resp.model} | Tokens: ${resp.usage.prompt_tokens}→${resp.usage.completion_tokens}`
-        : '';
-      return { content: [{ type: 'text', text: reply + usage }] };
+      const footer = formatFooter(resp);
+      return { content: [{ type: 'text', text: resp.content + footer }] };
     }
     case 'custom_prompt': {
       const { system, context, instruction, temperature, max_tokens } = args;
@@ -152,12 +328,13 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
       if (context)
         userContent = `Context:\n${context}\n\nInstruction:\n${instruction}`;
       messages.push({ role: 'user', content: userContent });
-      const resp = await chatCompletion(messages, {
+      const resp = await chatCompletionStreaming(messages, {
         temperature,
         maxTokens: max_tokens,
       });
+      const footer = formatFooter(resp);
       return {
-        content: [{ type: 'text', text: resp.choices[0]?.message?.content ?? '' }],
+        content: [{ type: 'text', text: resp.content + footer }],
       };
     }
     case 'code_task': {
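The handlers in these hunks replace their inline usage-footer code with a `formatFooter` helper whose definition falls outside the diff's hunks. A minimal sketch of what such a helper might look like, inferred from its call sites (`formatFooter(resp)` and `formatFooter(codeResp, lang)`) and from the removed inline footer code — the `truncated` field is an assumption, not something this excerpt confirms:

```javascript
// Hypothetical sketch of formatFooter, reconstructed from its call
// sites and the removed inline footer code. The `truncated` field is
// an assumption; this is not the package's actual implementation.
function formatFooter(resp, lang) {
  const parts = [];
  if (resp.model) parts.push(`Model: ${resp.model}`);
  if (resp.usage) parts.push(`Tokens: ${resp.usage.prompt_tokens}→${resp.usage.completion_tokens}`);
  if (lang) parts.push(lang);
  if (resp.truncated) parts.push('(truncated)');
  return parts.length ? `\n\n---\n${parts.join(' | ')}` : '';
}
```

Centralising the footer this way is what lets all three handlers append the same metadata line without repeating the template string.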
@@ -173,40 +350,70 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
           content: `Task: ${task}\n\n\`\`\`${lang}\n${code}\n\`\`\``,
         },
       ];
-      const codeResp = await chatCompletion(codeMessages, {
+      const codeResp = await chatCompletionStreaming(codeMessages, {
         temperature: 0.2,
-        maxTokens: codeMaxTokens ?? 500,
+        maxTokens: codeMaxTokens ?? DEFAULT_MAX_TOKENS,
       });
-      const codeReply = codeResp.choices[0]?.message?.content ?? '';
-      const codeUsage = codeResp.usage
-        ? `\n\n---\nModel: ${codeResp.model} | Tokens: ${codeResp.usage.prompt_tokens}→${codeResp.usage.completion_tokens} | ${lang}`
-        : '';
-      return { content: [{ type: 'text', text: codeReply + codeUsage }] };
+      const codeFooter = formatFooter(codeResp, lang);
+      return { content: [{ type: 'text', text: codeResp.content + codeFooter }] };
     }
-    case 'list_models': {
-      const models = await listModels();
+    case 'discover': {
+      const start = Date.now();
+      let models;
+      try {
+        models = await listModelsRaw();
+      }
+      catch (err) {
+        const ms = Date.now() - start;
+        const reason = err instanceof Error && err.name === 'AbortError'
+          ? `Host unreachable (timed out after ${ms}ms)`
+          : `Connection failed: ${err instanceof Error ? err.message : String(err)}`;
+        return {
+          content: [{
+            type: 'text',
+            text: `Status: OFFLINE\nEndpoint: ${LM_BASE_URL}\n${reason}\n\nThe local LLM is not available right now. Do not attempt to delegate tasks to it.`,
+          }],
+        };
+      }
+      const ms = Date.now() - start;
+      if (models.length === 0) {
+        return {
+          content: [{
+            type: 'text',
+            text: `Status: ONLINE (no model loaded)\nEndpoint: ${LM_BASE_URL}\nLatency: ${ms}ms\n\nThe server is running but no model is loaded. Ask the user to load a model in LM Studio.`,
+          }],
+        };
+      }
+      const lines = models.map((m) => {
+        const ctx = getContextLength(m);
+        return `  • ${m.id} (context: ${ctx.toLocaleString()} tokens)`;
+      });
+      const primary = models[0];
+      const ctx = getContextLength(primary);
       return {
-        content: [
-          {
+        content: [{
           type: 'text',
-          text: models.length
-            ? `Loaded models:\n${models.map((m) => `  • ${m}`).join('\n')}`
-            : 'No models currently loaded.',
-          },
-        ],
+          text: `Status: ONLINE\n` +
+            `Endpoint: ${LM_BASE_URL}\n` +
+            `Latency: ${ms}ms\n` +
+            `Model: ${primary.id}\n` +
+            `Context window: ${ctx.toLocaleString()} tokens\n` +
+            `\nLoaded models:\n${lines.join('\n')}\n\n` +
+            `The local LLM is available. You can delegate tasks using chat, custom_prompt, or code_task.`,
+        }],
       };
     }
-    case 'health_check': {
-      const start = Date.now();
-      const models = await listModels();
-      const ms = Date.now() - start;
+    case 'list_models': {
+      const models = await listModelsRaw();
+      if (!models.length) {
+        return { content: [{ type: 'text', text: 'No models currently loaded.' }] };
+      }
+      const lines = models.map((m) => {
+        const ctx = getContextLength(m);
+        return `  • ${m.id}${ctx ? ` (context: ${ctx.toLocaleString()} tokens)` : ''}`;
+      });
       return {
-        content: [
-          {
-            type: 'text',
-            text: `Connected to ${LM_BASE_URL} (${ms}ms)\nAuth: ${LM_PASSWORD ? 'enabled' : 'none'}\nModels loaded: ${models.length}${models.length ? '\n' + models.join(', ') : ''}`,
-          },
-        ],
+        content: [{ type: 'text', text: `Loaded models:\n${lines.join('\n')}` }],
       };
     }
     default:
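The switch from `chatCompletion` to `chatCompletionStreaming` across these hunks implies the server now reads OpenAI-style server-sent events and accumulates the `delta` chunks itself. A simplified, hypothetical sketch of that accumulation step — it takes the whole stream text at once rather than reading incrementally, so it is not the package's actual implementation:

```javascript
// Simplified sketch of OpenAI-style SSE chunk accumulation, the
// protocol chatCompletionStreaming consumes. A real reader buffers
// partial lines across network chunks; this processes complete text.
function parseSseContent(streamText) {
  let content = '';
  let model = '';
  for (const line of streamText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    try {
      const json = JSON.parse(payload);
      if (json.model) model = json.model;
      const delta = json.choices?.[0]?.delta?.content;
      if (delta) content += delta;
    } catch {
      // Malformed or partial line; a real incremental reader keeps it
      // in a buffer and retries once the rest of the chunk arrives.
    }
  }
  return { content, model };
}
```

Streaming this way lets the server enforce per-read and total time limits mid-response instead of blocking until the model finishes.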
package/dist/index.js.map CHANGED
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@houtini/lm",
-  "version": "2.1.0",
+  "version": "2.3.0",
   "type": "module",
   "description": "MCP server for local LLMs — connects to LM Studio or any OpenAI-compatible endpoint",
   "mcpName": "io.github.houtini-ai/lm",
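The `discover` tool's description promises an offline verdict within roughly 5 seconds when the host is unreachable. One generic way to get that behaviour is to race the request against a timer. This is a sketch of the pattern only, not the package's code; the 5000 ms default simply mirrors the tool description:

```javascript
// Generic timeout wrapper: resolves with the promise's value, or
// rejects if timeoutMs elapses first — the kind of guard that lets a
// health check report "offline" instead of hanging forever. A sketch,
// not the package's actual implementation.
function withTimeout(promise, timeoutMs = 5000, label = 'operation') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
  });
  // Clear the timer either way so the process is free to exit.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

The same idea applied to `fetch` usually goes through an `AbortController` whose `abort()` is scheduled with `setTimeout`, which would also explain the `AbortError` the `discover` handler checks for.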