@aigne/lmstudio 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,18 @@
+ # Changelog
+
+ ## 1.0.0 (2025-08-28)
+
+
+ ### Features
+
+ * add lmstudio model adapter ([#406](https://github.com/AIGNE-io/aigne-framework/issues/406)) ([6610993](https://github.com/AIGNE-io/aigne-framework/commit/6610993cb500b1fac2bf5d17f40f351d4c897bd7))
+
+
+ ### Dependencies
+
+ * The following workspace dependencies were updated
+   * dependencies
+     * @aigne/openai bumped to 0.13.2
+   * devDependencies
+     * @aigne/core bumped to 1.57.0
+     * @aigne/test-utils bumped to 0.5.38
package/LICENSE.md ADDED
@@ -0,0 +1,93 @@
+ Elastic License 2.0
+
+ URL: https://www.elastic.co/licensing/elastic-license
+
+ ## Acceptance
+
+ By using the software, you agree to all of the terms and conditions below.
+
+ ## Copyright License
+
+ The licensor grants you a non-exclusive, royalty-free, worldwide,
+ non-sublicensable, non-transferable license to use, copy, distribute, make
+ available, and prepare derivative works of the software, in each case subject to
+ the limitations and conditions below.
+
+ ## Limitations
+
+ You may not provide the software to third parties as a hosted or managed
+ service, where the service provides users with access to any substantial set of
+ the features or functionality of the software.
+
+ You may not move, change, disable, or circumvent the license key functionality
+ in the software, and you may not remove or obscure any functionality in the
+ software that is protected by the license key.
+
+ You may not alter, remove, or obscure any licensing, copyright, or other notices
+ of the licensor in the software. Any use of the licensor’s trademarks is subject
+ to applicable law.
+
+ ## Patents
+
+ The licensor grants you a license, under any patent claims the licensor can
+ license, or becomes able to license, to make, have made, use, sell, offer for
+ sale, import and have imported the software, in each case subject to the
+ limitations and conditions in this license. This license does not cover any
+ patent claims that you cause to be infringed by modifications or additions to
+ the software. If you or your company make any written claim that the software
+ infringes or contributes to infringement of any patent, your patent license for
+ the software granted under these terms ends immediately. If your company makes
+ such a claim, your patent license ends immediately for work on behalf of your
+ company.
+
+ ## Notices
+
+ You must ensure that anyone who gets a copy of any part of the software from you
+ also gets a copy of these terms.
+
+ If you modify the software, you must include in any modified copies of the
+ software prominent notices stating that you have modified the software.
+
+ ## No Other Rights
+
+ These terms do not imply any licenses other than those expressly granted in
+ these terms.
+
+ ## Termination
+
+ If you use the software in violation of these terms, such use is not licensed,
+ and your licenses will automatically terminate. If the licensor provides you
+ with a notice of your violation, and you cease all violation of this license no
+ later than 30 days after you receive that notice, your licenses will be
+ reinstated retroactively. However, if you violate these terms after such
+ reinstatement, any additional violation of these terms will cause your licenses
+ to terminate automatically and permanently.
+
+ ## No Liability
+
+ *As far as the law allows, the software comes as is, without any warranty or
+ condition, and the licensor will not be liable to you for any damages arising
+ out of these terms or the use or nature of the software, under any kind of
+ legal claim.*
+
+ ## Definitions
+
+ The **licensor** is the entity offering these terms, and the **software** is the
+ software the licensor makes available under these terms, including any portion
+ of it.
+
+ **you** refers to the individual or entity agreeing to these terms.
+
+ **your company** is any legal entity, sole proprietorship, or other kind of
+ organization that you work for, plus all organizations that have control over,
+ are under the control of, or are under common control with that
+ organization. **control** means ownership of substantially all the assets of an
+ entity, or the power to direct its management and policies by vote, contract, or
+ otherwise. Control can be direct or indirect.
+
+ **your licenses** are all the licenses granted to you for the software under
+ these terms.
+
+ **use** means anything you do with the software requiring one of your licenses.
+
+ **trademark** means trademarks, service marks, and similar rights.
package/README.md ADDED
@@ -0,0 +1,235 @@
+ # @aigne/lmstudio
+
+ AIGNE LM Studio model adapter for integrating with locally hosted AI models via LM Studio.
+
+ ## Overview
+
+ This model adapter integrates with LM Studio's OpenAI-compatible API, allowing you to run local large language models (LLMs) through the AIGNE framework. LM Studio itself provides a user-friendly interface for downloading and running local models.
+
+ ## Installation
+
+ ```bash
+ npm install @aigne/lmstudio
+ # or
+ pnpm add @aigne/lmstudio
+ # or
+ yarn add @aigne/lmstudio
+ ```
+
+ ## Prerequisites
+
+ 1. **Install LM Studio**: Download and install LM Studio from [https://lmstudio.ai/](https://lmstudio.ai/)
+ 2. **Download Models**: Use LM Studio's interface to download local models (e.g., Llama 3.2, Mistral)
+ 3. **Start Local Server**: Launch the local server from LM Studio's "Local Server" tab
+
+ ## Quick Start
+
+ ```typescript
+ import { LMStudioChatModel } from "@aigne/lmstudio";
+
+ // Create a new LM Studio chat model
+ const model = new LMStudioChatModel({
+   baseURL: "http://localhost:1234/v1", // Default LM Studio server URL
+   model: "llama-3.2-3b-instruct", // Model name as shown in LM Studio
+   modelOptions: {
+     temperature: 0.7,
+     maxTokens: 2048,
+   },
+ });
+
+ // Basic usage
+ const response = await model.invoke({
+   messages: [
+     { role: "user", content: "What is the capital of France?" },
+   ],
+ });
+
+ console.log(response.text); // "The capital of France is Paris."
+ ```
+
+ ## Configuration
+
+ ### Environment Variables
+
+ You can configure the LM Studio connection using environment variables:
+
+ ```bash
+ # LM Studio server URL (default: http://localhost:1234/v1)
+ LM_STUDIO_BASE_URL=http://localhost:1234/v1
+
+ # API key (not required for local LM Studio; defaults to "not-required")
+ # Only set this if you have configured authentication in LM Studio
+ # LM_STUDIO_API_KEY=your-key-if-needed
+ ```
+
+ **Note:** LM Studio typically runs locally without authentication, so the API key defaults to the placeholder value "not-required".
+
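+ With these variables set (or when the server is at the default address), the constructor arguments can be omitted entirely; the adapter resolves the base URL from `LM_STUDIO_BASE_URL` and falls back to `http://localhost:1234/v1`, with `llama-3.2-3b-instruct` as the default model. A minimal sketch:
+
+ ```typescript
+ import { LMStudioChatModel } from "@aigne/lmstudio";
+
+ // Zero-config: baseURL comes from LM_STUDIO_BASE_URL when set,
+ // otherwise http://localhost:1234/v1; the model defaults to
+ // "llama-3.2-3b-instruct".
+ const model = new LMStudioChatModel();
+
+ const response = await model.invoke({
+   messages: [{ role: "user", content: "Hello from a zero-config client!" }],
+ });
+ console.log(response.text);
+ ```
+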
+ ### Model Options
+
+ ```typescript
+ const model = new LMStudioChatModel({
+   baseURL: "http://localhost:1234/v1",
+   model: "llama-3.2-3b-instruct",
+   // apiKey: "not-required", // Optional; not required for local LM Studio
+   modelOptions: {
+     temperature: 0.7, // Controls randomness (0.0 to 2.0)
+     maxTokens: 2048, // Maximum tokens in the response
+     topP: 0.9, // Nucleus sampling
+     frequencyPenalty: 0, // Frequency penalty
+     presencePenalty: 0, // Presence penalty
+   },
+ });
+ ```
+
+ ## Features
+
+ ### Streaming Support
+
+ ```typescript
+ const model = new LMStudioChatModel();
+
+ const stream = await model.invoke(
+   {
+     messages: [{ role: "user", content: "Tell me a short story" }],
+   },
+   { streaming: true },
+ );
+
+ for await (const chunk of stream) {
+   if (chunk.type === "delta" && chunk.delta.text) {
+     process.stdout.write(chunk.delta.text.text);
+   }
+ }
+ ```
+
+ ### Tool/Function Calling
+
+ LM Studio supports OpenAI-compatible function calling:
+
+ ```typescript
+ const tools = [
+   {
+     type: "function" as const,
+     function: {
+       name: "get_weather",
+       description: "Get current weather information",
+       parameters: {
+         type: "object",
+         properties: {
+           location: {
+             type: "string",
+             description: "The city and state, e.g. San Francisco, CA",
+           },
+         },
+         required: ["location"],
+       },
+     },
+   },
+ ];
+
+ const response = await model.invoke({
+   messages: [
+     { role: "user", content: "What's the weather like in New York?" },
+   ],
+   tools,
+ });
+
+ if (response.toolCalls?.length) {
+   console.log("Tool calls:", response.toolCalls);
+ }
+ ```
+
+ ### Structured Output
+
+ Generate structured JSON responses:
+
+ ```typescript
+ const responseFormat = {
+   type: "json_schema" as const,
+   json_schema: {
+     name: "weather_response",
+     schema: {
+       type: "object",
+       properties: {
+         location: { type: "string" },
+         temperature: { type: "number" },
+         description: { type: "string" },
+       },
+       required: ["location", "temperature", "description"],
+     },
+   },
+ };
+
+ const response = await model.invoke({
+   messages: [
+     { role: "user", content: "Get weather for Paris in JSON format" },
+   ],
+   responseFormat,
+ });
+
+ console.log(response.json); // Parsed JSON object
+ ```
+
+ ## Supported Models
+
+ LM Studio supports a wide variety of models. Popular choices include:
+
+ - **Llama 3.2** (1B, 3B variants)
+ - **Llama 3.1** (8B, 70B, 405B variants)
+ - **Mistral** (7B, 8x7B variants)
+ - **CodeLlama** (7B, 13B, 34B variants)
+ - **Qwen** (various sizes)
+ - **Phi-3** (mini, small, medium variants)
+
+ The model name should match exactly what appears in your LM Studio interface.
+
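+ To check the exact identifiers your server exposes, you can query LM Studio's OpenAI-compatible `/v1/models` endpoint. A minimal sketch (assumes Node 18+ for the global `fetch` and the default server address):
+
+ ```typescript
+ // List the model identifiers currently available in LM Studio.
+ const res = await fetch("http://localhost:1234/v1/models");
+ const { data } = (await res.json()) as { data: { id: string }[] };
+ console.log(data.map((m) => m.id)); // e.g. ["llama-3.2-3b-instruct", ...]
+ ```
+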
+ ## Error Handling
+
+ ```typescript
+ import { LMStudioChatModel } from "@aigne/lmstudio";
+
+ const model = new LMStudioChatModel();
+
+ try {
+   const response = await model.invoke({
+     messages: [{ role: "user", content: "Hello!" }],
+   });
+   console.log(response.text);
+ } catch (error: any) {
+   if (error.code === "ECONNREFUSED") {
+     console.error("LM Studio server is not running. Please start the local server.");
+   } else {
+     console.error("Error:", error.message);
+   }
+ }
+ ```
+
+ ## Performance Tips
+
+ 1. **Model Selection**: Smaller models (3B-8B parameters) are faster but less capable
+ 2. **Context Length**: Be mindful of context window limits for your chosen model
+ 3. **Temperature**: Use lower values (0.1-0.3) for factual tasks and higher values (0.7-1.0) for creative tasks
+ 4. **Batch Processing**: Process multiple requests concurrently when possible (see the sketch below)
+
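+ For the batch-processing tip above, independent prompts can be fired concurrently with `Promise.all`. A minimal sketch, reusing the `model` from the earlier examples:
+
+ ```typescript
+ const prompts = ["Summarize topic A", "Summarize topic B", "Summarize topic C"];
+
+ // Issue all requests at once; the local server queues or parallelizes
+ // them depending on its configuration.
+ const responses = await Promise.all(
+   prompts.map((content) => model.invoke({ messages: [{ role: "user", content }] })),
+ );
+
+ for (const r of responses) console.log(r.text);
+ ```
+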
+ ## Troubleshooting
+
+ ### Common Issues
+
+ 1. **Connection Refused**: Ensure LM Studio's local server is running
+ 2. **Model Not Found**: Verify the model name matches what's shown in LM Studio
+ 3. **Out of Memory**: Try using a smaller model or reduce the context length
+ 4. **Slow Responses**: Consider using GPU acceleration if available
+
+ ## License
+
+ This project is licensed under the Elastic License 2.0 - see the [LICENSE](../../LICENSE) file for details.
+
+ ## Contributing
+
+ Contributions are welcome! Please read our [contributing guidelines](../../CONTRIBUTING.md) first.
+
+ ## Support
+
+ - [GitHub Issues](https://github.com/AIGNE-io/aigne-framework/issues)
+ - [Documentation](https://www.arcblock.io/docs/aigne-framework)
+ - [LM Studio Documentation](https://lmstudio.ai/docs)
@@ -0,0 +1 @@
+ export * from "./lmstudio-chat-model.js";
@@ -0,0 +1,17 @@
+ "use strict";
+ var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
+     if (k2 === undefined) k2 = k;
+     var desc = Object.getOwnPropertyDescriptor(m, k);
+     if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
+         desc = { enumerable: true, get: function() { return m[k]; } };
+     }
+     Object.defineProperty(o, k2, desc);
+ }) : (function(o, m, k, k2) {
+     if (k2 === undefined) k2 = k;
+     o[k2] = m[k];
+ }));
+ var __exportStar = (this && this.__exportStar) || function(m, exports) {
+     for (var p in m) if (p !== "default" && !Object.prototype.hasOwnProperty.call(exports, p)) __createBinding(exports, m, p);
+ };
+ Object.defineProperty(exports, "__esModule", { value: true });
+ __exportStar(require("./lmstudio-chat-model.js"), exports);
@@ -0,0 +1,24 @@
+ import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
+ /**
+  * Implementation of the ChatModel interface for LM Studio
+  *
+  * This model allows you to run local LLMs through LM Studio,
+  * with an OpenAI-compatible API interface.
+  *
+  * Default model: 'llama-3.2-3b-instruct'
+  *
+  * @example
+  * Here's how to create and use an LM Studio chat model:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model}
+  *
+  * @example
+  * Here's an example with streaming response:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model-streaming}
+  */
+ export declare class LMStudioChatModel extends OpenAIChatModel {
+     constructor(options?: OpenAIChatModelOptions);
+     protected apiKeyEnvName: string;
+     protected apiKeyDefault: string;
+     protected supportsNativeStructuredOutputs: boolean;
+     protected supportsTemperature: boolean;
+ }
@@ -0,0 +1,36 @@
+ "use strict";
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.LMStudioChatModel = void 0;
+ const openai_1 = require("@aigne/openai");
+ const LM_STUDIO_DEFAULT_BASE_URL = "http://localhost:1234/v1";
+ const LM_STUDIO_DEFAULT_CHAT_MODEL = "llama-3.2-3b-instruct";
+ /**
+  * Implementation of the ChatModel interface for LM Studio
+  *
+  * This model allows you to run local LLMs through LM Studio,
+  * with an OpenAI-compatible API interface.
+  *
+  * Default model: 'llama-3.2-3b-instruct'
+  *
+  * @example
+  * Here's how to create and use an LM Studio chat model:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model}
+  *
+  * @example
+  * Here's an example with streaming response:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model-streaming}
+  */
+ class LMStudioChatModel extends openai_1.OpenAIChatModel {
+     constructor(options) {
+         super({
+             ...options,
+             model: options?.model || LM_STUDIO_DEFAULT_CHAT_MODEL,
+             baseURL: options?.baseURL || process.env.LM_STUDIO_BASE_URL || LM_STUDIO_DEFAULT_BASE_URL,
+         });
+     }
+     apiKeyEnvName = "LM_STUDIO_API_KEY";
+     apiKeyDefault = "not-required";
+     supportsNativeStructuredOutputs = false;
+     supportsTemperature = true;
+ }
+ exports.LMStudioChatModel = LMStudioChatModel;
@@ -0,0 +1,3 @@
+ {
+   "type": "commonjs"
+ }
@@ -0,0 +1 @@
+ export * from "./lmstudio-chat-model.js";
@@ -0,0 +1,24 @@
+ import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
+ /**
+  * Implementation of the ChatModel interface for LM Studio
+  *
+  * This model allows you to run local LLMs through LM Studio,
+  * with an OpenAI-compatible API interface.
+  *
+  * Default model: 'llama-3.2-3b-instruct'
+  *
+  * @example
+  * Here's how to create and use an LM Studio chat model:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model}
+  *
+  * @example
+  * Here's an example with streaming response:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model-streaming}
+  */
+ export declare class LMStudioChatModel extends OpenAIChatModel {
+     constructor(options?: OpenAIChatModelOptions);
+     protected apiKeyEnvName: string;
+     protected apiKeyDefault: string;
+     protected supportsNativeStructuredOutputs: boolean;
+     protected supportsTemperature: boolean;
+ }
@@ -0,0 +1 @@
+ export * from "./lmstudio-chat-model.js";
@@ -0,0 +1 @@
+ export * from "./lmstudio-chat-model.js";
@@ -0,0 +1,24 @@
+ import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
+ /**
+  * Implementation of the ChatModel interface for LM Studio
+  *
+  * This model allows you to run local LLMs through LM Studio,
+  * with an OpenAI-compatible API interface.
+  *
+  * Default model: 'llama-3.2-3b-instruct'
+  *
+  * @example
+  * Here's how to create and use an LM Studio chat model:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model}
+  *
+  * @example
+  * Here's an example with streaming response:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model-streaming}
+  */
+ export declare class LMStudioChatModel extends OpenAIChatModel {
+     constructor(options?: OpenAIChatModelOptions);
+     protected apiKeyEnvName: string;
+     protected apiKeyDefault: string;
+     protected supportsNativeStructuredOutputs: boolean;
+     protected supportsTemperature: boolean;
+ }
@@ -0,0 +1,32 @@
+ import { OpenAIChatModel } from "@aigne/openai";
+ const LM_STUDIO_DEFAULT_BASE_URL = "http://localhost:1234/v1";
+ const LM_STUDIO_DEFAULT_CHAT_MODEL = "llama-3.2-3b-instruct";
+ /**
+  * Implementation of the ChatModel interface for LM Studio
+  *
+  * This model allows you to run local LLMs through LM Studio,
+  * with an OpenAI-compatible API interface.
+  *
+  * Default model: 'llama-3.2-3b-instruct'
+  *
+  * @example
+  * Here's how to create and use an LM Studio chat model:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model}
+  *
+  * @example
+  * Here's an example with streaming response:
+  * {@includeCode ../test/lmstudio-chat-model.test.ts#example-lmstudio-chat-model-streaming}
+  */
+ export class LMStudioChatModel extends OpenAIChatModel {
+     constructor(options) {
+         super({
+             ...options,
+             model: options?.model || LM_STUDIO_DEFAULT_CHAT_MODEL,
+             baseURL: options?.baseURL || process.env.LM_STUDIO_BASE_URL || LM_STUDIO_DEFAULT_BASE_URL,
+         });
+     }
+     apiKeyEnvName = "LM_STUDIO_API_KEY";
+     apiKeyDefault = "not-required";
+     supportsNativeStructuredOutputs = false;
+     supportsTemperature = true;
+ }
@@ -0,0 +1,3 @@
+ {
+   "type": "module"
+ }
package/package.json ADDED
@@ -0,0 +1,54 @@
+ {
+   "name": "@aigne/lmstudio",
+   "version": "1.0.0",
+   "description": "AIGNE LM Studio model adapter for integrating with locally hosted AI models via LM Studio",
+   "publishConfig": {
+     "access": "public"
+   },
+   "author": "Arcblock <blocklet@arcblock.io> https://github.com/blocklet",
+   "homepage": "https://github.com/AIGNE-io/aigne-framework",
+   "license": "Elastic-2.0",
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/AIGNE-io/aigne-framework"
+   },
+   "files": [
+     "lib/cjs",
+     "lib/dts",
+     "lib/esm",
+     "LICENSE",
+     "README.md",
+     "CHANGELOG.md"
+   ],
+   "type": "module",
+   "main": "./lib/cjs/index.js",
+   "module": "./lib/esm/index.js",
+   "types": "./lib/dts/index.d.ts",
+   "exports": {
+     ".": {
+       "import": "./lib/esm/index.js",
+       "require": "./lib/cjs/index.js",
+       "types": "./lib/dts/index.d.ts"
+     }
+   },
+   "dependencies": {
+     "@aigne/openai": "^0.13.2"
+   },
+   "devDependencies": {
+     "@types/bun": "^1.2.18",
+     "@types/node": "^24.0.12",
+     "npm-run-all": "^4.1.5",
+     "rimraf": "^6.0.1",
+     "typescript": "^5.8.3",
+     "@aigne/core": "^1.57.0",
+     "@aigne/test-utils": "^0.5.38"
+   },
+   "scripts": {
+     "lint": "tsc --noEmit",
+     "build": "tsc --build scripts/tsconfig.build.json",
+     "clean": "rimraf lib test/coverage",
+     "test": "bun test",
+     "test:coverage": "bun test --coverage --coverage-reporter=lcov --coverage-reporter=text",
+     "postbuild": "node ../../scripts/post-build-lib.mjs"
+   }
+ }