@elizaos/plugin-local-ai 0.25.6-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Shaw Walters, aka Moon aka @lalalune
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,153 @@
+ # Local AI Plugin
+
+ This plugin provides local AI model capabilities through the ElizaOS platform, supporting text generation, image analysis, speech synthesis, and audio transcription.
+
+ ## Usage
+
+ Add the plugin to your character configuration:
+
+ ```json
+ "plugins": ["@elizaos/plugin-local-ai"]
+ ```
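+
+ For reference, a minimal character file using the plugin might look like the sketch below. The character name and `settings` values are illustrative placeholders; only the `plugins` entry is required to load the plugin:
+
+ ```typescript
+ // Illustrative character definition; field names follow the JSON
+ // examples in this README, and the values are placeholders.
+ const character = {
+   name: "LocalAgent",
+   plugins: ["@elizaos/plugin-local-ai"],
+   settings: {
+     USE_LOCAL_AI: true,
+     USE_OLLAMA_TEXT_MODELS: true,
+     OLLAMA_SERVER_URL: "http://localhost:11434",
+   },
+ };
+
+ export default character;
+ ```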
+
+ ## Configuration
+
+ The plugin is configured through the following settings, which can be set in a `.env` file or in the character's `settings` block:
+
+ ```json
+ "settings": {
+   "USE_LOCAL_AI": true,
+   "USE_STUDIOLM_TEXT_MODELS": false,
+   "USE_OLLAMA_TEXT_MODELS": false,
+
+   "OLLAMA_SERVER_URL": "http://localhost:11434",
+   "OLLAMA_MODEL": "deepseek-r1-distill-qwen-7b",
+   "USE_OLLAMA_EMBEDDING": false,
+   "OLLAMA_EMBEDDING_MODEL": "",
+   "SMALL_OLLAMA_MODEL": "deepseek-r1:1.5b",
+   "MEDIUM_OLLAMA_MODEL": "deepseek-r1:7b",
+   "LARGE_OLLAMA_MODEL": "deepseek-r1:7b",
+
+   "STUDIOLM_SERVER_URL": "http://localhost:1234",
+   "STUDIOLM_SMALL_MODEL": "lmstudio-community/deepseek-r1-distill-qwen-1.5b",
+   "STUDIOLM_MEDIUM_MODEL": "deepseek-r1-distill-qwen-7b",
+   "STUDIOLM_EMBEDDING_MODEL": false
+ }
+ ```
+
+ Or in a `.env` file:
+ ```env
+ # Local AI Configuration
+ USE_LOCAL_AI=true
+ USE_STUDIOLM_TEXT_MODELS=false
+ USE_OLLAMA_TEXT_MODELS=false
+
+ # Ollama Configuration
+ OLLAMA_SERVER_URL=http://localhost:11434
+ OLLAMA_MODEL=deepseek-r1-distill-qwen-7b
+ USE_OLLAMA_EMBEDDING=false
+ OLLAMA_EMBEDDING_MODEL=
+ SMALL_OLLAMA_MODEL=deepseek-r1:1.5b
+ MEDIUM_OLLAMA_MODEL=deepseek-r1:7b
+ LARGE_OLLAMA_MODEL=deepseek-r1:7b
+
+ # StudioLM Configuration
+ STUDIOLM_SERVER_URL=http://localhost:1234
+ STUDIOLM_SMALL_MODEL=lmstudio-community/deepseek-r1-distill-qwen-1.5b
+ STUDIOLM_MEDIUM_MODEL=deepseek-r1-distill-qwen-7b
+ STUDIOLM_EMBEDDING_MODEL=false
+ ```
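+
+ When both sources are present, values set on the character typically take precedence over the `.env` file. Inside plugin code, settings are read through the runtime rather than `process.env` directly; a minimal sketch, assuming an agent runtime instance named `runtime` as in the examples later in this README:
+
+ ```typescript
+ // Resolve a setting from character settings or the environment,
+ // falling back to the documented default when unset.
+ const ollamaUrl =
+   runtime.getSetting("OLLAMA_SERVER_URL") ?? "http://localhost:11434";
+ ```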
+
+ ### Configuration Options
+
+ #### Text Model Source (Choose One)
+ - `USE_STUDIOLM_TEXT_MODELS`: Enable StudioLM text models
+ - `USE_OLLAMA_TEXT_MODELS`: Enable Ollama text models
+
+ Note: Only one text model source can be enabled at a time; the plugin rejects configurations that enable both, along the lines of the sketch below.
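+
+ The following check is illustrative only (written against `process.env` for clarity) and is not the plugin's actual source:
+
+ ```typescript
+ // Treat only the literal string "true" as an enabled flag.
+ const isEnabled = (value: string | undefined): boolean =>
+   value?.toLowerCase() === "true";
+
+ const useStudioLM = isEnabled(process.env.USE_STUDIOLM_TEXT_MODELS);
+ const useOllama = isEnabled(process.env.USE_OLLAMA_TEXT_MODELS);
+
+ if (useStudioLM && useOllama) {
+   throw new Error(
+     "Enable either USE_STUDIOLM_TEXT_MODELS or USE_OLLAMA_TEXT_MODELS, not both."
+   );
+ }
+ ```
+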
+ #### Ollama Settings
+ - `OLLAMA_SERVER_URL`: Ollama API endpoint (default: `http://localhost:11434`)
+ - `OLLAMA_MODEL`: Default model for general use
+ - `USE_OLLAMA_EMBEDDING`: Enable Ollama for embeddings
+ - `OLLAMA_EMBEDDING_MODEL`: Model for embeddings when enabled
+ - `SMALL_OLLAMA_MODEL`: Model for lighter tasks
+ - `MEDIUM_OLLAMA_MODEL`: Model for standard tasks
+ - `LARGE_OLLAMA_MODEL`: Model for complex tasks
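+
+ As a rough illustration of how these size tiers line up, the helper below picks an Ollama model name per tier, falling back to the defaults from the sample configuration (the helper itself is hypothetical, not part of the plugin):
+
+ ```typescript
+ // Map a size tier to the corresponding *_OLLAMA_MODEL env var,
+ // falling back to the defaults shown in the sample configuration.
+ type Tier = "small" | "medium" | "large";
+
+ const ollamaModelFor = (tier: Tier): string => {
+   switch (tier) {
+     case "small":
+       return process.env.SMALL_OLLAMA_MODEL ?? "deepseek-r1:1.5b";
+     case "medium":
+       return process.env.MEDIUM_OLLAMA_MODEL ?? "deepseek-r1:7b";
+     case "large":
+       return process.env.LARGE_OLLAMA_MODEL ?? "deepseek-r1:7b";
+   }
+ };
+ ```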
+
+ #### StudioLM Settings
+ - `STUDIOLM_SERVER_URL`: StudioLM API endpoint (default: `http://localhost:1234`)
+ - `STUDIOLM_SMALL_MODEL`: Model for lighter tasks
+ - `STUDIOLM_MEDIUM_MODEL`: Model for standard tasks
+ - `STUDIOLM_EMBEDDING_MODEL`: Model for embeddings (or `false` to disable)
+
+ ## Features
+
+ The plugin provides these model classes:
+ - `TEXT_SMALL`: Fast, efficient text generation using smaller models
+ - `TEXT_LARGE`: More capable text generation using larger models
+ - `IMAGE_DESCRIPTION`: Local image analysis using the Florence-2 vision model
+ - `TEXT_TO_SPEECH`: Local text-to-speech synthesis
+ - `TRANSCRIPTION`: Local audio transcription using Whisper
+
+ ### Image Analysis
+ ```typescript
+ // Describe an image with the local Florence-2 vision model.
+ // Accepts an image URL and resolves to a generated title and description.
+ const { title, description } = await runtime.useModel(
+   ModelClass.IMAGE_DESCRIPTION,
+   "https://example.com/image.jpg"
+ );
+ ```
+
+ ### Text-to-Speech
+ ```typescript
+ // Synthesize speech locally; resolves to a readable audio stream.
+ const audioStream = await runtime.useModel(
+   ModelClass.TEXT_TO_SPEECH,
+   "Text to convert to speech"
+ );
+ ```
+
+ ### Audio Transcription
+ ```typescript
+ // Transcribe audio locally with Whisper; audioBuffer is a Buffer
+ // containing the audio data to transcribe.
+ const transcription = await runtime.useModel(
+   ModelClass.TRANSCRIPTION,
+   audioBuffer
+ );
+ ```
+
+ ### Text Generation
+ ```typescript
+ // Using small model
+ const smallResponse = await runtime.useModel(
+   ModelClass.TEXT_SMALL,
+   {
+     context: "Generate a short response",
+     stopSequences: []
+   }
+ );
+
+ // Using large model
+ const largeResponse = await runtime.useModel(
+   ModelClass.TEXT_LARGE,
+   {
+     context: "Generate a detailed response",
+     stopSequences: []
+   }
+ );
+ ```
+
+ ## Model Sources
+
+ ### 1. StudioLM (LM Studio)
+ - Local inference server for running various open models
+ - Exposes an OpenAI-compatible chat completion API (see the sketch below)
+ - Configure with `USE_STUDIOLM_TEXT_MODELS=true`
+ - Supports both small and medium-sized models
+ - Optional embedding model support
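+
+ For orientation, a raw request against that API might look like the sketch below. The endpoint path and payload follow LM Studio's OpenAI-compatible server; the model name is a placeholder:
+
+ ```typescript
+ // Call the local LM Studio server directly via its OpenAI-compatible API.
+ const response = await fetch("http://localhost:1234/v1/chat/completions", {
+   method: "POST",
+   headers: { "Content-Type": "application/json" },
+   body: JSON.stringify({
+     model: "deepseek-r1-distill-qwen-7b", // placeholder model name
+     messages: [{ role: "user", content: "Hello from ElizaOS" }],
+   }),
+ });
+ const data = await response.json();
+ console.log(data.choices[0].message.content);
+ ```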
+
+ ### 2. Ollama
+ - Local model server with optimized inference
+ - Supports various open models in GGUF format
+ - Configure with `USE_OLLAMA_TEXT_MODELS=true` (a quick connectivity check is sketched below)
+ - Supports small, medium, and large models
+ - Optional embedding model support
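+
+ Before enabling the plugin, you can confirm the Ollama server is reachable and see which models have been pulled; `/api/tags` is Ollama's standard endpoint for listing local models:
+
+ ```typescript
+ // List locally available Ollama models to verify the server is up.
+ const res = await fetch("http://localhost:11434/api/tags");
+ if (!res.ok) throw new Error(`Ollama server not reachable: ${res.status}`);
+ const { models } = await res.json();
+ console.log(models.map((m: { name: string }) => m.name));
+ ```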
+
+ Note: The plugin validates that only one text model source is enabled at a time to prevent conflicts.