@elizaos/plugin-local-ai 1.0.0-beta.49 → 1.0.0-beta.51

package/README.md CHANGED
@@ -7,7 +7,7 @@ This plugin provides local AI model capabilities through the ElizaOS platform, s
  Add the plugin to your character configuration:
 
  ```json
- "plugins": ["@elizaos/plugin-local-ai"]
+ "plugins": ["@elizaos-plugins/plugin-local-ai"]
  ```
 
  ## Configuration
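The hunk above renames the package to the `@elizaos-plugins` npm scope. As a minimal sketch, this is how the updated character configuration might look in TypeScript; only the `plugins` entry comes from the README, and the other fields are illustrative placeholders:

```typescript
// Hypothetical character configuration reflecting the renamed package scope.
const character = {
  name: 'ExampleAgent', // illustrative agent name, not from the README
  plugins: ['@elizaos-plugins/plugin-local-ai'], // new scope introduced in this release
};

console.log(character.plugins[0]);
```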
@@ -41,34 +41,6 @@ Or in `.env` file:
  - `LOCAL_EMBEDDING_MODEL` (Optional): Specifies the filename for the text embedding model (e.g., `bge-small-en-v1.5.Q4_K_M.gguf`) located in the models directory.
  - `LOCAL_EMBEDDING_DIMENSIONS` (Optional): Defines the expected dimension size for text embeddings. This is primarily used as a fallback dimension if the embedding model fails to generate an embedding. If not set, it defaults to the embedding model's native dimension size (e.g., 384 for `bge-small-en-v1.5.Q4_K_M.gguf`).
 
- ## Prerequisites
-
- ### FFmpeg for Audio Transcription
-
- The audio transcription feature (`ModelType.TRANSCRIPTION`) relies on **FFmpeg** to process audio files. If FFmpeg is not installed or not found in your system's PATH, transcription will fail.
-
- **Installation:**
-
- - **macOS (using Homebrew):**
-   ```bash
-   brew install ffmpeg
-   ```
- - **Linux (Debian/Ubuntu):**
-   ```bash
-   sudo apt-get update && sudo apt-get install ffmpeg
-   ```
- - **Linux (Fedora):**
-   ```bash
-   sudo dnf install ffmpeg
-   ```
- - **Windows (using Chocolatey):**
-   ```bash
-   choco install ffmpeg
-   ```
-   Alternatively, download FFmpeg from the [official FFmpeg website](https://ffmpeg.org/download.html) and add the `bin` directory (containing `ffmpeg.exe`) to your system's PATH environment variable.
-
- After installation, ensure that the `ffmpeg` command is accessible from your terminal. You may need to restart your terminal or your application for the changes to take effect.
-
  ## Features
 
  The plugin provides these model classes:
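The removed Prerequisites section described a real runtime dependency: transcription shells out to FFmpeg and fails when it is not on the PATH. A hedged preflight sketch of how a caller might verify this before invoking transcription; the `hasFfmpeg` helper is hypothetical and not part of the plugin's API:

```typescript
import { spawnSync } from 'node:child_process';

// Hypothetical preflight check: ModelType.TRANSCRIPTION fails when the
// `ffmpeg` binary is not reachable from PATH, so a caller may want to
// probe for it up front rather than fail mid-request.
function hasFfmpeg(): boolean {
  const result = spawnSync('ffmpeg', ['-version'], { encoding: 'utf8' });
  // status is 0 on success; null/undefined or non-zero when ffmpeg is
  // missing or errors out.
  return result.status === 0;
}

console.log(hasFfmpeg());
```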
@@ -96,21 +68,6 @@ const largeResponse = await runtime.useModel(ModelType.TEXT_LARGE, {
  });
  ```
 
- ### Text-to-Speech
-
- This plugin uses the [`transformers.js`](https://huggingface.co/docs/transformers.js) library for Text-to-Speech synthesis, running directly in the Node.js environment without external Python dependencies for this feature.
-
- ```typescript
- const audioStream = await runtime.useModel(ModelType.TEXT_TO_SPEECH, 'Text to convert to speech');
- ```
-
- **Current Implementation Details:**
-
- - **Model:** By default, it uses the [`Xenova/speecht5_tts`](https://huggingface.co/Xenova/speecht5_tts) model (ONNX format), which is optimized for `transformers.js`.
- - **Engine:** `@huggingface/transformers` library.
- - **Speaker:** It uses a default speaker embedding for `SpeechT5`. The specific voice cannot be configured through environment variables currently.
- - **Caching:** The ONNX model files and the default speaker embedding will be automatically downloaded and cached by `transformers.js` (typically in `~/.cache/huggingface/hub` or as configured by `transformers.js` environment variables) on first use.
-
  ### Text Embedding
 
  ```typescript
@@ -128,6 +85,12 @@ const { title, description } = await runtime.useModel(
  );
  ```
 
+ ### Text-to-Speech
+
+ ```typescript
+ const audioStream = await runtime.useModel(ModelType.TEXT_TO_SPEECH, 'Text to convert to speech');
+ ```
+
  ### Audio Transcription
 
  ```typescript
@@ -0,0 +1,9 @@
+ import { Plugin } from '@elizaos/core';
+
+ /**
+  * Plugin that provides functionality for local AI using LLaMA models.
+  * @type {Plugin}
+  */
+ declare const localAiPlugin: Plugin;
+
+ export { localAiPlugin as default, localAiPlugin };
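The added declaration file exposes the plugin both as the default and as a named export, so either import style resolves to the same value. A self-contained sketch of how a consumer sees it; the `Plugin` interface below is a simplified stand-in, not `@elizaos/core`'s actual type:

```typescript
// Simplified stand-in for @elizaos/core's Plugin interface (assumption;
// the real interface has more members).
interface Plugin {
  name: string;
  description?: string;
}

// Value standing in for `declare const localAiPlugin: Plugin;`.
const localAiPlugin: Plugin = { name: 'local-ai' };

// The dual export in the declaration means both of these import forms work
// and refer to the same object:
//   import localAiPlugin from '@elizaos/plugin-local-ai';
//   import { localAiPlugin } from '@elizaos/plugin-local-ai';
console.log(localAiPlugin.name);
```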