whisper-cpp-node 0.2.5 → 0.2.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +353 -353
  2. package/package.json +3 -3
package/README.md CHANGED
@@ -1,353 +1,353 @@
_The old and new README text are identical (every line was removed and re-added unchanged), so the content is shown once below._

# whisper-cpp-node

Node.js bindings for [whisper.cpp](https://github.com/ggerganov/whisper.cpp) - fast speech-to-text with GPU acceleration.

## Features

- **Fast**: Native whisper.cpp performance with GPU acceleration
- **Cross-platform**: macOS (Metal), Windows (Vulkan)
- **Core ML**: Optional Apple Neural Engine support for 3x+ speedup (macOS)
- **OpenVINO**: Optional Intel CPU/GPU encoder acceleration (Windows/Linux)
- **Streaming VAD**: Built-in Silero voice activity detection
- **TypeScript**: Full type definitions included
- **Self-contained**: No external dependencies, just install and use

## Requirements

**macOS:**
- macOS 13.3+ (Ventura or later)
- Apple Silicon (M1/M2/M3/M4)
- Node.js 18+

**Windows:**
- Windows 10/11 (x64)
- Node.js 18+
- Vulkan-capable GPU (optional, for GPU acceleration)

## Installation

```bash
npm install whisper-cpp-node
# or
pnpm add whisper-cpp-node
```

The platform-specific binary is automatically installed:
- macOS ARM64: `@whisper-cpp-node/darwin-arm64`
- Windows x64: `@whisper-cpp-node/win32-x64`

## Quick Start

### File-based transcription

```typescript
import {
  createWhisperContext,
  transcribeAsync,
} from "whisper-cpp-node";

// Create a context with your model
const ctx = createWhisperContext({
  model: "./models/ggml-base.en.bin",
  use_gpu: true,
});

// Transcribe audio file
const result = await transcribeAsync(ctx, {
  fname_inp: "./audio.wav",
  language: "en",
});

// Result: { segments: [["00:00:00,000", "00:00:02,500", " Hello world"], ...] }
for (const [start, end, text] of result.segments) {
  console.log(`[${start} --> ${end}]${text}`);
}

// Clean up
ctx.free();
```

### Buffer-based transcription

```typescript
import {
  createWhisperContext,
  transcribeAsync,
} from "whisper-cpp-node";

const ctx = createWhisperContext({
  model: "./models/ggml-base.en.bin",
  use_gpu: true,
});

// Pass raw PCM audio (16kHz, mono, float32)
const pcmData = new Float32Array(/* your audio samples */);
const result = await transcribeAsync(ctx, {
  pcmf32: pcmData,
  language: "en",
});

for (const [start, end, text] of result.segments) {
  console.log(`[${start} --> ${end}]${text}`);
}

ctx.free();
```
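The buffer path assumes you already have 16kHz mono float32 samples. As one way to get there, here is a minimal sketch that decodes a canonical 16-bit PCM WAV file. `wavToFloat32` is a hypothetical helper, not part of this package, and it assumes a simple 44-byte header with mono 16kHz audio (resample first if your file differs):

```typescript
import { readFileSync } from "node:fs";

// Decode a canonical 16-bit PCM WAV (44-byte header assumed) into
// Float32Array samples scaled to [-1.0, 1.0].
function wavToFloat32(path: string): Float32Array {
  const buf = readFileSync(path);
  if (
    buf.toString("ascii", 0, 4) !== "RIFF" ||
    buf.toString("ascii", 8, 12) !== "WAVE"
  ) {
    throw new Error("not a RIFF/WAVE file");
  }
  const bitsPerSample = buf.readUInt16LE(34);
  if (bitsPerSample !== 16) throw new Error("expected 16-bit PCM");
  const pcm = new Float32Array((buf.length - 44) / 2);
  for (let i = 0; i < pcm.length; i++) {
    pcm[i] = buf.readInt16LE(44 + i * 2) / 32768;
  }
  return pcm;
}
```

The resulting array can be passed directly as `pcmf32` in the example above.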

### Streaming transcription

Get real-time output as audio is processed. The `on_new_segment` callback fires for each segment as it's generated, while the final callback still receives all segments at completion (backward compatible):

```typescript
import { createWhisperContext, transcribe } from "whisper-cpp-node";

const ctx = createWhisperContext({
  model: "./models/ggml-base.en.bin",
});

transcribe(ctx, {
  fname_inp: "./long-audio.wav",
  language: "en",

  // Called for each segment as it's generated
  on_new_segment: (segment) => {
    console.log(`[${segment.start}]${segment.text}`);
  },
}, (err, result) => {
  // Final callback still receives ALL segments at completion
  console.log(`Done! ${result.segments.length} segments`);
  ctx.free();
});
```
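If you prefer `for await` over nested callbacks, the `on_new_segment` pattern can be bridged to an async iterator with a small generic adapter. This is a sketch of the general technique, not an API of this package:

```typescript
// Turn a push-style callback API (items, then a completion signal) into an
// AsyncIterable: items are queued as they arrive and drained by the consumer.
function callbackToAsyncIterable<T>(
  start: (onItem: (item: T) => void, onDone: (err: Error | null) => void) => void,
): AsyncIterable<T> {
  const queue: T[] = [];
  let done = false;
  let error: Error | null = null;
  let wake: (() => void) | null = null;
  start(
    (item) => { queue.push(item); wake?.(); },
    (err) => { done = true; error = err; wake?.(); },
  );
  return {
    async *[Symbol.asyncIterator]() {
      while (true) {
        if (queue.length > 0) { yield queue.shift()!; continue; }
        if (done) { if (error) throw error; return; }
        // Park until the producer pushes an item or finishes.
        await new Promise<void>((resolve) => { wake = resolve; });
        wake = null;
      }
    },
  };
}

// Hypothetical usage with the streaming API above:
// const segments = callbackToAsyncIterable((onItem, onDone) =>
//   transcribe(ctx, { fname_inp: "./long-audio.wav", on_new_segment: onItem },
//     (err) => onDone(err)));
// for await (const seg of segments) console.log(seg.text);
```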

## API

### `createWhisperContext(options)`

Create a persistent context for transcription.

```typescript
interface WhisperContextOptions {
  model: string;                // Path to GGML model file (required)
  use_gpu?: boolean;            // Enable GPU acceleration (default: true)
                                // Uses Metal on macOS, Vulkan on Windows
  use_coreml?: boolean;         // Enable Core ML on macOS (default: false)
  use_openvino?: boolean;       // Enable OpenVINO encoder on Intel (default: false)
  openvino_device?: string;     // OpenVINO device: 'CPU', 'GPU', 'NPU' (default: 'CPU')
  openvino_model_path?: string; // Path to OpenVINO encoder model (auto-derived)
  openvino_cache_dir?: string;  // Cache dir for compiled OpenVINO models
  flash_attn?: boolean;         // Enable Flash Attention (default: false)
  gpu_device?: number;          // GPU device index (default: 0)
  dtw?: string;                 // DTW preset for word timestamps
  no_prints?: boolean;          // Suppress log output (default: false)
}
```

### `transcribeAsync(context, options)`

Transcribe audio (Promise-based). Accepts either a file path or PCM buffer.

```typescript
// File input
interface TranscribeOptionsFile {
  fname_inp: string; // Path to audio file
  // ... common options
}

// Buffer input
interface TranscribeOptionsBuffer {
  pcmf32: Float32Array; // Raw PCM (16kHz, mono, float32, -1.0 to 1.0)
  // ... common options
}

// Common options (partial list - see types.ts for full options)
interface TranscribeOptionsBase {
  // Language
  language?: string;         // Language code ('en', 'zh', 'auto')
  translate?: boolean;       // Translate to English
  detect_language?: boolean; // Auto-detect language

  // Threading
  n_threads?: number;    // CPU threads (default: 4)
  n_processors?: number; // Parallel processors

  // Audio processing
  offset_ms?: number;   // Start offset in ms
  duration_ms?: number; // Duration to process (0 = all)

  // Output control
  no_timestamps?: boolean;    // Disable timestamps
  max_len?: number;           // Max segment length (chars)
  max_tokens?: number;        // Max tokens per segment
  split_on_word?: boolean;    // Split on word boundaries
  token_timestamps?: boolean; // Include token-level timestamps

  // Sampling
  temperature?: number; // Sampling temperature (0.0 = greedy)
  beam_size?: number;   // Beam search size (-1 = greedy)
  best_of?: number;     // Best-of-N sampling

  // Thresholds
  entropy_thold?: number;   // Entropy threshold
  logprob_thold?: number;   // Log probability threshold
  no_speech_thold?: number; // No-speech probability threshold

  // Context
  prompt?: string;      // Initial prompt text
  no_context?: boolean; // Don't use previous context

  // VAD preprocessing
  vad?: boolean;          // Enable VAD preprocessing
  vad_model?: string;     // Path to VAD model
  vad_threshold?: number; // VAD threshold (0.0-1.0)
  vad_min_speech_duration_ms?: number;
  vad_min_silence_duration_ms?: number;
  vad_speech_pad_ms?: number;

  // Callbacks
  progress_callback?: (progress: number) => void;
  on_new_segment?: (segment: StreamingSegment) => void; // Streaming callback
}

// Streaming segment (passed to on_new_segment callback)
interface StreamingSegment {
  start: string;             // Start timestamp "HH:MM:SS,mmm"
  end: string;               // End timestamp
  text: string;              // Transcribed text
  segment_index: number;     // 0-based index
  is_partial: boolean;       // Reserved for future use
  tokens?: StreamingToken[]; // Only if token_timestamps enabled
}

// Result
interface TranscribeResult {
  segments: TranscriptSegment[];
}

// Segment is a tuple: [start, end, text]
type TranscriptSegment = [string, string, string];
// Example: ["00:00:00,000", "00:00:02,500", " Hello world"]
```
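Since segment boundaries are formatted strings rather than numbers, downstream code (seeking, duration math, subtitle generation) often wants milliseconds. A small hypothetical helper for the documented `"HH:MM:SS,mmm"` format:

```typescript
// Convert a "HH:MM:SS,mmm" timestamp (as used in TranscriptSegment and
// StreamingSegment) into milliseconds.
function timestampToMs(ts: string): number {
  const m = /^(\d{2}):(\d{2}):(\d{2}),(\d{3})$/.exec(ts);
  if (!m) throw new Error(`bad timestamp: ${ts}`);
  const [, h, min, s, ms] = m;
  return ((+h * 60 + +min) * 60 + +s) * 1000 + +ms;
}

// Example: total speech time covered by a result's segments
// const totalMs = result.segments.reduce(
//   (acc, [start, end]) => acc + timestampToMs(end) - timestampToMs(start), 0);
```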

### `createVadContext(options)`

Create a voice activity detection context for streaming audio.

```typescript
interface VadContextOptions {
  model: string;       // Path to Silero VAD model
  threshold?: number;  // Speech threshold (default: 0.5)
  n_threads?: number;  // Number of threads (default: 1)
  no_prints?: boolean; // Suppress log output
}

interface VadContext {
  getWindowSamples(): number; // Returns 512 (32ms at 16kHz)
  getSampleRate(): number;    // Returns 16000
  process(samples: Float32Array): number; // Returns probability 0.0-1.0
  reset(): void; // Reset LSTM state
  free(): void;  // Release resources
}
```

#### VAD Example

```typescript
import { createVadContext } from "whisper-cpp-node";

const vad = createVadContext({
  model: "./models/ggml-silero-v6.2.0.bin",
  threshold: 0.5,
});

const windowSize = vad.getWindowSamples(); // 512 samples

// Process audio in 32ms chunks
function processAudioChunk(samples: Float32Array) {
  const probability = vad.process(samples);
  if (probability >= 0.5) {
    console.log("Speech detected!", probability);
  }
}

// Reset when starting new audio stream
vad.reset();

// Clean up when done
vad.free();
```
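`process()` expects windows of exactly `getWindowSamples()` samples, but microphone callbacks rarely deliver audio in 512-sample chunks. A sketch of a buffering splitter (`makeWindower` is a hypothetical helper; the window size would come from the VAD context):

```typescript
// Split incoming audio into fixed-size VAD windows, carrying any remainder
// over to the next call. `windowSize` comes from vad.getWindowSamples() (512).
function makeWindower(windowSize: number) {
  let pending = new Float32Array(0);
  return (chunk: Float32Array): Float32Array[] => {
    const joined = new Float32Array(pending.length + chunk.length);
    joined.set(pending);
    joined.set(chunk, pending.length);
    const windows: Float32Array[] = [];
    let offset = 0;
    for (; offset + windowSize <= joined.length; offset += windowSize) {
      windows.push(joined.subarray(offset, offset + windowSize));
    }
    pending = joined.slice(offset); // copy the tail so callers can reuse `chunk`
    return windows;
  };
}
```

Each returned window can be fed straight to `vad.process(window)`.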

## Core ML Acceleration (macOS)

For 3x+ faster encoding on Apple Silicon:

1. Generate a Core ML model:
   ```bash
   pip install ane_transformers openai-whisper coremltools
   ./models/generate-coreml-model.sh base.en
   ```

2. Place it next to your GGML model:
   ```
   models/ggml-base.en.bin
   models/ggml-base.en-encoder.mlmodelc/
   ```

3. Enable Core ML:
   ```typescript
   const ctx = createWhisperContext({
     model: "./models/ggml-base.en.bin",
     use_coreml: true,
   });
   ```

## OpenVINO Acceleration (Intel)

For faster encoder inference on Intel CPUs and GPUs (requires build with OpenVINO support):

1. Install OpenVINO and convert the model:
   ```bash
   pip install openvino openvino-dev
   python models/convert-whisper-to-openvino.py --model base.en
   ```

2. The OpenVINO model files are placed next to your GGML model:
   ```
   models/ggml-base.en.bin
   models/ggml-base.en-encoder-openvino.xml
   models/ggml-base.en-encoder-openvino.bin
   ```

3. Enable OpenVINO:
   ```typescript
   const ctx = createWhisperContext({
     model: "./models/ggml-base.en.bin",
     use_openvino: true,
     openvino_device: "CPU", // or "GPU" for Intel iGPU
     openvino_cache_dir: "./openvino_cache", // optional, speeds up init
   });
   ```

**Note:** OpenVINO support requires the addon to be built with `-DADDON_OPENVINO=ON`.

## Models

Download models from [Hugging Face](https://huggingface.co/ggerganov/whisper.cpp):

```bash
# Base English model (~150MB)
curl -L -o models/ggml-base.en.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin

# Large v3 Turbo quantized (~500MB)
curl -L -o models/ggml-large-v3-turbo-q4_0.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo-q4_0.bin

# Silero VAD model (for streaming VAD)
curl -L -o models/ggml-silero-v6.2.0.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-silero-v6.2.0.bin
```

## License

MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "whisper-cpp-node",
-  "version": "0.2.5",
+  "version": "0.2.6",
   "description": "Node.js bindings for whisper.cpp - fast speech-to-text with GPU acceleration",
   "license": "MIT",
   "repository": {
@@ -21,8 +21,8 @@
     "dist"
   ],
   "optionalDependencies": {
-    "@whisper-cpp-node/darwin-arm64": "0.2.3",
-    "@whisper-cpp-node/win32-x64": "0.2.4"
+    "@whisper-cpp-node/win32-x64": "0.2.6",
+    "@whisper-cpp-node/darwin-arm64": "0.2.3"
   },
   "devDependencies": {
     "@types/node": "^20.0.0",