whisper-cpp-node 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +316 -316
  2. package/package.json +2 -2
package/README.md CHANGED
# whisper-cpp-node

Node.js bindings for [whisper.cpp](https://github.com/ggerganov/whisper.cpp) - fast speech-to-text with GPU acceleration.

## Features

- **Fast**: Native whisper.cpp performance with GPU acceleration
- **Cross-platform**: macOS (Metal), Windows (Vulkan)
- **Core ML**: Optional Apple Neural Engine support for a 3x+ speedup (macOS)
- **OpenVINO**: Optional Intel CPU/GPU encoder acceleration (Windows/Linux)
- **Streaming VAD**: Built-in Silero voice activity detection
- **TypeScript**: Full type definitions included
- **Self-contained**: No external dependencies; just install and use

## Requirements

**macOS:**
- macOS 13.3+ (Ventura or later)
- Apple Silicon (M1/M2/M3/M4)
- Node.js 18+

**Windows:**
- Windows 10/11 (x64)
- Node.js 18+
- Vulkan-capable GPU (optional, for GPU acceleration)

## Installation

```bash
npm install whisper-cpp-node
# or
pnpm add whisper-cpp-node
```

The platform-specific binary is installed automatically:
- macOS ARM64: `@whisper-cpp-node/darwin-arm64`
- Windows x64: `@whisper-cpp-node/win32-x64`

## Quick Start

### File-based transcription

```typescript
import {
  createWhisperContext,
  transcribeAsync,
} from "whisper-cpp-node";

// Create a context with your model
const ctx = createWhisperContext({
  model: "./models/ggml-base.en.bin",
  use_gpu: true,
});

// Transcribe an audio file
const result = await transcribeAsync(ctx, {
  fname_inp: "./audio.wav",
  language: "en",
});

// Result: { segments: [["00:00:00,000", "00:00:02,500", " Hello world"], ...] }
for (const [start, end, text] of result.segments) {
  console.log(`[${start} --> ${end}]${text}`);
}

// Clean up
ctx.free();
```
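Since the segment timestamps already use the SRT-style `HH:MM:SS,mmm` format, the tuples can be written straight out as subtitles. A minimal sketch, assuming the result shape shown above (`segmentsToSrt` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper: render result.segments as an SRT subtitle document.
// Each cue is a 1-based index, a "start --> end" line, and the trimmed text.
function segmentsToSrt(segments: [string, string, string][]): string {
  return segments
    .map(([start, end, text], i) => `${i + 1}\n${start} --> ${end}\n${text.trim()}\n`)
    .join("\n");
}
```

For example, `fs.writeFileSync("audio.srt", segmentsToSrt(result.segments))` would produce a subtitle file for the transcription above.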

### Buffer-based transcription

```typescript
import {
  createWhisperContext,
  transcribeAsync,
} from "whisper-cpp-node";

const ctx = createWhisperContext({
  model: "./models/ggml-base.en.bin",
  use_gpu: true,
});

// Pass raw PCM audio (16kHz, mono, float32)
const pcmData = new Float32Array(/* your audio samples */);
const result = await transcribeAsync(ctx, {
  pcmf32: pcmData,
  language: "en",
});

for (const [start, end, text] of result.segments) {
  console.log(`[${start} --> ${end}]${text}`);
}

ctx.free();
```
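`pcmf32` must be 16 kHz mono float32 in the -1.0 to 1.0 range. If your source is 16-bit signed PCM that is already 16 kHz mono (e.g. the data chunk of such a WAV file), the sample-format conversion can be sketched as follows (`pcm16ToFloat32` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper: convert 16-bit signed little-endian PCM into the
// normalized Float32Array that transcribeAsync expects.
function pcm16ToFloat32(pcm: Buffer): Float32Array {
  const samples = new Float32Array(pcm.length / 2);
  for (let i = 0; i < samples.length; i++) {
    // Divide by 32768 so the int16 range maps into [-1.0, 1.0)
    samples[i] = pcm.readInt16LE(i * 2) / 32768;
  }
  return samples;
}
```

Audio at other sample rates or channel counts needs resampling/downmixing first; this sketch only handles the sample format.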

## API

### `createWhisperContext(options)`

Creates a persistent context for transcription.

```typescript
interface WhisperContextOptions {
  model: string;                // Path to GGML model file (required)
  use_gpu?: boolean;            // Enable GPU acceleration (default: true)
                                // Uses Metal on macOS, Vulkan on Windows
  use_coreml?: boolean;         // Enable Core ML on macOS (default: false)
  use_openvino?: boolean;       // Enable OpenVINO encoder on Intel (default: false)
  openvino_device?: string;     // OpenVINO device: 'CPU', 'GPU', 'NPU' (default: 'CPU')
  openvino_model_path?: string; // Path to OpenVINO encoder model (auto-derived)
  openvino_cache_dir?: string;  // Cache dir for compiled OpenVINO models
  flash_attn?: boolean;         // Enable Flash Attention (default: false)
  gpu_device?: number;          // GPU device index (default: 0)
  dtw?: string;                 // DTW preset for word timestamps
  no_prints?: boolean;          // Suppress log output (default: false)
}
```

### `transcribeAsync(context, options)`

Transcribes audio (Promise-based). Accepts either a file path or a PCM buffer.

```typescript
// File input
interface TranscribeOptionsFile {
  fname_inp: string; // Path to audio file
  // ... common options
}

// Buffer input
interface TranscribeOptionsBuffer {
  pcmf32: Float32Array; // Raw PCM (16kHz, mono, float32, -1.0 to 1.0)
  // ... common options
}

// Common options (partial list - see types.ts for the full set)
interface TranscribeOptionsBase {
  // Language
  language?: string;         // Language code ('en', 'zh', 'auto')
  translate?: boolean;       // Translate to English
  detect_language?: boolean; // Auto-detect language

  // Threading
  n_threads?: number;    // CPU threads (default: 4)
  n_processors?: number; // Parallel processors

  // Audio processing
  offset_ms?: number;   // Start offset in ms
  duration_ms?: number; // Duration to process (0 = all)

  // Output control
  no_timestamps?: boolean;    // Disable timestamps
  max_len?: number;           // Max segment length (chars)
  max_tokens?: number;        // Max tokens per segment
  split_on_word?: boolean;    // Split on word boundaries
  token_timestamps?: boolean; // Include token-level timestamps

  // Sampling
  temperature?: number; // Sampling temperature (0.0 = greedy)
  beam_size?: number;   // Beam search size (-1 = greedy)
  best_of?: number;     // Best-of-N sampling

  // Thresholds
  entropy_thold?: number;   // Entropy threshold
  logprob_thold?: number;   // Log probability threshold
  no_speech_thold?: number; // No-speech probability threshold

  // Context
  prompt?: string;      // Initial prompt text
  no_context?: boolean; // Don't use previous context

  // VAD preprocessing
  vad?: boolean;          // Enable VAD preprocessing
  vad_model?: string;     // Path to VAD model
  vad_threshold?: number; // VAD threshold (0.0-1.0)
  vad_min_speech_duration_ms?: number;
  vad_min_silence_duration_ms?: number;
  vad_speech_pad_ms?: number;

  // Callbacks
  progress_callback?: (progress: number) => void;
}

// Result
interface TranscribeResult {
  segments: TranscriptSegment[];
}

// A segment is a tuple: [start, end, text]
type TranscriptSegment = [string, string, string];
// Example: ["00:00:00,000", "00:00:02,500", " Hello world"]
```
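Segment timestamps are strings, so seeking or numeric comparison requires parsing them first. A sketch, assuming the `HH:MM:SS,mmm` format shown above (`timestampToMs` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper: parse an "HH:MM:SS,mmm" segment timestamp into
// an integer number of milliseconds.
function timestampToMs(ts: string): number {
  const [hms, ms] = ts.split(",");
  const [h, m, s] = hms.split(":").map(Number);
  return ((h * 60 + m) * 60 + s) * 1000 + Number(ms);
}
```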

### `createVadContext(options)`

Creates a voice activity detection context for streaming audio.

```typescript
interface VadContextOptions {
  model: string;       // Path to Silero VAD model
  threshold?: number;  // Speech threshold (default: 0.5)
  n_threads?: number;  // Number of threads (default: 1)
  no_prints?: boolean; // Suppress log output
}

interface VadContext {
  getWindowSamples(): number; // Returns 512 (32ms at 16kHz)
  getSampleRate(): number;    // Returns 16000
  process(samples: Float32Array): number; // Returns probability 0.0-1.0
  reset(): void; // Reset LSTM state
  free(): void;  // Release resources
}
```

#### VAD Example

```typescript
import { createVadContext } from "whisper-cpp-node";

const vad = createVadContext({
  model: "./models/ggml-silero-v6.2.0.bin",
  threshold: 0.5,
});

const windowSize = vad.getWindowSamples(); // 512 samples

// Process audio in 32ms chunks
function processAudioChunk(samples: Float32Array) {
  const probability = vad.process(samples);
  if (probability >= 0.5) {
    console.log("Speech detected!", probability);
  }
}

// Reset when starting a new audio stream
vad.reset();

// Clean up when done
vad.free();
```
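`process()` consumes exactly one window (`getWindowSamples()` samples) per call, so longer buffers must be sliced first. A minimal sketch (`toWindows` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper: split a long Float32Array into fixed-size windows
// (e.g. 512 samples for the Silero VAD) so each slice can be fed to
// vad.process() in turn. The trailing partial window is dropped.
function toWindows(samples: Float32Array, windowSize: number): Float32Array[] {
  const windows: Float32Array[] = [];
  for (let i = 0; i + windowSize <= samples.length; i += windowSize) {
    // subarray() is a view, so no samples are copied
    windows.push(samples.subarray(i, i + windowSize));
  }
  return windows;
}
```

In a streaming setup you would instead buffer the trailing partial window and prepend it to the next incoming chunk.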

## Core ML Acceleration (macOS)

For 3x+ faster encoding on Apple Silicon:

1. Generate a Core ML model:
   ```bash
   pip install ane_transformers openai-whisper coremltools
   ./models/generate-coreml-model.sh base.en
   ```

2. Place it next to your GGML model:
   ```
   models/ggml-base.en.bin
   models/ggml-base.en-encoder.mlmodelc/
   ```

3. Enable Core ML:
   ```typescript
   const ctx = createWhisperContext({
     model: "./models/ggml-base.en.bin",
     use_coreml: true,
   });
   ```

## OpenVINO Acceleration (Intel)

For faster encoder inference on Intel CPUs and GPUs (requires a build with OpenVINO support):

1. Install OpenVINO and convert the model:
   ```bash
   pip install openvino openvino-dev
   python models/convert-whisper-to-openvino.py --model base.en
   ```

2. The OpenVINO model files are placed next to your GGML model:
   ```
   models/ggml-base.en.bin
   models/ggml-base.en-encoder-openvino.xml
   models/ggml-base.en-encoder-openvino.bin
   ```

3. Enable OpenVINO:
   ```typescript
   const ctx = createWhisperContext({
     model: "./models/ggml-base.en.bin",
     use_openvino: true,
     openvino_device: "CPU", // or "GPU" for Intel iGPU
     openvino_cache_dir: "./openvino_cache", // optional, speeds up init
   });
   ```

**Note:** OpenVINO support requires the addon to be built with `-DADDON_OPENVINO=ON`.

## Models

Download models from [Hugging Face](https://huggingface.co/ggerganov/whisper.cpp):

```bash
# Base English model (~150MB)
curl -L -o models/ggml-base.en.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin

# Large v3 Turbo quantized (~500MB)
curl -L -o models/ggml-large-v3-turbo-q4_0.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo-q4_0.bin

# Silero VAD model (for streaming VAD)
curl -L -o models/ggml-silero-v6.2.0.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-silero-v6.2.0.bin
```

## License

MIT
package/package.json CHANGED
```diff
@@ -1,6 +1,6 @@
 {
   "name": "whisper-cpp-node",
-  "version": "0.2.0",
+  "version": "0.2.1",
   "description": "Node.js bindings for whisper.cpp - fast speech-to-text with GPU acceleration",
   "license": "MIT",
   "repository": {
@@ -22,7 +22,7 @@
   ],
   "optionalDependencies": {
     "@whisper-cpp-node/darwin-arm64": "0.2.1",
-    "@whisper-cpp-node/win32-x64": "0.2.0"
+    "@whisper-cpp-node/win32-x64": "0.2.1"
   },
   "devDependencies": {
     "@types/node": "^20.0.0",
```