valenceai 0.5.1 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +357 -177
- package/package.json +15 -6
- package/src/config.js +25 -14
- package/src/rateLimit.js +77 -0
- package/src/streaming.js +193 -0
- package/src/utils/logger.js +3 -3
- package/src/valenceClient.js +173 -68
- package/tests/asyncAudio.test.js +128 -71
- package/tests/client.test.js +10 -25
- package/tests/config.test.js +21 -21
- package/tests/e2e.asyncWorkflow.test.js +343 -0
- package/tests/e2e.streaming.test.js +420 -0
- package/tests/logger.test.js +3 -0
- package/tests/rateLimit.test.js +137 -0
- package/tests/setup.js +5 -4
- package/tests/streaming.test.js +187 -0
- package/tests/valenceClient.test.js +50 -5
package/README.md
CHANGED
@@ -1,250 +1,367 @@
 # Valence SDK for Emotion Detection
 
-**valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) Pulse API for emotion detection. It provides a convenient interface to upload audio files
+**valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) Pulse API for emotion detection. It provides a convenient interface to upload audio files, stream real-time audio, and retrieve detected emotional states.
 
 ## Features
 
-- **Discrete audio processing** -
-- **
-- **
+- **Discrete audio processing** - Real-time analysis for short audio clips (4-10s)
+- **Async audio processing** - Multipart streaming for long files with timeline data
+- **Streaming API** - Real-time WebSocket streaming for live audio
+- **Rate limiting** - Monitor API usage and limits
+- **Model selection** - Choose between 4emotions and 7emotions models
+- **Timeline analysis** - Get emotion changes over time with timestamps
+- **Environment configuration** - Built-in support for .env files
 - **Enhanced logging** - Configurable log levels with timestamps
 - **Robust error handling** - Comprehensive validation and error recovery
 - **TypeScript ready** - Full JSDoc documentation for all functions
-- **100% tested** - Comprehensive test suite with
+- **100% tested** - Comprehensive test suite with high coverage
 - **Security focused** - Input validation and secure error handling
 
 The emotional classification model used in our APIs is optimized for North American English conversational data.
 
-
+## Emotion Models
 
-
+The SDK supports two emotion detection models:
+
+- **4emotions** (default): angry, happy, neutral, sad
+- **7emotions**: happy, sad, angry, neutral, surprised, disgusted, calm
 
 The number of emotions, emotional buckets, and language support can be customized. If you are interested in a custom model, please [contact us](https://www.getvalenceai.com/contact).
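Both stock models are selected per call rather than per client, so the same clip can be scored under each. A minimal sketch under that assumption; `sample.wav` is a placeholder for any short mono WAV:

```javascript
import { ValenceClient } from 'valenceai';

const client = new ValenceClient({ apiKey: process.env.VALENCE_API_KEY });

// Score one placeholder clip under each stock model and compare outputs.
for (const model of ['4emotions', '7emotions']) {
  const result = await client.discrete.emotions('sample.wav', model);
  console.log(`${model}: ${result.dominant_emotion}`, result.emotions);
}
```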
 
-## API
+## API Overview
 
-
+| API | Best For | Input | Output | Response Time |
+|-----|----------|-------|--------|---------------|
+| **Discrete** | Real-time analysis | Short audio (4-10s) | Single emotion prediction | 100-500ms |
+| **Async** | Pre-recorded files | Long audio (up to 1GB) | Timeline with emotion changes | Depends on file size |
+| **Streaming** | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates | Near real-time |
 
-
-| ------------- | ------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- |
-| Inputs | A short audio file, 4-10s in length. | A long audio file, at least 5s in length. Inputs can be up to 1 GB large. |
-| Outputs | A JSON that includes the primary emotion detected in the file, along with its confidence. The confidence scores of all other emotions in the model are also returned. | A time-stamped JSON that includes the classified emotion and its confidence at a rate of 1 classification per 5 seconds of audio. |
-| Response Time | 100-500 ms | Dependent upon file size |
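The comparison above reduces to a simple dispatch rule. A hedged sketch of that rule; `pickApi` is an illustrative helper, not an SDK export, and the duration would come from your own audio probing:

```javascript
// Illustrative routing helper based on the comparison table above.
function pickApi(durationSeconds, isLiveStream) {
  if (isLiveStream) return 'streaming';         // chunks over WebSocket
  if (durationSeconds <= 10) return 'discrete'; // 4-10s clip, 100-500ms response
  return 'asynch';                              // long file, timeline output
}

console.log(pickApi(6, false));    // 'discrete'
console.log(pickApi(3600, false)); // 'asynch'
console.log(pickApi(0, true));     // 'streaming'
```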
+## Async API Processing Workflow
 
-The
+The Async API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:
 
-
+### 1. Upload Phase (Client-Side)
 
-
+When you call `client.asynch.upload(filePath)`:
 
-
+- SDK splits your file into parts (5MB chunks by default)
+- Uploads parts to S3 using presigned URLs
+- **Returns a `requestId`** - This is a tracking identifier, NOT a completion signal
+- At this point: File is uploaded to S3, but **NOT processed yet**
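A sketch of this phase on its own, assuming the `partSize` option documented under Client Configuration further down; the log line only confirms the tracking ID, not results:

```javascript
import { ValenceClient } from 'valenceai';

// 10MB parts instead of the 5MB default (partSize is a client option).
const client = new ValenceClient({
  apiKey: process.env.VALENCE_API_KEY,
  partSize: 10 * 1024 * 1024,
});

// Resolves once all parts are in S3 - server processing has NOT started.
const requestId = await client.asynch.upload('long_audio.wav');
console.log(`Tracking ID (not a completion signal): ${requestId}`);
```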
 
-
-npm install valenceai
-```
+### 2. Background Processing (Server-Side)
 
-
+After upload completes, the server automatically:
 
-
+- Background processor checks for new uploads every 10 seconds
+- Downloads audio from S3 when detected
+- Splits audio into 5-second segments
+- Extracts audio features (MFCC) from each segment
+- Invokes machine learning model for emotion detection
+- Stores results in database
+- Updates status to `completed`
 
-
-VALENCE_API_KEY=your_api_key # Required: Your Valence API key
-VALENCE_DISCRETE_URL=https://discrete-api-url # Optional: Discrete audio endpoint
-VALENCE_ASYNCH_URL=https://asynch-api-url # Optional: Asynch audio endpoint
-VALENCE_LOG_LEVEL=info # Optional: debug, info, warn, error
-```
+**Processing Time**: Typically 1-2 minutes for a 60-minute audio file. The exact time depends on file length and current server load.
 
-###
+### 3. Results Retrieval (Client-Side)
 
-
-import { validateConfig } from 'valenceai';
+When you call `client.asynch.emotions(requestId)`:
 
-
-
-
-
-
-
-
+- Polls the status endpoint at regular intervals
+- Waits for status progression:
+  - `initiated` → Upload started
+  - `upload_completed` → File uploaded to S3 (processing not started)
+  - `processing` → Background processing in progress
+  - `completed` → Results ready
+- Returns emotion timeline when status is `completed`
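A sketch of the retrieval call with explicit polling parameters and a guard for polling running out before processing completes; the error text is SDK-internal, so the handler is deliberately generic, and the `client` and `requestId` are assumed from the upload phase:

```javascript
// Poll every 10 seconds, up to 30 attempts (~5 minutes of waiting).
try {
  const result = await client.asynch.emotions(requestId, 30, 10000);
  console.log(`status: ${result.status}, segments: ${result.emotions.length}`);
} catch (err) {
  // Hitting maxTries before status reaches 'completed' lands here.
  // The requestId stays valid, so polling again later is safe.
  console.error('Results not ready yet:', err.message);
}
```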
 
-
+### Status Values
 
-
+| Status | Meaning | What's Happening |
+|--------|---------|------------------|
+| `initiated` | Upload started | SDK is uploading file parts to S3 |
+| `upload_completed` | Upload finished | File is in S3, waiting for background processor |
+| `processing` | Processing active | Server is analyzing audio with ML model |
+| `completed` | Results ready | Emotion timeline is available |
 
-
-import { ValenceClient } from 'valenceai';
+### Important Notes
 
-
-
-
-
-
-
-
+- **The `requestId` is NOT a completion indicator** - It's just a tracking ID
+- **`upload()` completing does NOT mean results are ready** - It only means the file is in S3
+- **Background processing takes time** - Plan for 1-2 minutes per hour of audio
+- **You can check status anytime** - The `requestId` remains valid for retrieving results
+
+## Installation
+
+```bash
+npm install valenceai
 ```
 
-
+## Quick Start
 
-```
+```javascript
 import { ValenceClient } from 'valenceai';
 
-
-
-
-
-
-
-
-
-
-
-
-
-
+// Initialize client (uses VALENCE_API_KEY environment variable)
+const client = new ValenceClient({ apiKey: 'your_api_key' });
+
+// Discrete API - Quick emotion detection
+const result = await client.discrete.emotions('short_audio.wav', '4emotions');
+console.log(`Emotion: ${result.dominant_emotion}`);
+
+// Async API - Long audio with timeline
+// Step 1: Upload file to S3 (returns tracking ID, NOT results)
+const requestId = await client.asynch.upload('long_audio.wav');
+// Step 2: Wait for server processing and get results (polls until complete)
+const emotions = await client.asynch.emotions(requestId, 30, 10000);
+// Step 3: Access timeline and dominant emotion from results
+const timeline = await client.asynch.getTimeline(requestId);
+const dominant = await client.asynch.getDominantEmotion(requestId);
+
+// Streaming API - Real-time audio
+const stream = client.streaming.connect('4emotions');
+stream.on('prediction', (data) => console.log(data.main_emotion));
+stream.connect();
+stream.sendAudio(audioBuffer);
+stream.disconnect();
+
+// Rate Limit API - Monitor usage
+const status = await client.rateLimit.getStatus();
+const health = await client.rateLimit.getHealth();
 ```
 
-
+## Configuration
 
-
-import { ValenceClient } from 'valenceai';
+### Environment Variables
 
-
-
-
-
-
+Create a `.env` file in your project root:
+
+```env
+VALENCE_API_KEY=your_api_key # Required
+VALENCE_API_BASE_URL=https://api.getvalenceai.com # Optional
+VALENCE_WEBSOCKET_URL=wss://api.getvalenceai.com # Optional
+VALENCE_LOG_LEVEL=info # Optional: debug, info, warn, error
+```
 
-
-const requestId = await client.asynch.upload('huge_file.wav');
+### Client Configuration
 
-
-const
-
-
-
-)
+```javascript
+const client = new ValenceClient({
+  apiKey: 'your_api_key', // API key (required)
+  baseUrl: 'https://custom.api', // Custom API endpoint (optional)
+  websocketUrl: 'wss://custom.api', // Custom WebSocket endpoint (optional)
+  partSize: 5 * 1024 * 1024, // Upload chunk size (default: 5MB)
+  maxRetries: 3 // Max retry attempts (default: 3)
+});
 ```
 
 ## API Reference
 
-###
+### Discrete API
 
-
+For short audio files requiring immediate emotion detection.
 
-
-
-
+```javascript
+// File upload
+const result = await client.discrete.emotions(
+  'audio.wav',
+  '4emotions' // or '7emotions'
+);
 
-
+// In-memory audio array
+const result = await client.discrete.emotions(
+  [0.1, 0.2, 0.3, ...],
+  '4emotions'
+);
+```
 
-
+**Response:**
+```javascript
+{
+  emotions: {
+    happy: 0.78,
+    sad: 0.12,
+    angry: 0.05,
+    neutral: 0.05
+  },
+  dominant_emotion: 'happy'
+}
+```
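Because `emotions` in this response is a flat score map, the full distribution can be ranked directly. A small usage sketch over the `result` shown above:

```javascript
// Rank the score map from the response above, highest confidence first.
const ranked = Object.entries(result.emotions).sort(([, a], [, b]) => b - a);

console.log(ranked[0]); // ['happy', 0.78] - agrees with result.dominant_emotion
console.log(ranked);    // full distribution, handy for confidence thresholds
```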
 
-
+### Async API
 
-
-- `filePath` (string): Path to the audio file
+For long audio files with timeline analysis.
 
-**
+**Workflow**: The Async API uses a 3-step process:
 
-**
+1. **Upload** (`upload()`) - Multipart upload to S3, returns `requestId` (tracking ID)
+2. **Background Processing** (automatic) - Server processes audio in 5-second chunks
+3. **Results Retrieval** (`emotions()`) - Polls status endpoint until processing completes
 
-
+**Processing Time**: Typically 1-2 minutes per hour of audio.
 
-
+**Status Progression**: `initiated` → `upload_completed` → `processing` → `completed`
 
-
-- `filePath` (string): Path to the audio file
+#### Upload Audio
 
-
+```javascript
+// Upload file to S3 (multipart upload)
+const requestId = await client.asynch.upload('long_audio.wav');
+// Returns: requestId (tracking ID, NOT completion signal)
+// File is uploaded to S3 but NOT processed yet
+```
 
-
+#### Get Emotion Results
 
-
+```javascript
+// Poll for results until processing completes
+const result = await client.asynch.emotions(
+  requestId,
+  20, // maxTries (default: 20, range: 1-100)
+  5000 // intervalMs (default: 5000, range: 1000-60000)
+);
+// This method waits for server processing to complete
+// Returns when status is 'completed'
+```
 
-
+**Response:**
+```javascript
+{
+  emotions: [
+    {
+      timestamp: 0.5,
+      start_time: 0.0,
+      end_time: 1.0,
+      emotion: 'happy',
+      confidence: 0.9,
+      all_predictions: { happy: 0.9, sad: 0.1, ... }
+    },
+    {
+      timestamp: 1.5,
+      start_time: 1.0,
+      end_time: 2.0,
+      emotion: 'neutral',
+      confidence: 0.85,
+      all_predictions: { neutral: 0.85, happy: 0.15, ... }
+    }
+  ],
+  status: 'completed'
+}
+```
+
+#### Timeline Analysis
 
-
-
-
-- `intervalSeconds` (number, optional): Polling interval in seconds (default: 5, range: 1-60)
+```javascript
+// Get full timeline
+const timeline = await client.asynch.getTimeline(requestId);
 
-
+// Get emotion at specific time
+const emotion = await client.asynch.getEmotionAtTime(requestId, 5.2);
 
-
+// Get dominant emotion across entire audio
+const dominant = await client.asynch.getDominantEmotion(requestId);
+```
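Given the per-segment fields shown in the Response above, finding where the detected emotion flips is a linear scan. A hedged sketch, assuming `getTimeline()` yields the same entry shape as `result.emotions`:

```javascript
// Report each point where the detected emotion changes between segments.
const timeline = await client.asynch.getTimeline(requestId);

for (let i = 1; i < timeline.length; i++) {
  const prev = timeline[i - 1];
  const curr = timeline[i];
  if (curr.emotion !== prev.emotion) {
    console.log(`${prev.emotion} -> ${curr.emotion} at ${curr.start_time}s (confidence ${curr.confidence})`);
  }
}
```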
 
-###
+### Streaming API
 
-
+For real-time emotion detection on live audio streams.
 
-
+```javascript
+// Create streaming connection
+const stream = client.streaming.connect('4emotions');
 
-
+// Register event handlers
+stream.on('prediction', (data) => {
+  console.log(`Emotion: ${data.main_emotion}`);
+});
 
-
-
+stream.on('error', (error) => {
+  console.error(`Error: ${error.message}`);
+});
 
-
+stream.on('connected', (info) => {
+  console.log(`Connected: ${info.session_id}`);
+});
 
-
+// Connect to WebSocket
+await stream.connect();
 
-
+// Send audio chunks (Buffer or ArrayBuffer)
+stream.sendAudio(audioBuffer);
 
-
+// Check connection status
+if (stream.connected) {
+  console.log('Streaming active');
+}
 
-
+// Disconnect
+stream.disconnect();
+```
 
-
+**Prediction Event:**
+```javascript
 {
-
-
-
-
-
-
-
-}
+  main_emotion: 'happy',
+  confidence: 0.87,
+  all_predictions: {
+    happy: 0.87,
+    sad: 0.05,
+    angry: 0.03,
+    neutral: 0.05
+  },
+  timestamp: 1234567890
 }
 ```
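For development without live capture, a recorded file can be replayed through the same interface, since `sendAudio()` accepts Buffers. A sketch assuming the `client` from the Quick Start and Node's built-in `fs`; the 64KB chunk size is illustrative, not an SDK requirement:

```javascript
import fs from 'node:fs';

// Replay a prerecorded mono WAV as a sequence of streaming chunks.
const stream = client.streaming.connect('4emotions');
stream.on('prediction', (data) => console.log(data.main_emotion, data.confidence));
await stream.connect();

// createReadStream yields Buffers, which sendAudio accepts directly.
const reader = fs.createReadStream('long_audio.wav', { highWaterMark: 64 * 1024 });
for await (const chunk of reader) {
  stream.sendAudio(chunk);
}
stream.disconnect();
```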
 
-
+### Rate Limit API
+
+Monitor your API usage and limits.
+
+```javascript
+// Get rate limit status
+const status = await client.rateLimit.getStatus();
+console.log(status);
+// {
+//   limits: {
+//     second: { limit: 10, remaining: 8, reset: 1234567890 },
+//     minute: { limit: 100, remaining: 95, reset: 1234567890 },
+//     hour: { limit: 1000, remaining: 950, reset: 1234567890 },
+//     day: { limit: 10000, remaining: 9500, reset: 1234567890 }
+//   },
+//   current_usage: {
+//     second: 2,
+//     minute: 5,
+//     hour: 50,
+//     day: 500
+//   }
+// }
+
+// Check API health
+const health = await client.rateLimit.getHealth();
+console.log(health);
+// { status: 'healthy', timestamp: 1234567890 }
+```
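One practical use of this status shape is gating a batch of calls on remaining quota. A hedged sketch; the threshold of 10 is arbitrary, and it assumes `reset` is a Unix timestamp in seconds as the example output suggests:

```javascript
// Pause before a burst of requests if the per-minute budget is nearly spent.
const { limits } = await client.rateLimit.getStatus();

if (limits.minute.remaining < 10) {
  const waitMs = Math.max(0, limits.minute.reset * 1000 - Date.now());
  console.log(`Near the per-minute limit, pausing ${waitMs}ms`);
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}
```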
 
-
+## Audio Input Requirements
 
-
-
-
-
-
-
-
-
-"confidence": 0.82791723
-},
-{
-"t": "00:05",
-"emotion": "neutral",
-"confidence": 0.719817432
-},
-{
-"t": "00:10",
-"emotion": "happy",
-"confidence": 0.917309381
-},
-{
-"t": "00:15",
-"emotion": "neutral",
-"confidence": 0.414097846
-}
-"..."
-]
-}
-```
+### Format Specifications
+
+- **Format**: WAV (mono)
+- **Recommended sampling rate**: 44.1 kHz (44100 Hz)
+- **Minimum sampling rate**: 8 kHz
+- **Channel**: Mono (single channel)
+
+### API-Specific Requirements
 
-
+- **Discrete API**: 4-10 seconds per file
+- **Async API**: Minimum 5 seconds, maximum 1 GB
+- **Streaming API**: Real-time audio chunks (Buffer or ArrayBuffer)
+
+For custom microphone specifications or stereo/multi-channel support, please [contact us](https://www.getvalenceai.com/contact).
 
 ## Examples
 
-
+Example scripts are available in the [`examples/`](./examples) directory:
 
 ```bash
 # Install dependencies
@@ -253,12 +370,37 @@ npm install
 # Run discrete audio example
 npm run example:discrete
 
-# Run
+# Run async audio example
 npm run example:asynch
 
+# Run streaming example
+npm run example:streaming
+
 # Or run directly
 node examples/uploadShort.js
 node examples/uploadLong.js
+node examples/streamingAudio.js
+```
+
+## Error Handling
+
+```javascript
+import { ValenceClient } from 'valenceai';
+
+try {
+  const client = new ValenceClient({ apiKey: 'your_key' });
+  const result = await client.discrete.emotions('audio.wav');
+} catch (error) {
+  if (error.message.includes('API key')) {
+    console.error('Authentication error:', error.message);
+  } else if (error.message.includes('File not found')) {
+    console.error('File error:', error.message);
+  } else if (error.message.includes('API error')) {
+    console.error('API error:', error.message);
+  } else {
+    console.error('Unexpected error:', error.message);
+  }
+}
 ```
 
 ## Development
@@ -274,6 +416,9 @@ npm run test:coverage
 
 # Watch mode for development
 npm run test:watch
+
+# Run specific test file
+npm test -- discrete.test.js
 ```
 
 ### Building and Publishing
@@ -287,23 +432,58 @@ npm login
 npm publish --access public
 ```
 
-##
+## Migration from v0.x
+
+### Key Changes in v1.0.0
+
+1. **Environment Variable**: `VALENCE_API_KEY` is now the standard (consistent naming)
+2. **Unified Client**: Single `ValenceClient` class with nested APIs
+3. **Streaming API**: New WebSocket-based real-time emotion detection
+4. **Rate Limiting**: New API for monitoring usage
+5. **Timeline Data**: Async API now returns detailed timestamp information
+6. **Model Selection**: Explicit model parameter for 4emotions or 7emotions
+
+### Updating Your Code
+
+```javascript
+// Old (v0.x)
+import { predictDiscreteAudioEmotion } from 'valenceai';
+const result = await predictDiscreteAudioEmotion('file.wav');
+
+// New (v1.0.0)
+import { ValenceClient } from 'valenceai';
+const client = new ValenceClient({ apiKey: 'your_key' });
+const result = await client.discrete.emotions('file.wav', '4emotions');
+
+// New streaming capability
+const stream = client.streaming.connect('4emotions');
+stream.on('prediction', callback);
+await stream.connect();
+```
+
+### Breaking Changes
 
-
--
--
--
--
-- **Single Import**: `import { ValenceClient } from 'valenceai'`
+- `predictDiscreteAudioEmotion()` → `client.discrete.emotions()`
+- `uploadAsyncAudio()` → `client.asynch.upload()`
+- `getEmotions()` → `client.asynch.emotions()`
+- All methods now require creating a `ValenceClient` instance first
+- Model parameter is now required and explicit
 
-
-- **API Symmetry** - Identical structure to Python SDK
-- **Intuitive Organization** - Related methods grouped together
-- **Consistent Naming** - Same method names across Python and JavaScript
-- **Enhanced Documentation** - Updated examples and migration guide
-- **Maintained Quality** - All existing functionality preserved
+See [CHANGELOG.md](./CHANGELOG.md) for complete migration guide.
 
-
+## TypeScript Support
+
+The SDK includes comprehensive JSDoc annotations for full TypeScript IntelliSense:
+
+```typescript
+import { ValenceClient } from 'valenceai';
+
+const client: ValenceClient = new ValenceClient({ apiKey: 'your_key' });
+
+// Full type inference and autocomplete
+const result = await client.discrete.emotions('audio.wav', '4emotions');
+// result.dominant_emotion is typed
+```
 
 ## Contributing
 
@@ -317,9 +497,9 @@ We welcome contributions! Please:
 
 ## Support
 
-- **Documentation**:
+- **Documentation**: [API Documentation](https://docs.getvalenceai.com)
 - **Issues**: [GitHub Issues](https://github.com/valencevibrations/valence-sdk-js/issues)
-- **Questions**:
+- **Questions**: [Valence AI Support](https://www.getvalenceai.com/contact)
 
 ## License
 