valenceai 1.0.1 → 1.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,35 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [1.0.3] - 2025-01-28
+
+ ### Added
+
+ - **`getEmotionCounts()` method**: Returns an object of emotion occurrence counts for the entire audio file (e.g., `{happy: 10, sad: 3, angry: 8, neutral: 9}`)
+ - **`majorityEmotion()` method**: Alias for `getDominantEmotion()`, returns the most frequently occurring emotion as a string
+
+ ### Technical Improvements
+
+ - **Refactored `getDominantEmotion()`**: Now uses `getEmotionCounts()` internally to avoid code duplication
+
+ ### Usage Example
+
+ ```javascript
+ import { ValenceClient } from 'valenceai';
+
+ const client = new ValenceClient({ apiKey: 'your_api_key' });
+ const requestId = await client.asynch.upload('audio.wav');
+ const result = await client.asynch.emotions(requestId);
+
+ // Get emotion counts
+ const counts = await client.asynch.getEmotionCounts(requestId);
+ // Returns: { happy: 10, sad: 3, angry: 8, neutral: 9 }
+
+ // Get majority emotion
+ const majority = await client.asynch.majorityEmotion(requestId);
+ // Returns: "happy"
+ ```
+
 ## [1.0.1] - 2025-12-29
 
 ### Fixed
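The `getEmotionCounts()`/`getDominantEmotion()` refactor described in the changelog above reduces to a count-then-reduce pattern. Below is a minimal standalone sketch of that pattern, independent of the SDK, using a hypothetical `timeline` array of per-chunk predictions (the SDK's actual timeline entries may carry more fields):

```javascript
// Hypothetical timeline entries, one per processed audio chunk
const timeline = [
  { emotion: 'happy' }, { emotion: 'sad' }, { emotion: 'happy' },
  { emotion: 'neutral' }, { emotion: 'happy' },
];

// Tally occurrences of each emotion label
function getEmotionCounts(timeline) {
  const counts = {};
  for (const { emotion } of timeline) {
    counts[emotion] = (counts[emotion] || 0) + 1;
  }
  return counts;
}

// Pick the label with the highest count, or null for an empty timeline
function getDominantEmotion(timeline) {
  const counts = getEmotionCounts(timeline);
  const keys = Object.keys(counts);
  if (keys.length === 0) return null;
  return keys.reduce((a, b) => (counts[a] > counts[b] ? a : b));
}

console.log(getEmotionCounts(timeline));   // { happy: 3, sad: 1, neutral: 1 }
console.log(getDominantEmotion(timeline)); // 'happy'
```

Implementing the dominant emotion on top of the counts, as the changelog notes, means the two methods cannot drift apart.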
package/README.md CHANGED
@@ -1,15 +1,13 @@
  # Valence SDK for Emotion Detection
 
- **valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) Pulse API for emotion detection. It provides a convenient interface to upload audio files, stream real-time audio, and retrieve detected emotional states.
+ **valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) API for emotion analysis. It provides a convenient interface to upload audio files, stream real-time audio, and retrieve detected emotional states.
 
  ## Features
 
  - **Discrete audio processing** - Real-time analysis for short audio clips (4-10s)
- - **Async audio processing** - Multipart streaming for long files with timeline data
+ - **Asynch audio processing** - Multipart streaming for long files with timeline data
  - **Streaming API** - Real-time WebSocket streaming for live audio
  - **Rate limiting** - Monitor API usage and limits
- - **Model selection** - Choose between 4emotions and 7emotions models
- - **Timeline analysis** - Get emotion changes over time with timestamps
  - **Environment configuration** - Built-in support for .env files
  - **Enhanced logging** - Configurable log levels with timestamps
  - **Robust error handling** - Comprehensive validation and error recovery
@@ -17,28 +15,19 @@
  - **100% tested** - Comprehensive test suite with high coverage
  - **Security focused** - Input validation and secure error handling
 
- The emotional classification model used in our APIs is optimized for North American English conversational data.
-
- ## Emotion Models
-
- The SDK supports two emotion detection models:
-
- - **4emotions** (default): angry, happy, neutral, sad
- - **7emotions**: happy, sad, angry, neutral, surprised, disgusted, calm
-
- The number of emotions, emotional buckets, and language support can be customized. If you are interested in a custom model, please [contact us](https://www.getvalenceai.com/contact).
+ The emotional classification model used in our APIs is optimized for North American English conversational data. The model detects four emotions: angry, happy, neutral, and sad.
 
  ## API Overview
 
- | API | Best For | Input | Output | Response Time |
- |-----|----------|-------|--------|---------------|
- | **Discrete** | Real-time analysis | Short audio (4-10s) | Single emotion prediction | 100-500ms |
- | **Async** | Pre-recorded files | Long audio (up to 1GB) | Timeline with emotion changes | Depends on file size |
- | **Streaming** | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates | Near real-time |
+ | API | Best For | Input | Output |
+ |-----|----------|-------|--------|
+ | **Discrete** | Real-time analysis | Short audio (4-10s) | Single emotion prediction |
+ | **Asynch** | Pre-recorded files | Long audio (up to 1GB) | Timeline with emotion changes |
+ | **Streaming** | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates |
 
- ## Async API Processing Workflow
+ ## Asynch API Processing Workflow
 
- The Async API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:
+ The Asynch API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:
 
  ### 1. Upload Phase (Client-Side)
 
@@ -61,7 +50,7 @@ After upload completes, the server automatically:
  - Stores results in database
  - Updates status to `completed`
 
- **Processing Time**: Typically 1-2 minutes for a 60-minute audio file. The exact time depends on file length and current server load.
+ **Processing Time**: Varies based on file length and server load.
 
  ### 3. Results Retrieval (Client-Side)
 
@@ -87,8 +76,8 @@ When you call `client.asynch.emotions(requestId)`:
  ### Important Notes
 
  - **The `requestId` is NOT a completion indicator** - It's just a tracking ID
- - **`upload()` completing does NOT mean results are ready** - It only means the file is in S3
- - **Background processing takes time** - Plan for 1-2 minutes per hour of audio
+ - **`upload()` completing does NOT mean results are ready** - It only means the file is uploaded
+ - **Background processing takes time** - Processing time varies based on file length and server load
  - **You can check status anytime** - The `requestId` remains valid for retrieving results
 
  ## Installation
@@ -106,20 +95,23 @@ import { ValenceClient } from 'valenceai';
  const client = new ValenceClient({ apiKey: 'your_api_key' });
 
  // Discrete API - Quick emotion detection
- const result = await client.discrete.emotions('short_audio.wav', '4emotions');
- console.log(`Emotion: ${result.dominant_emotion}`);
+ const result = await client.discrete.emotions('short_audio.wav');
+ console.log(`Emotion: ${result.main_emotion}`);
 
- // Async API - Long audio with timeline
- // Step 1: Upload file to S3 (returns tracking ID, NOT results)
+ // Asynch API - Long audio with timeline
+ // Step 1: Upload file (returns tracking ID, NOT results)
  const requestId = await client.asynch.upload('long_audio.wav');
  // Step 2: Wait for server processing and get results (polls until complete)
  const emotions = await client.asynch.emotions(requestId, 30, 10000);
- // Step 3: Access timeline and dominant emotion from results
- const timeline = await client.asynch.getTimeline(requestId);
- const dominant = await client.asynch.getDominantEmotion(requestId);
+ // Step 3: Access emotion data from results
+ const emotionList = emotions.emotions; // List of emotion predictions with timestamps
+
+ // Get summary statistics
+ const majority = await client.asynch.majorityEmotion(requestId); // Most frequent emotion
+ const counts = await client.asynch.getEmotionCounts(requestId); // { happy: 10, sad: 3, ... }
 
  // Streaming API - Real-time audio
- const stream = client.streaming.connect('4emotions');
+ const stream = client.streaming.connect();
  stream.on('prediction', (data) => console.log(data.main_emotion));
  stream.connect();
  stream.sendAudio(audioBuffer);
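The `emotions(requestId, maxAttempts, intervalMs)` call in the quick start above polls the status endpoint until processing completes. The loop below is a minimal standalone sketch of that poll-until-complete pattern; `checkStatus` is a hypothetical stand-in for the SDK's internal status request, not the SDK's actual implementation:

```javascript
// Generic poll-until-complete loop matching the documented status progression:
// initiated → upload_completed → processing → completed (or failed)
async function waitForResults(checkStatus, requestId, maxAttempts = 30, intervalMs = 1000) {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await checkStatus(requestId);
    if (status === 'completed') return status;
    if (status === 'failed') throw new Error(`Processing failed for request ${requestId}`);
    // Not done yet: wait before polling again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out after ${maxAttempts} attempts`);
}

// Simulated status endpoint for illustration
const statuses = ['initiated', 'upload_completed', 'processing', 'completed'];
let call = 0;
waitForResults(() => Promise.resolve(statuses[call++]), 'req-123', 10, 1)
  .then((status) => console.log(status)); // logs 'completed'
```

Bounding the loop with `maxAttempts` mirrors the SDK signature and guarantees the caller is never left waiting forever on a stuck request.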
@@ -151,7 +143,9 @@ const client = new ValenceClient({
  baseUrl: 'https://custom.api', // Custom API endpoint (optional)
  websocketUrl: 'wss://custom.api', // Custom WebSocket endpoint (optional)
  partSize: 5 * 1024 * 1024, // Upload chunk size (default: 5MB)
- maxRetries: 3 // Max retry attempts (default: 3)
+ maxRetries: 3, // Max retry attempts (default: 3)
+ comprehensiveOutput: false // When false: returns timestamp, main_emotion, confidence only.
+ // When true: also includes all_predictions with all emotion confidences (default: false)
  });
  ```
 
@@ -163,16 +157,10 @@ For short audio files requiring immediate emotion detection.
 
  ```javascript
  // File upload
- const result = await client.discrete.emotions(
- 'audio.wav',
- '4emotions' // or '7emotions'
- );
+ const result = await client.discrete.emotions('audio.wav');
 
  // In-memory audio array
- const result = await client.discrete.emotions(
- [0.1, 0.2, 0.3, ...],
- '4emotions'
- );
+ const result = await client.discrete.emotions([0.1, 0.2, 0.3, ...]);
  ```
 
  **Response:**
@@ -184,31 +172,31 @@ const result = await client.discrete.emotions(
  angry: 0.05,
  neutral: 0.05
  },
- dominant_emotion: 'happy'
+ main_emotion: 'happy'
  }
  ```
 
- ### Async API
+ ### Asynch API
 
  For long audio files with timeline analysis.
 
- **Workflow**: The Async API uses a 3-step process:
+ **Workflow**: The Asynch API uses a 3-step process:
 
- 1. **Upload** (`upload()`) - Multipart upload to S3, returns `requestId` (tracking ID)
+ 1. **Upload** (`upload()`) - Multipart upload, returns `requestId` (tracking ID)
  2. **Background Processing** (automatic) - Server processes audio in 5-second chunks
  3. **Results Retrieval** (`emotions()`) - Polls status endpoint until processing completes
 
- **Processing Time**: Typically 1-2 minutes per hour of audio.
+ **Processing Time**: Varies based on file length and server load.
 
  **Status Progression**: `initiated` → `upload_completed` → `processing` → `completed`
 
  #### Upload Audio
 
  ```javascript
- // Upload file to S3 (multipart upload)
+ // Upload file (multipart upload)
  const requestId = await client.asynch.upload('long_audio.wav');
  // Returns: requestId (tracking ID, NOT completion signal)
- // File is uploaded to S3 but NOT processed yet
+ // File is uploaded but NOT processed yet
  ```
 
  #### Get Emotion Results
@@ -249,17 +237,18 @@ const result = await client.asynch.emotions(
  }
  ```
 
- #### Timeline Analysis
+ Note: The `all_predictions` field is only included when `comprehensiveOutput: true` is set in the client constructor.
 
- ```javascript
- // Get full timeline
- const timeline = await client.asynch.getTimeline(requestId);
+ #### Helper Methods
 
- // Get emotion at specific time
- const emotion = await client.asynch.getEmotionAtTime(requestId, 5.2);
+ ```javascript
+ // Get the most frequently occurring emotion across the entire file
+ const majority = await client.asynch.majorityEmotion(requestId);
+ // Returns: "happy"
 
- // Get dominant emotion across entire audio
- const dominant = await client.asynch.getDominantEmotion(requestId);
+ // Get emotion occurrence counts for the entire file
+ const counts = await client.asynch.getEmotionCounts(requestId);
+ // Returns: { happy: 10, sad: 3, angry: 8, neutral: 9 }
  ```
 
  ### Streaming API
@@ -268,7 +257,7 @@ For real-time emotion detection on live audio streams.
 
  ```javascript
  // Create streaming connection
- const stream = client.streaming.connect('4emotions');
+ const stream = client.streaming.connect();
 
  // Register event handlers
  stream.on('prediction', (data) => {
@@ -309,10 +298,12 @@ stream.disconnect();
  angry: 0.03,
  neutral: 0.05
  },
- timestamp: 1234567890
+ timestamp: 1706486400000 // Unix timestamp (UTC) in milliseconds
  }
  ```
 
+ The `timestamp` is a Unix timestamp (UTC) in milliseconds representing when the server generated the prediction.
+
  ### Rate Limit API
 
  Monitor your API usage and limits.
@@ -354,7 +345,7 @@ console.log(health);
  ### API-Specific Requirements
 
  - **Discrete API**: 4-10 seconds per file
- - **Async API**: Minimum 5 seconds, maximum 1 GB
+ - **Asynch API**: Minimum 5 seconds, maximum 1 GB
  - **Streaming API**: Real-time audio chunks (Buffer or ArrayBuffer)
 
  For custom microphone specifications or stereo/multi-channel support, please [contact us](https://www.getvalenceai.com/contact).
@@ -382,16 +373,66 @@ node examples/uploadLong.js
  node examples/streamingAudio.js
  ```
 
+ ## Error Responses
+
+ ### Discrete API Errors
+
+ | HTTP Status | Error Code | Description |
+ |-------------|------------|-------------|
+ | 400 | `AUDIO_TOO_SHORT` | Audio duration below minimum (4.5 seconds). Response includes `min_duration_seconds` and `actual_duration_seconds` |
+ | 400 | Bad Request | Invalid request format or parameters |
+ | 401 | Unauthorized | Invalid or missing API key |
+ | 500 | Server Error | Internal server error |
+
+ ### Asynch API Errors
+
+ | HTTP Status | Error Code | Description |
+ |-------------|------------|-------------|
+ | 400 | `AUDIO_TOO_SHORT` | Audio duration below minimum (5 seconds) |
+ | 400 | Bad Request | Invalid request format or parameters |
+ | 401 | Unauthorized | Invalid or missing API key |
+ | 404 | Not Found | Request ID not found |
+ | 500 | Server Error | Internal server error |
+
+ **Asynch Status Values:**
+
+ | Status | Meaning |
+ |--------|---------|
+ | `initiated` | Upload in progress |
+ | `upload_completed` | Upload finished, awaiting processing |
+ | `processing` | Server analyzing audio |
+ | `completed` | Results ready |
+ | `failed` | Processing failed |
+
+ ### Streaming API Errors
+
+ | Event | Description |
+ |-------|-------------|
+ | `error` | Server-side error during streaming |
+ | `warning` | Non-fatal warning from server |
+ | `connect_error` | WebSocket connection failed |
+ | `disconnect` | Connection closed |
+
+ ### Rate Limit API Errors
+
+ | HTTP Status | Description |
+ |-------------|-------------|
+ | 401 | Unauthorized - Invalid API key |
+ | 429 | Too Many Requests - Rate limit exceeded |
+ | 500 | Server Error |
+
  ## Error Handling
 
  ```javascript
- import { ValenceClient } from 'valenceai';
+ import { ValenceClient, AudioTooShortError } from 'valenceai';
 
  try {
  const client = new ValenceClient({ apiKey: 'your_key' });
  const result = await client.discrete.emotions('audio.wav');
  } catch (error) {
- if (error.message.includes('API key')) {
+ if (error instanceof AudioTooShortError) {
+ console.error(`Audio too short: ${error.actualDuration}s (min: ${error.minDuration}s)`);
+ } else if (error.message.includes('API key')) {
  console.error('Authentication error:', error.message);
  } else if (error.message.includes('File not found')) {
  console.error('File error:', error.message);
@@ -440,8 +481,7 @@ npm publish --access public
  2. **Unified Client**: Single `ValenceClient` class with nested APIs
  3. **Streaming API**: New WebSocket-based real-time emotion detection
  4. **Rate Limiting**: New API for monitoring usage
- 5. **Timeline Data**: Async API now returns detailed timestamp information
- 6. **Model Selection**: Explicit model parameter for 4emotions or 7emotions
+ 5. **Timeline Data**: Asynch API now returns detailed timestamp information
 
  ### Updating Your Code
 
@@ -453,10 +493,10 @@ const result = await predictDiscreteAudioEmotion('file.wav');
  // New (v1.0.0)
  import { ValenceClient } from 'valenceai';
  const client = new ValenceClient({ apiKey: 'your_key' });
- const result = await client.discrete.emotions('file.wav', '4emotions');
+ const result = await client.discrete.emotions('file.wav');
 
  // New streaming capability
- const stream = client.streaming.connect('4emotions');
+ const stream = client.streaming.connect();
  stream.on('prediction', callback);
  await stream.connect();
  ```
@@ -467,7 +507,6 @@ await stream.connect();
  - `uploadAsyncAudio()` → `client.asynch.upload()`
  - `getEmotions()` → `client.asynch.emotions()`
  - All methods now require creating a `ValenceClient` instance first
- - Model parameter is now required and explicit
 
  See [CHANGELOG.md](./CHANGELOG.md) for complete migration guide.
 
@@ -481,8 +520,8 @@ import { ValenceClient } from 'valenceai';
  const client: ValenceClient = new ValenceClient({ apiKey: 'your_key' });
 
  // Full type inference and autocomplete
- const result = await client.discrete.emotions('audio.wav', '4emotions');
- // result.dominant_emotion is typed
+ const result = await client.discrete.emotions('audio.wav');
+ // result.main_emotion is typed
  ```
 
  ## Contributing
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "valenceai",
- "version": "1.0.1",
+ "version": "1.0.3",
  "type": "module",
  "main": "src/index.js",
  "description": "Node.js SDK for Valence AI Emotion Detection API - Real-time, Async, and Streaming Support",
package/src/errors.js ADDED
@@ -0,0 +1,26 @@
+ /**
+ * Base error class for Valence SDK errors.
+ */
+ export class ValenceSDKError extends Error {
+ constructor(message) {
+ super(message);
+ this.name = 'ValenceSDKError';
+ }
+ }
+
+ /**
+ * Error thrown when the audio file is shorter than the minimum required duration.
+ */
+ export class AudioTooShortError extends ValenceSDKError {
+ /**
+ * @param {string} message - Error message
+ * @param {number|null} minDuration - Minimum required duration in seconds
+ * @param {number|null} actualDuration - Actual duration of the provided audio in seconds
+ */
+ constructor(message, minDuration = null, actualDuration = null) {
+ super(message);
+ this.name = 'AudioTooShortError';
+ this.minDuration = minDuration;
+ this.actualDuration = actualDuration;
+ }
+ }
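Because `AudioTooShortError` extends `ValenceSDKError`, a single `instanceof ValenceSDKError` check can catch any SDK-specific error while still allowing narrower handling. A small sketch (the classes are copied from the file above, without the `export` keyword so the snippet is self-contained):

```javascript
class ValenceSDKError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ValenceSDKError';
  }
}

class AudioTooShortError extends ValenceSDKError {
  constructor(message, minDuration = null, actualDuration = null) {
    super(message);
    this.name = 'AudioTooShortError';
    this.minDuration = minDuration;
    this.actualDuration = actualDuration;
  }
}

const err = new AudioTooShortError('Audio file is too short', 4.5, 1.5);
console.log(err instanceof AudioTooShortError); // true
console.log(err instanceof ValenceSDKError);    // true
console.log(err instanceof Error);              // true
console.log(err.minDuration, err.actualDuration); // 4.5 1.5
```

The `name` property is set explicitly in each constructor so that stack traces and log lines identify the concrete error class.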
package/src/index.js CHANGED
@@ -1,2 +1,3 @@
  export { ValenceClient } from './valenceClient.js';
- export { validateConfig } from './config.js';
+ export { validateConfig } from './config.js';
+ export { ValenceSDKError, AudioTooShortError } from './errors.js';
package/src/valenceClient.js CHANGED
@@ -6,6 +6,7 @@ import { getHeaders } from './client.js';
  import { log } from './utils/logger.js';
  import { RateLimitAPI } from './rateLimit.js';
  import { StreamingAPI } from './streaming.js';
+ import { AudioTooShortError } from './errors.js';
 
  /**
  * Client for discrete (short) audio processing
@@ -87,6 +88,14 @@ class DiscreteClient {
  log(`Error getting discrete emotions: ${error.message}`, 'error');
 
  if (error.response) {
+ // Check for AUDIO_TOO_SHORT error
+ if (error.response.status === 400 && error.response.data?.error_code === 'AUDIO_TOO_SHORT') {
+ throw new AudioTooShortError(
+ error.response.data.error || 'Audio file is too short',
+ error.response.data.min_duration_seconds,
+ error.response.data.actual_duration_seconds
+ );
+ }
  throw new Error(`API error (${error.response.status}): ${error.response.data?.message || error.response.statusText}`);
  } else if (error.request) {
  throw new Error('Network error: Unable to reach the API');
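The branch added above keys off the `error_code` field in the response body, falling back to generic API and network errors otherwise. That decision logic can be tested in isolation; `classifyApiError` below is a hypothetical helper (not part of the SDK) that mirrors the same branching over an axios-like error shape (`error.response` / `error.request`), returning a tag instead of throwing:

```javascript
// Classify an axios-like error the way the client code above does
function classifyApiError(error) {
  if (error.response) {
    const { status, data } = error.response;
    if (status === 400 && data?.error_code === 'AUDIO_TOO_SHORT') {
      // Structured validation error with duration details
      return {
        kind: 'audio_too_short',
        min: data.min_duration_seconds,
        actual: data.actual_duration_seconds,
      };
    }
    // Any other HTTP error from the server
    return { kind: 'api_error', message: `API error (${status}): ${data?.message || ''}` };
  }
  // Request was sent but no response arrived
  if (error.request) return { kind: 'network_error' };
  return { kind: 'unknown', message: error.message };
}

const tooShort = classifyApiError({
  response: {
    status: 400,
    data: { error_code: 'AUDIO_TOO_SHORT', min_duration_seconds: 4.5, actual_duration_seconds: 1.5 },
  },
});
console.log(tooShort); // { kind: 'audio_too_short', min: 4.5, actual: 1.5 }
```

Checking `error.response` before `error.request` matters: axios sets both on HTTP errors, so the order distinguishes "server answered with an error" from "server never answered".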
@@ -101,10 +110,11 @@ class DiscreteClient {
  * Client for async (long) audio processing
  */
  class AsyncClient {
- constructor(clientConfig, partSize = 5 * 1024 * 1024, maxRetries = 3) {
+ constructor(clientConfig, partSize = 5 * 1024 * 1024, maxRetries = 3, comprehensiveOutput = false) {
  this.config = clientConfig;
  this.partSize = partSize;
  this.maxRetries = maxRetries;
+ this.comprehensiveOutput = comprehensiveOutput;
  }
 
  /**
@@ -190,8 +200,16 @@ class AsyncClient {
  return data.request_id;
  } catch (error) {
  log(`Error uploading async audio: ${error.message}`, 'error');
-
+
  if (error.response) {
+ // Check for AUDIO_TOO_SHORT error
+ if (error.response.status === 400 && error.response.data?.error_code === 'AUDIO_TOO_SHORT') {
+ throw new AudioTooShortError(
+ error.response.data.error || 'Audio file is too short',
+ error.response.data.min_duration_seconds,
+ error.response.data.actual_duration_seconds
+ );
+ }
  throw new Error(`API error (${error.response.status}): ${error.response.data?.message || error.response.statusText}`);
  } else if (error.request) {
  throw new Error('Network error: Unable to reach the API');
@@ -231,6 +249,7 @@ class AsyncClient {
 
  const url = `${this.config.baseUrl}/v1/asynch/emotion/status/${requestId}`;
  const intervalMs = intervalSeconds * 1000;
+ const params = this.comprehensiveOutput ? { comprehensive_output: 'true' } : {};
 
  for (let i = 0; i < maxAttempts; i++) {
  try {
@@ -238,6 +257,7 @@
 
  const res = await axios.get(url, {
  headers: { 'x-api-key': this.config.apiKey },
+ params,
  timeout: 15000
  });
 
@@ -304,20 +324,44 @@ class AsyncClient {
  * @returns {Promise<string|null>} The dominant emotion across the timeline
  */
  async getDominantEmotion(requestId) {
+ const counts = await this.getEmotionCounts(requestId);
+ if (!counts || Object.keys(counts).length === 0) {
+ return null;
+ }
+
+ return Object.keys(counts).reduce((a, b) =>
+ counts[a] > counts[b] ? a : b
+ );
+ }
+
+ /**
+ * Alias for getDominantEmotion. Get the most frequently occurring emotion.
+ * @param {string} requestId - Request ID from upload method
+ * @returns {Promise<string|null>} The dominant emotion across the timeline
+ */
+ async majorityEmotion(requestId) {
+ return this.getDominantEmotion(requestId);
+ }
+
+ /**
+ * Get counts of each emotion in the timeline
+ * @param {string} requestId - Request ID from upload method
+ * @returns {Promise<Object>} Object mapping emotion names to their occurrence counts
+ * (e.g., {happy: 10, sad: 3, angry: 8, neutral: 9})
+ */
+ async getEmotionCounts(requestId) {
  const timeline = await this.getTimeline(requestId);
  if (!timeline || timeline.length === 0) {
- return null;
+ return {};
  }
 
- const emotionCounts = {};
+ const counts = {};
  for (const emotionData of timeline) {
  const emotion = emotionData.emotion;
- emotionCounts[emotion] = (emotionCounts[emotion] || 0) + 1;
+ counts[emotion] = (counts[emotion] || 0) + 1;
  }
 
- return Object.keys(emotionCounts).reduce((a, b) =>
- emotionCounts[a] > emotionCounts[b] ? a : b
- );
+ return counts;
  }
  }
 
@@ -333,6 +377,7 @@ export class ValenceClient {
  * @param {string} options.websocketUrl - WebSocket URL for streaming (default: wss://demo.getvalenceai.com)
  * @param {number} options.partSize - Size of parts for multipart upload (default: 5MB)
  * @param {number} options.maxRetries - Max retry attempts for uploads (default: 3)
+ * @param {boolean} options.comprehensiveOutput - Include all_predictions in async emotion responses (default: false)
  */
  constructor(options = {}) {
  // Build configuration with priority: parameter > env var > default
@@ -351,10 +396,11 @@ export class ValenceClient {
 
  const partSize = options.partSize || 5 * 1024 * 1024;
  const maxRetries = options.maxRetries || 3;
+ const comprehensiveOutput = options.comprehensiveOutput || false;
 
  // Initialize nested clients
  this.discrete = new DiscreteClient(this.config);
- this.asynch = new AsyncClient(this.config, partSize, maxRetries);
+ this.asynch = new AsyncClient(this.config, partSize, maxRetries, comprehensiveOutput);
  this.rateLimit = new RateLimitAPI(this.config);
  this.streaming = new StreamingAPI(this.config);
  }
@@ -2,6 +2,7 @@ import { describe, test, expect, beforeEach, afterEach, jest } from '@jest/globals';
  import nock from 'nock';
  import fs from 'fs';
  import { ValenceClient } from '../src/valenceClient.js';
+ import { AudioTooShortError } from '../src/errors.js';
 
  describe('AsyncAudio', () => {
  const originalEnv = process.env;
@@ -166,7 +167,7 @@ describe('AsyncAudio', () => {
  test('should handle API errors', async () => {
  fsMock.mockReturnValue(true);
  statMock.mockReturnValue({ size: 5242880 });
-
+
  nock('https://test-api.com')
  .get('/upload/initiate')
  .reply(400, { message: 'Invalid request' });
@@ -176,6 +177,34 @@
  'API error (400): Invalid request'
  );
  });
+
+ test('should throw AudioTooShortError when audio is too short', async () => {
+ fsMock.mockReturnValue(true);
+ statMock.mockReturnValue({ size: 1000 }); // Small file
+
+ nock('https://test-api.com')
+ .post('/v1/asynch/emotion/upload/initiate')
+ .query(true)
+ .reply(400, {
+ error: 'Audio file is too short. Minimum duration: 4.5 seconds, provided: 1.00 seconds',
+ error_code: 'AUDIO_TOO_SHORT',
+ min_duration_seconds: 4.5,
+ actual_duration_seconds: 1.0
+ });
+
+ const client = new ValenceClient({ apiKey: 'test-api-key' });
+
+ try {
+ await client.asynch.upload('short_audio.wav');
+ throw new Error('Should have thrown AudioTooShortError');
+ } catch (error) {
+ expect(error).toBeInstanceOf(AudioTooShortError);
+ expect(error.name).toBe('AudioTooShortError');
+ expect(error.minDuration).toBe(4.5);
+ expect(error.actualDuration).toBe(1.0);
+ expect(error.message).toContain('too short');
+ }
+ });
  });
 
  describe('getEmotions', () => {
@@ -2,6 +2,7 @@ import { describe, test, expect, beforeEach, afterEach, jest } from '@jest/globals';
  import nock from 'nock';
  import fs from 'fs';
  import { ValenceClient } from '../src/valenceClient.js';
+ import { AudioTooShortError } from '../src/errors.js';
 
  describe('DiscreteAudio', () => {
  const originalEnv = process.env;
@@ -152,7 +153,7 @@ describe('DiscreteAudio', () => {
 
  test('should include correct headers', async () => {
  fsMock.mockReturnValue(true);
-
+
  const scope = nock('https://test-api.com')
  .post('/predict?model=4emotions')
  .matchHeader('x-api-key', 'test-api-key')
@@ -161,8 +162,48 @@ describe('DiscreteAudio', () => {
  const client = new ValenceClient();
  await client.discrete.emotions('test.wav');
-
+
  expect(scope.isDone()).toBe(true);
  });
+
+ test('should throw AudioTooShortError when audio is too short', async () => {
+ fsMock.mockReturnValue(true);
+
+ nock('https://test-api.com')
+ .post('/predict?model=4emotions')
+ .reply(400, {
+ error: 'Audio file is too short. Minimum duration: 4.5 seconds, provided: 1.50 seconds',
+ error_code: 'AUDIO_TOO_SHORT',
+ min_duration_seconds: 4.5,
+ actual_duration_seconds: 1.5
+ });
+
+ const client = new ValenceClient();
+
+ try {
+ await client.discrete.emotions('short_audio.wav');
+ throw new Error('Should have thrown AudioTooShortError');
+ } catch (error) {
+ expect(error).toBeInstanceOf(AudioTooShortError);
+ expect(error.name).toBe('AudioTooShortError');
+ expect(error.minDuration).toBe(4.5);
+ expect(error.actualDuration).toBe(1.5);
+ expect(error.message).toContain('too short');
+ }
+ });
+
+ test('should throw regular error for other 400 errors', async () => {
+ fsMock.mockReturnValue(true);
+
+ nock('https://test-api.com')
+ .post('/predict?model=4emotions')
+ .reply(400, { message: 'Invalid file format' });
+
+ const client = new ValenceClient();
+
+ await expect(client.discrete.emotions('test.wav')).rejects.toThrow(
+ 'API error (400): Invalid file format'
+ );
+ });
  });
  });