valenceai 0.5.1 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,250 +1,367 @@
1
1
  # Valence SDK for Emotion Detection
2
2
 
3
- **valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) Pulse API for emotion detection. It provides a convenient interface to upload audio files, short or long, and retrieve detected emotional states.
3
+ **valenceai** is a Node.js SDK for interacting with the [Valence AI](https://getvalenceai.com) Pulse API for emotion detection. It provides a convenient interface to upload audio files, stream real-time audio, and retrieve detected emotional states.
4
4
 
5
5
  ## Features
6
6
 
7
- - **Discrete audio processing** - Single API call for short audio files
8
- - **Asynch audio processing** - Multipart streaming for long audio files
9
- - **Environment configuration** - Built-in support for `.env` configuration
7
+ - **Discrete audio processing** - Real-time analysis for short audio clips (4-10s)
8
+ - **Async audio processing** - Multipart streaming for long files with timeline data
9
+ - **Streaming API** - Real-time WebSocket streaming for live audio
10
+ - **Rate limiting** - Monitor API usage and limits
11
+ - **Model selection** - Choose between 4emotions and 7emotions models
12
+ - **Timeline analysis** - Get emotion changes over time with timestamps
13
+ - **Environment configuration** - Built-in support for .env files
10
14
  - **Enhanced logging** - Configurable log levels with timestamps
11
15
  - **Robust error handling** - Comprehensive validation and error recovery
12
16
  - **TypeScript ready** - Full JSDoc documentation for all functions
13
- - **100% tested** - Comprehensive test suite with 95%+ coverage
17
+ - **Thoroughly tested** - Comprehensive test suite with high coverage
14
18
  - **Security focused** - Input validation and secure error handling
15
19
 
16
20
  The emotional classification model used in our APIs is optimized for North American English conversational data.
17
21
 
18
- The API includes a baseline model of 4 basic emotions. The emotions included by default are angry, happy, neutral, and sad. Our other model offerings include different subsets of the following emotions: happy, sad, angry, neutral, surprised, disgusted, nervous, irritated, excited, sleepy. 
22
+ ## Emotion Models
19
23
 
20
- _Coming soon_ – The API will include a model choice parameter, allowing users to choose between models of 4, 5, and 7 emotions.
24
+ The SDK supports two emotion detection models:
25
+
26
+ - **4emotions** (default): angry, happy, neutral, sad
27
+ - **7emotions**: happy, sad, angry, neutral, surprised, disgusted, calm
21
28
 
22
29
  The number of emotions, emotional buckets, and language support can be customized. If you are interested in a custom model, please [contact us](https://www.getvalenceai.com/contact).
23
30
 
24
- ## API Functionality
31
+ ## API Overview
25
32
 
26
- While our APIs include the same model offerings in the backend, they are best suited for different purposes.
33
+ | API | Best For | Input | Output | Response Time |
34
+ |-----|----------|-------|--------|---------------|
35
+ | **Discrete** | Real-time analysis | Short audio (4-10s) | Single emotion prediction | 100-500ms |
36
+ | **Async** | Pre-recorded files | Long audio (up to 1GB) | Timeline with emotion changes | Depends on file size |
37
+ | **Streaming** | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates | Near real-time |
27
38
 
28
- | | DiscreteAPI | AsynchAPI |
29
- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
30
- | Inputs | A short audio file, 4-10s in length. | A long audio file, at least 5s in length. Inputs can be up to 1 GB large. |
31
- | Outputs | A JSON that includes the primary emotion detected in the file, along with its confidence. The confidence scores of all other emotions in the model are also returned. | A time-stamped JSON that includes the classified emotion and its confidence at a rate of 1 classification per 5 seconds of audio. |
32
- | Response Time | 100-500 ms | Dependent upon file size |
39
+ ## Async API Processing Workflow
33
40
 
34
- The **DiscreteAPI** is built for real-time analysis of emotions in audio data. Small snippets of audio are sent to the API to receive feedback in real-time of what emotions are detected based on tone of voice. This API operates on an approximate per-sentence basis, and audio must be cut to the appropriate size.
41
+ The Async API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:
35
42
 
36
- The **AsynchAPI** is built for emotion analysis of pre-recorded audio files. Files of any length, up to 1 GB in size, can be sent to the API to receive a summary of emotions throughout the file. Similar to the DiscreteAPI, this API operates on an approximate per-sentence basis, but the AsyncAPI provides timestamps to show the change in emotions over time.
43
+ ### 1. Upload Phase (Client-Side)
37
44
 
38
- _Coming soon_ – StreamingAPI via WebSockets for real-time analysis of an audio stream.
45
+ When you call `client.asynch.upload(filePath)`:
39
46
 
40
- ## Installation
47
+ - The SDK splits your file into parts (5MB chunks by default)
48
+ - Uploads parts to S3 using presigned URLs
49
+ - **Returns a `requestId`** - This is a tracking identifier, NOT a completion signal
50
+ - At this point: File is uploaded to S3, but **NOT processed yet**
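The part arithmetic above follows directly from the documented 5MB default. As a sketch (not the SDK's internal code), the number of multipart chunks for a given file size is:

```javascript
// Sketch: how many parts a multipart upload produces for a given file size,
// assuming the SDK's documented default of 5MB parts.
const DEFAULT_PART_SIZE = 5 * 1024 * 1024; // 5MB, per the docs

function partCount(fileSizeBytes, partSize = DEFAULT_PART_SIZE) {
  if (fileSizeBytes <= 0) throw new Error('File must be non-empty');
  return Math.ceil(fileSizeBytes / partSize);
}

console.log(partCount(23 * 1024 * 1024)); // 5 (a 23MB file uploads as 5 parts)
```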
41
51
 
42
- ```bash
43
- npm install valenceai
44
- ```
52
+ ### 2. Background Processing (Server-Side)
45
53
 
46
- ## Configuration
54
+ After upload completes, the server automatically:
47
55
 
48
- Create a `.env` file in your project root:
56
+ - Checks for new uploads every 10 seconds (via a background processor)
57
+ - Downloads audio from S3 when detected
58
+ - Splits audio into 5-second segments
59
+ - Extracts audio features (MFCC) from each segment
60
+ - Invokes machine learning model for emotion detection
61
+ - Stores results in database
62
+ - Updates status to `completed`
49
63
 
50
- ```env
51
- VALENCE_API_KEY=your_api_key # Required: Your Valence API key
52
- VALENCE_DISCRETE_URL=https://discrete-api-url # Optional: Discrete audio endpoint
53
- VALENCE_ASYNCH_URL=https://asynch-api-url # Optional: Asynch audio endpoint
54
- VALENCE_LOG_LEVEL=info # Optional: debug, info, warn, error
55
- ```
64
+ **Processing Time**: Typically 1-2 minutes for a 60-minute audio file. The exact time depends on file length and current server load.
56
65
 
57
- ### Configuration Validation
66
+ ### 3. Results Retrieval (Client-Side)
58
67
 
59
- ```js
60
- import { validateConfig } from 'valenceai';
68
+ When you call `client.asynch.emotions(requestId)`:
61
69
 
62
- try {
63
- validateConfig();
64
- console.log('Configuration is valid!');
65
- } catch (error) {
66
- console.error('Configuration error:', error.message);
67
- }
68
- ```
70
+ - Polls the status endpoint at regular intervals
71
+ - Waits for status progression:
72
+ - `initiated` → Upload started
73
+ - `upload_completed` → File uploaded to S3 (processing not started)
74
+ - `processing` → Background processing in progress
75
+ - `completed` → Results ready
76
+ - Returns emotion timeline when status is `completed`
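Conceptually, this retrieval step behaves like the polling loop below. This is a simplified sketch, not the SDK's implementation; `fetchStatus` is a hypothetical stand-in for the real status-endpoint call:

```javascript
// Sketch of the polling behaviour described above. `fetchStatus` is a
// hypothetical stand-in for the SDK's status-endpoint request.
async function pollUntilComplete(fetchStatus, maxTries = 20, intervalMs = 5000) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    const res = await fetchStatus();
    if (res.status === 'completed') return res; // results are ready
    // still 'initiated', 'upload_completed', or 'processing': wait and retry
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Processing did not complete within ${maxTries} attempts`);
}

// Example with a fake endpoint that completes on the third check:
const statuses = ['upload_completed', 'processing', 'completed'];
let i = 0;
const fake = async () => ({ status: statuses[i++], emotions: [] });
pollUntilComplete(fake, 5, 10).then((r) => console.log(r.status)); // 'completed'
```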
69
77
 
70
- ## Usage
78
+ ### Status Values
71
79
 
72
- ### Discrete Audio (Short Files)
80
+ | Status | Meaning | What's Happening |
81
+ |--------|---------|------------------|
82
+ | `initiated` | Upload started | SDK is uploading file parts to S3 |
83
+ | `upload_completed` | Upload finished | File is in S3, waiting for background processor |
84
+ | `processing` | Processing active | Server is analyzing audio with ML model |
85
+ | `completed` | Results ready | Emotion timeline is available |
73
86
 
74
- ```js
75
- import { ValenceClient } from 'valenceai';
87
+ ### Important Notes
76
88
 
77
- try {
78
- const client = new ValenceClient();
79
- const result = await client.discrete.emotions('YOUR_FILE.wav');
80
- console.log('Emotion detected:', result);
81
- } catch (error) {
82
- console.error('Error:', error.message);
83
- }
89
+ - **The `requestId` is NOT a completion indicator** - It's just a tracking ID
90
+ - **`upload()` completing does NOT mean results are ready** - It only means the file is in S3
91
+ - **Background processing takes time** - Plan for 1-2 minutes per hour of audio
92
+ - **You can check status anytime** - The `requestId` remains valid for retrieving results
93
+
94
+ ## Installation
95
+
96
+ ```bash
97
+ npm install valenceai
84
98
  ```
85
99
 
86
- ### Asynch Audio (Long Files)
100
+ ## Quick Start
87
101
 
88
- ```js
102
+ ```javascript
89
103
  import { ValenceClient } from 'valenceai';
90
104
 
91
- try {
92
- const client = new ValenceClient();
93
-
94
- // Upload the audio file
95
- const requestId = await client.asynch.upload('YOUR_FILE.wav');
96
- console.log('Upload complete. Request ID:', requestId);
97
-
98
- // Get emotions from uploaded audio
99
- const emotions = await client.asynch.emotions(requestId);
100
- console.log('Emotions detected:', emotions);
101
- } catch (error) {
102
- console.error('Error:', error.message);
103
- }
105
+ // Initialize client (falls back to the VALENCE_API_KEY environment variable if apiKey is omitted)
106
+ const client = new ValenceClient({ apiKey: 'your_api_key' });
107
+
108
+ // Discrete API - Quick emotion detection
109
+ const result = await client.discrete.emotions('short_audio.wav', '4emotions');
110
+ console.log(`Emotion: ${result.dominant_emotion}`);
111
+
112
+ // Async API - Long audio with timeline
113
+ // Step 1: Upload file to S3 (returns tracking ID, NOT results)
114
+ const requestId = await client.asynch.upload('long_audio.wav');
115
+ // Step 2: Wait for server processing and get results (polls until complete)
116
+ const emotions = await client.asynch.emotions(requestId, 30, 10000);
117
+ // Step 3: Access timeline and dominant emotion from results
118
+ const timeline = await client.asynch.getTimeline(requestId);
119
+ const dominant = await client.asynch.getDominantEmotion(requestId);
120
+
121
+ // Streaming API - Real-time audio
122
+ const stream = client.streaming.connect('4emotions');
123
+ stream.on('prediction', (data) => console.log(data.main_emotion));
124
+ await stream.connect();
125
+ stream.sendAudio(audioBuffer);
126
+ stream.disconnect();
127
+
128
+ // Rate Limit API - Monitor usage
129
+ const status = await client.rateLimit.getStatus();
130
+ const health = await client.rateLimit.getHealth();
104
131
  ```
105
132
 
106
- ### Advanced Usage
133
+ ## Configuration
107
134
 
108
- ```js
109
- import { ValenceClient } from 'valenceai';
135
+ ### Environment Variables
110
136
 
111
- // Custom client configuration
112
- const client = new ValenceClient(
113
- 2 * 1024 * 1024, // 2MB parts
114
- 5 // 5 retry attempts
115
- );
137
+ Create a `.env` file in your project root:
138
+
139
+ ```env
140
+ VALENCE_API_KEY=your_api_key # Required
141
+ VALENCE_API_BASE_URL=https://api.getvalenceai.com # Optional
142
+ VALENCE_WEBSOCKET_URL=wss://api.getvalenceai.com # Optional
143
+ VALENCE_LOG_LEVEL=info # Optional: debug, info, warn, error
144
+ ```
116
145
 
117
- // Upload with custom configuration
118
- const requestId = await client.asynch.upload('huge_file.wav');
146
+ ### Client Configuration
119
147
 
120
- // Custom polling with more attempts and shorter intervals
121
- const emotions = await client.asynch.emotions(
122
- requestId,
123
- 50, // 50 polling attempts
124
- 3 // 3 second intervals
125
- );
148
+ ```javascript
149
+ const client = new ValenceClient({
150
+ apiKey: 'your_api_key', // API key (required)
151
+ baseUrl: 'https://custom.api', // Custom API endpoint (optional)
152
+ websocketUrl: 'wss://custom.api', // Custom WebSocket endpoint (optional)
153
+ partSize: 5 * 1024 * 1024, // Upload chunk size (default: 5MB)
154
+ maxRetries: 3 // Max retry attempts (default: 3)
155
+ });
126
156
  ```
127
157
 
128
158
  ## API Reference
129
159
 
130
- ### `new ValenceClient(partSize?, maxRetries?)`
160
+ ### Discrete API
131
161
 
132
- Creates a new Valence client with nested discrete and asynch clients.
162
+ For short audio files requiring immediate emotion detection.
133
163
 
134
- **Parameters:**
135
- - `partSize` (number, optional): Size of each part in bytes for asynch uploads (default: 5MB)
136
- - `maxRetries` (number, optional): Maximum retry attempts for asynch uploads (default: 3)
164
+ ```javascript
165
+ // File upload
166
+ const result = await client.discrete.emotions(
167
+ 'audio.wav',
168
+ '4emotions' // or '7emotions'
169
+ );
137
170
 
138
- **Returns:** `ValenceClient` instance with `discrete` and `asynch` properties
171
+ // In-memory audio array
172
+ const result = await client.discrete.emotions(
173
+ [0.1, 0.2, 0.3, ...],
174
+ '4emotions'
175
+ );
176
+ ```
139
177
 
140
- ### `client.discrete.emotions(filePath)`
178
+ **Response:**
179
+ ```javascript
180
+ {
181
+ emotions: {
182
+ happy: 0.78,
183
+ sad: 0.12,
184
+ angry: 0.05,
185
+ neutral: 0.05
186
+ },
187
+ dominant_emotion: 'happy'
188
+ }
189
+ ```
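Given this response shape, a common client-side step is to discard low-confidence predictions. The sketch below uses the 0.38 cutoff suggested in this project's earlier guidance; the exact threshold remains at the user's discretion:

```javascript
// Sketch: post-processing a Discrete API response. The 0.38 cutoff follows
// the project's earlier guidance on dropping low-confidence predictions.
function usableEmotion(result, minConfidence = 0.38) {
  const confidence = result.emotions[result.dominant_emotion];
  return confidence >= minConfidence ? result.dominant_emotion : null;
}

const result = {
  emotions: { happy: 0.78, sad: 0.12, angry: 0.05, neutral: 0.05 },
  dominant_emotion: 'happy',
};
console.log(usableEmotion(result)); // 'happy' (0.78 >= 0.38)
```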
141
190
 
142
- Predicts emotions for discrete (short) audio files using a single API call.
191
+ ### Async API
143
192
 
144
- **Parameters:**
145
- - `filePath` (string): Path to the audio file
193
+ For long audio files with timeline analysis.
146
194
 
147
- **Returns:** `Promise<Object>` - Emotion prediction results
195
+ **Workflow**: The Async API uses a 3-step process:
148
196
 
149
- **Throws:** Error if file doesn't exist, API key missing, or request fails
197
+ 1. **Upload** (`upload()`) - Multipart upload to S3, returns `requestId` (tracking ID)
198
+ 2. **Background Processing** (automatic) - Server processes audio in 5-second chunks
199
+ 3. **Results Retrieval** (`emotions()`) - Polls status endpoint until processing completes
150
200
 
151
- ### `client.asynch.upload(filePath)`
201
+ **Processing Time**: Typically 1-2 minutes per hour of audio.
152
202
 
153
- Uploads asynch (long) audio files using multipart upload for processing.
203
+ **Status Progression**: `initiated` → `upload_completed` → `processing` → `completed`
154
204
 
155
- **Parameters:**
156
- - `filePath` (string): Path to the audio file
205
+ #### Upload Audio
157
206
 
158
- **Returns:** `Promise<string>` - Request ID for tracking the upload
207
+ ```javascript
208
+ // Upload file to S3 (multipart upload)
209
+ const requestId = await client.asynch.upload('long_audio.wav');
210
+ // Returns: requestId (tracking ID, NOT completion signal)
211
+ // File is uploaded to S3 but NOT processed yet
212
+ ```
159
213
 
160
- **Throws:** Error if file doesn't exist, API key missing, or upload fails
214
+ #### Get Emotion Results
161
215
 
162
- ### `client.asynch.emotions(requestId, maxAttempts?, intervalSeconds?)`
216
+ ```javascript
217
+ // Poll for results until processing completes
218
+ const result = await client.asynch.emotions(
219
+ requestId,
220
+ 20, // maxTries (default: 20, range: 1-100)
221
+ 5000 // intervalMs (default: 5000, range: 1000-60000)
222
+ );
223
+ // This method waits for server processing to complete
224
+ // Returns when status is 'completed'
225
+ ```
163
226
 
164
- Retrieves emotion prediction results for asynch audio processing.
227
+ **Response:**
228
+ ```javascript
229
+ {
230
+ emotions: [
231
+ {
232
+ timestamp: 0.5,
233
+ start_time: 0.0,
234
+ end_time: 1.0,
235
+ emotion: 'happy',
236
+ confidence: 0.9,
237
+ all_predictions: { happy: 0.9, sad: 0.1, ... }
238
+ },
239
+ {
240
+ timestamp: 1.5,
241
+ start_time: 1.0,
242
+ end_time: 2.0,
243
+ emotion: 'neutral',
244
+ confidence: 0.85,
245
+ all_predictions: { neutral: 0.85, happy: 0.15, ... }
246
+ }
247
+ ],
248
+ status: 'completed'
249
+ }
250
+ ```
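The timeline shape above lends itself to simple client-side aggregation. As an illustration (a sketch of the kind of result `getDominantEmotion()` provides, not its actual implementation), a most-frequent-emotion tally looks like:

```javascript
// Sketch: tally the most frequent emotion across the timeline segments
// returned by the Async API. Illustrative only; getDominantEmotion()
// gives you an equivalent answer directly.
function dominantEmotion(segments) {
  const counts = {};
  for (const seg of segments) {
    counts[seg.emotion] = (counts[seg.emotion] || 0) + 1;
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
}

const segments = [
  { start_time: 0, end_time: 1, emotion: 'happy', confidence: 0.9 },
  { start_time: 1, end_time: 2, emotion: 'neutral', confidence: 0.85 },
  { start_time: 2, end_time: 3, emotion: 'happy', confidence: 0.7 },
];
console.log(dominantEmotion(segments)); // 'happy'
```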
251
+
252
+ #### Timeline Analysis
165
253
 
166
- **Parameters:**
167
- - `requestId` (string): Request ID from `client.asynch.upload`
168
- - `maxAttempts` (number, optional): Maximum polling attempts (default: 20, range: 1-100)
169
- - `intervalSeconds` (number, optional): Polling interval in seconds (default: 5, range: 1-60)
254
+ ```javascript
255
+ // Get full timeline
256
+ const timeline = await client.asynch.getTimeline(requestId);
170
257
 
171
- **Returns:** `Promise<Object>` - Emotion detection results
258
+ // Get emotion at specific time
259
+ const emotion = await client.asynch.getEmotionAtTime(requestId, 5.2);
172
260
 
173
- **Throws:** Error if requestId is invalid or detection times out
261
+ // Get dominant emotion across entire audio
262
+ const dominant = await client.asynch.getDominantEmotion(requestId);
263
+ ```
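For context, a time lookup amounts to finding the segment whose window contains the requested timestamp. Here is a local sketch over already-fetched timeline data (a hypothetical helper, not the SDK's `getEmotionAtTime` call):

```javascript
// Sketch: find the segment covering a given time in an already-fetched
// timeline. Illustrative; getEmotionAtTime() performs this for you.
function emotionAt(segments, seconds) {
  const seg = segments.find(
    (s) => seconds >= s.start_time && seconds < s.end_time
  );
  return seg ? seg.emotion : null;
}

const timeline = [
  { start_time: 0, end_time: 5, emotion: 'neutral' },
  { start_time: 5, end_time: 10, emotion: 'happy' },
];
console.log(emotionAt(timeline, 5.2)); // 'happy'
```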
174
264
 
175
- ### `validateConfig()`
265
+ ### Streaming API
176
266
 
177
- Validates the current SDK configuration.
267
+ For real-time emotion detection on live audio streams.
178
268
 
179
- **Throws:** Error if required configuration is missing or invalid
269
+ ```javascript
270
+ // Create streaming connection
271
+ const stream = client.streaming.connect('4emotions');
180
272
 
181
- ## Inputs and Outputs
273
+ // Register event handlers
274
+ stream.on('prediction', (data) => {
275
+ console.log(`Emotion: ${data.main_emotion}`);
276
+ });
182
277
 
183
- ### Inputs
184
- The APIs expect mono audio in the .wav format. An ideal audio file is recorded at 44100 Hz (44.1 kHz), though sampling rates as low as 8 kHz can still be used with high accuracy. For custom use cases, microphone specifications can be customized based on audio environment, including optimizations for mono/stereo audio, single microphone applications, noisy environments, etc. 
278
+ stream.on('error', (error) => {
279
+ console.error(`Error: ${error.message}`);
280
+ });
185
281
 
186
- For the **DiscreteAPI**, input data is an audio file in the .wav format.
282
+ stream.on('connected', (info) => {
283
+ console.log(`Connected: ${info.session_id}`);
284
+ });
187
285
 
188
- For the **AsynchAPI**, input data is an audio file in the .wav format.
286
+ // Connect to WebSocket
287
+ await stream.connect();
189
288
 
190
- ### Outputs
289
+ // Send audio chunks (Buffer or ArrayBuffer)
290
+ stream.sendAudio(audioBuffer);
191
291
 
192
- Outputs are returned as JSONs in the following formats: 
292
+ // Check connection status
293
+ if (stream.connected) {
294
+ console.log('Streaming active');
295
+ }
193
296
 
194
- **DiscreteAPI:**
297
+ // Disconnect
298
+ stream.disconnect();
299
+ ```
195
300
 
196
- ```json
301
+ **Prediction Event:**
302
+ ```javascript
197
303
  {
198
- "main_emotion": "happy",
199
- "confidence": 0.777777777,
200
- "all_predictions": {
201
- "angry": 0.123456789,
202
- "happy": 0.777777777,
203
- "neutral": 0.23456789,
204
- "sad": 0.098765432
205
- }
304
+ main_emotion: 'happy',
305
+ confidence: 0.87,
306
+ all_predictions: {
307
+ happy: 0.87,
308
+ sad: 0.05,
309
+ angry: 0.03,
310
+ neutral: 0.05
311
+ },
312
+ timestamp: 1234567890
206
313
  }
207
314
  ```
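Live audio often arrives in arbitrary buffer sizes, so it can help to slice it into uniform chunks before calling `sendAudio`. The sketch below is illustrative; the chunk size shown (3200 bytes ≈ 100ms of 16kHz/16-bit mono) is an assumed example, not a requirement of the Streaming API:

```javascript
// Sketch: accumulate incoming audio and emit fixed-size chunks, e.g. to feed
// stream.sendAudio(). Chunk size is an illustrative choice, not an API rule.
function makeChunker(chunkBytes, onChunk) {
  let pending = Buffer.alloc(0);
  return (buf) => {
    pending = Buffer.concat([pending, buf]);
    while (pending.length >= chunkBytes) {
      onChunk(pending.subarray(0, chunkBytes)); // emit one full chunk
      pending = pending.subarray(chunkBytes);   // keep the remainder
    }
  };
}

const chunks = [];
const push = makeChunker(4, (c) => chunks.push(c));
push(Buffer.from([1, 2, 3]));
push(Buffer.from([4, 5, 6, 7, 8, 9]));
console.log(chunks.length); // 2 (bytes 1-4 and 5-8; byte 9 is still pending)
```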
208
315
 
209
- The emotion returned in `main_emotion` is the highest confidence emotion returned from the model. Within `all_predictions`, each emotion is followed by its level of confidence. Some may use the top two highest confidence emotions to generate more nuanced states. We recommend dropping a `main_emotion` with confidence under 0.38, but that is at the user's discretion.
316
+ ### Rate Limit API
317
+
318
+ Monitor your API usage and limits.
319
+
320
+ ```javascript
321
+ // Get rate limit status
322
+ const status = await client.rateLimit.getStatus();
323
+ console.log(status);
324
+ // {
325
+ // limits: {
326
+ // second: { limit: 10, remaining: 8, reset: 1234567890 },
327
+ // minute: { limit: 100, remaining: 95, reset: 1234567890 },
328
+ // hour: { limit: 1000, remaining: 950, reset: 1234567890 },
329
+ // day: { limit: 10000, remaining: 9500, reset: 1234567890 }
330
+ // },
331
+ // current_usage: {
332
+ // second: 2,
333
+ // minute: 5,
334
+ // hour: 50,
335
+ // day: 500
336
+ // }
337
+ // }
338
+
339
+ // Check API health
340
+ const health = await client.rateLimit.getHealth();
341
+ console.log(health);
342
+ // { status: 'healthy', timestamp: 1234567890 }
343
+ ```
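One practical use of the status shape above is deciding whether to back off before the next call. A minimal sketch, assuming `reset` is a Unix timestamp in seconds as the example suggests:

```javascript
// Sketch: given the rate-limit status shape shown above, compute how long to
// wait before the next call. Assumes `reset` is a Unix timestamp in seconds.
function msUntilAvailable(status, window = 'second', nowMs = Date.now()) {
  const w = status.limits[window];
  if (w.remaining > 0) return 0; // quota left: safe to call now
  return Math.max(0, w.reset * 1000 - nowMs);
}

const status = {
  limits: { second: { limit: 10, remaining: 0, reset: 1234567895 } },
};
console.log(msUntilAvailable(status, 'second', 1234567890000)); // 5000
```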
210
344
 
211
- **AsynchAPI:**
345
+ ## Audio Input Requirements
212
346
 
213
- ```json
214
- {
215
- "request_id": "27a33189-bdd7-47ca-9817-abacfb7bdaf4",
216
- "status": "completed",
217
- "emotions": [
218
- {
219
- "t": "00:00",
220
- "emotion": "neutral",
221
- "confidence": 0.82791723
222
- },
223
- {
224
- "t": "00:05",
225
- "emotion": "neutral",
226
- "confidence": 0.719817432
227
- },
228
- {
229
- "t": "00:10",
230
- "emotion": "happy",
231
- "confidence": 0.917309381
232
- },
233
- {
234
- "t": "00:15",
235
- "emotion": "neutral",
236
- "confidence": 0.414097846
237
- }
238
- "..."
239
- ]
240
- }
241
- ```
347
+ ### Format Specifications
348
+
349
+ - **Format**: WAV (mono)
350
+ - **Recommended sampling rate**: 44.1 kHz (44100 Hz)
351
+ - **Minimum sampling rate**: 8 kHz
352
+ - **Channel**: Mono (single channel)
353
+
354
+ ### API-Specific Requirements
242
355
 
243
- The emotions returned in `emotions` are the highest confidence emotion returned from the model, alongside the timestamp and confidence. The number of values in `emotions` correlates directly to the length of the input file. We recommend dropping `emotions` with confidence under 0.38, but that is at the user's discretion.
356
+ - **Discrete API**: 4-10 seconds per file
357
+ - **Async API**: Minimum 5 seconds, maximum 1 GB
358
+ - **Streaming API**: Real-time audio chunks (Buffer or ArrayBuffer)
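The Discrete API's 4-10 second window is easy to check locally before uploading. For PCM audio, duration is simply sample count divided by sample rate (a sketch; WAV header parsing is not shown):

```javascript
// Sketch: pre-validate clip duration against the documented Discrete API
// window (4-10 s). Duration of PCM audio = sampleCount / sampleRate.
function discreteDurationOk(sampleCount, sampleRate) {
  const seconds = sampleCount / sampleRate;
  return seconds >= 4 && seconds <= 10;
}

console.log(discreteDurationOk(6 * 44100, 44100)); // true  (6 s clip)
console.log(discreteDurationOk(2 * 44100, 44100)); // false (too short)
```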
359
+
360
+ For custom microphone specifications or stereo/multi-channel support, please [contact us](https://www.getvalenceai.com/contact).
244
361
 
245
362
  ## Examples
246
363
 
247
- Run the included examples:
364
+ Example scripts are available in the [`examples/`](./examples) directory:
248
365
 
249
366
  ```bash
250
367
  # Install dependencies
@@ -253,12 +370,37 @@ npm install
253
370
  # Run discrete audio example
254
371
  npm run example:discrete
255
372
 
256
- # Run asynch audio example
373
+ # Run async audio example
257
374
  npm run example:asynch
258
375
 
376
+ # Run streaming example
377
+ npm run example:streaming
378
+
259
379
  # Or run directly
260
380
  node examples/uploadShort.js
261
381
  node examples/uploadLong.js
382
+ node examples/streamingAudio.js
383
+ ```
384
+
385
+ ## Error Handling
386
+
387
+ ```javascript
388
+ import { ValenceClient } from 'valenceai';
389
+
390
+ try {
391
+ const client = new ValenceClient({ apiKey: 'your_key' });
392
+ const result = await client.discrete.emotions('audio.wav');
393
+ } catch (error) {
394
+ if (error.message.includes('API key')) {
395
+ console.error('Authentication error:', error.message);
396
+ } else if (error.message.includes('File not found')) {
397
+ console.error('File error:', error.message);
398
+ } else if (error.message.includes('API error')) {
399
+ console.error('API error:', error.message);
400
+ } else {
401
+ console.error('Unexpected error:', error.message);
402
+ }
403
+ }
262
404
  ```
263
405
 
264
406
  ## Development
@@ -274,6 +416,9 @@ npm run test:coverage
274
416
 
275
417
  # Watch mode for development
276
418
  npm run test:watch
419
+
420
+ # Run specific test file
421
+ npm test -- discrete.test.js
277
422
  ```
278
423
 
279
424
  ### Building and Publishing
@@ -287,23 +432,58 @@ npm login
287
432
  npm publish --access public
288
433
  ```
289
434
 
290
- ## What's New in v0.5.0
435
+ ## Migration from v0.x
436
+
437
+ ### Key Changes in v1.0.0
438
+
439
+ 1. **Environment Variable**: `VALENCE_API_KEY` is now the standard (consistent naming)
440
+ 2. **Unified Client**: Single `ValenceClient` class with nested APIs
441
+ 3. **Streaming API**: New WebSocket-based real-time emotion detection
442
+ 4. **Rate Limiting**: New API for monitoring usage
443
+ 5. **Timeline Data**: Async API now returns detailed timestamp information
444
+ 6. **Model Selection**: Explicit model parameter for 4emotions or 7emotions
445
+
446
+ ### Updating Your Code
447
+
448
+ ```javascript
449
+ // Old (v0.x)
450
+ import { predictDiscreteAudioEmotion } from 'valenceai';
451
+ const result = await predictDiscreteAudioEmotion('file.wav');
452
+
453
+ // New (v1.0.0)
454
+ import { ValenceClient } from 'valenceai';
455
+ const client = new ValenceClient({ apiKey: 'your_key' });
456
+ const result = await client.discrete.emotions('file.wav', '4emotions');
457
+
458
+ // New streaming capability
459
+ const stream = client.streaming.connect('4emotions');
460
+ stream.on('prediction', callback);
461
+ await stream.connect();
462
+ ```
463
+
464
+ ### Breaking Changes
291
465
 
292
- ### Major Changes
293
- - **Unified Client Architecture** - Single `ValenceClient` with nested `discrete` and `asynch` clients
294
- - **API Restructure**: `predictDiscreteAudioEmotion()` → `client.discrete.emotions()`
295
- - **API Restructure**: `uploadAsyncAudio()` → `client.asynch.upload()`
296
- - **API Restructure**: `getEmotions()` → `client.asynch.emotions()`
297
- - **Single Import**: `import { ValenceClient } from 'valenceai'`
466
+ - `predictDiscreteAudioEmotion()` → `client.discrete.emotions()`
467
+ - `uploadAsyncAudio()` → `client.asynch.upload()`
468
+ - `getEmotions()` → `client.asynch.emotions()`
469
+ - All methods now require creating a `ValenceClient` instance first
470
+ - Model parameter is now required and explicit
298
471
 
299
- ### Benefits
300
- - **API Symmetry** - Identical structure to Python SDK
301
- - **Intuitive Organization** - Related methods grouped together
302
- - **Consistent Naming** - Same method names across Python and JavaScript
303
- - **Enhanced Documentation** - Updated examples and migration guide
304
- - **Maintained Quality** - All existing functionality preserved
472
+ See [CHANGELOG.md](./CHANGELOG.md) for complete migration guide.
305
473
 
306
- See [CHANGELOG.md](./CHANGELOG.md) for complete details and migration guide.
474
+ ## TypeScript Support
475
+
476
+ The SDK includes comprehensive JSDoc annotations for full TypeScript IntelliSense:
477
+
478
+ ```typescript
479
+ import { ValenceClient } from 'valenceai';
480
+
481
+ const client: ValenceClient = new ValenceClient({ apiKey: 'your_key' });
482
+
483
+ // Full type inference and autocomplete
484
+ const result = await client.discrete.emotions('audio.wav', '4emotions');
485
+ // result.dominant_emotion is typed
486
+ ```
307
487
 
308
488
  ## Contributing
309
489
 
@@ -317,9 +497,9 @@ We welcome contributions! Please:
317
497
 
318
498
  ## Support
319
499
 
320
- - **Documentation**: See [API Reference](#api-reference) above
500
+ - **Documentation**: [API Documentation](https://docs.getvalenceai.com)
321
501
  - **Issues**: [GitHub Issues](https://github.com/valencevibrations/valence-sdk-js/issues)
322
- - **Questions**: Contact [Valence AI](https://getvalenceai.com)
502
+ - **Questions**: [Valence AI Support](https://www.getvalenceai.com/contact)
323
503
 
324
504
  ## License
325
505