@makemore/agent-frontend 1.3.0 → 1.5.0

package/README.md CHANGED
@@ -23,14 +23,17 @@ Most chat widgets are tightly coupled to specific frameworks or require complex
23
23
  | Feature | Description |
24
24
  |---------|-------------|
25
25
  | 💬 **Real-time Streaming** | SSE-based message streaming for instant, token-by-token responses |
26
+ | 🔊 **Text-to-Speech** | ElevenLabs integration with secure Django proxy support |
26
27
  | 🎨 **Theming** | Customize colors, titles, messages, and position |
27
28
  | 🌙 **Dark Mode** | Automatic dark mode based on system preferences |
28
29
  | 📱 **Responsive** | Works seamlessly on desktop and mobile |
29
30
  | 🔧 **Debug Mode** | Toggle visibility of tool calls and results |
30
- | 🤖 **Demo Flows** | Built-in auto-run mode for showcasing agent journeys |
31
+ | 🤖 **Demo Flows** | Built-in auto-run with automatic, confirm, and manual modes |
31
32
  | 🔒 **Sessions** | Automatic anonymous session creation and management |
32
33
  | 💾 **Persistence** | Conversations persist across page reloads via localStorage |
33
34
  | 🛡️ **Isolated CSS** | Scoped styles that won't leak into or from your page |
35
+ | 🎯 **Configurable APIs** | Customize backend endpoints to match your server structure |
36
+ | 📝 **Enhanced Markdown** | Optional rich markdown with tables, code blocks, and syntax highlighting |
34
37
 
35
38
  ## Installation
36
39
 
@@ -85,7 +88,7 @@ The widget automatically detects and uses the enhanced markdown parser if availa
85
88
 
86
89
  ## Quick Start
87
90
 
88
- ### Initialize the widget
91
+ ### Basic Setup
89
92
 
90
93
  ```html
91
94
  <script>
@@ -98,6 +101,23 @@ The widget automatically detects and uses the enhanced markdown parser if availa
98
101
  </script>
99
102
  ```
100
103
 
104
+ ### With Text-to-Speech (Recommended: Django Proxy)
105
+
106
+ ```html
107
+ <script>
108
+ ChatWidget.init({
109
+ backendUrl: 'https://your-api.com',
110
+ agentKey: 'your-agent',
111
+ title: 'Voice-Enabled Chat',
112
+ primaryColor: '#0066cc',
113
+ enableTTS: true,
114
+ ttsProxyUrl: 'https://your-api.com/api/tts/speak/',
115
+ });
116
+ </script>
117
+ ```
118
+
119
+ See `django-tts-example.py` for the complete Django backend implementation.
120
+
101
121
  ### With custom API paths
102
122
 
103
123
  ```html
@@ -139,6 +159,111 @@ The widget automatically detects and uses the enhanced markdown parser if availa
139
159
  | `apiPaths` | object | See below | API endpoint paths (customizable for different backends) |
140
160
  | `autoRunMode` | string | `'automatic'` | Demo flow mode: `'automatic'`, `'confirm'`, or `'manual'` |
141
161
  | `autoRunDelay` | number | `1000` | Delay in milliseconds before auto-generating next message (automatic mode) |
162
+ | `enableTTS` | boolean | `false` | Enable text-to-speech for messages |
163
+ | `ttsProxyUrl` | string | `null` | Django proxy URL for TTS (recommended for security) |
164
+ | `elevenLabsApiKey` | string | `null` | ElevenLabs API key (only if not using proxy) |
165
+ | `ttsVoices` | object | `{ assistant: null, user: null }` | Voice IDs (only if not using proxy) |
166
+ | `ttsModel` | string | `'eleven_turbo_v2_5'` | ElevenLabs model (only if not using proxy) |
167
+ | `ttsSettings` | object | See below | ElevenLabs voice settings (only if not using proxy) |
168
+ | `availableVoices` | array | `[]` | List of available voices (auto-populated from ElevenLabs API) |
169
+ | `showClearButton` | boolean | `true` | Show clear conversation button in header |
170
+ | `showDebugButton` | boolean | `true` | Show debug mode toggle button in header |
171
+ | `showTTSButton` | boolean | `true` | Show TTS toggle button in header |
172
+ | `showVoiceSettings` | boolean | `true` | Show voice settings button in header (direct API only) |
173
+ | `showExpandButton` | boolean | `true` | Show expand/minimize button in header |
174
+
175
+ ### Text-to-Speech (ElevenLabs)
176
+
177
+ Add realistic voice narration to your chat widget using ElevenLabs. There are two integration options:
178
+
179
+ #### Option 1: Secure Django Proxy (Recommended)
180
+
181
+ Keep your API key secure on the server:
182
+
183
+ ```javascript
184
+ ChatWidget.init({
185
+ enableTTS: true,
186
+ ttsProxyUrl: 'https://your-backend.com/api/tts/speak/',
187
+ // No API key or voice IDs needed - configured on server
188
+ });
189
+ ```
190
+
191
+ **Django Setup:**
192
+
193
+ See `django-tts-example.py` for a complete Django REST Framework implementation. Quick setup:
194
+
195
+ 1. Install: `pip install requests`
196
+ 2. Add to `settings.py`:
197
+ ```python
198
+ ELEVENLABS_API_KEY = 'your_api_key_here'
199
+ ELEVENLABS_VOICES = {
200
+ 'assistant': 'EXAVITQu4vr4xnSDxMaL', # Bella
201
+ 'user': 'pNInz6obpgDQGcFmaJgB', # Adam
202
+ }
203
+ ```
204
+ 3. Add view from `django-tts-example.py` to your Django app
205
+ 4. Add URL route: `path('api/tts/speak/', views.text_to_speech)`
206
+
207
+ #### Option 2: Direct API (Client-Side)
208
+
209
+ For testing or simple deployments:
210
+
211
+ ```javascript
212
+ ChatWidget.init({
213
+ enableTTS: true,
214
+ elevenLabsApiKey: 'your_elevenlabs_api_key', // ⚠️ Exposed to client
215
+ ttsVoices: {
216
+ assistant: 'EXAVITQu4vr4xnSDxMaL', // Bella
217
+ user: 'pNInz6obpgDQGcFmaJgB', // Adam
218
+ },
219
+ ttsModel: 'eleven_turbo_v2_5',
220
+ ttsSettings: {
221
+ stability: 0.5,
222
+ similarity_boost: 0.75,
223
+ style: 0.0,
224
+ use_speaker_boost: true,
225
+ },
226
+ });
227
+ ```
228
+
229
+ **Features:**
230
+ - Speaks assistant responses automatically
231
+ - Speaks simulated user messages in demo mode
232
+ - Queues messages to prevent overlap
233
+ - Waits for speech to finish before continuing demo (automatic mode)
234
+ - Toggle TTS on/off with button in header
235
+ - Visual indicator when speaking (pulsing icon)
236
+
237
+ **Get Voice IDs:**
238
+ 1. Go to https://elevenlabs.io/app/voice-library
239
+ 2. Choose voices and copy their IDs
240
+ 3. Or use the API: https://api.elevenlabs.io/v1/voices
241
+
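If you use the API route, its JSON response can be reduced to name-to-ID pairs in a few lines. A minimal sketch, assuming only the response shape the widget itself relies on in `fetchAvailableVoices` (a top-level `voices` array whose entries carry `voice_id` and `name`); `extract_voice_ids` is a hypothetical helper:

```python
import json

def extract_voice_ids(api_response_text):
    """Map voice names to voice IDs from an ElevenLabs /v1/voices response.

    Reads the same fields the widget's fetchAvailableVoices uses:
    a top-level "voices" array with "voice_id" and "name" on each entry.
    """
    data = json.loads(api_response_text)
    return {v["name"]: v["voice_id"] for v in data.get("voices", [])}

# Sample response body in the documented shape (not real API output):
sample = '{"voices": [{"voice_id": "EXAVITQu4vr4xnSDxMaL", "name": "Bella"}]}'
print(extract_voice_ids(sample))  # {'Bella': 'EXAVITQu4vr4xnSDxMaL'}
```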
242
+ **Control TTS:**
243
+ ```javascript
244
+ ChatWidget.toggleTTS(); // Toggle on/off
245
+ ChatWidget.stopSpeech(); // Stop current speech and clear queue
246
+ ChatWidget.setVoice('assistant', 'voice_id'); // Change assistant voice
247
+ ChatWidget.setVoice('user', 'voice_id'); // Change user voice
248
+ ```
249
+
250
+ **Voice Settings UI:**
251
+
252
+ When using direct API mode (not proxy), a voice settings button (🎙️) appears in the header. Click it to:
253
+ - Select assistant voice from dropdown
254
+ - Select customer voice for demo mode
255
+ - Choose from voices fetched automatically from your ElevenLabs account
256
+
257
+ **Customize Header Buttons:**
258
+ ```javascript
259
+ ChatWidget.init({
260
+ showClearButton: true, // Show/hide clear button
261
+ showDebugButton: true, // Show/hide debug button
262
+ showTTSButton: true, // Show/hide TTS toggle
263
+ showVoiceSettings: true, // Show/hide voice settings (direct API only)
264
+ showExpandButton: true, // Show/hide expand button
265
+ });
266
+ ```
142
267
 
143
268
  ### Demo Flow Control
144
269
 
@@ -233,6 +358,12 @@ ChatWidget.send('Hello, I need help!');
233
358
  // Clear the conversation
234
359
  ChatWidget.clearMessages();
235
360
 
361
+ // Text-to-speech controls
362
+ ChatWidget.toggleTTS(); // Toggle TTS on/off
363
+ ChatWidget.stopSpeech(); // Stop current speech and clear queue
364
+ ChatWidget.setVoice('assistant', 'voice_id'); // Change assistant voice
365
+ ChatWidget.setVoice('user', 'voice_id'); // Change user voice
366
+
236
367
  // Start a demo flow
237
368
  ChatWidget.startDemoFlow('quote');
238
369
 
@@ -377,6 +508,36 @@ agent-frontend/
377
508
 
378
509
  Requires: `EventSource` (SSE), `fetch`, `localStorage`
379
510
 
511
+ ## Version History
512
+
513
+ ### v1.4.0 (Latest)
514
+ - ✨ **Text-to-Speech**: ElevenLabs integration with secure Django proxy support
515
+ - 🔊 Automatic speech for assistant and simulated user messages
516
+ - 🎛️ Smart speech queuing to prevent overlap
517
+ - 🔐 Secure proxy approach keeps API keys on server
518
+
519
+ ### v1.3.0
520
+ - 🎮 **Demo Flow Control**: Three modes (automatic, confirm-next, manual)
521
+ - ⏱️ Configurable delay for automatic mode (0-5000ms)
522
+ - 🎯 Real-time mode switching via dropdown menu
523
+ - ▶️ Continue button for confirm mode
524
+
525
+ ### v1.2.0
526
+ - 📝 **Enhanced Markdown**: Optional rich markdown with tables and code blocks
527
+ - 🎨 Syntax highlighting support via highlight.js
528
+ - 🔧 Automatic detection of markdown addon
529
+
530
+ ### v1.1.0
531
+ - 🔌 **Configurable API Paths**: Customize backend endpoints
532
+ - 🛠️ Support for different backend URL structures
533
+
534
+ ### v1.0.0
535
+ - 🎉 Initial release
536
+ - 💬 Real-time SSE streaming
537
+ - 🎨 Theming and customization
538
+ - 🤖 Demo flows
539
+ - 🔒 Session management
540
+
380
541
  ## License
381
542
 
382
543
  MIT © 2024
@@ -159,6 +159,108 @@
159
159
  color: #ffd700;
160
160
  }
161
161
 
162
+ .cw-btn-speaking {
163
+ animation: pulse-speaking 1.5s ease-in-out infinite;
164
+ }
165
+
166
+ @keyframes pulse-speaking {
167
+ 0%, 100% {
168
+ background: rgba(255, 255, 255, 0.3);
169
+ }
170
+ 50% {
171
+ background: rgba(255, 255, 255, 0.5);
172
+ }
173
+ }
174
+
175
+ /* Voice Settings */
176
+ .cw-voice-settings {
177
+ background: var(--cw-bg-muted);
178
+ border-bottom: 1px solid var(--cw-border);
179
+ animation: slideDown 0.2s ease-out;
180
+ }
181
+
182
+ @keyframes slideDown {
183
+ from {
184
+ max-height: 0;
185
+ opacity: 0;
186
+ }
187
+ to {
188
+ max-height: 200px;
189
+ opacity: 1;
190
+ }
191
+ }
192
+
193
+ .cw-voice-settings-header {
194
+ display: flex;
195
+ justify-content: space-between;
196
+ align-items: center;
197
+ padding: 8px 16px;
198
+ font-size: 13px;
199
+ font-weight: 600;
200
+ color: var(--cw-text);
201
+ border-bottom: 1px solid var(--cw-border);
202
+ }
203
+
204
+ .cw-voice-settings-close {
205
+ all: initial;
206
+ font-family: inherit;
207
+ background: none;
208
+ border: none;
209
+ color: var(--cw-text-muted);
210
+ cursor: pointer;
211
+ font-size: 16px;
212
+ padding: 4px;
213
+ line-height: 1;
214
+ transition: color 0.15s;
215
+ }
216
+
217
+ .cw-voice-settings-close:hover {
218
+ color: var(--cw-text);
219
+ }
220
+
221
+ .cw-voice-settings-content {
222
+ padding: 12px 16px;
223
+ display: flex;
224
+ flex-direction: column;
225
+ gap: 12px;
226
+ }
227
+
228
+ .cw-voice-setting {
229
+ display: flex;
230
+ flex-direction: column;
231
+ gap: 4px;
232
+ }
233
+
234
+ .cw-voice-setting label {
235
+ font-size: 12px;
236
+ font-weight: 500;
237
+ color: var(--cw-text-muted);
238
+ }
239
+
240
+ .cw-voice-select {
241
+ all: initial;
242
+ font-family: inherit;
243
+ width: 100%;
244
+ padding: 8px 12px;
245
+ border: 1px solid var(--cw-border);
246
+ border-radius: 6px;
247
+ background: var(--cw-bg);
248
+ color: var(--cw-text);
249
+ font-size: 13px;
250
+ cursor: pointer;
251
+ transition: border-color 0.15s;
252
+ }
253
+
254
+ .cw-voice-select:hover {
255
+ border-color: var(--cw-primary);
256
+ }
257
+
258
+ .cw-voice-select:focus {
259
+ outline: none;
260
+ border-color: var(--cw-primary);
261
+ box-shadow: 0 0 0 3px rgba(0, 102, 204, 0.1);
262
+ }
263
+
162
264
  /* Status bar */
163
265
  .cw-status-bar {
164
266
  display: flex;
@@ -46,6 +46,28 @@
46
46
  // Demo flow control
47
47
  autoRunDelay: 1000, // Delay in ms before auto-generating next message
48
48
  autoRunMode: 'automatic', // 'automatic', 'confirm', or 'manual'
49
+ // Text-to-speech (ElevenLabs)
50
+ enableTTS: false,
51
+ ttsProxyUrl: null, // If set, uses Django proxy instead of direct API calls
52
+ elevenLabsApiKey: null, // Only needed if not using proxy
53
+ ttsVoices: {
54
+ assistant: null, // ElevenLabs voice ID for assistant (not needed if using proxy)
55
+ user: null, // ElevenLabs voice ID for simulated user (not needed if using proxy)
56
+ },
57
+ ttsModel: 'eleven_turbo_v2_5', // ElevenLabs model (not needed if using proxy)
58
+ ttsSettings: {
59
+ stability: 0.5,
60
+ similarity_boost: 0.75,
61
+ style: 0.0,
62
+ use_speaker_boost: true,
63
+ },
64
+ availableVoices: [], // List of available voices for UI dropdown
65
+ // UI visibility controls
66
+ showClearButton: true,
67
+ showDebugButton: true,
68
+ showTTSButton: true,
69
+ showVoiceSettings: true,
70
+ showExpandButton: true,
49
71
  };
50
72
 
51
73
  // State
@@ -64,6 +86,10 @@
64
86
  sessionToken: null,
65
87
  error: null,
66
88
  eventSource: null,
89
+ currentAudio: null,
90
+ isSpeaking: false,
91
+ speechQueue: [],
92
+ voiceSettingsOpen: false,
67
93
  };
68
94
 
69
95
  // DOM elements
@@ -138,6 +164,164 @@
138
164
  }
139
165
  }
140
166
 
167
+ // ============================================================================
168
+ // Text-to-Speech (ElevenLabs)
169
+ // ============================================================================
170
+
171
+ async function speakText(text, role) {
172
+ if (!config.enableTTS) return;
173
+
174
+ // Check if we have either proxy or direct API access
175
+ if (!config.ttsProxyUrl && !config.elevenLabsApiKey) return;
176
+
177
+ // If using direct API, check for voice ID
178
+ if (!config.ttsProxyUrl) {
179
+ const voiceId = role === 'assistant' ? config.ttsVoices.assistant : config.ttsVoices.user;
180
+ if (!voiceId) return;
181
+ }
182
+
183
+ // Add to queue
184
+ state.speechQueue.push({ text, role });
185
+
186
+ // Process queue if not already speaking
187
+ if (!state.isSpeaking) {
188
+ processSpeechQueue();
189
+ }
190
+ }
191
+
192
+ async function processSpeechQueue() {
193
+ if (state.speechQueue.length === 0) {
194
+ state.isSpeaking = false;
195
+ render();
196
+
197
+ // If auto-run is waiting for speech to finish, continue
198
+ if (state.autoRunActive && state.autoRunPaused && config.autoRunMode === 'automatic') {
199
+ setTimeout(() => {
200
+ if (state.autoRunActive && !state.isSpeaking) {
201
+ continueAutoRun();
202
+ }
203
+ }, config.autoRunDelay);
204
+ }
205
+ return;
206
+ }
207
+
208
+ state.isSpeaking = true;
209
+ render();
210
+
211
+ const { text, role } = state.speechQueue.shift();
212
+
213
+ try {
214
+ let response;
215
+
216
+ if (config.ttsProxyUrl) {
217
+ // Use Django proxy
218
+ response = await fetch(config.ttsProxyUrl, {
219
+ method: 'POST',
220
+ headers: {
221
+ 'Content-Type': 'application/json',
222
+ ...(state.sessionToken ? { [config.anonymousTokenHeader]: state.sessionToken } : {}),
223
+ },
224
+ body: JSON.stringify({
225
+ text: text,
226
+ role: role,
227
+ }),
228
+ });
229
+ } else {
230
+ // Direct ElevenLabs API call
231
+ const voiceId = role === 'assistant' ? config.ttsVoices.assistant : config.ttsVoices.user;
232
+ response = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
233
+ method: 'POST',
234
+ headers: {
235
+ 'Accept': 'audio/mpeg',
236
+ 'Content-Type': 'application/json',
237
+ 'xi-api-key': config.elevenLabsApiKey,
238
+ },
239
+ body: JSON.stringify({
240
+ text: text,
241
+ model_id: config.ttsModel,
242
+ voice_settings: config.ttsSettings,
243
+ }),
244
+ });
245
+ }
246
+
247
+ if (!response.ok) {
248
+ throw new Error(`TTS API error: ${response.status}`);
249
+ }
250
+
251
+ const audioBlob = await response.blob();
252
+ const audioUrl = URL.createObjectURL(audioBlob);
253
+ const audio = new Audio(audioUrl);
254
+
255
+ state.currentAudio = audio;
256
+
257
+ audio.onended = () => {
258
+ URL.revokeObjectURL(audioUrl);
259
+ state.currentAudio = null;
260
+ processSpeechQueue();
261
+ };
262
+
263
+ audio.onerror = () => {
264
+ console.error('[ChatWidget] Audio playback error');
265
+ URL.revokeObjectURL(audioUrl);
266
+ state.currentAudio = null;
267
+ processSpeechQueue();
268
+ };
269
+
270
+ await audio.play();
271
+ } catch (err) {
272
+ console.error('[ChatWidget] TTS error:', err);
273
+ state.currentAudio = null;
274
+ processSpeechQueue();
275
+ }
276
+ }
277
+
278
+ function stopSpeech() {
279
+ if (state.currentAudio) {
280
+ state.currentAudio.pause();
281
+ state.currentAudio = null;
282
+ }
283
+ state.speechQueue = [];
284
+ state.isSpeaking = false;
285
+ render();
286
+ }
287
+
288
+ function toggleTTS() {
289
+ config.enableTTS = !config.enableTTS;
290
+ if (!config.enableTTS) {
291
+ stopSpeech();
292
+ }
293
+ render();
294
+ }
295
+
296
+ function toggleVoiceSettings() {
297
+ state.voiceSettingsOpen = !state.voiceSettingsOpen;
298
+ render();
299
+ }
300
+
301
+ function setVoice(role, voiceId) {
302
+ config.ttsVoices[role] = voiceId;
303
+ render();
304
+ }
305
+
306
+ async function fetchAvailableVoices() {
307
+ if (!config.elevenLabsApiKey) return;
308
+
309
+ try {
310
+ const response = await fetch('https://api.elevenlabs.io/v1/voices', {
311
+ headers: {
312
+ 'xi-api-key': config.elevenLabsApiKey,
313
+ },
314
+ });
315
+
316
+ if (response.ok) {
317
+ const data = await response.json();
318
+ config.availableVoices = data.voices || [];
319
+ }
320
+ } catch (err) {
321
+ console.error('[ChatWidget] Failed to fetch voices:', err);
322
+ }
323
+ }
324
+
141
325
  // ============================================================================
142
326
  // Session Management
143
327
  // ============================================================================
@@ -347,10 +531,21 @@
347
531
  state.eventSource = null;
348
532
  render();
349
533
 
534
+ // Speak assistant message if TTS enabled
535
+ if (assistantContent && !state.error) {
536
+ speakText(assistantContent, 'assistant');
537
+ }
538
+
350
539
  // Trigger auto-run if enabled
351
540
  if (state.autoRunActive && !state.error) {
352
541
  if (config.autoRunMode === 'automatic') {
353
- setTimeout(() => triggerAutoRun(), config.autoRunDelay);
542
+ // Wait for speech to finish before continuing
543
+ if (config.enableTTS && assistantContent) {
544
+ state.autoRunPaused = true;
545
+ // processSpeechQueue will continue when done
546
+ } else {
547
+ setTimeout(() => triggerAutoRun(), config.autoRunDelay);
548
+ }
354
549
  } else if (config.autoRunMode === 'confirm') {
355
550
  state.autoRunPaused = true;
356
551
  render();
@@ -402,6 +597,12 @@
402
597
  const data = await response.json();
403
598
  if (data.response) {
404
599
  state.isSimulating = false;
600
+
601
+ // Speak simulated user message if TTS enabled
602
+ if (config.enableTTS && config.ttsVoices.user) {
603
+ await speakText(data.response, 'user');
604
+ }
605
+
405
606
  await sendMessage(data.response);
406
607
  return;
407
608
  }
@@ -530,6 +731,44 @@
530
731
  `;
531
732
  }
532
733
 
734
+ function renderVoiceSettings() {
735
+ if (!state.voiceSettingsOpen) return '';
736
+
737
+ const voiceOptions = (role) => {
738
+ if (config.availableVoices.length === 0) {
739
+ return '<option value="">Loading voices...</option>';
740
+ }
741
+ return config.availableVoices.map(voice => `
742
+ <option value="${voice.voice_id}" ${config.ttsVoices[role] === voice.voice_id ? 'selected' : ''}>
743
+ ${escapeHtml(voice.name)}
744
+ </option>
745
+ `).join('');
746
+ };
747
+
748
+ return `
749
+ <div class="cw-voice-settings">
750
+ <div class="cw-voice-settings-header">
751
+ <span>🎙️ Voice Settings</span>
752
+ <button class="cw-voice-settings-close" data-action="toggle-voice-settings">✕</button>
753
+ </div>
754
+ <div class="cw-voice-settings-content">
755
+ <div class="cw-voice-setting">
756
+ <label>Assistant Voice</label>
757
+ <select class="cw-voice-select" data-role="assistant" onchange="ChatWidget.setVoice('assistant', this.value)">
758
+ ${voiceOptions('assistant')}
759
+ </select>
760
+ </div>
761
+ <div class="cw-voice-setting">
762
+ <label>Customer Voice (Demo)</label>
763
+ <select class="cw-voice-select" data-role="user" onchange="ChatWidget.setVoice('user', this.value)">
764
+ ${voiceOptions('user')}
765
+ </select>
766
+ </div>
767
+ </div>
768
+ </div>
769
+ `;
770
+ }
771
+
533
772
  function renderJourneyDropdown() {
534
773
  if (!config.enableAutoRun || Object.keys(config.journeyTypes).length === 0) {
535
774
  return '';
@@ -660,13 +899,15 @@
660
899
  <div class="cw-header" style="background-color: ${config.primaryColor}">
661
900
  <span class="cw-title">${escapeHtml(config.title)}</span>
662
901
  <div class="cw-header-actions">
663
- <button class="cw-header-btn" data-action="clear" title="Clear Conversation" ${state.isLoading || state.messages.length === 0 ? 'disabled' : ''}>
664
- <svg class="cw-icon-sm" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
665
- <polyline points="3 6 5 6 21 6"></polyline>
666
- <path d="M19 6v14a2 2 0 0 1-2 2H7a2 2 0 0 1-2-2V6m3 0V4a2 2 0 0 1 2-2h4a2 2 0 0 1 2 2v2"></path>
667
- </svg>
668
- </button>
669
- ${config.enableDebugMode ? `
902
+ ${config.showClearButton ? `
903
+ <button class="cw-header-btn" data-action="clear" title="Clear Conversation" ${state.isLoading || state.messages.length === 0 ? 'disabled' : ''}>
904
+ <svg class="cw-icon-sm" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
905
+ <polyline points="3 6 5 6 21 6"></polyline>
906
+ <path d="M19 6v14a2 2 0 0 1-2 2H7a2 2 0 0 1-2-2V6m3 0V4a2 2 0 0 1 2-2h4a2 2 0 0 1 2 2v2"></path>
907
+ </svg>
908
+ </button>
909
+ ` : ''}
910
+ ${config.showDebugButton && config.enableDebugMode ? `
670
911
  <button class="cw-header-btn ${state.debugMode ? 'cw-btn-active' : ''}" data-action="toggle-debug" title="${state.debugMode ? 'Hide Debug Info' : 'Show Debug Info'}">
671
912
  <svg class="cw-icon-sm ${state.debugMode ? 'cw-icon-warning' : ''}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
672
913
  <path d="M12 2a10 10 0 1 0 10 10A10 10 0 0 0 12 2zm0 18a8 8 0 1 1 8-8 8 8 0 0 1-8 8z"></path>
@@ -674,15 +915,30 @@
674
915
  </svg>
675
916
  </button>
676
917
  ` : ''}
918
+ ${config.showTTSButton && (config.elevenLabsApiKey || config.ttsProxyUrl) ? `
919
+ <button class="cw-header-btn ${config.enableTTS ? 'cw-btn-active' : ''} ${state.isSpeaking ? 'cw-btn-speaking' : ''}"
920
+ data-action="toggle-tts"
921
+ title="${config.enableTTS ? (state.isSpeaking ? 'Speaking...' : 'TTS Enabled') : 'TTS Disabled'}">
922
+ ${state.isSpeaking ? '🔊' : (config.enableTTS ? '🔉' : '🔇')}
923
+ </button>
924
+ ` : ''}
925
+ ${config.showVoiceSettings && config.elevenLabsApiKey && !config.ttsProxyUrl ? `
926
+ <button class="cw-header-btn ${state.voiceSettingsOpen ? 'cw-btn-active' : ''}" data-action="toggle-voice-settings" title="Voice Settings">
927
+ 🎙️
928
+ </button>
929
+ ` : ''}
677
930
  ${renderJourneyDropdown()}
678
- <button class="cw-header-btn" data-action="toggle-expand" title="${state.isExpanded ? 'Minimize' : 'Expand'}">
679
- ${state.isExpanded ? '' : ''}
680
- </button>
931
+ ${config.showExpandButton ? `
932
+ <button class="cw-header-btn" data-action="toggle-expand" title="${state.isExpanded ? 'Minimize' : 'Expand'}">
933
+ ${state.isExpanded ? '⊖' : '⊕'}
934
+ </button>
935
+ ` : ''}
681
936
  <button class="cw-header-btn" data-action="close" title="Close">
682
937
 
683
938
  </button>
684
939
  </div>
685
940
  </div>
941
+ ${renderVoiceSettings()}
686
942
  ${statusBar}
687
943
  <div class="cw-messages" id="cw-messages">
688
944
  ${messagesHtml}
@@ -721,6 +977,8 @@
721
977
  case 'close': closeWidget(); break;
722
978
  case 'toggle-expand': toggleExpand(); break;
723
979
  case 'toggle-debug': toggleDebugMode(); break;
980
+ case 'toggle-tts': toggleTTS(); break;
981
+ case 'toggle-voice-settings': toggleVoiceSettings(); break;
724
982
  case 'clear': clearMessages(); break;
725
983
  case 'stop-autorun': stopAutoRun(); break;
726
984
  case 'continue-autorun': continueAutoRun(); break;
@@ -797,6 +1055,11 @@
797
1055
  // Initial render
798
1056
  render();
799
1057
 
1058
+ // Fetch available voices if using direct API
1059
+ if (config.elevenLabsApiKey && !config.ttsProxyUrl) {
1060
+ fetchAvailableVoices();
1061
+ }
1062
+
800
1063
  console.log('[ChatWidget] Initialized with config:', config);
801
1064
  }
802
1065
 
@@ -838,6 +1101,9 @@
838
1101
  continueAutoRun,
839
1102
  setAutoRunMode,
840
1103
  setAutoRunDelay,
1104
+ toggleTTS,
1105
+ stopSpeech,
1106
+ setVoice,
841
1107
  getState: () => ({ ...state }),
842
1108
  getConfig: () => ({ ...config }),
843
1109
  };
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@makemore/agent-frontend",
3
- "version": "1.3.0",
3
+ "version": "1.5.0",
4
4
  "description": "A standalone, zero-dependency chat widget for AI agents. Embed conversational AI into any website with a single script tag.",
5
5
  "main": "dist/chat-widget.js",
6
6
  "files": [