ultron-ai-sdk 1.1.3 → 1.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -370,8 +370,81 @@ Set the background image of target html block in which the character is present.
 
  ```javascript
  sceneCanvas.setBackgroundImage('IMAGE_URL')
+
+ ```
+
+ ---
+
+ ## 🔁 Voice-to-Voice AI Wrapper Mode (`AIWrapperMode`)
+
+ The SDK can be used as a **seamless voice interface** for any AI system that accepts **text input and returns text responses**. With `AIWrapperMode` enabled, the SDK will:
+
+ 1. Listen to the user's **voice input**
+ 2. Automatically transcribe the speech to **text**
+ 3. Let you **forward that text to any AI API**
+ 4. Speak the **AI-generated text** response back using the character's voice
+
+ This makes your character a complete **voice-controlled AI interface** with just a few lines of code.
+
+ ---
+
+ ### ✨ Getting Started
+
+ Enable the wrapper mode:
+
+ ```js
+ character.AIWrapperMode = true;
  ```
 
+ Then add a handler to process the transcribed speech:
+
+ ```js
+ character.onSpeechTranscribed = async (text) => {
+   console.log("User said:", text);
+
+   // ----- Your AI workflow -----
+   // This can be any text-to-text API, such as OpenAI's or your own agentic backend.
+   // Send the transcribed text to your AI API:
+   const response = await fetch("https://your-ai-api.com/endpoint", {
+     method: "POST",
+     headers: { "Content-Type": "application/json" },
+     body: JSON.stringify({ prompt: text }),
+   });
+
+   const data = await response.json();
+   const AIResponseText = data.responseText;
+   // ----------------------------
+
+   // Speak the AI response using the character's voice
+   character.say(AIResponseText);
+ };
+ ```
+
+ ---
+
+ ### 🧠 Example Use Cases
+
+ * Wrap OpenAI's GPT API, Cohere, or any custom chatbot backend (see the sketch after this list)
+ * Create hands-free, conversational assistants
+ * Build AI-powered characters for education, sales, or support
+
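+ As a rough illustration of the first use case, here is a minimal sketch that points the handler at OpenAI's Chat Completions REST endpoint instead of a custom backend. The endpoint URL, request body, and response shape follow OpenAI's public HTTP API; the model name and `OPENAI_API_KEY` value are placeholders for you to replace, and only `character.onSpeechTranscribed` and `character.say` come from this SDK.
+
+ ```js
+ const OPENAI_API_KEY = "YOUR_API_KEY"; // placeholder: your own OpenAI key
+
+ character.onSpeechTranscribed = async (text) => {
+   // Forward the transcript to OpenAI's Chat Completions API
+   // (any text-to-text backend works the same way).
+   const response = await fetch("https://api.openai.com/v1/chat/completions", {
+     method: "POST",
+     headers: {
+       "Content-Type": "application/json",
+       Authorization: `Bearer ${OPENAI_API_KEY}`,
+     },
+     body: JSON.stringify({
+       model: "gpt-4o-mini", // placeholder model name
+       messages: [{ role: "user", content: text }],
+     }),
+   });
+
+   const data = await response.json();
+   const reply = data.choices[0].message.content;
+
+   // Speak the AI reply in the character's voice
+   character.say(reply);
+ };
+ ```
+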
+ ---
+
+ ### 🗣️ Notes
+
+ * Ensure the character's voice input and output are properly configured
+ * You can use `character.mute()` or `character.listen()` as needed to control the flow (see the sketch after this list)
+ * `onSpeechTranscribed` only fires after speech input has been successfully captured and transcribed
+
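+ For example, you may want to pause listening while the character is speaking so it does not transcribe its own voice. The sketch below shows one way to structure that, assuming `character.mute()` suspends voice capture and `character.listen()` resumes it; check the SDK's method reference for their exact behaviour. `getAIReply` is a hypothetical helper standing in for the text-to-text call shown earlier, and if `character.say()` does not block until playback finishes, you may need to delay `character.listen()` accordingly.
+
+ ```js
+ // Hypothetical helper: replace with any text-to-text AI call (see the examples above).
+ async function getAIReply(text) {
+   const response = await fetch("https://your-ai-api.com/endpoint", {
+     method: "POST",
+     headers: { "Content-Type": "application/json" },
+     body: JSON.stringify({ prompt: text }),
+   });
+   const data = await response.json();
+   return data.responseText;
+ }
+
+ character.onSpeechTranscribed = async (text) => {
+   // Stop capturing audio while the reply is fetched and spoken,
+   // so the character does not pick up its own voice.
+   character.mute();
+
+   const reply = await getAIReply(text);
+   character.say(reply);
+
+   // Resume listening for the next user utterance.
+   character.listen();
+ };
+ ```
+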
+ ---
+
  ### Support
 
  For technical support or questions, please contact our support team or visit our documentation portal.