ultron-ai-sdk 1.1.3 → 1.1.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +65 -0
- package/dist/index.cjs +1 -1
- package/dist/index.mjs +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -370,8 +370,73 @@ Set the background image of the target HTML block in which the character is present.

```javascript
sceneCanvas.setBackgroundImage('IMAGE_URL')
```

---
## 🔁 Voice-to-Voice AI Wrapper Mode (`AIWrapperMode`)

The SDK can be used as a **seamless voice interface** for any AI system that accepts **text input** and returns **text responses**. With `AIWrapperMode` enabled, the SDK will:

1. Listen to the user's **voice input**
2. Automatically transcribe the speech to **text**
3. Let the developer **forward that text to any AI API**
4. Speak back the **AI-generated text** response using the character's voice

This makes your character a complete **voice-controlled AI interface** with just a few lines of code.

---
### ✨ Getting Started

Enable the wrapper mode:

```js
character.AIWrapperMode = true;
```

Then add a handler to process transcribed speech:

```js
character.onSpeechTranscribed = async (text) => {
  console.log("User said:", text);

  // Send the transcribed text to your AI API
  const response = await fetch("https://your-ai-api.com/endpoint", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: text }),
  });

  const data = await response.json();

  // Speak the AI response using the character
  character.speak(data.reply);
};
```

---
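The handler pattern above can be exercised without any backend at all. The sketch below is illustrative only: `character` is a plain object standing in for the SDK's character instance, and `localAI` is a hypothetical stub replacing the `fetch` call, so the flow runs as-is in Node:

```js
// Stand-in for the SDK character: records what would be spoken aloud.
const character = {
  AIWrapperMode: true,
  spoken: [],
  speak(text) { this.spoken.push(text); },
};

// Stub "AI backend": a canned reply instead of a network round trip.
async function localAI(prompt) {
  return { reply: prompt.endsWith("?") ? "Good question!" : `You said: ${prompt}` };
}

// Same handler shape as above, with localAI() in place of fetch().
character.onSpeechTranscribed = async (text) => {
  const data = await localAI(text);
  character.speak(data.reply);
};

// Simulate one transcription event:
character.onSpeechTranscribed("hello there").then(() => {
  console.log(character.spoken); // → [ 'You said: hello there' ]
});
```

Swapping `localAI` back for the real `fetch` call restores the production flow without changing the handler's shape.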
### 🧠 Example Use Cases

* Wrap OpenAI's GPT API, Cohere, or any custom chatbot backend
* Create hands-free, conversational assistants
* Build AI-powered characters for education, sales, or support

---
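The first use case maps directly onto the generic handler shown earlier. As a sketch, the request builder below targets OpenAI's Chat Completions endpoint; the URL, model name, and response shape follow OpenAI's published API but should be verified against current documentation, and the surrounding character wiring is unchanged:

```js
// Build a fetch request for OpenAI's Chat Completions API from transcribed text.
// Endpoint, model, and payload shape are assumptions based on OpenAI's public v1 API.
function buildOpenAIRequest(text, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: text }],
      }),
    },
  };
}

// Inside onSpeechTranscribed, roughly:
//   const { url, options } = buildOpenAIRequest(text, process.env.OPENAI_API_KEY);
//   const res = await fetch(url, options);
//   const data = await res.json();
//   character.speak(data.choices[0].message.content); // per OpenAI's response shape
```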
### 🗣️ Notes

* Ensure the character has voice input and output properly configured
* You can use `character.mute()` or `character.listen()` as needed to control the flow
* `onSpeechTranscribed` only fires when speech input has been successfully captured and transcribed

---
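To make the second note concrete: muting while the character speaks keeps it from transcribing its own audio. This sketch assumes `mute()` stops capture and `listen()` resumes it, as the notes imply; `character` is again a stub so the pattern runs standalone:

```js
// Minimal stub exposing the flow-control methods the notes mention.
const character = {
  listening: true,
  mute() { this.listening = false; },
  listen() { this.listening = true; },
  async speak(text) { /* real SDK: play TTS audio for `text` here */ },
};

// Keep the mic off for the duration of playback, then resume.
async function speakSafely(text) {
  character.mute();       // don't transcribe the character's own voice
  try {
    await character.speak(text);
  } finally {
    character.listen();   // ready for the user's next turn
  }
}

speakSafely("Hello!").then(() => {
  console.log(character.listening); // → true (mic resumed after playback)
});
```

The `try`/`finally` ensures listening resumes even if playback throws.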
### Support

For technical support or questions, please contact our support team or visit our documentation portal.