@mastra/mcp-docs-server 1.1.32-alpha.4 → 1.1.32-alpha.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/adding-voice.md +14 -13
- package/.docs/docs/server/server-adapters.md +91 -8
- package/.docs/docs/voice/overview.md +66 -0
- package/.docs/docs/voice/speech-to-speech.md +45 -1
- package/.docs/docs/workspace/search.md +39 -0
- package/.docs/guides/getting-started/nestjs.md +238 -0
- package/.docs/reference/datasets/startExperiment.md +28 -2
- package/.docs/reference/index.md +2 -0
- package/.docs/reference/server/nestjs-adapter.md +169 -0
- package/.docs/reference/voice/aws-nova-sonic.md +247 -0
- package/.docs/reference/workspace/workspace-class.md +1 -1
- package/CHANGELOG.md +14 -0
- package/package.json +5 -5
package/.docs/docs/agents/adding-voice.md
CHANGED

@@ -327,19 +327,20 @@ For the complete list of supported AI SDK providers and their capabilities:
 
 Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-text (STT) capabilities:
 
-| Provider        | Package                         | Features
-| --------------- | ------------------------------- |
-| OpenAI          | `@mastra/voice-openai`          | TTS, STT
-| OpenAI Realtime | `@mastra/voice-openai-realtime` | Realtime speech-to-speech
-
-
-
-
-
-
-
-
-
+| Provider        | Package                         | Features                                  | Reference                                                          |
+| --------------- | ------------------------------- | ----------------------------------------- | ------------------------------------------------------------------ |
+| OpenAI          | `@mastra/voice-openai`          | TTS, STT                                  | [Documentation](https://mastra.ai/reference/voice/openai)          |
+| OpenAI Realtime | `@mastra/voice-openai-realtime` | Realtime speech-to-speech                 | [Documentation](https://mastra.ai/reference/voice/openai-realtime) |
+| AWS Nova Sonic  | `@mastra/voice-aws-nova-sonic`  | Realtime speech-to-speech via AWS Bedrock | [Documentation](https://mastra.ai/reference/voice/aws-nova-sonic)  |
+| ElevenLabs      | `@mastra/voice-elevenlabs`      | High-quality TTS                          | [Documentation](https://mastra.ai/reference/voice/elevenlabs)      |
+| PlayAI          | `@mastra/voice-playai`          | TTS                                       | [Documentation](https://mastra.ai/reference/voice/playai)          |
+| Google          | `@mastra/voice-google`          | TTS, STT                                  | [Documentation](https://mastra.ai/reference/voice/google)          |
+| Deepgram        | `@mastra/voice-deepgram`        | STT                                       | [Documentation](https://mastra.ai/reference/voice/deepgram)        |
+| Murf            | `@mastra/voice-murf`            | TTS                                       | [Documentation](https://mastra.ai/reference/voice/murf)            |
+| Speechify       | `@mastra/voice-speechify`       | TTS                                       | [Documentation](https://mastra.ai/reference/voice/speechify)       |
+| Sarvam          | `@mastra/voice-sarvam`          | TTS, STT                                  | [Documentation](https://mastra.ai/reference/voice/sarvam)          |
+| Azure           | `@mastra/voice-azure`           | TTS, STT                                  | [Documentation](https://mastra.ai/reference/voice/mastra-voice)    |
+| Cloudflare      | `@mastra/voice-cloudflare`      | TTS                                       | [Documentation](https://mastra.ai/reference/voice/mastra-voice)    |
 
 ## Next steps
 
package/.docs/docs/server/server-adapters.md
CHANGED

@@ -20,6 +20,7 @@ Mastra currently provides these official server adapters:
 - [@mastra/hono](https://mastra.ai/reference/server/hono-adapter)
 - [@mastra/fastify](https://mastra.ai/reference/server/fastify-adapter)
 - [@mastra/koa](https://mastra.ai/reference/server/koa-adapter)
+- [@mastra/nestjs](https://mastra.ai/reference/server/nestjs-adapter)
 
 You can build your own adapter, read [custom adapters](https://mastra.ai/docs/server/custom-adapters) for details.
 
@@ -71,7 +72,7 @@ pnpm add @mastra/express@latest
 yarn add @mastra/express@latest
 ```
 
-**
+**NestJS**:
 
 ```bash
 bun add @mastra/express@latest
@@ -227,6 +228,56 @@ yarn add @mastra/koa@latest
 bun add @mastra/koa@latest
 ```
 
+**Tab 21**:
+
+**npm**:
+
+```bash
+npm install @mastra/nestjs@latest
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/nestjs@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/nestjs@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/nestjs@latest
+```
+
+**Tab 22**:
+
+```bash
+npm install @mastra/nestjs@latest
+```
+
+**Tab 23**:
+
+```bash
+pnpm add @mastra/nestjs@latest
+```
+
+**Tab 24**:
+
+```bash
+yarn add @mastra/nestjs@latest
+```
+
+**Tab 25**:
+
+```bash
+bun add @mastra/nestjs@latest
+```
+
 ## Configuration
 
 Initialize your app as usual, then create a `MastraServer` by passing in the `app` and your main `mastra` instance from `src/mastra/index.ts`. Calling `init()` automatically registers Mastra middleware and all available endpoints. You can continue adding your own routes as normal, either before or after `init()`, and they’ll run alongside Mastra’s endpoints.
@@ -336,6 +387,37 @@ app.listen(port, () => {
 
 > **Info:** See the [Koa Adapter](https://mastra.ai/reference/server/koa-adapter) documentation for full configuration options.
 
+**NestJS**:
+
+```typescript
+import { Module } from '@nestjs/common'
+import { MastraModule } from '@mastra/nestjs'
+import { mastra } from './mastra'
+
+@Module({
+  imports: [
+    MastraModule.register({
+      mastra,
+    }),
+  ],
+})
+export class AppModule {}
+```
+
+```typescript
+import { NestFactory } from '@nestjs/core'
+import { AppModule } from './app.module'
+
+async function bootstrap() {
+  const app = await NestFactory.create(AppModule)
+  await app.listen(3000)
+}
+
+bootstrap()
+```
+
+> **Info:** See the [NestJS Adapter](https://mastra.ai/reference/server/nestjs-adapter) documentation for full configuration options.
+
 ## Initialization flow
 
 Calling `init()` runs three steps in order. Understanding this flow helps when you need to insert your own middleware at specific points.
@@ -553,10 +635,11 @@ See [MCP](https://mastra.ai/docs/mcp/overview) for configuration details and how
 
 ## Related
 
-- [Hono Adapter](https://mastra.ai/reference/server/hono-adapter)
-- [Express Adapter](https://mastra.ai/reference/server/express-adapter)
-- [
-- [
-- [
-- [
-- [
+- [Hono Adapter](https://mastra.ai/reference/server/hono-adapter) - Hono-specific setup
+- [Express Adapter](https://mastra.ai/reference/server/express-adapter) - Express-specific setup
+- [NestJS Adapter](https://mastra.ai/reference/server/nestjs-adapter) - NestJS-specific setup
+- [Custom Adapters](https://mastra.ai/docs/server/custom-adapters) - Building adapters for other frameworks
+- [Server Configuration](https://mastra.ai/docs/server/mastra-server) - Using `mastra build` instead
+- [Authentication](https://mastra.ai/docs/server/auth) - Configuring auth for your server
+- [MastraServer Reference](https://mastra.ai/reference/server/mastra-server) - Full API reference
+- [createRoute() Reference](https://mastra.ai/reference/server/create-route) - Creating type-safe custom routes
package/.docs/docs/voice/overview.md
CHANGED

@@ -588,6 +588,49 @@ await voiceAgent.voice.send(micStream)
 
 Visit the [Google Gemini Live Reference](https://mastra.ai/reference/voice/google-gemini-live) for more information on the Google Gemini Live voice provider.
 
+**AWS Nova Sonic**:
+
+```typescript
+import { Agent } from '@mastra/core/agent'
+import { playAudio, getMicrophoneStream } from '@mastra/node-audio'
+import { NovaSonicVoice } from '@mastra/voice-aws-nova-sonic'
+
+const voiceAgent = new Agent({
+  id: 'voice-agent',
+  name: 'Voice Agent',
+  instructions: 'You are a voice assistant that can help users with their tasks.',
+  model: 'openai/gpt-5.4',
+  voice: new NovaSonicVoice({
+    region: 'us-east-1',
+    speaker: 'matthew',
+    // Static credentials are optional. The default AWS credential
+    // provider chain is used when none are passed.
+  }),
+})
+
+// Connect before using speak/send
+await voiceAgent.voice.connect()
+
+// Listen for assistant audio (Int16Array PCM)
+voiceAgent.voice.on('speaking', ({ audioData }) => {
+  if (audioData) playAudio(audioData)
+})
+
+// Listen for transcribed text
+voiceAgent.voice.on('writing', ({ text, role }) => {
+  console.log(`${role}: ${text}`)
+})
+
+// Initiate the conversation
+await voiceAgent.voice.speak('How can I help you today?')
+
+// Send continuous audio from the microphone
+const micStream = getMicrophoneStream()
+await voiceAgent.voice.send(micStream)
+```
+
+Visit the [AWS Nova Sonic Reference](https://mastra.ai/reference/voice/aws-nova-sonic) for more information on the AWS Nova Sonic voice provider.
+
 ## Voice configuration
 
 Each voice provider can be configured with different models and options. Below are the detailed configuration options for all supported providers:
@@ -828,6 +871,28 @@ const voice = new GeminiLiveVoice({
 
 Visit the [Google Gemini Live Reference](https://mastra.ai/reference/voice/google-gemini-live) for more information on the Google Gemini Live voice provider.
 
+**AWS Nova Sonic**:
+
+```typescript
+// AWS Nova Sonic Voice Configuration
+const voice = new NovaSonicVoice({
+  region: 'us-east-1',
+  speaker: 'matthew',
+  sessionConfig: {
+    inferenceConfiguration: {
+      temperature: 0.7,
+      maxTokens: 1024,
+    },
+    turnDetectionConfiguration: {
+      endpointingSensitivity: 'MEDIUM',
+    },
+  },
+  // AWS Nova Sonic is a realtime bidirectional API without separate speech and listening models
+})
+```
+
+Visit the [AWS Nova Sonic Reference](https://mastra.ai/reference/voice/aws-nova-sonic) for more information on the AWS Nova Sonic voice provider.
+
 **AI SDK**:
 
 ```typescript
@@ -957,6 +1022,7 @@ For more information on the CompositeVoice, refer to the [CompositeVoice Referen
 - [Azure Voice](https://mastra.ai/reference/voice/azure)
 - [Google Voice](https://mastra.ai/reference/voice/google)
 - [Google Gemini Live Voice](https://mastra.ai/reference/voice/google-gemini-live)
+- [AWS Nova Sonic Voice](https://mastra.ai/reference/voice/aws-nova-sonic)
 - [Deepgram Voice](https://mastra.ai/reference/voice/deepgram)
 - [PlayAI Voice](https://mastra.ai/reference/voice/playai)
 - [Voice Examples](https://github.com/mastra-ai/voice-examples)
package/.docs/docs/voice/speech-to-speech.md
CHANGED

@@ -99,4 +99,48 @@ await agent.voice.send(micStream)
 Note:
 
 - Live API requires `GOOGLE_API_KEY`. Vertex AI requires project/location and service account credentials.
-- Events: `speaker` (audio stream), `writing` (text), `turnComplete`, `usage`, and `error`.
+- Events: `speaker` (audio stream), `writing` (text), `turnComplete`, `usage`, and `error`.
+
+## AWS Nova Sonic (Realtime)
+
+```typescript
+import { Agent } from '@mastra/core/agent'
+import { NovaSonicVoice } from '@mastra/voice-aws-nova-sonic'
+import { playAudio, getMicrophoneStream } from '@mastra/node-audio'
+
+const agent = new Agent({
+  id: 'agent',
+  name: 'Nova Sonic Agent',
+  instructions: 'You are a helpful assistant with real-time voice capabilities.',
+  // Model used for text generation; voice provider handles realtime audio
+  model: 'openai/gpt-5.4',
+  voice: new NovaSonicVoice({
+    region: 'us-east-1',
+    speaker: 'matthew',
+    // Static credentials are optional. The default AWS credential provider
+    // chain is used when none are passed.
+  }),
+})
+
+await agent.voice.connect()
+
+// Assistant audio is emitted as 16-bit PCM on the `speaking` event
+agent.voice.on('speaking', ({ audioData }) => {
+  if (audioData) playAudio(audioData)
+})
+
+agent.voice.on('writing', ({ role, text }) => {
+  console.log(`${role}: ${text}`)
+})
+
+await agent.voice.speak('How can I help you today?')
+
+const micStream = getMicrophoneStream()
+await agent.voice.send(micStream)
+```
+
+Note:
+
+- Available regions: `us-east-1`, `us-west-2`, and `ap-northeast-1`.
+- Authenticates through the standard AWS credential provider chain. Pass `credentials` to override.
+- Events: `speaking` (Int16Array audio), `writing` (text with `generationStage`), `toolCall`, `interrupt`, `turnComplete`, `usage`, `session`, and `error`.
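The event names in the note above can be collected into a TypeScript union for type-safe handler wiring. This is an editor's sketch, not part of `@mastra/voice-aws-nova-sonic`: only the event names come from the docs, while `NOVA_SONIC_EVENTS` and `isNovaSonicEvent` are hypothetical helpers.

```typescript
// Editor's sketch: the Nova Sonic event names from the note above as a const
// tuple, a derived union type, and a runtime guard. Hypothetical helpers;
// only the event names themselves come from the documentation.
const NOVA_SONIC_EVENTS = [
  'speaking',
  'writing',
  'toolCall',
  'interrupt',
  'turnComplete',
  'usage',
  'session',
  'error',
] as const

type NovaSonicEvent = (typeof NOVA_SONIC_EVENTS)[number]

function isNovaSonicEvent(name: string): name is NovaSonicEvent {
  return (NOVA_SONIC_EVENTS as readonly string[]).includes(name)
}
```

A guard like this could reject typos such as `speaker` (the Gemini Live event name) before registering a handler.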
package/.docs/docs/workspace/search.md
CHANGED

@@ -83,6 +83,45 @@ const workspace = new Workspace({
 })
 ```
 
+### Batch embedding
+
+The embedder above takes one text at a time. Indexing a workspace with hundreds of files calls the provider hundreds of times, which is slow and expensive.
+
+When the provider supports batching (for example, OpenAI's `embedMany`), pass an embedder that takes an array of texts and accepts many embeddings back in one call. To opt in, set a `batch: true` property on the function. Mastra checks for that property at runtime and switches to the batched path.
+
+The following example replaces the single-text embedder with a batched one. The embedder function takes an array, returns an array of embeddings in the same order, and carries two extra properties:
+
+- `batch: true`: marks the function as batch-capable. Without this property, Mastra calls it one text at a time.
+- `maxBatchSize`: the largest array the provider accepts in one call. Mastra splits larger requests into chunks of this size and sends them in parallel. Set this to your provider's documented limit (for example, 2048 for OpenAI, 96 for Cohere, 128 for Voyage). Omit it to send every pending text in one request.
+
+```typescript
+import { Workspace, LocalFilesystem } from '@mastra/core/workspace'
+import { PineconeVector } from '@mastra/pinecone'
+import { embedMany } from 'ai'
+import { openai } from '@ai-sdk/openai'
+
+const model = openai.embedding('text-embedding-3-small')
+
+const workspace = new Workspace({
+  filesystem: new LocalFilesystem({ basePath: './workspace' }),
+  vectorStore: new PineconeVector({
+    apiKey: process.env.PINECONE_API_KEY,
+    index: 'workspace-index',
+  }),
+  embedder: Object.assign(
+    async (texts: string[]) => {
+      const { embeddings } = await embedMany({ model, values: texts })
+      return embeddings
+    },
+    { batch: true as const, maxBatchSize: 2048 },
+  ),
+})
+```
+
+`Object.assign` adds the `batch` and `maxBatchSize` properties to the embedder function. Mastra reads them as metadata and never passes them to the provider.
+
+Single-text embedders still work. The function signature `(text: string) => Promise<number[]>` is unchanged, so existing code keeps running without modification.
+
 ## Hybrid search
 
 Configure both BM25 and vector search to enable hybrid mode, which combines keyword matching with semantic understanding.
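The split-and-parallelize behaviour the batch-embedding docs describe for `maxBatchSize` can be sketched as a pure function. This is an editor's illustration, not Mastra's implementation: `chunk`, `embedAll`, and the `BatchEmbedder` type are hypothetical names; only the described behaviour (chunks of at most `maxBatchSize`, dispatched in parallel, one request when the limit is omitted) comes from the docs.

```typescript
// Editor's sketch of the documented maxBatchSize behaviour. Hypothetical
// helpers; not Mastra's real code.
type BatchEmbedder = ((texts: string[]) => Promise<number[][]>) & {
  batch: true
  maxBatchSize?: number
}

// Group items into chunks of at most `size`.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size))
  }
  return out
}

async function embedAll(embedder: BatchEmbedder, texts: string[]): Promise<number[][]> {
  // When maxBatchSize is omitted, everything goes in a single request.
  const size = embedder.maxBatchSize ?? Math.max(texts.length, 1)
  // Each chunk becomes one provider call; calls run in parallel.
  const batches = await Promise.all(chunk(texts, size).map((c) => embedder(c)))
  return batches.flat()
}
```

With `maxBatchSize: 2048`, indexing 5000 files would issue three parallel provider calls instead of 5000 sequential ones.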
package/.docs/guides/getting-started/nestjs.md
ADDED

@@ -0,0 +1,238 @@
+# Integrate Mastra in Your NestJS Project
+
+In this guide, you'll build a tool-calling AI agent using Mastra and NestJS. The [NestJS server adapter](https://mastra.ai/reference/server/nestjs-adapter) registers Mastra's agent and workflow routes as a NestJS module, so they run inside your existing NestJS application.
+
+## Before you begin
+
+- You'll need an API key from a supported [model provider](https://mastra.ai/models). If you don't have a preference, use [OpenAI](https://mastra.ai/models/providers/openai).
+- Install Node.js `v22.13.0` or later
+- Use the NestJS Express platform (`@nestjs/platform-express`)
+
+## Create a new NestJS app (optional)
+
+If you already have a NestJS app, skip to the next step.
+
+Run the following command to create a new NestJS app:
+
+**npm**:
+
+```bash
+npx @nestjs/cli new mastra-nest
+```
+
+**pnpm**:
+
+```bash
+pnpm dlx @nestjs/cli new mastra-nest
+```
+
+**Yarn**:
+
+```bash
+yarn dlx @nestjs/cli new mastra-nest
+```
+
+**Bun**:
+
+```bash
+bun x @nestjs/cli new mastra-nest
+```
+
+This creates a project called `mastra-nest`, but you can replace it with any name you want.
+
+## Initialize Mastra
+
+Navigate to your NestJS project directory:
+
+```bash
+cd mastra-nest
+```
+
+Run [`mastra init`](https://mastra.ai/reference/cli/mastra). When prompted, choose a provider (e.g. OpenAI) and enter your key:
+
+**npm**:
+
+```bash
+npx mastra@latest init
+```
+
+**pnpm**:
+
+```bash
+pnpm dlx mastra@latest init
+```
+
+**Yarn**:
+
+```bash
+yarn dlx mastra@latest init
+```
+
+**Bun**:
+
+```bash
+bun x mastra@latest init
+```
+
+This creates a `src/mastra` folder with an example weather agent and the following files:
+
+- `index.ts` - Mastra config, including memory
+- `tools/weather-tool.ts` - a tool to fetch weather for a given location
+- `agents/weather-agent.ts` - a weather agent with a prompt that uses the tool
+
+You'll pass the `src/mastra/index.ts` file to the NestJS adapter in the next step.
+
+## Add server adapter
+
+Install the NestJS server adapter package:
+
+**npm**:
+
+```bash
+npm install @mastra/nestjs@latest
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/nestjs@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/nestjs@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/nestjs@latest
+```
+
+Open `src/app.module.ts` and register `MastraModule`:
+
+```typescript
+import { Module } from '@nestjs/common'
+import { MastraModule } from '@mastra/nestjs'
+import { mastra } from './mastra'
+
+@Module({
+  imports: [
+    MastraModule.register({
+      mastra,
+    }),
+  ],
+})
+export class AppModule {}
+```
+
+> **Note:** `MastraModule` registers a catch-all controller (`@All('*')`). If it is imported before your app modules, it can intercept unrelated routes and return 404s. To avoid conflicts, import `MastraModule` last or mount it under a dedicated prefix (e.g., `/api/v1/mastra`).
+
+## Test your agent
+
+By default, Mastra's endpoints are added under the `/api` subpath and use your agent/workflow IDs. The default `weather-agent` created by `mastra init` is available at `/api/agents/weather-agent`.
+
+Start your NestJS server:
+
+**npm**:
+
+```bash
+npm run start
+```
+
+**pnpm**:
+
+```bash
+pnpm run start
+```
+
+**Yarn**:
+
+```bash
+yarn run start
+```
+
+**Bun**:
+
+```bash
+bun run start
+```
+
+In a separate terminal window, use `curl` to ask the weather agent:
+
+```bash
+curl -X POST http://localhost:3000/api/agents/weather-agent/generate -H "Content-Type: application/json" -d "{\"messages\":[{\"role\":\"user\",\"content\":\"What is the weather like in Seoul?\"}]}"
+```
+
+## Use Mastra in Your Own Services
+
+The module exports two ways to access Mastra from your NestJS services: the `MastraService` wrapper and the `MASTRA` injection token.
+
+### MastraService
+
+`MastraService` is an injectable wrapper with convenience methods for common operations:
+
+```typescript
+import { Injectable } from '@nestjs/common'
+import { MastraService } from '@mastra/nestjs'
+
+@Injectable()
+export class AgentService {
+  constructor(private readonly mastraService: MastraService) {}
+
+  async chat(agentId: string, message: string) {
+    const agent = this.mastraService.getAgent(agentId)
+    return agent.generate({
+      messages: [{ role: 'user', content: message }],
+    })
+  }
+
+  async runWorkflow(workflowId: string, input: Record<string, unknown>) {
+    const workflow = this.mastraService.getWorkflow(workflowId)
+    return workflow.start({ inputData: input })
+  }
+}
+```
+
+`MastraService` exposes:
+
+- `getMastra()` — returns the underlying `Mastra` instance
+- `getAgent(id)` — shorthand for `mastra.getAgent(id)`
+- `getWorkflow(id)` — shorthand for `mastra.getWorkflow(id)`
+- `getOptions()` — returns the module configuration
+- `isShuttingDown` — `true` after graceful shutdown begins
+
+### MASTRA token
+
+If you need the `Mastra` instance directly (e.g., to access storage, memory, or other core APIs), inject it with the `MASTRA` token:
+
+```typescript
+import { Injectable, Inject } from '@nestjs/common'
+import { MASTRA } from '@mastra/nestjs'
+import type { Mastra } from '@mastra/core/mastra'
+
+@Injectable()
+export class MemoryService {
+  constructor(@Inject(MASTRA) private readonly mastra: Mastra) {}
+
+  async getThreadMessages(threadId: string) {
+    const memory = this.mastra.getMemory()
+    return memory?.getMessages({ threadId })
+  }
+}
+```
+
+Both approaches use the same singleton `Mastra` instance registered by `MastraModule`.
+
+## Next steps
+
+You now have a working Mastra agent running inside NestJS. To extend the project:
+
+- [Agents overview](https://mastra.ai/docs/agents/overview)
+- [Using tools](https://mastra.ai/docs/agents/using-tools)
+- [Agent memory](https://mastra.ai/docs/memory/overview)
+
+For details on the NestJS integration:
+
+- [NestJS Adapter reference](https://mastra.ai/reference/server/nestjs-adapter)
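The `curl` check in the guide above can also be made from code. Here is an editor's sketch that builds the same request: the endpoint path and JSON payload shape come from the guide, while `buildGenerateRequest` is a hypothetical helper name, not part of any Mastra package.

```typescript
// Editor's sketch: build the same request the guide's curl command sends to
// the weather agent's generate endpoint. Hypothetical helper for illustration.
function buildGenerateRequest(baseUrl: string, agentId: string, message: string) {
  return {
    url: `${baseUrl}/api/agents/${agentId}/generate`,
    init: {
      method: 'POST' as const,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: [{ role: 'user', content: message }] }),
    },
  }
}

// Usage with Node 18+ fetch, assuming the server from the guide is running:
//   const { url, init } = buildGenerateRequest(
//     'http://localhost:3000', 'weather-agent', 'What is the weather like in Seoul?')
//   const result = await fetch(url, init).then((r) => r.json())
```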
package/.docs/reference/datasets/startExperiment.md
CHANGED

@@ -15,7 +15,7 @@ const mastra = new Mastra({
 
 const dataset = await mastra.datasets.get({ id: 'dataset-id' })
 
-// Run against a registered agent with
+// Run against a registered agent with a flat scorer list
 const summary = await dataset.startExperiment({
   targetType: 'agent',
   targetId: 'my-agent',
@@ -23,8 +23,34 @@ const summary = await dataset.startExperiment({
   maxConcurrency: 10,
 })
 
+// Or pass the same categorised shape accepted by runEvals
+const summary2 = await dataset.startExperiment({
+  targetType: 'agent',
+  targetId: 'my-agent',
+  scorers: {
+    agent: [accuracyScorer],
+    trajectory: [toolOrderScorer],
+  },
+})
+
+// For workflow targets, score individual steps with their own scorers
+const summary3 = await dataset.startExperiment({
+  targetType: 'workflow',
+  targetId: 'my-workflow',
+  scorers: {
+    workflow: [overallScorer],
+    steps: {
+      'fetch-data': [fetchScorer],
+      transform: [transformScorer],
+    },
+    trajectory: [executionPathScorer],
+  },
+})
+
 console.log(`${summary.succeededCount}/${summary.totalItems} succeeded`)
 console.log(`Status: ${summary.status}`)
+console.log(`${summary2.succeededCount}/${summary2.totalItems} succeeded`)
+console.log(`Status: ${summary2.status}`)
 ```
 
 ## Parameters
@@ -33,7 +59,7 @@ console.log(`Status: ${summary.status}`)
 
 **targetId** (`string`): ID of the registered target. Use with `targetType`.
 
-**scorers** (`(MastraScorer | string)[]`): Scorers to evaluate each result.
+**scorers** (`(MastraScorer | string)[] | AgentScorerConfig | WorkflowScorerConfig`): Scorers to evaluate each result. Accepts a flat array of `MastraScorer` instances or registered scorer IDs, or the same categorised config shape used by `runEvals` (`AgentScorerConfig` / `WorkflowScorerConfig`). Trajectory scorers (`type: "trajectory"`) automatically receive a pre-extracted `Trajectory` as their output regardless of which form is used. For workflow targets, per-step scorers can be passed via `scorers: { steps: { stepId: [...] } }` and run against each step's output; their results carry the originating `stepId` and keep `targetScope: "span"` (matching `runEvals`).
 
 **name** (`string`): Display name for the experiment.
 
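The two `scorers` shapes described in the hunk above can be illustrated with a small helper that flattens either form into a list of scorer IDs. This is an editor's sketch only: `collectScorerIds` is hypothetical, and the `Scorer` alias simplifies real `MastraScorer | string` entries down to plain strings; only the accepted shapes (flat array, or categorised config with `agent`/`workflow`/`trajectory`/`steps` keys) come from the docs.

```typescript
// Editor's sketch of the two `scorers` shapes startExperiment accepts.
// `Scorer` stands in for `MastraScorer | string` (registered scorer IDs).
type Scorer = string

interface CategorisedScorers {
  agent?: Scorer[]
  workflow?: Scorer[]
  trajectory?: Scorer[]
  steps?: Record<string, Scorer[]> // per-step scorers, keyed by stepId
}

// Flatten either accepted form into one list of scorer IDs.
function collectScorerIds(scorers: Scorer[] | CategorisedScorers): Scorer[] {
  if (Array.isArray(scorers)) return scorers
  const out: Scorer[] = [
    ...(scorers.agent ?? []),
    ...(scorers.workflow ?? []),
    ...(scorers.trajectory ?? []),
  ]
  for (const stepScorers of Object.values(scorers.steps ?? {})) {
    out.push(...stepScorers)
  }
  return out
}
```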
package/.docs/reference/index.md
CHANGED
@@ -191,6 +191,7 @@ The Reference section provides documentation of Mastra's API, including paramete
 - [Hono Adapter](https://mastra.ai/reference/server/hono-adapter)
 - [Koa Adapter](https://mastra.ai/reference/server/koa-adapter)
 - [MastraServer](https://mastra.ai/reference/server/mastra-server)
+- [NestJS Adapter](https://mastra.ai/reference/server/nestjs-adapter)
 - [registerApiRoute()](https://mastra.ai/reference/server/register-api-route)
 - [Server Routes](https://mastra.ai/reference/server/routes)
 - [Overview](https://mastra.ai/reference/storage/overview)
@@ -243,6 +244,7 @@ The Reference section provides documentation of Mastra's API, including paramete
 - [Qdrant Vector Store](https://mastra.ai/reference/vectors/qdrant)
 - [Turbopuffer Vector Store](https://mastra.ai/reference/vectors/turbopuffer)
 - [Upstash Vector Store](https://mastra.ai/reference/vectors/upstash)
+- [AWS Nova Sonic](https://mastra.ai/reference/voice/aws-nova-sonic)
 - [Azure](https://mastra.ai/reference/voice/azure)
 - [Cloudflare](https://mastra.ai/reference/voice/cloudflare)
 - [Composite Voice](https://mastra.ai/reference/voice/composite-voice)
@@ -0,0 +1,169 @@
# NestJS Adapter

The `@mastra/nestjs` package provides a NestJS module for running Mastra with the Express-based NestJS platform.

It is intentionally Express-only for v1. If Nest is bootstrapped with a different HTTP adapter, `MastraModule` throws during startup instead of attempting a partial integration.

> **Info:** For general adapter concepts, see [Server Adapters](https://mastra.ai/docs/server/server-adapters).

## Installation

Install the NestJS adapter and ensure your app uses the Express platform:

**npm**:

```bash
npm install @mastra/nestjs@latest
```

**pnpm**:

```bash
pnpm add @mastra/nestjs@latest
```

**Yarn**:

```bash
yarn add @mastra/nestjs@latest
```

**Bun**:

```bash
bun add @mastra/nestjs@latest
```

## Usage example

```typescript
import { Module } from '@nestjs/common'
import { MastraModule } from '@mastra/nestjs'
import { mastra } from './mastra'

@Module({
  imports: [
    MastraModule.register({
      mastra,
    }),
  ],
})
export class AppModule {}
```

> **Note:** `MastraModule` registers a catch-all controller (`@All('*')`). If it is imported before your app modules, it can intercept unrelated routes and return 404s. To avoid conflicts, import `MastraModule` last or mount it under a dedicated prefix (e.g., `/api/v1/mastra`).

```typescript
import { NestFactory } from '@nestjs/core'
import { AppModule } from './app.module'

async function bootstrap() {
  const app = await NestFactory.create(AppModule)
  await app.listen(3000)
}

bootstrap()
```

By default, Mastra routes mount under `/api`. Use `prefix` to change it.
## Module options

**mastra** (`Mastra`): Mastra instance

**prefix** (`string`): Route path prefix (e.g., `/api/v2`) (Default: `/api`)

**rateLimitOptions** (`{ enabled?: boolean; defaultLimit?: number; windowMs?: number; generateLimit?: number }`): Rate limiting config (enabled by default)

**shutdownOptions** (`{ timeoutMs?: number; notifyClients?: boolean }`): Graceful shutdown configuration

**bodyLimitOptions** (`{ maxSize?: number; maxFileSize?: number; tempDir?: string; allowedMimeTypes?: string[] }`): Request body size limits

**streamOptions** (`{ redact?: boolean; heartbeatMs?: number }`): Streaming configuration

**tracingOptions** (`{ enabled?: boolean; serviceName?: string }`): OpenTelemetry tracing configuration

**contextOptions** (`{ strict?: boolean; logWarnings?: boolean }`): Request context parsing config

**customRouteAuthConfig** (`Map<string, boolean>`): Per-route auth overrides. Keys are `METHOD:PATH`.

**tools** (`Record<string, Tool>`): Registered tools for the server

**taskStore** (`InMemoryTaskStore`): Task store for A2A (Agent-to-Agent) operations

**mcpOptions** (`{ serverless?: boolean; sessionIdGenerator?: () => string }`): MCP transport options

**auth** (`{ enabled?: boolean; allowQueryApiKey?: boolean }`): Enable Mastra token auth. Disabled by default, since most NestJS apps use their own auth guards. Query-string `apiKey` auth is opt-in for backward compatibility. (Default: `{ enabled: false }`)
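The `customRouteAuthConfig` map can be sketched as follows. The route paths and the lookup helper here are illustrative assumptions, not a list of real Mastra routes or an adapter API:

```typescript
// Hypothetical sketch: per-route auth overrides keyed by `METHOD:PATH`.
// The paths below are illustrative, not real Mastra routes.
const customRouteAuthConfig = new Map<string, boolean>([
  ['GET:/api/agents', false], // leave this route open
  ['POST:/api/agents', true], // require auth here
])

// Illustrative helper that mimics a per-route lookup with a fallback.
function requiresAuth(method: string, path: string, fallback = true): boolean {
  return customRouteAuthConfig.get(`${method.toUpperCase()}:${path}`) ?? fallback
}
```

A route not present in the map falls back to the server-wide default, so only the exceptions need to be listed.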
## Async registration

```typescript
import { Module } from '@nestjs/common'
import { ConfigModule, ConfigService } from '@nestjs/config'
import { MastraModule } from '@mastra/nestjs'
import { Mastra } from '@mastra/core/mastra'

@Module({
  imports: [
    ConfigModule.forRoot(),
    MastraModule.registerAsync({
      imports: [ConfigModule],
      useFactory: (config: ConfigService) => ({
        mastra: new Mastra({
          agents: {
            greeter: {
              name: 'greeter',
              description: 'Greets the user',
              model: config.get('MASTRA_MODEL', 'openai/gpt-4o-mini'),
            },
          },
        }),
        prefix: config.get('MASTRA_PREFIX', '/api'),
      }),
      inject: [ConfigService],
    }),
  ],
})
export class AppModule {}
```
## Accessing Mastra

Use the `MASTRA` token or `MastraService` in your services:

```typescript
import { Injectable, Inject } from '@nestjs/common'
import { MASTRA, MastraService } from '@mastra/nestjs'
import type { Mastra } from '@mastra/core/mastra'

@Injectable()
export class AgentService {
  constructor(@Inject(MASTRA) private readonly mastra: Mastra) {}
}

@Injectable()
export class WorkflowService {
  constructor(private readonly mastraService: MastraService) {}
}
```
## MCP routes

MCP endpoints are exposed under the API prefix:

- `POST /api/mcp/:serverId/mcp`
- `GET /api/mcp/:serverId/sse`
- `POST /api/mcp/:serverId/messages`
## Health routes

Operational endpoints stay unprefixed on purpose for infrastructure compatibility:

- `GET /health`
- `GET /ready`
- `GET /info`
## Related

- [Server Adapters](https://mastra.ai/docs/server/server-adapters)
- [MastraServer Reference](https://mastra.ai/reference/server/mastra-server)
@@ -0,0 +1,247 @@
# AWS Nova Sonic voice

The `NovaSonicVoice` class provides real-time speech-to-speech capabilities backed by [AWS Bedrock Nova 2 Sonic](https://docs.aws.amazon.com/nova/latest/userguide/speech.html). It opens a bidirectional stream to the model and emits events for assistant audio, transcribed text, tool calls, turn boundaries, and interruptions.

## Usage example

```typescript
import { NovaSonicVoice } from '@mastra/voice-aws-nova-sonic'
import { playAudio, getMicrophoneStream } from '@mastra/node-audio'

// Initialize using the default AWS credential provider chain
const voice = new NovaSonicVoice({
  region: 'us-east-1',
  speaker: 'matthew',
})

// Or pass explicit credentials
const voiceWithCredentials = new NovaSonicVoice({
  region: 'us-east-1',
  speaker: 'tiffany',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
})

// Establish the bidirectional stream
await voice.connect()

// Listen for assistant audio (Int16Array PCM)
voice.on('speaking', ({ audioData }) => {
  if (audioData) playAudio(audioData)
})

// Listen for transcribed text from the user and assistant
voice.on('writing', ({ text, role, generationStage }) => {
  console.log(`${role} (${generationStage ?? 'FINAL'}): ${text}`)
})

// Stream microphone audio in real time
const microphoneStream = getMicrophoneStream()
await voice.send(microphoneStream)

// Disconnect when done
voice.close()
```
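The stream above carries 16-bit PCM, while many capture APIs (such as the Web Audio API) produce `Float32Array` samples in `[-1, 1]`. Assuming your capture source yields floats, a standard conversion helper looks like this; the helper name is illustrative, not part of the package:

```typescript
// Convert Float32 samples in [-1, 1] to 16-bit PCM suitable for `send()`.
// Samples outside the range are clamped before scaling.
function floatTo16BitPCM(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length)
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])) // clamp to [-1, 1]
    // Negative samples scale by 32768, positive by 32767 (Int16 range is asymmetric)
    out[i] = s < 0 ? Math.round(s * 0x8000) : Math.round(s * 0x7fff)
  }
  return out
}
```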
## Authentication

`NovaSonicVoice` uses the AWS SDK credential resolution chain when no `credentials` option is passed. Mastra calls `defaultProvider()` from `@aws-sdk/credential-provider-node`, which checks (in order) environment variables, shared credentials files, IAM roles for EC2, ECS, EKS, and other standard sources.

To use static credentials, pass them to the constructor:

```typescript
new NovaSonicVoice({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
    sessionToken: process.env.AWS_SESSION_TOKEN,
  },
})
```

The voice provider never logs credential values.
## Configuration

### Constructor options

**region** (`'us-east-1' | 'us-west-2' | 'ap-northeast-1'`): AWS region that hosts the Nova Sonic model. (Default: `'us-east-1'`)

**model** (`string`): Bedrock model ID for the bidirectional stream. (Default: `'amazon.nova-2-sonic-v1:0'`)

**credentials** (`AwsCredentialIdentity`): Static AWS credentials. When omitted, the default AWS credential provider chain is used.

**speaker** (`string | NovaSonicVoiceConfigDetails`): Default voice for the assistant. Pass a voice ID string such as `'matthew'` or an object that includes a language code and gender. (Default: `'matthew'`)

**languageCode** (`NovaSonicLanguageCode`): Language code used for the session. Polyglot voices support all listed languages.

**instructions** (`string`): System prompt sent at session start. Equivalent to calling `addInstructions()` before `connect()`.

**tools** (`NovaSonicToolConfig[]`): Tools exposed to the model. When the voice instance is attached to an Agent, the Agent's tools are added automatically.

**sessionConfig** (`NovaSonicSessionConfig`): Inference, turn-detection, and tool-choice configuration. See Session configuration below.

**debug** (`boolean`): Enable verbose logging for stream events. Sensitive fields are masked. (Default: `false`)
### Session configuration

`sessionConfig` controls inference parameters and turn-taking behavior. All fields are optional.

**inferenceConfiguration** (`object`): Sampling and decoding parameters.

**inferenceConfiguration.maxTokens** (`number`): Maximum tokens generated per turn.

**inferenceConfiguration.temperature** (`number`): Sampling temperature.

**inferenceConfiguration.topP** (`number`): Nucleus sampling probability.

**inferenceConfiguration.topK** (`number`): Top-k sampling.

**inferenceConfiguration.stopSequences** (`string[]`): Sequences that end generation.

**turnDetectionConfiguration** (`object`): Endpointing sensitivity for turn detection.

**turnDetectionConfiguration.endpointingSensitivity** (`'HIGH' | 'MEDIUM' | 'LOW'`): Pause duration before the model considers a turn complete. HIGH ends turns fastest (about 1.5s pause), MEDIUM is balanced (about 1.75s), LOW waits longest (about 2s).

**toolChoice** (`'auto' | 'any' | { tool: { name: string } }`): How the model decides whether to call a tool.

**enableKnowledgeGrounding** (`boolean`): Enable retrieval-augmented grounding against a Bedrock knowledge base.

**knowledgeBaseConfig** (`{ knowledgeBaseId?: string; dataSourceId?: string }`): Knowledge base used when knowledge grounding is enabled.
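Putting the fields above together, a `sessionConfig` object can be sketched as follows. The field names follow this reference; the specific values are illustrative, not recommendations:

```typescript
// Sketch of a sessionConfig using the documented fields; values are illustrative.
const sessionConfig = {
  inferenceConfiguration: {
    maxTokens: 1024,   // cap on tokens generated per turn
    temperature: 0.7,  // sampling temperature
    topP: 0.9,         // nucleus sampling probability
  },
  turnDetectionConfiguration: {
    // MEDIUM: roughly a 1.75s pause ends the turn
    endpointingSensitivity: 'MEDIUM' as const,
  },
  toolChoice: 'auto' as const, // let the model decide whether to call a tool
}
```

This object would be passed as the `sessionConfig` constructor option.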
## Methods

### `connect()`

Opens the bidirectional stream to AWS Bedrock and sends the initial session, prompt, and system events. Call this before `speak`, `listen`, or `send`.

**options** (`{ requestContext?: RequestContext }`): Optional request context propagated to tool calls made during the session.

Returns: `Promise<void>`

### `speak()`

Synthesizes speech for a text prompt and emits `speaking` events as audio is produced.

**input** (`string | NodeJS.ReadableStream`): Text or text stream to synthesize.

**options** (`NovaSonicVoiceOptions`): Per-call overrides such as the speaker or language code.

Returns: `Promise<void>`

### `send()`

Streams microphone audio (or any PCM source) to the model. Use this for live, continuous conversation.

**audioData** (`NodeJS.ReadableStream | Int16Array`): 16-bit PCM audio to forward to the model.

Returns: `Promise<void>`

### `listen()`

Convenience wrapper that delegates to `send()`. Use it when you want a single transcription pass over a finite audio stream.

**audioData** (`NodeJS.ReadableStream`): Audio stream to transcribe.

Returns: `Promise<void>`

### `endAudioInput()`

Signals the end of the current audio turn so the model can finalize its response. Call this when the user stops speaking and the provider is not configured for server-side turn detection.

Returns: `Promise<void>`

### `addInstructions()`

Updates the system prompt for the active session.

**instructions** (`string`): System prompt to apply to the session.

Returns: `void`

### `addTools()`

Registers tools with the voice instance. When `NovaSonicVoice` is attached to an Agent, the Agent's tools are added automatically.

**tools** (`ToolsInput`): Tools exposed to the model.

Returns: `void`

### `getSpeakers()`

Returns the list of voices supported by Nova 2 Sonic.

Returns: `Promise<Array<{ voiceId: string; name: string; language: string; locale: string; gender: 'masculine' | 'feminine'; polyglot: boolean }>>`

### `getListener()`

Returns whether the voice instance currently holds an open stream.

Returns: `Promise<{ enabled: boolean }>`

### `close()`

Closes the bidirectional stream and destroys the underlying Bedrock client. Call this when the conversation ends.

Returns: `void`

### `on()` / `off()`

Registers and removes event listeners. See [Voice events](https://mastra.ai/reference/voice/voice.events) for the shared event API.
## Events

`NovaSonicVoice` emits the following events:

**speaking** (`event`): Assistant audio chunk. Callback receives `{ audioData: Int16Array, sampleRate?: number }`.

**writing** (`event`): Transcribed text from the user or assistant. Callback receives `{ text: string, role: 'assistant' | 'user', generationStage?: 'SPECULATIVE' | 'FINAL' }`.

**toolCall** (`event`): Model requested a tool call. Callback receives `{ name: string, args: Record<string, any>, id: string }`.

**interrupt** (`event`): User or model interrupted the current turn. Callback receives `{ type: 'user' | 'model', timestamp: number }`.

**turnComplete** (`event`): Model finished its turn. Callback receives `{ timestamp: number }`.

**session** (`event`): Session state transition. Callback receives `{ state: 'connecting' | 'connected' | 'disconnected' | 'disconnecting' | 'error' }`.

**usage** (`event`): Token usage for the turn. Callback receives `{ inputTokens: number, outputTokens: number, totalTokens: number }`.

**error** (`event`): Stream or provider error. Callback receives `{ message: string, code?: string, details?: unknown }`.

`generationStage` distinguishes provisional transcripts (`'SPECULATIVE'`) from finalized ones (`'FINAL'`). Use `'FINAL'` text for persistent storage and `'SPECULATIVE'` text for live captions.
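The SPECULATIVE/FINAL split above can be sketched as a small handler that keeps a live caption separate from the stored transcript. The event shape follows the `writing` event documented here; the handler itself is an illustration, not part of the package:

```typescript
// Sketch: show SPECULATIVE text as a live caption, persist only FINAL text.
type WritingEvent = {
  text: string
  role: 'assistant' | 'user'
  generationStage?: 'SPECULATIVE' | 'FINAL'
}

const transcript: string[] = []
let liveCaption = ''

function onWriting(ev: WritingEvent): void {
  if (ev.generationStage === 'SPECULATIVE') {
    liveCaption = ev.text // provisional text, may be revised
  } else {
    transcript.push(`${ev.role}: ${ev.text}`) // finalized text is safe to store
    liveCaption = ''
  }
}

// Simulated event sequence for one user turn
onWriting({ text: 'hel', role: 'user', generationStage: 'SPECULATIVE' })
onWriting({ text: 'hello there', role: 'user', generationStage: 'FINAL' })
```

In a real session, `onWriting` would be registered via `voice.on('writing', onWriting)`.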
## Available voices

Nova 2 Sonic ships voices in ten locales. Tiffany and Matthew are polyglot and can speak any supported language.

| Voice ID   | Name     | Language   | Locale | Gender    | Polyglot |
| ---------- | -------- | ---------- | ------ | --------- | -------- |
| `tiffany`  | Tiffany  | English    | en-US  | feminine  | yes      |
| `matthew`  | Matthew  | English    | en-US  | masculine | yes      |
| `amy`      | Amy      | English    | en-GB  | feminine  | no       |
| `olivia`   | Olivia   | English    | en-AU  | feminine  | no       |
| `kiara`    | Kiara    | English    | en-IN  | feminine  | no       |
| `arjun`    | Arjun    | English    | en-IN  | masculine | no       |
| `ambre`    | Ambre    | French     | fr-FR  | feminine  | no       |
| `florian`  | Florian  | French     | fr-FR  | masculine | no       |
| `beatrice` | Beatrice | Italian    | it-IT  | feminine  | no       |
| `lorenzo`  | Lorenzo  | Italian    | it-IT  | masculine | no       |
| `tina`     | Tina     | German     | de-DE  | feminine  | no       |
| `lennart`  | Lennart  | German     | de-DE  | masculine | no       |
| `lupe`     | Lupe     | Spanish    | es-US  | feminine  | no       |
| `carlos`   | Carlos   | Spanish    | es-US  | masculine | no       |
| `carolina` | Carolina | Portuguese | pt-BR  | feminine  | no       |
| `leo`      | Leo      | Portuguese | pt-BR  | masculine | no       |
| `kiara`    | Kiara    | Hindi      | hi-IN  | feminine  | no       |
| `arjun`    | Arjun    | Hindi      | hi-IN  | masculine | no       |
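A common use of the polyglot flag is falling back to Tiffany or Matthew when no voice matches the requested language. This sketch uses a few rows from the table above in the shape `getSpeakers()` documents; the `pickVoice` helper is illustrative, not a package API:

```typescript
// A few rows from the voices table, shaped like getSpeakers() results.
const voices = [
  { voiceId: 'tiffany', language: 'English', locale: 'en-US', polyglot: true },
  { voiceId: 'matthew', language: 'English', locale: 'en-US', polyglot: true },
  { voiceId: 'florian', language: 'French', locale: 'fr-FR', polyglot: false },
  { voiceId: 'lupe', language: 'Spanish', locale: 'es-US', polyglot: false },
]

// Prefer an exact language match, then fall back to a polyglot voice,
// which can speak any supported language.
function pickVoice(language: string): string | undefined {
  const exact = voices.find(v => v.language === language)
  return (exact ?? voices.find(v => v.polyglot))?.voiceId
}
```

In real code the `voices` array would come from `await voice.getSpeakers()`.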
## Notes

- Audio is streamed as 16-bit PCM. Assistant audio is emitted as `Int16Array` on the `speaking` event.
- The voice instance must call `connect()` before any other streaming method.
- `close()` destroys the underlying `BedrockRuntimeClient` to release the HTTP/2 session.
- Nova 2 Sonic is available in `us-east-1`, `us-west-2`, and `ap-northeast-1`. Other regions throw a configuration error during construction.
@@ -37,7 +37,7 @@ const workspace = new Workspace({

**vectorStore** (`MastraVector`): Vector store for semantic search

- **embedder** (`(text: string) => Promise
+ **embedder** (`Embedder`): Function that turns text into vectors. Required when `vectorStore` is set. Accepts either a single-text function `(text: string) => Promise<number[]>` or a batch-capable function `(texts: string[]) => Promise<number[][]>` that has a `batch: true` property and an optional `maxBatchSize`. See [Batch embedding](/docs/workspace/search#batch-embedding).

**autoIndexPaths** (`string[]`): Paths or glob patterns to auto-index on init(). Supports glob patterns like `**/*.md` for selective indexing.
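The two accepted embedder shapes can be sketched as follows. The hash-style vectors are toy placeholders standing in for a real embedding model call:

```typescript
// Toy 4-dimensional placeholder vector derived from character codes.
function toyVector(text: string): number[] {
  const v = [0, 0, 0, 0]
  for (let i = 0; i < text.length; i++) v[i % 4] += text.charCodeAt(i) / 1000
  return v
}

// Shape 1: single-text embedder, (text) => Promise<number[]>
const embedder = async (text: string): Promise<number[]> => toyVector(text)

// Shape 2: batch-capable embedder, (texts) => Promise<number[][]>,
// tagged with `batch: true` and an optional `maxBatchSize`.
const batchEmbedder = Object.assign(
  async (texts: string[]): Promise<number[][]> => texts.map(toyVector),
  { batch: true as const, maxBatchSize: 64 },
)
```

Either function would be passed as the `embedder` option alongside `vectorStore`; the batch shape lets the workspace embed many chunks per call.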
package/CHANGELOG.md CHANGED

@@ -1,5 +1,19 @@
# @mastra/mcp-docs-server

## 1.1.32-alpha.6

### Patch Changes

- Updated dependencies:
  - @mastra/core@1.31.0-alpha.5

## 1.1.32-alpha.5

### Patch Changes

- Updated dependencies [[`8091c7c`](https://github.com/mastra-ai/mastra/commit/8091c7c944d15e13fef6d61b6cfd903f158d4006), [`04151c7`](https://github.com/mastra-ai/mastra/commit/04151c7dcea934b4fe9076708a23fac161195414), [`8091c7c`](https://github.com/mastra-ai/mastra/commit/8091c7c944d15e13fef6d61b6cfd903f158d4006)]:
  - @mastra/core@1.31.0-alpha.4

## 1.1.32-alpha.4

### Patch Changes
package/package.json CHANGED

@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.32-alpha.
+   "version": "1.1.32-alpha.6",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",

@@ -29,8 +29,8 @@
    "jsdom": "^26.1.0",
    "local-pkg": "^1.1.2",
    "zod": "^4.3.6",
-   "@mastra/
-   "@mastra/
+   "@mastra/mcp": "^1.6.0",
+   "@mastra/core": "1.31.0-alpha.5"
  },
  "devDependencies": {
    "@hono/node-server": "^1.19.11",

@@ -46,9 +46,9 @@
    "tsx": "^4.21.0",
    "typescript": "^6.0.3",
    "vitest": "4.1.5",
+   "@mastra/core": "1.31.0-alpha.5",
    "@internal/lint": "0.0.89",
-   "@internal/types-builder": "0.0.64"
-   "@mastra/core": "1.31.0-alpha.3"
+   "@internal/types-builder": "0.0.64"
  },
  "homepage": "https://mastra.ai",
  "repository": {