@mastra/mcp-docs-server 1.0.0-beta.6 → 1.0.0-beta.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (74)
  1. package/.docs/organized/changelogs/%40mastra%2Fastra.md +10 -10
  2. package/.docs/organized/changelogs/%40mastra%2Fchroma.md +12 -12
  3. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +33 -33
  4. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +58 -58
  5. package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +32 -32
  6. package/.docs/organized/changelogs/%40mastra%2Fcloudflare.md +32 -32
  7. package/.docs/organized/changelogs/%40mastra%2Fcodemod.md +6 -0
  8. package/.docs/organized/changelogs/%40mastra%2Fconvex.md +31 -0
  9. package/.docs/organized/changelogs/%40mastra%2Fcore.md +176 -176
  10. package/.docs/organized/changelogs/%40mastra%2Fcouchbase.md +11 -11
  11. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +10 -10
  12. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +15 -15
  13. package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +32 -32
  14. package/.docs/organized/changelogs/%40mastra%2Felasticsearch.md +9 -0
  15. package/.docs/organized/changelogs/%40mastra%2Flance.md +33 -33
  16. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +32 -32
  17. package/.docs/organized/changelogs/%40mastra%2Floggers.md +12 -12
  18. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +9 -9
  19. package/.docs/organized/changelogs/%40mastra%2Fmcp.md +43 -43
  20. package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +32 -32
  21. package/.docs/organized/changelogs/%40mastra%2Fmssql.md +32 -32
  22. package/.docs/organized/changelogs/%40mastra%2Fopensearch.md +10 -10
  23. package/.docs/organized/changelogs/%40mastra%2Fpg.md +32 -32
  24. package/.docs/organized/changelogs/%40mastra%2Fpinecone.md +10 -10
  25. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +31 -31
  26. package/.docs/organized/changelogs/%40mastra%2Fqdrant.md +10 -10
  27. package/.docs/organized/changelogs/%40mastra%2Freact.md +7 -0
  28. package/.docs/organized/changelogs/%40mastra%2Fs3vectors.md +9 -0
  29. package/.docs/organized/changelogs/%40mastra%2Fserver.md +58 -58
  30. package/.docs/organized/changelogs/%40mastra%2Fturbopuffer.md +10 -10
  31. package/.docs/organized/changelogs/%40mastra%2Fupstash.md +33 -33
  32. package/.docs/organized/changelogs/%40mastra%2Fvectorize.md +10 -10
  33. package/.docs/organized/changelogs/create-mastra.md +7 -7
  34. package/.docs/organized/changelogs/mastra.md +14 -14
  35. package/.docs/organized/code-examples/ai-elements.md +1 -1
  36. package/.docs/organized/code-examples/ai-sdk-useChat.md +1 -1
  37. package/.docs/organized/code-examples/ai-sdk-v5.md +1 -1
  38. package/.docs/organized/code-examples/assistant-ui.md +1 -1
  39. package/.docs/organized/code-examples/bird-checker-with-nextjs-and-eval.md +1 -1
  40. package/.docs/organized/code-examples/bird-checker-with-nextjs.md +1 -1
  41. package/.docs/organized/code-examples/crypto-chatbot.md +1 -1
  42. package/.docs/organized/code-examples/mcp-server-adapters.md +2 -2
  43. package/.docs/organized/code-examples/server-app-access.md +2 -2
  44. package/.docs/organized/code-examples/server-express-adapter.md +87 -0
  45. package/.docs/organized/code-examples/server-hono-adapter.md +85 -0
  46. package/.docs/raw/agents/overview.mdx +2 -111
  47. package/.docs/raw/agents/processors.mdx +1 -1
  48. package/.docs/raw/agents/structured-output.mdx +224 -0
  49. package/.docs/raw/guides/migrations/upgrade-to-v1/workflows.mdx +10 -2
  50. package/.docs/raw/memory/working-memory.mdx +1 -0
  51. package/.docs/raw/observability/tracing/bridges/otel.mdx +25 -1
  52. package/.docs/raw/observability/tracing/exporters/arize.mdx +19 -0
  53. package/.docs/raw/observability/tracing/exporters/langfuse.mdx +63 -0
  54. package/.docs/raw/observability/tracing/exporters/otel.mdx +30 -19
  55. package/.docs/raw/observability/tracing/exporters/posthog.mdx +20 -0
  56. package/.docs/raw/observability/tracing/overview.mdx +6 -1
  57. package/.docs/raw/reference/observability/tracing/bridges/otel.mdx +27 -0
  58. package/.docs/raw/reference/observability/tracing/exporters/arize.mdx +25 -0
  59. package/.docs/raw/reference/observability/tracing/exporters/langfuse.mdx +43 -0
  60. package/.docs/raw/reference/observability/tracing/exporters/otel.mdx +27 -43
  61. package/.docs/raw/reference/server/create-route.mdx +314 -0
  62. package/.docs/raw/reference/server/express-adapter.mdx +193 -0
  63. package/.docs/raw/reference/server/hono-adapter.mdx +174 -0
  64. package/.docs/raw/reference/server/mastra-server.mdx +316 -0
  65. package/.docs/raw/reference/server/routes.mdx +250 -0
  66. package/.docs/raw/reference/streaming/workflows/timeTravelStream.mdx +170 -0
  67. package/.docs/raw/reference/tools/mcp-client.mdx +54 -1
  68. package/.docs/raw/reference/workflows/run-methods/timeTravel.mdx +310 -0
  69. package/.docs/raw/reference/workflows/run.mdx +14 -0
  70. package/.docs/raw/server-db/custom-adapters.mdx +380 -0
  71. package/.docs/raw/server-db/mastra-server.mdx +11 -32
  72. package/.docs/raw/server-db/server-adapters.mdx +286 -0
  73. package/CHANGELOG.md +8 -0
  74. package/package.json +4 -4
@@ -107,6 +107,35 @@ new LangfuseExporter({
  });
  ```
 
+ ## Prompt Linking
+
+ You can link LLM generations to prompts stored in [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management). This enables version tracking and metrics for your prompts.
+
+ ### Using the Helper (Recommended)
+
+ Use `withLangfusePrompt` with `buildTracingOptions` for the cleanest API:
+
+ ```typescript title="src/agents/support-agent.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { openai } from "@ai-sdk/openai";
+ import { buildTracingOptions } from "@mastra/observability";
+ import { withLangfusePrompt } from "@mastra/langfuse";
+ import { Langfuse } from "langfuse";
+
+ const langfuse = new Langfuse({
+   publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+   secretKey: process.env.LANGFUSE_SECRET_KEY!,
+ });
+
+ // Fetch the prompt from Langfuse Prompt Management
+ const prompt = await langfuse.getPrompt("customer-support");
+
+ export const supportAgent = new Agent({
+   name: "support-agent",
+   instructions: prompt.prompt, // Use the prompt text from Langfuse
+   model: openai("gpt-4o"),
+   defaultGenerateOptions: {
+     tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
  ## Using Tags
 
  Tags help you categorize and filter traces in the Langfuse dashboard. Add tags when executing agents or workflows:
@@ -120,6 +149,39 @@ const result = await agent.generate({
  });
  ```
 
+ The `withLangfusePrompt` helper automatically extracts `name`, `version`, and `id` from the Langfuse prompt object.
+
+ ### Manual Fields
+
+ You can also pass manual fields if you're not using the Langfuse SDK:
+
+ ```typescript
+ const tracingOptions = buildTracingOptions(
+   withLangfusePrompt({ name: "my-prompt", version: 1 }),
+ );
+
+ // Or with just an ID
+ const tracingOptionsById = buildTracingOptions(
+   withLangfusePrompt({ id: "prompt-uuid-12345" }),
+ );
+ ```
+
+ ### Prompt Object Fields
+
+ The prompt object supports these fields:
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `name` | string | The prompt name in Langfuse |
+ | `version` | number | The prompt version number |
+ | `id` | string | The prompt UUID for direct linking |
+
+ You can link prompts using either:
+ - `id` alone (the UUID uniquely identifies a prompt version)
+ - `name` + `version` together
+ - All three fields
+
+ When set on a `MODEL_GENERATION` span, the Langfuse exporter automatically links the generation to the corresponding prompt.
 
  Tags appear in Langfuse's trace view and can be used to filter and search traces. Common use cases include:
 
  - Environment labels: `"production"`, `"staging"`
@@ -131,3 +193,4 @@ Tags appear in Langfuse's trace view and can be used to filter and search traces
 
  - [Tracing Overview](/docs/v1/observability/tracing/overview)
  - [Langfuse Documentation](https://langfuse.com/docs)
+ - [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management)
@@ -198,33 +198,26 @@ new OtelExporter({
 
  ## OpenTelemetry Semantic Conventions
 
- The exporter follows [OpenTelemetry Semantic Conventions for GenAI](https://opentelemetry.io/docs/specs/semconv/gen-ai/), ensuring compatibility with observability platforms:
+ The exporter follows [OpenTelemetry Semantic Conventions for GenAI v1.38.0](https://github.com/open-telemetry/semantic-conventions/tree/v1.38.0/docs/gen-ai), ensuring compatibility with observability platforms:
 
  ### Span Naming
 
- - **LLM Operations**: `chat {model}` or `tool_selection {model}`
- - **Tool Execution**: `tool.execute {tool_name}`
- - **Agent Runs**: `agent.{agent_id}`
- - **Workflow Runs**: `workflow.{workflow_id}`
+ - **LLM Operations**: `chat {model}`
+ - **Tool Execution**: `execute_tool {tool_name}`
+ - **Agent Runs**: `invoke_agent {agent_id}`
+ - **Workflow Runs**: `invoke_workflow {workflow_id}`
 
  ### Key Attributes
 
  - `gen_ai.operation.name` - Operation type (chat, tool.execute, etc.)
- - `gen_ai.system` - AI provider (openai, anthropic, etc.)
+ - `gen_ai.provider.name` - AI provider (openai, anthropic, etc.)
  - `gen_ai.request.model` - Model identifier
+ - `gen_ai.input.messages` - Chat history provided to the model
+ - `gen_ai.output.messages` - Messages returned by the model
  - `gen_ai.usage.input_tokens` - Number of input tokens
  - `gen_ai.usage.output_tokens` - Number of output tokens
  - `gen_ai.request.temperature` - Sampling temperature
- - `gen_ai.response.finish_reasons` - Completion reason
-
- ## Buffering Strategy
-
- The exporter buffers spans until a trace is complete:
-
- 1. Collects all spans for a trace
- 2. Waits 5 seconds after root span completes
- 3. Exports complete trace with preserved parent-child relationships
- 4. Ensures no orphaned spans
+ - `gen_ai.response.finish_reasons` - Completion reasons
 
  ## Protocol Selection Guide
 
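The renamed attributes in the hunk above can be seen together in a minimal sketch. `chatSpanAttributes` is a hypothetical helper for illustration, not a Mastra API, and the token counts and temperature are placeholder values:

```typescript
// Hypothetical sketch: assembling the GenAI attributes listed above for a
// single "chat {model}" span. Attribute names follow the convention; the
// numeric values are illustrative only.
type SpanAttributes = Record<string, string | number | string[]>;

function chatSpanAttributes(model: string, provider: string): SpanAttributes {
  return {
    "gen_ai.operation.name": "chat",
    "gen_ai.provider.name": provider, // formerly gen_ai.system
    "gen_ai.request.model": model,
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 128,
    "gen_ai.request.temperature": 0.7,
    "gen_ai.response.finish_reasons": ["stop"],
  };
}

const attrs = chatSpanAttributes("gpt-4o", "openai");
```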
@@ -264,11 +257,29 @@ Install the suggested package for your provider.
  1. **Wrong protocol package**: Verify you installed the correct exporter for your provider
  2. **Invalid endpoint**: Check endpoint format matches provider requirements
  3. **Authentication failures**: Verify API keys and headers are correct
- 4. **No traces appearing**: Check that traces complete (root span must end)
+
+ ## Using Tags
+
+ Tags help you categorize and filter traces in your observability platform. Add tags when executing agents or workflows:
+
+ ```typescript
+ const result = await agent.generate({
+   messages: [{ role: "user", content: "Hello" }],
+   tracingOptions: {
+     tags: ["production", "experiment-v2", "user-request"],
+   },
+ });
+ ```
+
+ Tags are exported as a JSON string in the `mastra.tags` span attribute for broad backend compatibility. Common use cases include:
+
+ - Environment labels: `"production"`, `"staging"`
+ - Experiment tracking: `"experiment-v1"`, `"control-group"`
+ - Priority levels: `"priority-high"`, `"batch-job"`
 
  ## Related
 
  - [Tracing Overview](/docs/v1/observability/tracing/overview)
- - [OpenTelemetry Bridge](/docs/v1/observability/tracing/bridges/otel) - For bidirectional OTEL context integration
- - [OpenTelemetry GenAI Conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/)
+ - [OpenTelemetry Bridge](/docs/v1/observability/tracing/bridges/otel)
+ - [OpenTelemetry Semantic Conventions for GenAI v1.38.0](https://github.com/open-telemetry/semantic-conventions/tree/v1.38.0/docs/gen-ai)
  - [OTEL Exporter Reference](/reference/v1/observability/tracing/exporters/otel)
@@ -101,6 +101,26 @@ new PosthogExporter({
  });
  ```
 
+ ## Using Tags
+
+ Tags help you categorize and filter traces in PostHog's AI analytics. Add tags when executing agents or workflows:
+
+ ```typescript
+ const result = await agent.generate({
+   messages: [{ role: "user", content: "Hello" }],
+   tracingOptions: {
+     tags: ["production", "experiment-v2", "user-request"],
+   },
+ });
+ ```
+
+ Tags are added as event properties where the tag name is the key and the value is set to `true`. In PostHog's trace view, filter by a tag using the `is set` filter (e.g., "production is set" shows all traces with the production tag). Common use cases include:
+
+ - Environment labels: `"production"`, `"staging"`
+ - Experiment tracking: `"experiment-v1"`, `"control-group"`
+ - Priority levels: `"priority-high"`, `"batch-job"`
+ - User segments: `"beta-user"`, `"enterprise"`
+
  ## Related
 
  - [Tracing Overview](/docs/v1/observability/tracing/overview)
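The tag-to-property mapping the hunk above describes (tag name as key, value `true`) can be sketched as follows. `tagsToEventProperties` is a hypothetical name used only for illustration, not a PostHog or Mastra API:

```typescript
// Sketch of the described mapping: each tag becomes an event property whose
// key is the tag name and whose value is true, which is what makes PostHog's
// "is set" filter work on these events.
function tagsToEventProperties(tags: string[]): Record<string, boolean> {
  const properties: Record<string, boolean> = {};
  for (const tag of tags) {
    properties[tag] = true;
  }
  return properties;
}

const props = tagsToEventProperties(["production", "experiment-v2"]);
// props.production and props["experiment-v2"] are both true.
```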
@@ -492,7 +492,12 @@ const result = await run.start({
  #### How Tags Work
 
  - **Root span only**: Tags are applied only to the root span of a trace (the agent run or workflow run span)
- - **Platform-specific**: Tags appear in Braintrust and Langfuse dashboards for filtering and searching
+ - **Widely supported**: Tags are supported by most exporters for filtering and searching traces:
+   - **Braintrust** - Native `tags` field
+   - **Langfuse** - Native `tags` field on traces
+   - **ArizeExporter** - `tag.tags` OpenInference attribute
+   - **OtelExporter** - `mastra.tags` span attribute
+   - **OtelBridge** - `mastra.tags` span attribute
  - **Combinable with metadata**: You can use both `tags` and `metadata` in the same `tracingOptions`
 
  ```ts showLineNumbers copy
@@ -143,6 +143,33 @@ The OtelBridge requires an active OpenTelemetry SDK to function. The bridge read
 
  See the [OtelBridge Guide](/docs/v1/observability/tracing/bridges/otel#configuration) for complete setup instructions, including how to configure OTEL instrumentation and run your application.
 
+ ## Tags Support
+
+ The OtelBridge supports trace tagging for categorization and filtering. Tags are only applied to root spans and are included as the `mastra.tags` attribute on native OTEL spans.
+
+ ### Usage
+
+ ```typescript
+ const result = await agent.generate({
+   messages: [{ role: "user", content: "Hello" }],
+   tracingOptions: {
+     tags: ["production", "experiment-v2", "user-request"],
+   },
+ });
+ ```
+
+ ### How Tags Are Stored
+
+ Tags are stored as a JSON-stringified array in the `mastra.tags` span attribute:
+
+ ```json
+ {
+   "mastra.tags": "[\"production\",\"experiment-v2\",\"user-request\"]"
+ }
+ ```
+
+ This format ensures compatibility with all OTEL-compatible backends and collectors.
+
  ## Related
 
  - [OtelBridge Guide](/docs/v1/observability/tracing/bridges/otel) - Setup guide with examples
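The `mastra.tags` storage format shown in the hunk above amounts to JSON-stringifying the tag array into a single string attribute. A minimal sketch, with `withMastraTags` as a hypothetical helper name rather than a documented API:

```typescript
// Sketch of the described storage format: tags are serialized to a JSON
// string under the "mastra.tags" key, so backends without array-attribute
// support still receive one plain string value.
function withMastraTags(
  attributes: Record<string, string>,
  tags: string[],
): Record<string, string> {
  return { ...attributes, "mastra.tags": JSON.stringify(tags) };
}

const tagged = withMastraTags({}, ["production", "experiment-v2", "user-request"]);
// tagged["mastra.tags"] === '["production","experiment-v2","user-request"]'
```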
@@ -161,6 +161,31 @@ const exporter = new ArizeExporter({
 
  The ArizeExporter implements [OpenInference Semantic Conventions](https://github.com/Arize-ai/openinference/tree/main/spec) for generative AI applications, providing standardized trace structure across different observability platforms.
 
+ ## Tags Support
+
+ The ArizeExporter supports trace tagging for categorization and filtering. Tags are only applied to root spans and are mapped to the native OpenInference `tag.tags` semantic convention.
+
+ ### Usage
+
+ ```typescript
+ const result = await agent.generate({
+   messages: [{ role: "user", content: "Hello" }],
+   tracingOptions: {
+     tags: ["production", "experiment-v2", "user-request"],
+   },
+ });
+ ```
+
+ ### How Tags Are Stored
+
+ Tags are stored using the OpenInference `tag.tags` attribute:
+
+ ```json
+ {
+   "tag.tags": ["production", "experiment-v2", "user-request"]
+ }
+ ```
+
  ## Related
 
  - [ArizeExporter Documentation](/docs/v1/observability/tracing/exporters/arize)
@@ -117,3 +117,46 @@ const exporter = new LangfuseExporter({
  - `MODEL_GENERATION` spans → Langfuse generations
  - All other spans → Langfuse spans
  - Event spans → Langfuse events
+
+ ## Prompt Linking
+
+ Link LLM generations to [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management) using the `withLangfusePrompt` helper:
+
+ ```typescript
+ import { buildTracingOptions } from "@mastra/observability";
+ import { withLangfusePrompt } from "@mastra/langfuse";
+ import { Langfuse } from "langfuse";
+
+ const langfuse = new Langfuse({
+   publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+   secretKey: process.env.LANGFUSE_SECRET_KEY!,
+ });
+
+ const prompt = await langfuse.getPrompt("customer-support");
+
+ const agent = new Agent({
+   name: "support-agent",
+   instructions: prompt.prompt,
+   model: openai("gpt-4o"),
+   defaultGenerateOptions: {
+     tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
+   },
+ });
+ ```
+
+ ### Helper Functions
+
+ #### `withLangfusePrompt(prompt)`
+
+ Adds Langfuse prompt metadata to tracing options.
+
+ ```typescript
+ // With Langfuse SDK prompt object
+ withLangfusePrompt(prompt)
+
+ // With manual fields
+ withLangfusePrompt({ name: "my-prompt", version: 1 })
+ withLangfusePrompt({ id: "prompt-uuid" })
+ ```
+
+ When `metadata.langfuse.prompt` is set on a `MODEL_GENERATION` span (with either `id` alone, or `name` + `version`), the exporter automatically links the generation to the prompt in Langfuse.
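Assuming the `metadata.langfuse.prompt` shape described in the hunk above, the extraction a `withLangfusePrompt`-style helper performs could be sketched as below. The real helper's exact return shape is not shown in this diff, so treat the structure here as an assumption for illustration only:

```typescript
// Hedged sketch: read name, version, and id from a Langfuse prompt object
// and place them under metadata.langfuse.prompt, the location the exporter
// is documented to inspect on MODEL_GENERATION spans.
interface PromptRef {
  name?: string;
  version?: number;
  id?: string;
}

function langfusePromptMetadata(prompt: PromptRef) {
  const { name, version, id } = prompt;
  return { metadata: { langfuse: { prompt: { name, version, id } } } };
}

// Works with name + version, id alone, or all three fields.
const tracingOptions = langfusePromptMetadata({
  name: "customer-support",
  version: 3,
});
```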
@@ -7,12 +7,6 @@ import PropertiesTable from "@site/src/components/PropertiesTable";
 
  # OtelExporter
 
- :::warning
-
- The OtelExporter is currently **experimental**. APIs and configuration options may change in future releases.
-
- :::
-
  Sends Tracing data to any OpenTelemetry-compatible observability platform using standardized GenAI semantic conventions.
 
  ## Constructor
@@ -317,38 +311,6 @@ const exporter = new OtelExporter({
  });
  ```
 
- ## Span Mapping
-
- The exporter maps Mastra AI spans to OpenTelemetry spans following GenAI semantic conventions:
-
- ### Span Names
-
- - `MODEL_GENERATION` → `chat {model}` or `tool_selection {model}`
- - `TOOL_CALL` → `tool.execute {tool_name}`
- - `AGENT_RUN` → `agent.{agent_id}`
- - `WORKFLOW_RUN` → `workflow.{workflow_id}`
-
- ### Span Kinds
-
- - Root agent/workflow spans → `SERVER`
- - LLM calls → `CLIENT`
- - Tool calls → `INTERNAL` or `CLIENT`
- - Workflow steps → `INTERNAL`
-
- ### Attributes
-
- The exporter maps to standard OTEL GenAI attributes:
-
- | Mastra Attribute | OTEL Attribute |
- | ----------------------------------- | -------------------------------- |
- | `model` | `gen_ai.request.model` |
- | `provider` | `gen_ai.system` |
- | `inputTokens` / `promptTokens` | `gen_ai.usage.input_tokens` |
- | `outputTokens` / `completionTokens` | `gen_ai.usage.output_tokens` |
- | `temperature` | `gen_ai.request.temperature` |
- | `maxOutputTokens` | `gen_ai.request.max_tokens` |
- | `finishReason` | `gen_ai.response.finish_reasons` |
-
  ## Protocol Requirements
 
  Different providers require different OTEL exporter packages:
@@ -360,13 +322,35 @@ Different providers require different OTEL exporter packages:
  | HTTP/JSON | `@opentelemetry/exporter-trace-otlp-http` | Traceloop, Custom |
  | Zipkin | `@opentelemetry/exporter-zipkin` | Zipkin collectors |
 
- ## Parent-Child Relationships
 
- The exporter preserves span hierarchy from Mastra's Tracing:
+ ## Tags Support
 
- - Uses `parentSpanId` directly from Mastra spans
- - Maintains correct nesting for agents, workflows, LLM calls, and tools
- - Exports complete traces with all relationships intact
+ The OtelExporter supports trace tagging for categorization and filtering. Tags are only applied to root spans and are stored as the `mastra.tags` attribute.
+
+ ### Usage
+
+ ```typescript
+ const result = await agent.generate({
+   messages: [{ role: "user", content: "Hello" }],
+   tracingOptions: {
+     tags: ["production", "experiment-v2", "user-request"],
+   },
+ });
+ ```
+
+ ### How Tags Are Stored
+
+ Tags are stored as a JSON-stringified array in the `mastra.tags` span attribute for maximum backend compatibility:
+
+ ```json
+ {
+   "mastra.tags": "[\"production\",\"experiment-v2\",\"user-request\"]"
+ }
+ ```
+
+ :::note
+ While the OpenTelemetry specification supports native array attributes, many backends (Jaeger, Zipkin, Tempo) have limited array support. JSON strings ensure consistent behavior across all observability platforms.
+ :::
 
  ## Related