telnyx-mcp 6.47.0 → 6.48.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -3074,8 +3074,8 @@ const EMBEDDED_METHODS = [
  description: 'Retrieve a list of all AI Assistants configured by the user.',
  stainlessPath: '(resource) ai.assistants > (method) list',
  qualified: 'client.ai.assistants.list',
- response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
- markdown: "## list\n\n`client.ai.assistants.list(): { data: inference_embedding[]; }`\n\n**get** `/ai/assistants`\n\nRetrieve a list of all AI Assistants configured by the user.\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: 
string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: 
number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.list();\n\nconsole.log(assistantsList);\n```",
+ response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
+ markdown: "## list\n\n`client.ai.assistants.list(): { data: inference_embedding[]; }`\n\n**get** `/ai/assistants`\n\nRetrieve a list of all AI Assistants configured by the user.\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: 
string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { 
agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.list();\n\nconsole.log(assistantsList);\n```",
  perLanguage: {
  typescript: {
  method: 'client.ai.assistants.list',
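The hunk above swaps the inline object literals in the `list` response type for named schema refs (`external_llm`, `fallback_config`, `post_conversation_settings`); the shape of the data is unchanged. A minimal TypeScript sketch of consuming that response shape, with the network call replaced by a stand-in payload — the trimmed interfaces, sample values, and `withFallback` name below are illustrative, not the package's full generated types:

```typescript
// Trimmed subset of the 6.48.0 `list` response shape, using the named
// refs introduced in this version (illustrative, not the full schema).
interface ExternalLlm {
  base_url: string;
  model: string;
  authentication_method?: "token" | "certificate";
  llm_api_key_ref?: string;
}

interface FallbackConfig {
  external_llm?: ExternalLlm;
  llm_api_key_ref?: string;
  model?: string;
}

interface Assistant {
  id: string;
  created_at: string;
  instructions: string;
  model: string;
  name: string;
  external_llm?: ExternalLlm;
  fallback_config?: FallbackConfig;
  post_conversation_settings?: { enabled?: boolean };
}

interface AssistantsList {
  data: Assistant[];
}

// Stand-in for `await client.ai.assistants.list()` so the sketch is
// self-contained and needs no API key.
const assistantsList: AssistantsList = {
  data: [
    {
      id: "asst_1",
      created_at: "2024-01-01T00:00:00Z",
      instructions: "Be helpful.",
      model: "meta-llama/Meta-Llama-3.1-70B-Instruct",
      name: "support-bot",
      fallback_config: { model: "meta-llama/Meta-Llama-3.1-8B-Instruct" },
    },
  ],
};

// Collect the names of assistants that have a fallback model configured.
const withFallback = assistantsList.data
  .filter((a) => a.fallback_config?.model !== undefined)
  .map((a) => a.name);

console.log(withFallback);
```

With the real SDK, `assistantsList` would come from `await client.ai.assistants.list()` as shown in the embedded example; the filtering code is unchanged either way.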
@@ -3147,8 +3147,8 @@ const EMBEDDED_METHODS = [
  "voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; };",
  "widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; };",
  ],
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
- markdown: "## create\n\n`client.ai.assistants.create(instructions: string, name: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }, 
tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; 
version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants`\n\nCreate a new AI Assistant.\n\n### Parameters\n\n- `instructions: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `name: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. 
Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statused for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt.\n\nWhen ENABLED + prompt_name set, every assistant create/update pushes\n`instructions` to Langfuse via create_prompt and stores the returned\nversion in prompt_version.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `tags?: string[]`\n Tags associated with the assistant. Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. 
This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for FE widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; 
on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. 
All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. 
Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. 
Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. Larger numbers make the voice faster; smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the 
widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; 
recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; 
model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { 
type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - 
`version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.create({ instructions: 'instructions', name: 'name' });\n\nconsole.log(inferenceEmbedding);\n```",
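The parameter docs above note that dynamic variables (for example in a templated `voice` identifier such as `Telnyx.Ultra.{{voice_id}}`) are resolved at call time from your dynamic variables webhook, and that the webhook response must nest variables under a top-level `dynamic_variables` object rather than returning them flat. A minimal TypeScript sketch of building that response envelope; the variable names `customer_name` and `voice_id` and the helper `buildDynamicVariablesResponse` are illustrative assumptions, not part of the Telnyx SDK:

```typescript
// Sketch: Telnyx POSTs to your `dynamic_variables_webhook_url` at the
// start of a conversation. The JSON body you respond with must wrap
// variables under a top-level `dynamic_variables` key; a flat object
// is ignored and defaults are used instead.
type DynamicVariablesResponse = {
  dynamic_variables: Record<string, string>;
};

// Hypothetical helper that wraps per-call variables in the expected envelope.
function buildDynamicVariablesResponse(
  vars: Record<string, string>,
): DynamicVariablesResponse {
  return { dynamic_variables: vars };
}

const body = buildDynamicVariablesResponse({
  customer_name: 'Jane',
  // Resolved into a templated voice identifier, e.g. `Telnyx.KokoroTTS.{{voice_id}}`.
  voice_id: 'af_heart',
});

console.log(JSON.stringify(body));
// → {"dynamic_variables":{"customer_name":"Jane","voice_id":"af_heart"}}
```

If the webhook does not respond within `dynamic_variables_webhook_timeout_ms`, the call proceeds with the default values configured on the assistant.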
3150
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
3151
+ markdown: "## create\n\n`client.ai.assistants.create(instructions: string, name: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: external_llm_req; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }, tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: 
object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants`\n\nCreate a 
new AI Assistant.\n\n### Parameters\n\n- `instructions: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `name: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. 
Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statuses for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt. When enabled and `prompt_name` is set, every assistant create/update pushes `instructions` to Langfuse via `create_prompt` and stores the returned version in `prompt_version`.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `tags?: string[]`\n Tags associated with the assistant. Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. 
This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for FE widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; 
on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. 
All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. 
Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. 
Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. Larger numbers make the voice faster, smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the 
widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; 
voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: 
string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' 
| 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 
'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.create({ instructions: 'instructions', name: 'name' });\n\nconsole.log(inferenceEmbedding);\n```",
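The `voice_settings.voice` documentation above notes that the voice portion of the identifier supports mustache-style dynamic variables (e.g. `Telnyx.Ultra.{{voice_id}}`) resolved at call time. A minimal sketch of that substitution, assuming simple `{{name}}` placeholders; the `resolveVoice` helper is illustrative and not part of the telnyx SDK:

```typescript
// Illustrative resolver for mustache-style placeholders in a voice identifier.
// Unknown variables are left untouched, mirroring "resolved at call time"
// semantics where missing values would simply not be substituted here.
function resolveVoice(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}

console.log(resolveVoice('Telnyx.Ultra.{{voice_id}}', { voice_id: 'af_heart' }));
// Telnyx.Ultra.af_heart
```

Identifiers without placeholders (e.g. `Telnyx.KokoroTTS.af_heart`) pass through unchanged.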
  perLanguage: {
  typescript: {
  method: 'client.ai.assistants.create',
@@ -3192,8 +3192,8 @@ const EMBEDDED_METHODS = [
  stainlessPath: '(resource) ai.assistants > (method) imports',
  qualified: 'client.ai.assistants.imports',
  params: ['api_key_ref: string;', "provider: 'elevenlabs' | 'vapi' | 'retell';", 'import_ids?: string[];'],
- response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
- markdown: "## imports\n\n`client.ai.assistants.imports(api_key_ref: string, provider: 'elevenlabs' | 'vapi' | 'retell', import_ids?: string[]): { data: inference_embedding[]; }`\n\n**post** `/ai/assistants/import`\n\nImport assistants from external providers. Any assistant that has already been imported will be overwritten with its latest version from the importing provider.\n\n### Parameters\n\n- `api_key_ref: string`\n Integration secret pointer that refers to the API key for the external provider. This should be an identifier for an integration secret created via /v2/integration_secrets.\n\n- `provider: 'elevenlabs' | 'vapi' | 'retell'`\n The external provider to import assistants from.\n\n- `import_ids?: string[]`\n Optional list of assistant IDs to import from the external provider. If not provided, all assistants will be imported.\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; 
dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: 
object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.imports({ api_key_ref: 'api_key_ref', provider: 'elevenlabs' });\n\nconsole.log(assistantsList);\n```",
+ response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
+ markdown: "## imports\n\n`client.ai.assistants.imports(api_key_ref: string, provider: 'elevenlabs' | 'vapi' | 'retell', import_ids?: string[]): { data: inference_embedding[]; }`\n\n**post** `/ai/assistants/import`\n\nImport assistants from external providers. Any assistant that has already been imported will be overwritten with its latest version from the importing provider.\n\n### Parameters\n\n- `api_key_ref: string`\n Integration secret pointer that refers to the API key for the external provider. This should be an identifier for an integration secret created via /v2/integration_secrets.\n\n- `provider: 'elevenlabs' | 'vapi' | 'retell'`\n The external provider to import assistants from.\n\n- `import_ids?: string[]`\n Optional list of assistant IDs to import from the external provider. If not provided, all assistants will be imported.\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: 
string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 
'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.imports({ api_key_ref: 'api_key_ref', provider: 'elevenlabs' });\n\nconsole.log(assistantsList);\n```",
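The `imports` parameters documented above can be modeled as a small typed payload. This is a sketch under the documented field names (`api_key_ref`, `provider`, optional `import_ids`); the `buildImportParams` helper itself is hypothetical, not part of the telnyx SDK:

```typescript
// Illustrative request-body builder for POST /ai/assistants/import.
type Provider = 'elevenlabs' | 'vapi' | 'retell';

interface ImportParams {
  api_key_ref: string;   // identifier of a secret created via /v2/integration_secrets
  provider: Provider;    // external provider to import assistants from
  import_ids?: string[]; // omit to import every assistant from the provider
}

function buildImportParams(apiKeyRef: string, provider: Provider, importIds?: string[]): ImportParams {
  const params: ImportParams = { api_key_ref: apiKeyRef, provider };
  // Only set import_ids when explicitly given, so "import all" stays the default.
  if (importIds !== undefined) params.import_ids = importIds;
  return params;
}

console.log(buildImportParams('my_secret_identifier', 'elevenlabs'));
```

Leaving `import_ids` off the payload (rather than sending `undefined`) keeps the request body minimal and matches the documented "if not provided, all assistants will be imported" behavior.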
  perLanguage: {
  typescript: {
  method: 'client.ai.assistants.imports',
@@ -3288,8 +3288,8 @@ const EMBEDDED_METHODS = [
  'from?: string;',
  'to?: string;',
  ],
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
- markdown: "## retrieve\n\n`client.ai.assistants.retrieve(assistant_id: string, call_control_id?: string, fetch_dynamic_variables_from_webhook?: boolean, from?: string, to?: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**get** `/ai/assistants/{assistant_id}`\n\nRetrieve an AI Assistant configuration by `assistant_id`.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `call_control_id?: string`\n\n- `fetch_dynamic_variables_from_webhook?: boolean`\n\n- `from?: string`\n\n- `to?: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; 
authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: 
transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; 
on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: 
string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; 
default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.retrieve('assistant_id');\n\nconsole.log(inferenceEmbedding);\n```",
3291
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
3292
+ markdown: "## retrieve\n\n`client.ai.assistants.retrieve(assistant_id: string, call_control_id?: string, fetch_dynamic_variables_from_webhook?: boolean, from?: string, to?: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**get** `/ai/assistants/{assistant_id}`\n\nRetrieve an AI Assistant configuration by `assistant_id`.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `call_control_id?: string`\n\n- `fetch_dynamic_variables_from_webhook?: boolean`\n\n- `from?: string`\n\n- `to?: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: 
external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; 
background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: 
string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; 
warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; 
speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.retrieve('assistant_id');\n\nconsole.log(inferenceEmbedding);\n```",
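The `retrieve` signature above accepts `fetch_dynamic_variables_from_webhook`, which only helps if the configured webhook answers in the shape the platform expects: per the dynamic-variables webhook notes in these docs, variables must be nested under a top-level `dynamic_variables` key, and a flat object is ignored in favor of the defaults. A minimal TypeScript sketch of a well-formed response body — the type name, helper function, and variable names (`customer_name`, `account_tier`) are illustrative, not part of the SDK:

```typescript
// Sketch (not part of the SDK): the body a dynamic-variables webhook should
// return. Variables must sit under a top-level `dynamic_variables` key;
// returning them flat causes the call to fall back to default values.
type DynamicVariablesWebhookResponse = {
  dynamic_variables: Record<string, string | number | boolean>;
};

// Hypothetical helper that guarantees the required nesting.
function buildWebhookResponse(
  vars: Record<string, string | number | boolean>,
): DynamicVariablesWebhookResponse {
  return { dynamic_variables: vars };
}

// "customer_name" and "account_tier" are made-up variable names.
const body = buildWebhookResponse({ customer_name: 'Jane', account_tier: 'gold' });
console.log(JSON.stringify(body));
// → {"dynamic_variables":{"customer_name":"Jane","account_tier":"gold"}}
```

Serving this shape from the webhook endpoint (within the configured `dynamic_variables_webhook_timeout_ms`) is what lets templated `instructions` and `greeting` resolve with live values instead of their defaults.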
3293
3293
  perLanguage: {
3294
3294
  typescript: {
3295
3295
  method: 'client.ai.assistants.retrieve',
@@ -3364,8 +3364,8 @@ const EMBEDDED_METHODS = [
3364
3364
  "voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; };",
3365
3365
  "widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; };",
3366
3366
  ],
3367
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
3368
- markdown: "## update\n\n`client.ai.assistants.update(assistant_id: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, instructions?: string, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, name?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, promote_to_main?: boolean, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; 
user_idle_timeout_secs?: number; voicemail_detection?: object; }, tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, version_name?: string, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: 
assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}`\n\nUpdate an AI Assistant's attributes.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `instructions?: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. 
The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non-turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non-turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statuses for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `name?: string`\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt.\n\nWhen enabled and `prompt_name` is set, every assistant create/update pushes `instructions` to Langfuse via `create_prompt` and stores the returned version in `prompt_version`.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `promote_to_main?: boolean`\n Indicates whether the assistant should be promoted to the main version. Defaults to true.\n\n- `tags?: string[]`\n Tags associated with the assistant. 
Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for FE widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. 
New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. 
Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. 
Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `version_name?: string`\n Human-readable name for the assistant version.\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. 
Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. 
Larger numbers make the voice faster, smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; 
}; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: 
string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: 
{ on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | 
string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; 
preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.update('assistant_id');\n\nconsole.log(inferenceEmbedding);\n```",
3367
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
3368
+ markdown: "## update\n\n`client.ai.assistants.update(assistant_id: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: external_llm_req; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, instructions?: string, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, name?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, promote_to_main?: boolean, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }, tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; 
} | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, version_name?: string, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; 
widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}`\n\nUpdate an AI Assistant's attributes.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `instructions?: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. 
The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statuses for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `name?: string`\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt.\n\nWhen set to `enabled` and `prompt_name` is set, every assistant create/update pushes `instructions` to Langfuse via `create_prompt` and stores the returned version in `prompt_version`.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `promote_to_main?: boolean`\n Indicates whether the assistant should be promoted to the main version. Defaults to true.\n\n- `tags?: string[]`\n Tags associated with the assistant. 
Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for FE widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. 
New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. 
Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. 
Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `version_name?: string`\n Human-readable name for the assistant version.\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. 
Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. 
Larger numbers make the voice faster, smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; 
}; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { 
voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - 
`mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; 
on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; 
report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.update('assistant_id');\n\nconsole.log(inferenceEmbedding);\n```",
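The documented `voice_settings.voice_speed` range ([0.25, 2.0], default 1.0) can be enforced client-side before sending an update. The sketch below is illustrative only, assuming the documented field shapes: `clampVoiceSpeed` is a hypothetical helper, not part of the telnyx SDK, and the voice identifier merely follows the documented `Telnyx.<model_id>.<voice_id>` form.

```typescript
// Illustrative sketch only: `clampVoiceSpeed` is a hypothetical helper,
// not part of the telnyx SDK. It enforces the documented voice_speed
// range [0.25, 2.0], where 1.0 is the default speed.
function clampVoiceSpeed(speed: number): number {
  const MIN_SPEED = 0.25;
  const MAX_SPEED = 2.0;
  return Math.min(MAX_SPEED, Math.max(MIN_SPEED, speed));
}

// Example payload shaped after the documented voice_settings fields;
// the values here are placeholders, not real resources.
const voiceSettings = {
  voice: 'Telnyx.KokoroTTS.af_heart',
  voice_speed: clampVoiceSpeed(2.5), // out-of-range input is clamped to 2.0
};

console.log(voiceSettings);
```

Under the documented schema, such an object would be passed as the `voice_settings` field of an assistant update request; the exact SDK parameter types may differ.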
3369
3369
  perLanguage: {
3370
3370
  typescript: {
3371
3371
  method: 'client.ai.assistants.update',
@@ -3454,8 +3454,8 @@ const EMBEDDED_METHODS = [
3454
3454
  stainlessPath: '(resource) ai.assistants > (method) clone',
3455
3455
  qualified: 'client.ai.assistants.clone',
3456
3456
  params: ['assistant_id: string;'],
3457
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
3458
- markdown: "## clone\n\n`client.ai.assistants.clone(assistant_id: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/clone`\n\nClone an existing assistant, excluding telephony and messaging settings.\n\n### Parameters\n\n- `assistant_id: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; 
greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: 
boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { 
conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: 
string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 
'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistants = await client.ai.assistants.list();\n\nconsole.log(assistants);\n```",
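The `list` response above wraps assistants in a `data` array. A minimal sketch of working with that shape offline — the interface is a hand-trimmed slice of the documented response, and the mock object stands in for a live `await client.ai.assistants.list()` call:

```typescript
// Assumed: a minimal slice of the documented list-response shape,
// not the full SDK type.
interface AssistantSummary {
  id: string;
  name: string;
  enabled_features?: ('telephony' | 'messaging')[];
}

interface ListResponse {
  data: AssistantSummary[];
}

// Keep only assistants that can take phone calls.
function telephonyAssistants(res: ListResponse): AssistantSummary[] {
  return res.data.filter((a) => a.enabled_features?.includes('telephony'));
}

// Mock response standing in for `await client.ai.assistants.list()`.
const mock: ListResponse = {
  data: [
    { id: 'a1', name: 'Voice bot', enabled_features: ['telephony'] },
    { id: 'a2', name: 'SMS bot', enabled_features: ['messaging'] },
  ],
};

console.log(telephonyAssistants(mock).map((a) => a.id)); // [ 'a1' ]
```

The optional-chaining guard matters because `enabled_features` is optional in the documented shape; an assistant without it is simply filtered out.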
3457
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
3458
+ markdown: "## clone\n\n`client.ai.assistants.clone(assistant_id: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/clone`\n\nClone an existing assistant, excluding telephony and messaging settings.\n\n### Parameters\n\n- `assistant_id: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: 
string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; 
voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: 
string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | 
string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding 
= await client.ai.assistants.clone('assistant_id');\n\nconsole.log(inferenceEmbedding);\n```",
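Per the description above, `clone` copies an assistant while excluding telephony and messaging settings. A local illustration of that exclusion semantics — this is not SDK behavior (the real work happens server-side via `POST /ai/assistants/{assistant_id}/clone`), and the `AssistantConfig` type and `localCloneView` helper are hypothetical:

```typescript
// Assumed, hand-trimmed config shape for illustration only.
interface AssistantConfig {
  id: string;
  name: string;
  instructions: string;
  telephony_settings?: object;
  messaging_settings?: object;
}

// Mimics the documented clone semantics locally: telephony and
// messaging settings are dropped, everything else is copied.
function localCloneView(src: AssistantConfig, newId: string): AssistantConfig {
  const { telephony_settings, messaging_settings, ...rest } = src;
  return { ...rest, id: newId };
}

const original: AssistantConfig = {
  id: 'asst_1',
  name: 'Support agent',
  instructions: 'Be helpful.',
  telephony_settings: { time_limit_secs: 600 },
};

const copy = localCloneView(original, 'asst_2');
console.log(copy.telephony_settings === undefined); // true
console.log(copy.instructions); // Be helpful.
```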
3459
3459
  perLanguage: {
3460
3460
  typescript: {
3461
3461
  method: 'client.ai.assistants.clone',
@@ -4634,8 +4634,8 @@ const EMBEDDED_METHODS = [
4634
4634
  stainlessPath: '(resource) ai.assistants.versions > (method) list',
4635
4635
  qualified: 'client.ai.assistants.versions.list',
4636
4636
  params: ['assistant_id: string;'],
4637
- response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
4638
- markdown: "## list\n\n`client.ai.assistants.versions.list(assistant_id: string): { data: inference_embedding[]; }`\n\n**get** `/ai/assistants/{assistant_id}/versions`\n\nRetrieves all versions of a specific assistant with complete configuration and metadata\n\n### Parameters\n\n- `assistant_id: string`\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; 
token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; 
api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.versions.list('assistant_id');\n\nconsole.log(assistantsList);\n```",
4637
+ response: '{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }',
4638
+ markdown: "## list\n\n`client.ai.assistants.versions.list(assistant_id: string): { data: inference_embedding[]; }`\n\n**get** `/ai/assistants/{assistant_id}/versions`\n\nRetrieves all versions of a specific assistant with complete configuration and metadata\n\n### Parameters\n\n- `assistant_id: string`\n\n### Returns\n\n- `{ data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }[]; }`\n\n - `data: { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; 
import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; 
speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }[]`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst assistantsList = await client.ai.assistants.versions.list('assistant_id');\n\nconsole.log(assistantsList);\n```",
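Since `versions.list` returns every version of an assistant, a common follow-up is picking the newest one. A sketch assuming the documented optional `version_created_at` field is an ISO-8601 timestamp (so lexicographic order matches chronological order); the `latestVersion` helper and sample data are illustrative, not part of the SDK:

```typescript
// Assumed, hand-trimmed slice of the documented version shape.
interface AssistantVersion {
  version_id?: string;
  version_created_at?: string; // ISO-8601, so string order == time order
}

// Return the newest version; versions without a timestamp sort as oldest.
function latestVersion(
  versions: AssistantVersion[],
): AssistantVersion | undefined {
  return [...versions].sort((a, b) =>
    (b.version_created_at ?? '').localeCompare(a.version_created_at ?? ''),
  )[0];
}

// Sample data standing in for `(await client.ai.assistants.versions.list(id)).data`.
const versions: AssistantVersion[] = [
  { version_id: 'v1', version_created_at: '2024-01-01T00:00:00Z' },
  { version_id: 'v2', version_created_at: '2024-06-01T00:00:00Z' },
];

console.log(latestVersion(versions)?.version_id); // v2
```

Sorting a spread copy (`[...versions]`) leaves the original response array untouched.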
4639
4639
  perLanguage: {
4640
4640
  typescript: {
4641
4641
  method: 'client.ai.assistants.versions.list',
@@ -4723,8 +4723,8 @@ const EMBEDDED_METHODS = [
4723
4723
  stainlessPath: '(resource) ai.assistants.versions > (method) retrieve',
4724
4724
  qualified: 'client.ai.assistants.versions.retrieve',
4725
4725
  params: ['assistant_id: string;', 'version_id: string;', 'include_mcp_servers?: boolean;'],
4726
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
4727
- markdown: "## retrieve\n\n`client.ai.assistants.versions.retrieve(assistant_id: string, version_id: string, include_mcp_servers?: boolean): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**get** `/ai/assistants/{assistant_id}/versions/{version_id}`\n\nRetrieves a specific version of an assistant by assistant_id and version_id\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n- `include_mcp_servers?: boolean`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: 
string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; 
version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - 
`llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 
'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 
'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.retrieve('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
+ markdown: "## retrieve\n\n`client.ai.assistants.versions.retrieve(assistant_id: string, version_id: string, include_mcp_servers?: boolean): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**get** `/ai/assistants/{assistant_id}/versions/{version_id}`\n\nRetrieves a specific version of an assistant by assistant_id and version_id\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n- `include_mcp_servers?: boolean`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; 
greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: 
boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { 
conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: 
string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 
'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.retrieve('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
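The `tools` array documented above is a discriminated union keyed on `type` (`'webhook'`, `'retrieval'`, `'handoff'`, `'hangup'`, `'transfer'`, `'invite'`, `'refer'`, `'send_dtmf'`, `'send_message'`, `'skip_turn'`). A minimal sketch of narrowing that union client-side — the local `AssistantTool` type is an illustrative subset for the example, not the SDK's exported type:

```typescript
// Illustrative subset of the documented `tools` union; the real SDK
// response carries richer fields for each variant.
type AssistantTool =
  | { type: 'webhook'; webhook: { name: string; url: string } }
  | { type: 'handoff'; handoff: { ai_assistants: { id: string; name: string }[] } }
  | { type: 'transfer'; transfer: { from: string } }
  | { type: 'hangup'; hangup: object };

// Narrow by the `type` discriminant to get a correctly typed subset.
function toolsOfType<T extends AssistantTool['type']>(
  tools: AssistantTool[],
  type: T,
): Extract<AssistantTool, { type: T }>[] {
  return tools.filter(
    (t): t is Extract<AssistantTool, { type: T }> => t.type === type,
  );
}

const tools: AssistantTool[] = [
  { type: 'webhook', webhook: { name: 'crm_lookup', url: 'https://example.com/hook' } },
  { type: 'hangup', hangup: {} },
];

const webhooks = toolsOfType(tools, 'webhook');
```

The same pattern applies to any variant in the union; TypeScript narrows the element type from the `type` literal alone.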
  perLanguage: {
  typescript: {
  method: 'client.ai.assistants.versions.retrieve',
@@ -4799,8 +4799,8 @@ const EMBEDDED_METHODS = [
  "voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; };",
  "widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; };",
  ],
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
- markdown: "## update\n\n`client.ai.assistants.versions.update(assistant_id: string, version_id: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, instructions?: string, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, name?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; 
user_idle_timeout_secs?: number; voicemail_detection?: object; }, tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, version_name?: string, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: 
assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/versions/{version_id}`\n\nUpdates the configuration of a specific assistant version. Cannot update the main version\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `instructions?: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. 
The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statuses for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `name?: string`\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt.\n\nWhen ENABLED + prompt_name set, every assistant create/update pushes\n`instructions` to Langfuse via create_prompt and stores the returned\nversion in prompt_version.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `tags?: string[]`\n Tags associated with the assistant. Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. 
This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for FE widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; 
on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. 
All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `version_name?: string`\n Human-readable name for the assistant version.\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. 
Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. 
Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. Larger numbers make the voice faster, smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the 
widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; 
recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; 
model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { 
type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - 
`version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.update('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
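The `voice_settings.voice` identifier format documented above (`Telnyx.<model_id>.<voice_id>`, optionally containing mustache-style dynamic variables such as `Telnyx.Ultra.{{voice_id}}`) can be illustrated with a small sketch. The helper below is hypothetical and not part of the Telnyx SDK; it only shows how a mustache placeholder in the voice identifier would be substituted from a dynamic-variables map at call time:

```typescript
// Hedged sketch: substitute mustache-style dynamic variables in a Telnyx
// voice identifier like "Telnyx.Ultra.{{voice_id}}". The function name and
// error behavior are illustrative assumptions, not SDK behavior.
function resolveVoiceId(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match: string, name: string) => {
    const value = vars[name];
    if (value === undefined) {
      // Mirrors the docs' note that unresolved variables fall back elsewhere;
      // here we simply fail loudly for illustration.
      throw new Error(`missing dynamic variable: ${name}`);
    }
    return value;
  });
}

// e.g. resolveVoiceId('Telnyx.Ultra.{{voice_id}}', { voice_id: 'af_heart' })
// yields the fully-qualified identifier 'Telnyx.Ultra.af_heart'.
```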
4802
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
4803
+ markdown: "## update\n\n`client.ai.assistants.versions.update(assistant_id: string, version_id: string, description?: string, dynamic_variables?: object, dynamic_variables_webhook_timeout_ms?: number, dynamic_variables_webhook_url?: string, enabled_features?: 'telephony' | 'messaging'[], external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }, fallback_config?: { external_llm?: external_llm_req; llm_api_key_ref?: string; model?: string; }, greeting?: string, insight_settings?: { insight_group_id?: string; }, instructions?: string, integrations?: { integration_id: string; allowed_list?: string[]; }[], interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }, llm_api_key_ref?: string, mcp_servers?: { id: string; allowed_tools?: string[]; }[], messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }, model?: string, name?: string, observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }, post_conversation_settings?: { enabled?: boolean; }, privacy_settings?: { data_retention?: boolean; }, tags?: string[], telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }, tool_ids?: string[], tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 
'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[], transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }, version_name?: string, voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }, widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: 
voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/versions/{version_id}`\n\nUpdates the configuration of a specific assistant version. Cannot update the main version.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n- `description?: string`\n\n- `dynamic_variables?: object`\n Map of dynamic variables and their default values\n\n- `dynamic_variables_webhook_timeout_ms?: number`\n Timeout in milliseconds for the dynamic variables webhook. Must be between 1 and 10000 ms. If the webhook does not respond within this timeout, the call proceeds with default values. See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).\n\n- `dynamic_variables_webhook_url?: string`\n If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this URL at the start of the conversation to resolve dynamic variables. **Gotcha:** the webhook response must wrap variables under a top-level `dynamic_variables` object, e.g. `{\"dynamic_variables\": {\"customer_name\": \"Jane\"}}`. Returning a flat object will be ignored and variables will fall back to their defaults. 
See the [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) for the full request/response format and timeout behavior.\n\n- `enabled_features?: 'telephony' | 'messaging'[]`\n\n- `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `base_url: string`\n Base URL for the external LLM endpoint.\n - `model: string`\n Model identifier to use with the external LLM endpoint.\n - `authentication_method?: 'token' | 'certificate'`\n Authentication method used when connecting to the external LLM endpoint.\n - `certificate_ref?: string`\n Integration secret identifier for the client certificate used with certificate authentication.\n - `forward_metadata?: boolean`\n When `true`, Telnyx forwards the assistant's dynamic variables to the external LLM endpoint as a top-level `extra_metadata` object on the chat completion request body. Defaults to `false`. Example payload sent to the external endpoint: `{\"extra_metadata\": {\"customer_name\": \"Jane\", \"account_id\": \"acct_789\", \"telnyx_agent_target\": \"+13125550100\", \"telnyx_end_user_target\": \"+13125550123\"}}`. 
Distinct from OpenAI's native `metadata` field, which has its own size and type limits.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the external LLM API key.\n - `token_retrieval_url?: string`\n URL used to retrieve an access token when certificate authentication is enabled.\n\n- `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `llm_api_key_ref?: string`\n Integration secret identifier for the fallback model API key.\n - `model?: string`\n Fallback Telnyx-hosted model to use when the primary LLM provider is unavailable.\n\n- `greeting?: string`\n Text that the assistant will use to start the conversation. This may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). Use an empty string to have the assistant wait for the user to speak first. Use the special value `<assistant-speaks-first-with-model-generated-message>` to have the assistant generate the greeting based on the system instructions.\n\n- `insight_settings?: { insight_group_id?: string; }`\n - `insight_group_id?: string`\n Reference to an Insight Group. Insights in this group will be run automatically for all the assistant's conversations.\n\n- `instructions?: string`\n System instructions for the assistant. These may be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)\n\n- `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n Connected integrations attached to the assistant. 
The catalog of available integrations is at `/ai/integrations`; the user's connected integrations are at `/ai/integrations/connections`. Each item references a catalog integration by `integration_id`.\n\n- `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n Settings for interruptions and how the assistant decides the user has finished speaking. These timings are most relevant when using non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn behavior is controlled by the transcription end-of-turn settings under `transcription.settings` (`eot_threshold`, `eot_timeout_ms`, `eager_eot_threshold`).\n - `enable?: boolean`\n Whether users can interrupt the assistant while it is speaking.\n - `start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }`\n Controls when the assistant starts speaking after the user stops. These thresholds primarily apply to non turn-taking transcription models. For turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the transcription end-of-turn settings under `transcription.settings` instead.\n\n- `llm_api_key_ref?: string`\n This is only needed when using third-party inference providers selected by `model`. The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your LLM provider's API key. For bring-your-own endpoint authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans are unlikely to work with this integration.\n\n- `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n MCP servers attached to the assistant. 
Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.\n\n- `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `conversation_inactivity_minutes?: number`\n If more than this many minutes have passed since the last message, the assistant will start a new conversation instead of continuing the existing one.\n - `default_messaging_profile_id?: string`\n Default Messaging Profile used for messaging exchanges with your assistant. This will be created automatically on assistant creation.\n - `delivery_status_webhook_url?: string`\n The URL where webhooks related to delivery statuses for assistant messages will be sent.\n\n- `model?: string`\n ID of the model to use when `external_llm` is not set. You can use the [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models) to see available models. If `external_llm` is provided, the assistant uses `external_llm` instead of this field. If neither `model` nor `external_llm` is provided, Telnyx applies the default model.\n\n- `name?: string`\n\n- `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `host?: string`\n - `prompt_label?: string`\n - `prompt_name?: string`\n - `prompt_sync?: 'enabled' | 'disabled'`\n Whether to auto-publish the assistant's instructions as a Langfuse prompt.\n\nWhen enabled and `prompt_name` is set, every assistant create/update pushes `instructions` to Langfuse via `create_prompt` and stores the returned version in `prompt_version`.\n - `prompt_version?: number`\n - `public_key_ref?: string`\n - `secret_key_ref?: string`\n - `status?: 'enabled' | 'disabled'`\n\n- `post_conversation_settings?: { enabled?: boolean; }`\n Configuration for post-conversation processing. 
When enabled, the assistant receives one additional LLM turn after the conversation ends, allowing it to execute tool calls such as logging to a CRM or sending a summary. The assistant can execute multiple parallel or sequential tools during this phase. Telephony-control tools (e.g. hangup, transfer) are unavailable post-conversation. Beta feature.\n - `enabled?: boolean`\n Whether post-conversation processing is enabled. When true, the assistant will be invoked after the conversation ends to perform any final tool calls. Defaults to false.\n\n- `privacy_settings?: { data_retention?: boolean; }`\n - `data_retention?: boolean`\n If true, conversation history and insights will be stored. If false, they will not be stored. This in‑tool toggle governs solely the retention of conversation history and insights via the AI assistant. It has no effect on any separate recording, transcription, or storage configuration that you have set at the account, number, or application level. All such external settings remain in force regardless of your selection here.\n\n- `tags?: string[]`\n Tags associated with the assistant. Tags can also be managed with the assistant tag endpoints.\n\n- `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `default_texml_app_id?: string`\n Default Texml App used for voice calls with your assistant. 
This will be created automatically on assistant creation.\n - `noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'`\n The noise suppression engine to use. Use 'disabled' to turn off noise suppression.\n - `noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }`\n Configuration for noise suppression. Only applicable when noise_suppression is 'deepfilternet'.\n - `recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }`\n Configuration for call recording format and channel settings.\n - `supports_unauthenticated_web_calls?: boolean`\n When enabled, allows users to interact with your AI assistant directly from your website without requiring authentication. This is required for front-end widgets that work with assistants that have telephony enabled.\n - `time_limit_secs?: number`\n Maximum duration in seconds for the AI assistant to participate on the call. When this limit is reached the assistant will be stopped. This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `user_idle_reply_secs?: number`\n Duration in seconds of end user silence before the assistant checks in on the user. When this limit is reached the assistant will prompt the user to respond. This is distinct from user_idle_timeout_secs which stops the assistant entirely.\n - `user_idle_timeout_secs?: number`\n Maximum duration in seconds of end user silence on the call. When this limit is reached the assistant will be stopped. 
This limit does not apply to portions of a call without an active assistant (for instance, a call transferred to a human representative).\n - `voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: { message?: string; prompt?: string; type?: 'prompt' | 'message'; }; }; }`\n Configuration for voicemail detection (AMD - Answering Machine Detection) on outgoing calls. These settings only apply if AMD is enabled on the Dial command. See [TeXML Dial documentation](https://developers.telnyx.com/api-reference/texml-rest-commands/initiate-an-outbound-call) for enabling AMD. Recommended settings: MachineDetection=Enable, AsyncAmd=true, DetectionMode=Premium.\n\n- `tool_ids?: string[]`\n IDs of shared tools to attach to the assistant. New integrations should prefer `tool_ids` over inline `tools`.\n\n- `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; 
on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n Deprecated for new integrations. Inline tool definitions available to the assistant. Prefer `tool_ids` to attach shared tools created with the AI Tools endpoints.\n\n- `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `api_key_ref?: string`\n Integration secret identifier for the transcription provider API key. Currently used for Azure transcription regions that require a customer-provided API key.\n - `language?: string`\n The language of the audio to be transcribed. If not set, or if set to `auto`, supported models will automatically detect the language. For `deepgram/flux`, supported values are: `auto` (Telnyx language detection controls the language hint), `multi` (no language hint), and language-specific hints `en`, `es`, `fr`, `de`, `hi`, `ru`, `pt`, `ja`, `it`, and `nl`.\n - `model?: string`\n The speech to text model to be used by the voice assistant. 
All Deepgram models are run on-premise.\n\n- `deepgram/flux` is optimized for turn-taking with multilingual language hints.\n- `deepgram/nova-3` is multilingual with automatic language detection.\n- `deepgram/nova-2` is Deepgram's previous-generation multilingual model.\n- `azure/fast` is a multilingual Azure transcription model.\n- `assemblyai/universal-streaming` is a multilingual streaming model with configurable turn detection.\n- `xai/grok-stt` is a multilingual Grok STT model.\n - `region?: string`\n Region on third party cloud providers (currently Azure) if using one of their models. Some regions require `api_key_ref`.\n - `settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }`\n\n- `version_name?: string`\n Human-readable name for the assistant version.\n\n- `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `voice: string`\n The voice to be used by the voice assistant. Check the full list of [available voices](https://developers.telnyx.com/docs/tts-stt/tts-available-voices) via our voices API.\nTo use ElevenLabs, you must reference your ElevenLabs API key as an integration secret under the `api_key_ref` field. See [integration secrets documentation](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) for details. For Telnyx voices, use `Telnyx.<model_id>.<voice_id>` (e.g. 
Telnyx.KokoroTTS.af_heart).\nThe voice portion of the identifier supports [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables) using mustache syntax (e.g. `Telnyx.Ultra.{{voice_id}}`). The variable is resolved at call time from your dynamic variables webhook, allowing you to select the voice dynamically per call.\n - `api_key_ref?: string`\n The `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret) that refers to your ElevenLabs API key. Warning: Free plans are unlikely to work with this integration.\n - `background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }`\n Optional background audio to play on the call. Use a predefined media bed, or supply a looped MP3 URL. If a media URL is chosen in the portal, customers can preview it before saving.\n - `expressive_mode?: boolean`\n Enables emotionally expressive speech using SSML emotion tags. When enabled, the assistant uses audio tags like angry, excited, content, and sad to add emotional nuance. Only supported for Telnyx Ultra voices.\n - `language_boost?: string`\n Enhances recognition for specific languages and dialects during MiniMax TTS synthesis. Default is null (no boost). Set to 'auto' for automatic language detection. Only applicable when using MiniMax voices.\n - `similarity_boost?: number`\n Determines how closely the AI should adhere to the original voice when attempting to replicate it. Only applicable when using ElevenLabs.\n - `speed?: number`\n Adjusts speech velocity. 1.0 is default speed; values less than 1.0 slow speech; values greater than 1.0 accelerate it. Only applicable when using ElevenLabs.\n - `style?: number`\n Determines the style exaggeration of the voice. Amplifies speaker style but consumes additional resources when set above 0. 
Only applicable when using ElevenLabs.\n - `temperature?: number`\n Determines how stable the voice is and the randomness between each generation. Lower values create a broader emotional range; higher values produce more consistent, monotonous output. Only applicable when using ElevenLabs.\n - `use_speaker_boost?: boolean`\n Amplifies similarity to the original speaker voice. Increases computational load and latency slightly. Only applicable when using ElevenLabs.\n - `voice_speed?: number`\n The speed of the voice in the range [0.25, 2.0]. 1.0 is default speed. Larger numbers make the voice faster, smaller numbers make it slower. This is only applicable for Telnyx Natural voices.\n\n- `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n Configuration settings for the assistant's web widget.\n - `agent_thinking_text?: string`\n Text displayed while the agent is processing.\n - `audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }`\n - `default_state?: 'expanded' | 'collapsed'`\n The default state of the widget.\n - `give_feedback_url?: string`\n URL for users to give feedback.\n - `logo_icon_url?: string`\n URL to a custom logo icon for the widget.\n - `position?: 'fixed' | 'static'`\n The positioning style for the widget.\n - `report_issue_url?: string`\n URL for users to report issues.\n - `speak_to_interrupt_text?: string`\n Text prompting users to speak to interrupt.\n - `start_call_text?: string`\n Custom text displayed on the start call button.\n - `theme?: 'light' | 'dark'`\n The visual theme for the 
widget.\n - `view_history_url?: string`\n URL to view conversation history.\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; 
voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: 
string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' 
| 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 
'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.update('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
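The update payload documented above nests many optional objects with numeric constraints, for example `voice_speed` must fall in [0.25, 2.0] for Telnyx Natural voices. A minimal sketch of validating such a payload locally before sending it; the `VoiceSettings` interface is a hand-written subset of the documented shape, and the `validateVoiceSettings` helper is an illustrative assumption, not part of the telnyx SDK:

```typescript
// Hand-written subset of the documented voice_settings shape (illustrative, not SDK types).
interface VoiceSettings {
  voice: string;        // e.g. 'Telnyx.KokoroTTS.af_heart'
  voice_speed?: number; // Telnyx Natural voices: documented range [0.25, 2.0]
}

// Hypothetical pre-flight check mirroring the documented constraints.
function validateVoiceSettings(v: VoiceSettings): string[] {
  const errors: string[] = [];
  if (v.voice.trim() === '') {
    errors.push('voice must be a non-empty voice identifier');
  }
  if (v.voice_speed !== undefined && (v.voice_speed < 0.25 || v.voice_speed > 2.0)) {
    errors.push('voice_speed must be within [0.25, 2.0]');
  }
  return errors;
}

console.log(validateVoiceSettings({ voice: 'Telnyx.KokoroTTS.af_heart', voice_speed: 3.0 }));
// One error: voice_speed is out of range.
```

Running a check like this before calling `client.ai.assistants.versions.update` surfaces range errors locally instead of as API rejections.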
4804
4804
  perLanguage: {
4805
4805
  typescript: {
4806
4806
  method: 'client.ai.assistants.versions.update',
@@ -4844,8 +4844,8 @@ const EMBEDDED_METHODS = [
4844
4844
  stainlessPath: '(resource) ai.assistants.versions > (method) promote',
4845
4845
  qualified: 'client.ai.assistants.versions.promote',
4846
4846
  params: ['assistant_id: string;', 'version_id: string;'],
4847
- response: '{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }',
4848
- markdown: "## promote\n\n`client.ai.assistants.versions.promote(assistant_id: string, version_id: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: object; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/versions/{version_id}/promote`\n\nPromotes a specific version to be the main/current version of the assistant. 
This will delete any existing canary deploy configuration and send all live production traffic to this version.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { 
default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: 
boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 
'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; 
min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.promote('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
+ response: "{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: object; fallback_config?: object; greeting?: string; import_metadata?: object; insight_settings?: object; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: object; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: object; observability_settings?: object; post_conversation_settings?: object; privacy_settings?: object; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: object; tools?: inference_embedding_webhook_tool_params | retrieval_tool | object | hangup_tool | object | object | object | object | object | object[]; transcription?: object; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: object; widget_settings?: object; }",
+ markdown: "## promote\n\n`client.ai.assistants.versions.promote(assistant_id: string, version_id: string): { id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: enabled_features[]; external_llm?: external_llm; fallback_config?: fallback_config; greeting?: string; import_metadata?: import_metadata; insight_settings?: insight_settings; integrations?: object[]; interruption_settings?: object; llm_api_key_ref?: string; mcp_servers?: object[]; messaging_settings?: messaging_settings; observability_settings?: observability; post_conversation_settings?: post_conversation_settings; privacy_settings?: privacy_settings; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: telephony_settings; tools?: assistant_tool[]; transcription?: transcription_settings; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: voice_settings; widget_settings?: widget_settings; }`\n\n**post** `/ai/assistants/{assistant_id}/versions/{version_id}/promote`\n\nPromotes a specific version to be the main/current version of the assistant. 
This will delete any existing canary deploy configuration and send all live production traffic to this version.\n\n### Parameters\n\n- `assistant_id: string`\n\n- `version_id: string`\n\n### Returns\n\n- `{ id: string; created_at: string; instructions: string; model: string; name: string; description?: string; dynamic_variables?: object; dynamic_variables_webhook_timeout_ms?: number; dynamic_variables_webhook_url?: string; enabled_features?: 'telephony' | 'messaging'[]; external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: external_llm; llm_api_key_ref?: string; model?: string; }; greeting?: string; import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }; insight_settings?: { insight_group_id?: string; }; integrations?: { integration_id: string; allowed_list?: string[]; }[]; interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: object; wait_seconds?: number; }; }; llm_api_key_ref?: string; mcp_servers?: { id: string; allowed_tools?: string[]; }[]; messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }; observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }; post_conversation_settings?: { enabled?: boolean; }; privacy_settings?: { data_retention?: boolean; }; related_mission_ids?: string[]; tags?: string[]; telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: object; recording_settings?: object; supports_unauthenticated_web_calls?: boolean; 
time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: object; }; tools?: object | object | { handoff: object; type: 'handoff'; } | object | { transfer: object; type: 'transfer'; } | { invite: object; type: 'invite'; } | { refer: object; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: object; type: 'send_message'; } | { skip_turn: object; type: 'skip_turn'; }[]; transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: transcription_settings_config; }; version_created_at?: string; version_id?: string; version_name?: string; voice_settings?: { voice: string; api_key_ref?: string; background_audio?: object | object | object; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }; widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: audio_visualizer_config; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }; }`\n\n - `id: string`\n - `created_at: string`\n - `instructions: string`\n - `model: string`\n - `name: string`\n - `description?: string`\n - `dynamic_variables?: object`\n - `dynamic_variables_webhook_timeout_ms?: number`\n - `dynamic_variables_webhook_url?: string`\n - `enabled_features?: 'telephony' | 'messaging'[]`\n - `external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }`\n - `fallback_config?: { external_llm?: { base_url: string; model: string; authentication_method?: 'token' | 'certificate'; 
certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n - `greeting?: string`\n - `import_metadata?: { import_id?: string; import_provider?: 'elevenlabs' | 'vapi' | 'retell'; }`\n - `insight_settings?: { insight_group_id?: string; }`\n - `integrations?: { integration_id: string; allowed_list?: string[]; }[]`\n - `interruption_settings?: { enable?: boolean; start_speaking_plan?: { transcription_endpointing_plan?: { on_no_punctuation_seconds?: number; on_number_seconds?: number; on_punctuation_seconds?: number; }; wait_seconds?: number; }; }`\n - `llm_api_key_ref?: string`\n - `mcp_servers?: { id: string; allowed_tools?: string[]; }[]`\n - `messaging_settings?: { conversation_inactivity_minutes?: number; default_messaging_profile_id?: string; delivery_status_webhook_url?: string; }`\n - `observability_settings?: { host?: string; prompt_label?: string; prompt_name?: string; prompt_sync?: 'enabled' | 'disabled'; prompt_version?: number; public_key_ref?: string; secret_key_ref?: string; status?: 'enabled' | 'disabled'; }`\n - `post_conversation_settings?: { enabled?: boolean; }`\n - `privacy_settings?: { data_retention?: boolean; }`\n - `related_mission_ids?: string[]`\n - `tags?: string[]`\n - `telephony_settings?: { default_texml_app_id?: string; noise_suppression?: 'krisp' | 'deepfilternet' | 'disabled'; noise_suppression_config?: { attenuation_limit?: number; mode?: 'advanced'; }; recording_settings?: { channels?: 'single' | 'dual'; enabled?: boolean; format?: 'wav' | 'mp3'; }; supports_unauthenticated_web_calls?: boolean; time_limit_secs?: number; user_idle_reply_secs?: number; user_idle_timeout_secs?: number; voicemail_detection?: { on_voicemail_detected?: { action?: 'stop_assistant' | 'leave_message_and_stop_assistant' | 'continue_assistant'; voicemail_message?: object; }; }; }`\n - `tools?: { type: 'webhook'; webhook: { description: string; name: string; url: 
string; async?: boolean; body_parameters?: object; headers?: object[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: object; query_parameters?: object; store_fields_as_variables?: object[]; timeout_ms?: number; }; } | { retrieval: object; type: 'retrieval'; } | { handoff: { ai_assistants: { id: string; name: string; }[]; voice_mode?: 'unified' | 'distinct'; }; type: 'handoff'; } | { hangup: object; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; custom_headers?: { name?: string; value?: string; }[]; voicemail_detection?: { detection_config?: object; detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; warm_message_delay_ms?: number; warm_transfer_instructions?: string; }; type: 'transfer'; } | { invite: { from: string; custom_headers?: { name?: string; value?: string; }[]; targets?: { to: string; name?: string; }[] | string; voicemail_detection?: { detection_mode?: 'disabled' | 'premium'; on_voicemail_detected?: object; }; }; type: 'invite'; } | { refer: { targets: { name: string; sip_address: string; sip_auth_password?: string; sip_auth_username?: string; }[]; custom_headers?: { name?: string; value?: string; }[]; sip_headers?: { name?: 'User-to-User' | 'Diversion'; value?: string; }[]; }; type: 'refer'; } | { send_dtmf: object; type: 'send_dtmf'; } | { send_message: { message_template?: string; }; type: 'send_message'; } | { skip_turn: { description?: string; }; type: 'skip_turn'; }[]`\n - `transcription?: { api_key_ref?: string; language?: string; model?: string; region?: string; settings?: { eager_eot_threshold?: number; end_of_turn_confidence_threshold?: number; eot_threshold?: number; eot_timeout_ms?: number; keyterm?: string; max_turn_silence?: number; min_turn_silence?: number; numerals?: boolean; smart_format?: boolean; }; }`\n - `version_created_at?: string`\n - `version_id?: string`\n - `version_name?: string`\n - `voice_settings?: { voice: string; 
api_key_ref?: string; background_audio?: { type: 'predefined_media'; value: 'silence' | 'office'; } | { type: 'media_url'; value: string; } | { type: 'media_name'; value: string; }; expressive_mode?: boolean; language_boost?: string; similarity_boost?: number; speed?: number; style?: number; temperature?: number; use_speaker_boost?: boolean; voice_speed?: number; }`\n - `widget_settings?: { agent_thinking_text?: string; audio_visualizer_config?: { color?: 'verdant' | 'twilight' | 'bloom' | 'mystic' | 'flare' | 'glacier'; preset?: string; }; default_state?: 'expanded' | 'collapsed'; give_feedback_url?: string; logo_icon_url?: string; position?: 'fixed' | 'static'; report_issue_url?: string; speak_to_interrupt_text?: string; start_call_text?: string; theme?: 'light' | 'dark'; view_history_url?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst inferenceEmbedding = await client.ai.assistants.versions.promote('version_id', { assistant_id: 'assistant_id' });\n\nconsole.log(inferenceEmbedding);\n```",
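The promote entry added above documents both the SDK call and the bare REST route (`POST /ai/assistants/{assistant_id}/versions/{version_id}/promote`). As a minimal sketch for callers hitting the HTTP API directly, the path can be built from the two identifiers; the `promotePath` helper here is hypothetical illustration, not part of the telnyx SDK:

```typescript
// Hypothetical helper (not part of the telnyx SDK): builds the REST path
// documented above for promoting an assistant version to main/current.
function promotePath(assistantId: string, versionId: string): string {
  const enc = encodeURIComponent;
  return `/ai/assistants/${enc(assistantId)}/versions/${enc(versionId)}/promote`;
}

// The SDK equivalent, per the generated docs above:
//   await client.ai.assistants.versions.promote('version_id', { assistant_id: 'assistant_id' });
console.log(promotePath('assistant_id', 'version_id'));
// → /ai/assistants/assistant_id/versions/version_id/promote
```

Encoding the path segments guards against identifiers containing reserved URL characters.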
  perLanguage: {
  typescript: {
  method: 'client.ai.assistants.versions.promote',
@@ -11138,10 +11138,10 @@ const EMBEDDED_METHODS = [
  'send_message_history_updates?: boolean;',
  'transcription?: { model?: string; };',
  'voice?: string;',
- "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; };",
+ "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; };",
  ],
  response: '{ data?: { conversation_id?: string; result?: string; }; }',
- markdown: "## start_ai_assistant\n\n`client.calls.actions.startAIAssistant(call_control_id: string, assistant?: { id: string; dynamic_variables?: object; external_llm?: object; fallback_config?: object; greeting?: string; instructions?: string; llm_api_key_ref?: string; mcp_servers?: object[]; model?: string; name?: string; observability_settings?: object; openai_api_key_ref?: string; tools?: book_appointment_tool | check_availability_tool | webhook_tool | hangup_tool | transfer_tool | call_control_retrieval_tool[]; }, client_state?: string, command_id?: string, greeting?: string, interruption_settings?: { enable?: boolean; }, message_history?: { content: string; role: 'user'; metadata?: object; } | { role: 'assistant'; content?: string; metadata?: object; tool_calls?: { id: string; function: object; type: 'function'; }[]; } | { content: string; role: 'tool'; tool_call_id: string; metadata?: object; } | { content: string; role: 'system'; metadata?: object; } | { content: string; role: 'developer'; metadata?: object; }[], participants?: { id: string; role: 'user'; name?: string; on_hangup?: 'continue_conversation' | 'end_conversation'; }[], send_message_history_updates?: boolean, transcription?: { model?: string; }, voice?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; }): { data?: call_control_command_result_with_conversation_id; }`\n\n**post** `/calls/{call_control_id}/actions/ai_assistant_start`\n\nStart an AI assistant on the call.\n\n**Expected Webhooks:**\n\n- `call.conversation.ended`\n- 
`call.conversation_insights.generated`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `assistant?: { id: string; dynamic_variables?: object; external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; instructions?: string; llm_api_key_ref?: string; mcp_servers?: object[]; model?: string; name?: string; observability_settings?: object; openai_api_key_ref?: string; tools?: { book_appointment: book_appointment_tool_params; type: 'book_appointment'; } | { check_availability: check_availability_tool_params; type: 'check_availability'; } | { type: 'webhook'; webhook: object; } | { hangup: hangup_tool_params; type: 'hangup'; } | { transfer: object; type: 'transfer'; } | { retrieval: call_control_bucket_ids; type: 'retrieval'; }[]; }`\n AI Assistant configuration. All fields except `id` are optional — the assistant's stored configuration will be used as fallback for any omitted fields.\n - `id: string`\n The identifier of the AI assistant to use.\n - `dynamic_variables?: object`\n Map of dynamic variables and their default values. Dynamic variables can be referenced in instructions, greeting, and tool definitions using the `{{variable_name}}` syntax. 
Call-control-agent automatically merges in `telnyx_call_*` variables (telnyx_call_to, telnyx_call_from, telnyx_conversation_channel, telnyx_agent_target, telnyx_end_user_target, telnyx_call_caller_id_name) and custom header variables.\n - `external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }`\n External LLM configuration for bringing your own LLM endpoint.\n - `fallback_config?: { external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n Fallback LLM configuration used when the primary LLM provider is unavailable.\n - `greeting?: string`\n Initial greeting text spoken when the assistant starts. Can be plain text for any voice or SSML for `AWS.Polly.<voice_id>` voices. There is a 3,000 character limit.\n - `instructions?: string`\n System instructions for the voice assistant. Can be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). This will overwrite the instructions set in the assistant configuration.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the LLM provider API key. Use this field to reference an [integration secret](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) containing your LLM provider API key. Supports any LLM provider (OpenAI, Anthropic, etc.).\n - `mcp_servers?: object[]`\n MCP (Model Context Protocol) server configurations for extending the assistant's capabilities with external tools and data sources.\n - `model?: string`\n LLM model override for this call. 
If omitted, the assistant's configured model is used.\n - `name?: string`\n Assistant name override for this call.\n - `observability_settings?: object`\n Observability configuration for the assistant session, including Langfuse integration for tracing and monitoring.\n - `openai_api_key_ref?: string`\n Deprecated — use `llm_api_key_ref` instead. Integration secret identifier for the OpenAI API key. This field is maintained for backward compatibility; `llm_api_key_ref` is the canonical field name and supports all LLM providers.\n - `tools?: { book_appointment: { api_key_ref: string; event_type_id: number; attendee_name?: string; attendee_timezone?: string; }; type: 'book_appointment'; } | { check_availability: { api_key_ref: string; event_type_id: number; }; type: 'check_availability'; } | { type: 'webhook'; webhook: { description: string; name: string; url: string; body_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; headers?: { name?: string; value?: string; }[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; query_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; }; } | { hangup: { description?: string; }; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; }; type: 'transfer'; } | { retrieval: { bucket_ids: string[]; max_num_results?: number; }; type: 'retrieval'; }[]`\n Inline tool definitions available to the assistant (webhook, retrieval, transfer, hangup, etc.). Overrides the assistant's stored tools if provided.\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. 
Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `greeting?: string`\n Text that will be played when the assistant starts, if none then nothing will be played when the assistant starts. The greeting can be text for any voice or SSML for `AWS.Polly.<voice_id>` voices. There is a 3,000 character limit.\n\n- `interruption_settings?: { enable?: boolean; }`\n Settings for handling user interruptions during assistant speech\n - `enable?: boolean`\n When true, allows users to interrupt the assistant while speaking\n\n- `message_history?: { content: string; role: 'user'; metadata?: object; } | { role: 'assistant'; content?: string; metadata?: object; tool_calls?: { id: string; function: { name: string; }; type: 'function'; }[]; } | { content: string; role: 'tool'; tool_call_id: string; metadata?: object; } | { content: string; role: 'system'; metadata?: object; } | { content: string; role: 'developer'; metadata?: object; }[]`\n A list of messages to seed the conversation history before the assistant starts. Follows the same message format as the `ai_assistant_add_messages` command.\n\n- `participants?: { id: string; role: 'user'; name?: string; on_hangup?: 'continue_conversation' | 'end_conversation'; }[]`\n A list of participants to add to the conversation when it starts.\n\n- `send_message_history_updates?: boolean`\n When `true`, a webhook is sent each time the conversation message history is updated.\n\n- `transcription?: { model?: string; }`\n The settings associated with speech to text for the voice assistant. This is only relevant if the assistant uses a text-to-text language model. Any assistant using a model with native audio support (e.g. 
`fixie-ai/ultravox-v0_4`) will ignore this field.\n - `model?: string`\n The speech to text model to be used by the voice assistant.\n\n- `distil-whisper/distil-large-v2` is lower latency but English-only.\n- `openai/whisper-large-v3-turbo` is multi-lingual with automatic language detection but slightly higher latency.\n- `google` is a multi-lingual option, please describe the language in the `language` field.\n\n- `voice?: string`\n The voice to be used by the voice assistant. Currently we support ElevenLabs, Telnyx and AWS voices.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>. (e.g. Azure.en-CA-ClaraNeural, Azure.en-CA-LiamNeural, Azure.en-US-BrianMultilingualNeural, Azure.en-US-Ava:DragonHDLatestNeural. For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery).)\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.BaseModel.John`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration secret under `\"voice_settings\": {\"api_key_ref\": \"<secret_id>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n - **Telnyx:** Use `Telnyx.<model_id>.<voice_id>`\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). 
Supported models: `Mini`, `Max`.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { conversation_id?: string; result?: string; }; }`\n\n - `data?: { conversation_id?: string; result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.startAIAssistant('call_control_id');\n\nconsole.log(response);\n```",
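The `voice_settings` parameter above is a discriminated union keyed on `type`, and this release adds an `{ type: 'xai'; language?: string; }` variant to it. A short sketch of how a caller might narrow that union before sending the command — the `VoiceSettings` alias below is a simplified local copy (fields trimmed for illustration), and `describeVoice` is a hypothetical helper, not SDK API:

```typescript
// Simplified local copy of the documented voice_settings union
// (fields trimmed; the 'xai' variant is the one added in 6.48.0).
type VoiceSettings =
  | { type: 'elevenlabs'; api_key_ref?: string }
  | { type: 'telnyx'; voice_speed?: number }
  | { type: 'aws' }
  | { type: 'azure'; api_key_ref?: string; region?: string }
  | { type: 'rime'; voice_speed?: number }
  | { type: 'resemble'; format?: 'wav' | 'mp3' }
  | { type: 'xai'; language?: string };

// Switching on the `type` discriminant lets TypeScript expose only the
// fields that exist on the matched variant.
function describeVoice(v: VoiceSettings): string {
  switch (v.type) {
    case 'telnyx':
    case 'rime':
      return `${v.type} (speed ${v.voice_speed ?? 1})`;
    case 'xai':
      return `xai (${v.language ?? 'default language'})`;
    default:
      return v.type;
  }
}

console.log(describeVoice({ type: 'xai', language: 'en' })); // xai (en)
```

The same narrowing pattern applies to the other unions in these docs, such as `tools` and `message_history` entries.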
+ markdown: "## start_ai_assistant\n\n`client.calls.actions.startAIAssistant(call_control_id: string, assistant?: { id: string; dynamic_variables?: object; external_llm?: object; fallback_config?: object; greeting?: string; instructions?: string; llm_api_key_ref?: string; mcp_servers?: object[]; model?: string; name?: string; observability_settings?: object; openai_api_key_ref?: string; tools?: book_appointment_tool | check_availability_tool | webhook_tool | hangup_tool | transfer_tool | call_control_retrieval_tool[]; }, client_state?: string, command_id?: string, greeting?: string, interruption_settings?: { enable?: boolean; }, message_history?: { content: string; role: 'user'; metadata?: object; } | { role: 'assistant'; content?: string; metadata?: object; tool_calls?: { id: string; function: object; type: 'function'; }[]; } | { content: string; role: 'tool'; tool_call_id: string; metadata?: object; } | { content: string; role: 'system'; metadata?: object; } | { content: string; role: 'developer'; metadata?: object; }[], participants?: { id: string; role: 'user'; name?: string; on_hangup?: 'continue_conversation' | 'end_conversation'; }[], send_message_history_updates?: boolean, transcription?: { model?: string; }, voice?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; }): { data?: call_control_command_result_with_conversation_id; }`\n\n**post** `/calls/{call_control_id}/actions/ai_assistant_start`\n\nStart an AI assistant on the call.\n\n**Expected Webhooks:**\n\n- 
`call.conversation.ended`\n- `call.conversation_insights.generated`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `assistant?: { id: string; dynamic_variables?: object; external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; fallback_config?: { external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }; greeting?: string; instructions?: string; llm_api_key_ref?: string; mcp_servers?: object[]; model?: string; name?: string; observability_settings?: object; openai_api_key_ref?: string; tools?: { book_appointment: book_appointment_tool_params; type: 'book_appointment'; } | { check_availability: check_availability_tool_params; type: 'check_availability'; } | { type: 'webhook'; webhook: object; } | { hangup: hangup_tool_params; type: 'hangup'; } | { transfer: object; type: 'transfer'; } | { retrieval: call_control_bucket_ids; type: 'retrieval'; }[]; }`\n AI Assistant configuration. All fields except `id` are optional — the assistant's stored configuration will be used as fallback for any omitted fields.\n - `id: string`\n The identifier of the AI assistant to use.\n - `dynamic_variables?: object`\n Map of dynamic variables and their default values. Dynamic variables can be referenced in instructions, greeting, and tool definitions using the `{{variable_name}}` syntax. 
Call-control-agent automatically merges in `telnyx_call_*` variables (telnyx_call_to, telnyx_call_from, telnyx_conversation_channel, telnyx_agent_target, telnyx_end_user_target, telnyx_call_caller_id_name) and custom header variables.\n - `external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }`\n External LLM configuration for bringing your own LLM endpoint.\n - `fallback_config?: { external_llm?: { authentication_method?: 'token' | 'certificate'; base_url?: string; certificate_ref?: string; forward_metadata?: boolean; llm_api_key_ref?: string; model?: string; token_retrieval_url?: string; }; llm_api_key_ref?: string; model?: string; }`\n Fallback LLM configuration used when the primary LLM provider is unavailable.\n - `greeting?: string`\n Initial greeting text spoken when the assistant starts. Can be plain text for any voice or SSML for `AWS.Polly.<voice_id>` voices. There is a 3,000 character limit.\n - `instructions?: string`\n System instructions for the voice assistant. Can be templated with [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables). This will overwrite the instructions set in the assistant configuration.\n - `llm_api_key_ref?: string`\n Integration secret identifier for the LLM provider API key. Use this field to reference an [integration secret](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) containing your LLM provider API key. Supports any LLM provider (OpenAI, Anthropic, etc.).\n - `mcp_servers?: object[]`\n MCP (Model Context Protocol) server configurations for extending the assistant's capabilities with external tools and data sources.\n - `model?: string`\n LLM model override for this call. 
If omitted, the assistant's configured model is used.\n - `name?: string`\n Assistant name override for this call.\n - `observability_settings?: object`\n Observability configuration for the assistant session, including Langfuse integration for tracing and monitoring.\n - `openai_api_key_ref?: string`\n Deprecated — use `llm_api_key_ref` instead. Integration secret identifier for the OpenAI API key. This field is maintained for backward compatibility; `llm_api_key_ref` is the canonical field name and supports all LLM providers.\n - `tools?: { book_appointment: { api_key_ref: string; event_type_id: number; attendee_name?: string; attendee_timezone?: string; }; type: 'book_appointment'; } | { check_availability: { api_key_ref: string; event_type_id: number; }; type: 'check_availability'; } | { type: 'webhook'; webhook: { description: string; name: string; url: string; body_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; headers?: { name?: string; value?: string; }[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; query_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; }; } | { hangup: { description?: string; }; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; }; type: 'transfer'; } | { retrieval: { bucket_ids: string[]; max_num_results?: number; }; type: 'retrieval'; }[]`\n Inline tool definitions available to the assistant (webhook, retrieval, transfer, hangup, etc.). Overrides the assistant's stored tools if provided.\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. 
Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `greeting?: string`\n Text that will be played when the assistant starts, if none then nothing will be played when the assistant starts. The greeting can be text for any voice or SSML for `AWS.Polly.<voice_id>` voices. There is a 3,000 character limit.\n\n- `interruption_settings?: { enable?: boolean; }`\n Settings for handling user interruptions during assistant speech\n - `enable?: boolean`\n When true, allows users to interrupt the assistant while speaking\n\n- `message_history?: { content: string; role: 'user'; metadata?: object; } | { role: 'assistant'; content?: string; metadata?: object; tool_calls?: { id: string; function: { name: string; }; type: 'function'; }[]; } | { content: string; role: 'tool'; tool_call_id: string; metadata?: object; } | { content: string; role: 'system'; metadata?: object; } | { content: string; role: 'developer'; metadata?: object; }[]`\n A list of messages to seed the conversation history before the assistant starts. Follows the same message format as the `ai_assistant_add_messages` command.\n\n- `participants?: { id: string; role: 'user'; name?: string; on_hangup?: 'continue_conversation' | 'end_conversation'; }[]`\n A list of participants to add to the conversation when it starts.\n\n- `send_message_history_updates?: boolean`\n When `true`, a webhook is sent each time the conversation message history is updated.\n\n- `transcription?: { model?: string; }`\n The settings associated with speech to text for the voice assistant. This is only relevant if the assistant uses a text-to-text language model. Any assistant using a model with native audio support (e.g. 
`fixie-ai/ultravox-v0_4`) will ignore this field.\n - `model?: string`\n The speech to text model to be used by the voice assistant.\n\n- `distil-whisper/distil-large-v2` is lower latency but English-only.\n- `openai/whisper-large-v3-turbo` is multi-lingual with automatic language detection but slightly higher latency.\n- `google` is a multi-lingual option; please describe the language in the `language` field.\n\n- `voice?: string`\n The voice to be used by the voice assistant. Currently we support AWS, Azure, ElevenLabs, Telnyx, Inworld, and xAI voices.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-CA-LiamNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery).\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.BaseModel.John`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration secret under `\"voice_settings\": {\"api_key_ref\": \"<secret_id>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>`\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n- **xAI:** Use `xAI.<VoiceId>` (e.g., `xAI.eve`). 
Available voices: `eve`, `ara`, `rex`, `sal`, `leo`.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { conversation_id?: string; result?: string; }; }`\n\n - `data?: { conversation_id?: string; result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.startAIAssistant('call_control_id');\n\nconsole.log(response);\n```",
  perLanguage: {
  typescript: {
  method: 'client.calls.actions.startAIAssistant',
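Per the `startAIAssistant` parameter docs embedded above, `client_state` must be a valid Base64-encoded string. A minimal round-trip sketch, assuming a Node.js runtime (`Buffer`); the helper names are illustrative, not part of the Telnyx SDK:

```typescript
// client_state must be valid Base64 (see the startAIAssistant parameter docs).
// Helper names are illustrative, not part of the Telnyx SDK.
function encodeClientState(state: Record<string, unknown>): string {
  return Buffer.from(JSON.stringify(state)).toString('base64');
}

function decodeClientState(encoded: string): Record<string, unknown> {
  return JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
}
```

Since the value is echoed back on every subsequent webhook, keeping it to a few small identifiers avoids inflating webhook payloads.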
@@ -11692,10 +11692,10 @@ const EMBEDDED_METHODS = [
  'transcription?: { model?: string; };',
  'user_response_timeout_ms?: number;',
  'voice?: string;',
- "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; };",
+ "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; };",
  ],
  response: '{ data?: { conversation_id?: string; result?: string; }; }',
- markdown: "## gather_using_ai\n\n`client.calls.actions.gatherUsingAI(call_control_id: string, parameters: object, assistant?: { instructions?: string; model?: string; openai_api_key_ref?: string; tools?: book_appointment_tool | check_availability_tool | webhook_tool | hangup_tool | transfer_tool | call_control_retrieval_tool[]; }, client_state?: string, command_id?: string, gather_ended_speech?: string, greeting?: string, interruption_settings?: { enable?: boolean; }, language?: string, message_history?: { content?: string; role?: 'assistant' | 'user'; }[], send_message_history_updates?: boolean, send_partial_results?: boolean, transcription?: { model?: string; }, user_response_timeout_ms?: number, voice?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; }): { data?: call_control_command_result_with_conversation_id; }`\n\n**post** `/calls/{call_control_id}/actions/gather_using_ai`\n\nGather parameters defined in the request payload using a voice assistant.\n\n You can pass parameters described as a JSON Schema object and the voice assistant will attempt to gather these informations. \n\n**Expected Webhooks:**\n\n- `call.ai_gather.ended`\n- `call.conversation.ended`\n- `call.ai_gather.partial_results` (if `send_partial_results` is set to `true`)\n- `call.ai_gather.message_history_updated` (if `send_message_history_updates` is set to `true`)\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `parameters: object`\n The parameters described as a JSON Schema object that needs to be gathered by the voice assistant. 
See the [JSON Schema reference](https://json-schema.org/understanding-json-schema) for documentation about the format\n\n- `assistant?: { instructions?: string; model?: string; openai_api_key_ref?: string; tools?: { book_appointment: book_appointment_tool_params; type: 'book_appointment'; } | { check_availability: check_availability_tool_params; type: 'check_availability'; } | { type: 'webhook'; webhook: object; } | { hangup: hangup_tool_params; type: 'hangup'; } | { transfer: object; type: 'transfer'; } | { retrieval: call_control_bucket_ids; type: 'retrieval'; }[]; }`\n Assistant configuration including choice of LLM, custom instructions, and tools.\n - `instructions?: string`\n The system instructions that the voice assistant uses during the gather command\n - `model?: string`\n The model to be used by the voice assistant.\n - `openai_api_key_ref?: string`\n This is necessary only if the model selected is from OpenAI. You would pass the `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) that refers to your OpenAI API Key. 
Warning: Free plans are unlikely to work with this integration.\n - `tools?: { book_appointment: { api_key_ref: string; event_type_id: number; attendee_name?: string; attendee_timezone?: string; }; type: 'book_appointment'; } | { check_availability: { api_key_ref: string; event_type_id: number; }; type: 'check_availability'; } | { type: 'webhook'; webhook: { description: string; name: string; url: string; body_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; headers?: { name?: string; value?: string; }[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; query_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; }; } | { hangup: { description?: string; }; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; }; type: 'transfer'; } | { retrieval: { bucket_ids: string[]; max_num_results?: number; }; type: 'retrieval'; }[]`\n The tools that the voice assistant can use.\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `gather_ended_speech?: string`\n Text that will be played when the gathering has finished. There is a 3,000 character limit.\n\n- `greeting?: string`\n Text that will be played when the gathering starts, if none then nothing will be played when the gathering starts. The greeting can be text for any voice or SSML for `AWS.Polly.<voice_id>` voices. 
There is a 3,000 character limit.\n\n- `interruption_settings?: { enable?: boolean; }`\n Settings for handling user interruptions during assistant speech\n - `enable?: boolean`\n When true, allows users to interrupt the assistant while speaking\n\n- `language?: string`\n Language to use for speech recognition\n\n- `message_history?: { content?: string; role?: 'assistant' | 'user'; }[]`\n The message history you want the voice assistant to be aware of, this can be useful to keep the context of the conversation, or to pass additional information to the voice assistant.\n\n- `send_message_history_updates?: boolean`\n Default is `false`. If set to `true`, the voice assistant will send updates to the message history via the `call.ai_gather.message_history_updated` callback in real time as the message history is updated.\n\n- `send_partial_results?: boolean`\n Default is `false`. If set to `true`, the voice assistant will send partial results via the `call.ai_gather.partial_results` callback in real time as individual fields are gathered. If set to `false`, the voice assistant will only send the final result via the `call.ai_gather.ended` callback.\n\n- `transcription?: { model?: string; }`\n The settings associated with speech to text for the voice assistant. This is only relevant if the assistant uses a text-to-text language model. Any assistant using a model with native audio support (e.g. 
`fixie-ai/ultravox-v0_4`) will ignore this field.\n - `model?: string`\n The speech to text model to be used by the voice assistant.\n\n- `distil-whisper/distil-large-v2` is lower latency but English-only.\n- `openai/whisper-large-v3-turbo` is multi-lingual with automatic language detection but slightly higher latency.\n- `google` is a multi-lingual option, please describe the language in the `language` field.\n\n- `user_response_timeout_ms?: number`\n The maximum time in milliseconds to wait for user response before timing out.\n\n- `voice?: string`\n The voice to be used by the voice assistant. Currently we support ElevenLabs, Telnyx and AWS voices.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>. (e.g. Azure.en-CA-ClaraNeural, Azure.en-CA-LiamNeural, Azure.en-US-BrianMultilingualNeural, Azure.en-US-Ava:DragonHDLatestNeural. For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery).)\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.BaseModel.John`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration secret under `\"voice_settings\": {\"api_key_ref\": \"<secret_id>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n - **Telnyx:** Use `Telnyx.<model_id>.<voice_id>`\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). 
Supported models: `Mini`, `Max`.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { conversation_id?: string; result?: string; }; }`\n\n - `data?: { conversation_id?: string; result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.gatherUsingAI('call_control_id', { parameters: {\n properties: 'bar',\n required: 'bar',\n type: 'bar',\n} });\n\nconsole.log(response);\n```",
+ markdown: "## gather_using_ai\n\n`client.calls.actions.gatherUsingAI(call_control_id: string, parameters: object, assistant?: { instructions?: string; model?: string; openai_api_key_ref?: string; tools?: book_appointment_tool | check_availability_tool | webhook_tool | hangup_tool | transfer_tool | call_control_retrieval_tool[]; }, client_state?: string, command_id?: string, gather_ended_speech?: string, greeting?: string, interruption_settings?: { enable?: boolean; }, language?: string, message_history?: { content?: string; role?: 'assistant' | 'user'; }[], send_message_history_updates?: boolean, send_partial_results?: boolean, transcription?: { model?: string; }, user_response_timeout_ms?: number, voice?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; }): { data?: call_control_command_result_with_conversation_id; }`\n\n**post** `/calls/{call_control_id}/actions/gather_using_ai`\n\nGather parameters defined in the request payload using a voice assistant.\n\n You can pass parameters described as a JSON Schema object and the voice assistant will attempt to gather this information. 
\n\n**Expected Webhooks:**\n\n- `call.ai_gather.ended`\n- `call.conversation.ended`\n- `call.ai_gather.partial_results` (if `send_partial_results` is set to `true`)\n- `call.ai_gather.message_history_updated` (if `send_message_history_updates` is set to `true`)\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `parameters: object`\n The parameters described as a JSON Schema object that needs to be gathered by the voice assistant. See the [JSON Schema reference](https://json-schema.org/understanding-json-schema) for documentation about the format\n\n- `assistant?: { instructions?: string; model?: string; openai_api_key_ref?: string; tools?: { book_appointment: book_appointment_tool_params; type: 'book_appointment'; } | { check_availability: check_availability_tool_params; type: 'check_availability'; } | { type: 'webhook'; webhook: object; } | { hangup: hangup_tool_params; type: 'hangup'; } | { transfer: object; type: 'transfer'; } | { retrieval: call_control_bucket_ids; type: 'retrieval'; }[]; }`\n Assistant configuration including choice of LLM, custom instructions, and tools.\n - `instructions?: string`\n The system instructions that the voice assistant uses during the gather command\n - `model?: string`\n The model to be used by the voice assistant.\n - `openai_api_key_ref?: string`\n This is necessary only if the model selected is from OpenAI. You would pass the `identifier` for an integration secret [/v2/integration_secrets](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) that refers to your OpenAI API Key. 
Warning: Free plans are unlikely to work with this integration.\n - `tools?: { book_appointment: { api_key_ref: string; event_type_id: number; attendee_name?: string; attendee_timezone?: string; }; type: 'book_appointment'; } | { check_availability: { api_key_ref: string; event_type_id: number; }; type: 'check_availability'; } | { type: 'webhook'; webhook: { description: string; name: string; url: string; body_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; headers?: { name?: string; value?: string; }[]; method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; query_parameters?: { properties?: object; required?: string[]; type?: 'object'; }; }; } | { hangup: { description?: string; }; type: 'hangup'; } | { transfer: { from: string; targets: { to: string; name?: string; }[] | string; }; type: 'transfer'; } | { retrieval: { bucket_ids: string[]; max_num_results?: number; }; type: 'retrieval'; }[]`\n The tools that the voice assistant can use.\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `gather_ended_speech?: string`\n Text that will be played when the gathering has finished. There is a 3,000 character limit.\n\n- `greeting?: string`\n Text that will be played when the gathering starts, if none then nothing will be played when the gathering starts. The greeting can be text for any voice or SSML for `AWS.Polly.<voice_id>` voices. 
There is a 3,000 character limit.\n\n- `interruption_settings?: { enable?: boolean; }`\n Settings for handling user interruptions during assistant speech\n - `enable?: boolean`\n When true, allows users to interrupt the assistant while speaking\n\n- `language?: string`\n Language to use for speech recognition\n\n- `message_history?: { content?: string; role?: 'assistant' | 'user'; }[]`\n The message history you want the voice assistant to be aware of, this can be useful to keep the context of the conversation, or to pass additional information to the voice assistant.\n\n- `send_message_history_updates?: boolean`\n Default is `false`. If set to `true`, the voice assistant will send updates to the message history via the `call.ai_gather.message_history_updated` callback in real time as the message history is updated.\n\n- `send_partial_results?: boolean`\n Default is `false`. If set to `true`, the voice assistant will send partial results via the `call.ai_gather.partial_results` callback in real time as individual fields are gathered. If set to `false`, the voice assistant will only send the final result via the `call.ai_gather.ended` callback.\n\n- `transcription?: { model?: string; }`\n The settings associated with speech to text for the voice assistant. This is only relevant if the assistant uses a text-to-text language model. Any assistant using a model with native audio support (e.g. 
`fixie-ai/ultravox-v0_4`) will ignore this field.\n - `model?: string`\n The speech to text model to be used by the voice assistant.\n\n- `distil-whisper/distil-large-v2` is lower latency but English-only.\n- `openai/whisper-large-v3-turbo` is multi-lingual with automatic language detection but slightly higher latency.\n- `google` is a multi-lingual option; please describe the language in the `language` field.\n\n- `user_response_timeout_ms?: number`\n The maximum time in milliseconds to wait for user response before timing out.\n\n- `voice?: string`\n The voice to be used by the voice assistant. Currently we support AWS, Azure, ElevenLabs, Telnyx, Inworld, and xAI voices.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-CA-LiamNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery).\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.BaseModel.John`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration secret under `\"voice_settings\": {\"api_key_ref\": \"<secret_id>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>`\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). 
Supported models: `Mini`, `Max`.\n- **xAI:** Use `xAI.<VoiceId>` (e.g., `xAI.eve`). Available voices: `eve`, `ara`, `rex`, `sal`, `leo`.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'xai'; language?: string; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { conversation_id?: string; result?: string; }; }`\n\n - `data?: { conversation_id?: string; result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.gatherUsingAI('call_control_id', { parameters: {\n properties: 'bar',\n required: 'bar',\n type: 'bar',\n} });\n\nconsole.log(response);\n```",
  perLanguage: {
  typescript: {
  method: 'client.calls.actions.gatherUsingAI',
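The `parameters` argument of `gatherUsingAI` above is a JSON Schema object (the embedded example's `{ properties: 'bar', required: 'bar', type: 'bar' }` is only a placeholder). A sketch of a realistic schema, with hypothetical field names:

```typescript
// JSON Schema object describing the fields the voice assistant should gather.
// The field names below are hypothetical.
const gatherParameters = {
  type: 'object',
  properties: {
    customer_name: { type: 'string', description: 'Full name of the caller' },
    appointment_date: { type: 'string', description: 'Requested date, e.g. 2025-01-31' },
  },
  required: ['customer_name'],
};
```

The gathered values arrive in the `call.ai_gather.ended` webhook (and, field by field, in `call.ai_gather.partial_results` when `send_partial_results` is `true`).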
@@ -11815,10 +11815,10 @@ const EMBEDDED_METHODS = [
  'terminating_digit?: string;',
  'timeout_millis?: number;',
  'valid_digits?: string;',
- "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; };",
+ "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; };",
  ],
  response: '{ data?: { result?: string; }; }',
- markdown: "## gather_using_speak\n\n`client.calls.actions.gatherUsingSpeak(call_control_id: string, payload: string, voice: string, client_state?: string, command_id?: string, inter_digit_timeout_millis?: number, invalid_payload?: string, language?: string, maximum_digits?: number, maximum_tries?: number, minimum_digits?: number, payload_type?: 'text' | 'ssml', service_level?: 'basic' | 'premium', terminating_digit?: string, timeout_millis?: number, valid_digits?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }): { data?: call_control_command_result; }`\n\n**post** `/calls/{call_control_id}/actions/gather_using_speak`\n\nConvert text to speech and play it on the call until the required DTMF signals are gathered to build interactive menus.\n\nYou can pass a list of valid digits along with an 'invalid_payload', which will be played back at the beginning of each prompt. Speech will be interrupted when a DTMF signal is received. The `Answer` command must be issued before the `gather_using_speak` command.\n\n**Expected Webhooks:**\n\n- `call.dtmf.received` (you may receive many of these webhooks)\n- `call.gather.ended`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. 
Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). 
Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `inter_digit_timeout_millis?: number`\n The number of milliseconds to wait for input between digits.\n\n- `invalid_payload?: string`\n The text or SSML to be converted into speech when digits don't match the `valid_digits` parameter or the number of digits is not between `min` and `max`. There is a 3,000 character limit.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `maximum_digits?: number`\n The maximum number of digits to fetch. This parameter has a maximum value of 128.\n\n- `maximum_tries?: number`\n The maximum number of times that a file should be played back if there is no input from the user on the call.\n\n- `minimum_digits?: number`\n The minimum number of digits to fetch. This parameter has a minimum value of 1.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `service_level?: 'basic' | 'premium'`\n This parameter impacts speech quality, language options and payload types. 
When using `basic`, only the `en-US` language and payload type `text` are allowed.\n\n- `terminating_digit?: string`\n The digit used to terminate input if fewer than `maximum_digits` digits have been gathered.\n\n- `timeout_millis?: number`\n The number of milliseconds to wait for a DTMF response after speak ends before replaying the sound file.\n\n- `valid_digits?: string`\n A list of all digits accepted as valid.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result?: string; }; }`\n\n - `data?: { result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.gatherUsingSpeak('call_control_id', { payload: 'say this on call', voice: 'male' });\n\nconsole.log(response);\n```",
+ markdown: "## gather_using_speak\n\n`client.calls.actions.gatherUsingSpeak(call_control_id: string, payload: string, voice: string, client_state?: string, command_id?: string, inter_digit_timeout_millis?: number, invalid_payload?: string, language?: string, maximum_digits?: number, maximum_tries?: number, minimum_digits?: number, payload_type?: 'text' | 'ssml', service_level?: 'basic' | 'premium', terminating_digit?: string, timeout_millis?: number, valid_digits?: string, voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }): { data?: call_control_command_result; }`\n\n**post** `/calls/{call_control_id}/actions/gather_using_speak`\n\nConvert text to speech and play it on the call until the required DTMF signals are gathered to build interactive menus.\n\nYou can pass a list of valid digits along with an 'invalid_payload', which will be played back at the beginning of each prompt. Speech will be interrupted when a DTMF signal is received. The `Answer` command must be issued before the `gather_using_speak` command.\n\n**Expected Webhooks:**\n\n- `call.dtmf.received` (you may receive many of these webhooks)\n- `call.gather.ended`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. 
There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). 
Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n- **xAI:** Use `xAI.<VoiceId>` (e.g., `xAI.eve`). Available voices: `eve`, `ara`, `rex`, `sal`, `leo`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `inter_digit_timeout_millis?: number`\n The number of milliseconds to wait for input between digits.\n\n- `invalid_payload?: string`\n The text or SSML to be converted into speech when digits don't match the `valid_digits` parameter or the number of digits is not between `min` and `max`. There is a 3,000 character limit.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `maximum_digits?: number`\n The maximum number of digits to fetch. This parameter has a maximum value of 128.\n\n- `maximum_tries?: number`\n The maximum number of times that a file should be played back if there is no input from the user on the call.\n\n- `minimum_digits?: number`\n The minimum number of digits to fetch. This parameter has a minimum value of 1.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `service_level?: 'basic' | 'premium'`\n This parameter impacts speech quality, language options and payload types. 
When using `basic`, only the `en-US` language and payload type `text` are allowed.\n\n- `terminating_digit?: string`\n The digit used to terminate input if fewer than `maximum_digits` digits have been gathered.\n\n- `timeout_millis?: number`\n The number of milliseconds to wait for a DTMF response after speak ends before replaying the sound file.\n\n- `valid_digits?: string`\n A list of all digits accepted as valid.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result?: string; }; }`\n\n - `data?: { result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.gatherUsingSpeak('call_control_id', { payload: 'say this on call', voice: 'male' });\n\nconsole.log(response);\n```",
  perLanguage: {
  typescript: {
  method: 'client.calls.actions.gatherUsingSpeak',
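The only functional change in this hunk is the new `{ type: 'xai'; language?: string; }` arm of `voice_settings` and the matching `xAI.<VoiceId>` voice format. A minimal sketch of a request body that exercises it; the type alias below is local to the sketch (the real union lives in the SDK), and the digits/payload values are illustrative:

```typescript
// Local mirror of the voice_settings arm added in 6.48.0; this alias
// exists only for the sketch and is not exported by the telnyx SDK.
type XaiVoiceSettings = { type: 'xai'; language?: string };

const settings: XaiVoiceSettings = { type: 'xai' };

// Body for POST /calls/{call_control_id}/actions/gather_using_speak.
const gatherParams = {
  payload: 'Press 1 for sales or 2 for support.',
  voice: 'xAI.eve', // new two-part format: xAI.<VoiceId>
  valid_digits: '12',
  maximum_digits: 1,
  voice_settings: settings,
};

console.log(JSON.stringify(gatherParams.voice_settings)); // {"type":"xai"}
```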
@@ -12596,10 +12596,10 @@ const EMBEDDED_METHODS = [
  "service_level?: 'basic' | 'premium';",
  'stop?: string;',
  "target_legs?: 'self' | 'opposite' | 'both';",
- "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; };",
+ "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; };",
  ],
  response: '{ data?: { result?: string; }; }',
- markdown: "## speak\n\n`client.calls.actions.speak(call_control_id: string, payload: string, voice: string, client_state?: string, command_id?: string, language?: string, loop?: string | number, payload_type?: 'text' | 'ssml', service_level?: 'basic' | 'premium', stop?: string, target_legs?: 'self' | 'opposite' | 'both', voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }): { data?: call_control_command_result; }`\n\n**post** `/calls/{call_control_id}/actions/speak`\n\nConvert text to speech and play it back on the call. If multiple speak text commands are issued consecutively, the audio files will be placed in a queue awaiting playback.\n\n**Expected Webhooks:**\n\n- `call.speak.started`\n- `call.speak.ended`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). 
Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). 
Supported models: `Mini`, `Max`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `loop?: string | number`\n The number of times to play the audio file. Use `infinity` to loop indefinitely. Defaults to 1.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `service_level?: 'basic' | 'premium'`\n This parameter impacts speech quality, language options and payload types. When using `basic`, only the `en-US` language and payload type `text` are allowed.\n\n- `stop?: string`\n When specified, it stops the current audio being played. Specify `current` to stop the current audio being played, and to play the next file in the queue. 
Specify `all` to stop the current audio file being played and to also clear all audio files from the queue.\n\n- `target_legs?: 'self' | 'opposite' | 'both'`\n Specifies which legs of the call should receive the spoken audio.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result?: string; }; }`\n\n - `data?: { result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.speak('call_control_id', { payload: 'Say this on the call', voice: 'female' });\n\nconsole.log(response);\n```",
+ markdown: "## speak\n\n`client.calls.actions.speak(call_control_id: string, payload: string, voice: string, client_state?: string, command_id?: string, language?: string, loop?: string | number, payload_type?: 'text' | 'ssml', service_level?: 'basic' | 'premium', stop?: string, target_legs?: 'self' | 'opposite' | 'both', voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }): { data?: call_control_command_result; }`\n\n**post** `/calls/{call_control_id}/actions/speak`\n\nConvert text to speech and play it back on the call. If multiple speak text commands are issued consecutively, the audio files will be placed in a queue awaiting playback.\n\n**Expected Webhooks:**\n\n- `call.speak.started`\n- `call.speak.ended`\n\n\n### Parameters\n\n- `call_control_id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). 
Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n- **xAI:** Use `xAI.<VoiceId>` (e.g., `xAI.eve`). 
Available voices: `eve`, `ara`, `rex`, `sal`, `leo`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `client_state?: string`\n Use this field to add state to every subsequent webhook. It must be a valid Base-64 encoded string.\n\n- `command_id?: string`\n Use this field to avoid duplicate commands. Telnyx will ignore any command with the same `command_id` for the same `call_control_id`.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `loop?: string | number`\n The number of times to play the audio file. Use `infinity` to loop indefinitely. Defaults to 1.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `service_level?: 'basic' | 'premium'`\n This parameter impacts speech quality, language options and payload types. When using `basic`, only the `en-US` language and payload type `text` are allowed.\n\n- `stop?: string`\n When specified, it stops the current audio being played. Specify `current` to stop the current audio being played, and to play the next file in the queue. 
Specify `all` to stop the current audio file being played and to also clear all audio files from the queue.\n\n- `target_legs?: 'self' | 'opposite' | 'both'`\n Specifies which legs of the call should receive the spoken audio.\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result?: string; }; }`\n\n - `data?: { result?: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.calls.actions.speak('call_control_id', { payload: 'Say this on the call', voice: 'female' });\n\nconsole.log(response);\n```",
  perLanguage: {
  typescript: {
  method: 'client.calls.actions.speak',
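Because `voice_settings` is a discriminated union on `type`, the added `'xai'` arm narrows in consuming code exactly like the existing ones. A sketch with the union abbreviated to three arms (the full union appears in the signatures above); the `describe` helper is hypothetical, not SDK code:

```typescript
// Abbreviated slice of the voice_settings union from the speak signature.
type VoiceSettings =
  | { type: 'telnyx'; voice_speed?: number }
  | { type: 'inworld' }
  | { type: 'xai'; language?: string };

function describe(settings: VoiceSettings): string {
  switch (settings.type) {
    case 'telnyx':
      return `telnyx @ speed ${settings.voice_speed ?? 1}`;
    case 'inworld':
      return 'inworld';
    case 'xai':
      // `language` is the only extra field on the new xai arm.
      return settings.language ? `xai (${settings.language})` : 'xai';
  }
}

console.log(describe({ type: 'xai' })); // "xai"
```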
@@ -14321,10 +14321,10 @@ const EMBEDDED_METHODS = [
  'language?: string;',
  "payload_type?: 'text' | 'ssml';",
  "region?: 'Australia' | 'Europe' | 'Middle East' | 'US';",
- "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; };",
+ "voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; };",
  ],
  response: '{ data?: { result: string; }; }',
- markdown: "## speak\n\n`client.conferences.actions.speak(id: string, payload: string, voice: string, call_control_ids?: string[], command_id?: string, language?: string, payload_type?: 'text' | 'ssml', region?: 'Australia' | 'Europe' | 'Middle East' | 'US', voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }): { data?: conference_command_result; }`\n\n**post** `/conferences/{id}/actions/speak`\n\nConvert text to speech and play it to all or some participants.\n\n### Parameters\n\n- `id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). 
Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `call_control_ids?: string[]`\n Call Control IDs of participants who will hear the spoken text. When empty all participants will hear the spoken text.\n\n- `command_id?: string`\n Use this field to avoid execution of duplicate commands. 
Telnyx will ignore subsequent commands with the same `command_id` as one that has already been executed.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `region?: 'Australia' | 'Europe' | 'Middle East' | 'US'`\n Region where the conference data is located. Defaults to the region defined in user's data locality settings (Europe or US).\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result: string; }; }`\n\n - `data?: { result: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.conferences.actions.speak('id', { payload: 'Say this to participants', voice: 'female' });\n\nconsole.log(response);\n```",
+ markdown: "## speak\n\n`client.conferences.actions.speak(id: string, payload: string, voice: string, call_control_ids?: string[], command_id?: string, language?: string, payload_type?: 'text' | 'ssml', region?: 'Australia' | 'Europe' | 'Middle East' | 'US', voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }): { data?: conference_command_result; }`\n\n**post** `/conferences/{id}/actions/speak`\n\nConvert text to speech and play it to all or some participants.\n\n### Parameters\n\n- `id: string`\n\n- `payload: string`\n The text or SSML to be converted into speech. There is a 3,000 character limit.\n\n- `voice: string`\n Specifies the voice used in speech synthesis.\n\n- Define voices using the format `<Provider>.<Model>.<VoiceId>`. Specifying only the provider will give default values for voice_id and model_id.\n\n **Supported Providers:**\n- **AWS:** Use `AWS.Polly.<VoiceId>` (e.g., `AWS.Polly.Joanna`). For neural voices, which provide more realistic, human-like speech, append `-Neural` to the `VoiceId` (e.g., `AWS.Polly.Joanna-Neural`). Check the [available voices](https://docs.aws.amazon.com/polly/latest/dg/available-voices.html) for compatibility.\n- **Azure:** Use `Azure.<VoiceId>` (e.g., `Azure.en-CA-ClaraNeural`, `Azure.en-US-BrianMultilingualNeural`, `Azure.en-US-Ava:DragonHDLatestNeural`). 
For a complete list of voices, go to [Azure Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Use `voice_settings` to configure custom deployments, regions, or API keys.\n- **ElevenLabs:** Use `ElevenLabs.<ModelId>.<VoiceId>` (e.g., `ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM`). The `ModelId` part is optional. To use ElevenLabs, you must provide your ElevenLabs API key as an integration identifier secret in `\"voice_settings\": {\"api_key_ref\": \"<secret_identifier>\"}`. See [integration secrets documentation](https://developers.telnyx.com/api/secrets-manager/integration-secrets/create-integration-secret) for details. Check [available voices](https://elevenlabs.io/docs/api-reference/get-voices).\n- **Telnyx:** Use `Telnyx.<model_id>.<voice_id>` (e.g., `Telnyx.KokoroTTS.af`). Use `voice_settings` to configure voice_speed and other synthesis parameters.\n- **Minimax:** Use `Minimax.<ModelId>.<VoiceId>` (e.g., `Minimax.speech-02-hd.Wise_Woman`). Supported models: `speech-02-turbo`, `speech-02-hd`, `speech-2.6-turbo`, `speech-2.8-turbo`. Use `voice_settings` to configure speed, volume, pitch, and language_boost.\n- **Rime:** Use `Rime.<model_id>.<voice_id>` (e.g., `Rime.Arcana.cove`). Supported model_ids: `Arcana`, `Mist`. Use `voice_settings` to configure voice_speed.\n- **Resemble:** Use `Resemble.Turbo.<voice_id>` (e.g., `Resemble.Turbo.my_voice`). Only `Turbo` model is supported. Use `voice_settings` to configure precision, sample_rate, and format.\n- **Inworld:** Use `Inworld.<ModelId>.<VoiceId>` (e.g., `Inworld.Mini.Loretta`, `Inworld.Max.Oliver`). Supported models: `Mini`, `Max`.\n- **xAI:** Use `xAI.<VoiceId>` (e.g., `xAI.eve`). Available voices: `eve`, `ara`, `rex`, `sal`, `leo`.\n\nFor service_level basic, you may define the gender of the speaker (male or female).\n\n- `call_control_ids?: string[]`\n Call Control IDs of participants who will hear the spoken text. 
When empty all participants will hear the spoken text.\n\n- `command_id?: string`\n Use this field to avoid execution of duplicate commands. Telnyx will ignore subsequent commands with the same `command_id` as one that has already been executed.\n\n- `language?: string`\n The language you want spoken. This parameter is ignored when a `Polly.*` voice is specified.\n\n- `payload_type?: 'text' | 'ssml'`\n The type of the provided payload. The payload can either be plain text, or Speech Synthesis Markup Language (SSML).\n\n- `region?: 'Australia' | 'Europe' | 'Middle East' | 'US'`\n Region where the conference data is located. Defaults to the region defined in user's data locality settings (Europe or US).\n\n- `voice_settings?: { type: 'elevenlabs'; api_key_ref?: string; } | { type: 'telnyx'; voice_speed?: number; } | { type: 'aws'; } | { type: 'minimax'; language_boost?: string; pitch?: number; speed?: number; vol?: number; } | { type: 'azure'; api_key_ref?: string; deployment_id?: string; effect?: 'eq_car' | 'eq_telecomhp8k'; gender?: 'Male' | 'Female'; region?: string; } | { type: 'rime'; voice_speed?: number; } | { type: 'resemble'; format?: 'wav' | 'mp3'; precision?: 'PCM_16' | 'PCM_24' | 'PCM_32' | 'MULAW'; sample_rate?: '8000' | '16000' | '22050' | '32000' | '44100' | '48000'; } | { type: 'inworld'; } | { type: 'xai'; language?: string; }`\n The settings associated with the voice selected\n\n### Returns\n\n- `{ data?: { result: string; }; }`\n\n - `data?: { result: string; }`\n\n### Example\n\n```typescript\nimport Telnyx from 'telnyx';\n\nconst client = new Telnyx();\n\nconst response = await client.conferences.actions.speak('id', { payload: 'Say this to participants', voice: 'female' });\n\nconsole.log(response);\n```",
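A minimal sketch of a `speak` request payload built from the documented signature above. The `VoiceSettings` and `SpeakParams` types here are local illustrations (not exports of the `telnyx` package), and `my_elevenlabs_secret` is a hypothetical integration-secret identifier; the voice string uses the documented `<Provider>.<Model>.<VoiceId>` format.

```typescript
// Illustrative local types; a subset of the documented voice_settings union.
type VoiceSettings =
  | { type: 'elevenlabs'; api_key_ref?: string }
  | { type: 'telnyx'; voice_speed?: number }
  | { type: 'xai'; language?: string };

interface SpeakParams {
  payload: string;            // text or SSML, 3,000 character limit
  voice: string;              // `<Provider>.<Model>.<VoiceId>` format
  payload_type?: 'text' | 'ssml';
  call_control_ids?: string[]; // empty/omitted => all participants hear it
  voice_settings?: VoiceSettings;
}

// ElevenLabs requires the API key as an integration-secret reference
// ('my_elevenlabs_secret' is a placeholder, not a real identifier).
const params: SpeakParams = {
  payload: 'Welcome to the conference.',
  voice: 'ElevenLabs.eleven_multilingual_v2.21m00Tcm4TlvDq8ikWAM',
  payload_type: 'text',
  voice_settings: { type: 'elevenlabs', api_key_ref: 'my_elevenlabs_secret' },
};

// The provider is the first dot-separated segment of the voice string.
console.log(params.voice.split('.')[0]);
```

The discriminated `type` field on `voice_settings` is what lets a client select provider-specific options (e.g. `voice_speed` for Telnyx, `api_key_ref` for ElevenLabs) under a single parameter.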
  perLanguage: {
  typescript: {
  method: 'client.conferences.actions.speak',