rasa-pro 3.13.0.dev7__py3-none-any.whl → 3.13.0.dev9__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.
Files changed (215)
  1. rasa/__main__.py +0 -3
  2. rasa/api.py +1 -1
  3. rasa/cli/dialogue_understanding_test.py +1 -1
  4. rasa/cli/e2e_test.py +1 -1
  5. rasa/cli/evaluate.py +1 -1
  6. rasa/cli/export.py +1 -1
  7. rasa/cli/llm_fine_tuning.py +12 -11
  8. rasa/cli/project_templates/defaults.py +133 -0
  9. rasa/cli/run.py +1 -1
  10. rasa/cli/studio/link.py +53 -0
  11. rasa/cli/studio/pull.py +78 -0
  12. rasa/cli/studio/push.py +78 -0
  13. rasa/cli/studio/studio.py +12 -0
  14. rasa/cli/studio/upload.py +8 -0
  15. rasa/cli/train.py +1 -1
  16. rasa/cli/utils.py +1 -1
  17. rasa/cli/x.py +1 -1
  18. rasa/constants.py +2 -0
  19. rasa/core/__init__.py +0 -16
  20. rasa/core/actions/action.py +5 -1
  21. rasa/core/actions/action_repeat_bot_messages.py +18 -22
  22. rasa/core/actions/action_run_slot_rejections.py +0 -1
  23. rasa/core/agent.py +16 -1
  24. rasa/core/available_endpoints.py +146 -0
  25. rasa/core/brokers/pika.py +1 -2
  26. rasa/core/channels/botframework.py +2 -2
  27. rasa/core/channels/channel.py +2 -2
  28. rasa/core/channels/development_inspector.py +1 -1
  29. rasa/core/channels/facebook.py +1 -4
  30. rasa/core/channels/hangouts.py +8 -5
  31. rasa/core/channels/inspector/README.md +3 -3
  32. rasa/core/channels/inspector/dist/assets/{arc-c4b064fc.js → arc-02053cc1.js} +1 -1
  33. rasa/core/channels/inspector/dist/assets/{blockDiagram-38ab4fdb-215b5026.js → blockDiagram-38ab4fdb-008b6289.js} +1 -1
  34. rasa/core/channels/inspector/dist/assets/{c4Diagram-3d4e48cf-2b54a0a3.js → c4Diagram-3d4e48cf-fb2597be.js} +1 -1
  35. rasa/core/channels/inspector/dist/assets/channel-078dada8.js +1 -0
  36. rasa/core/channels/inspector/dist/assets/{classDiagram-70f12bd4-daacea5f.js → classDiagram-70f12bd4-7f847e00.js} +1 -1
  37. rasa/core/channels/inspector/dist/assets/{classDiagram-v2-f2320105-930d4dc2.js → classDiagram-v2-f2320105-ba1d689b.js} +1 -1
  38. rasa/core/channels/inspector/dist/assets/clone-5b4516de.js +1 -0
  39. rasa/core/channels/inspector/dist/assets/{createText-2e5e7dd3-83c206ba.js → createText-2e5e7dd3-dd8e67c4.js} +1 -1
  40. rasa/core/channels/inspector/dist/assets/{edges-e0da2a9e-b0eb01d0.js → edges-e0da2a9e-10784939.js} +1 -1
  41. rasa/core/channels/inspector/dist/assets/{erDiagram-9861fffd-17586500.js → erDiagram-9861fffd-24947ae6.js} +1 -1
  42. rasa/core/channels/inspector/dist/assets/{flowDb-956e92f1-be2a1776.js → flowDb-956e92f1-a9ced505.js} +1 -1
  43. rasa/core/channels/inspector/dist/assets/{flowDiagram-66a62f08-c2120ebd.js → flowDiagram-66a62f08-afda9c7c.js} +1 -1
  44. rasa/core/channels/inspector/dist/assets/flowDiagram-v2-96b9c2cf-f9613071.js +1 -0
  45. rasa/core/channels/inspector/dist/assets/{flowchart-elk-definition-4a651766-a6ab5c48.js → flowchart-elk-definition-4a651766-6ef530b8.js} +1 -1
  46. rasa/core/channels/inspector/dist/assets/{ganttDiagram-c361ad54-ef613457.js → ganttDiagram-c361ad54-0c7dd39a.js} +1 -1
  47. rasa/core/channels/inspector/dist/assets/{gitGraphDiagram-72cf32ee-d59185b3.js → gitGraphDiagram-72cf32ee-b57239d6.js} +1 -1
  48. rasa/core/channels/inspector/dist/assets/{graph-0f155405.js → graph-9ed57cec.js} +1 -1
  49. rasa/core/channels/inspector/dist/assets/{index-3862675e-d5f1d1b7.js → index-3862675e-233090de.js} +1 -1
  50. rasa/core/channels/inspector/dist/assets/{index-47737d3a.js → index-72184470.js} +3 -3
  51. rasa/core/channels/inspector/dist/assets/{infoDiagram-f8f76790-b07d141f.js → infoDiagram-f8f76790-aa116649.js} +1 -1
  52. rasa/core/channels/inspector/dist/assets/{journeyDiagram-49397b02-1936d429.js → journeyDiagram-49397b02-e51877cc.js} +1 -1
  53. rasa/core/channels/inspector/dist/assets/{layout-dde8d0f3.js → layout-3ca3798c.js} +1 -1
  54. rasa/core/channels/inspector/dist/assets/{line-0c2c7ee0.js → line-26ee10d3.js} +1 -1
  55. rasa/core/channels/inspector/dist/assets/{linear-35dd89a4.js → linear-aedded32.js} +1 -1
  56. rasa/core/channels/inspector/dist/assets/{mindmap-definition-fc14e90a-56192851.js → mindmap-definition-fc14e90a-d8957261.js} +1 -1
  57. rasa/core/channels/inspector/dist/assets/{pieDiagram-8a3498a8-fc21ed78.js → pieDiagram-8a3498a8-d771f885.js} +1 -1
  58. rasa/core/channels/inspector/dist/assets/{quadrantDiagram-120e2f19-25e98518.js → quadrantDiagram-120e2f19-09fdf50c.js} +1 -1
  59. rasa/core/channels/inspector/dist/assets/{requirementDiagram-deff3bca-546ff1f5.js → requirementDiagram-deff3bca-9f0af02e.js} +1 -1
  60. rasa/core/channels/inspector/dist/assets/{sankeyDiagram-04a897e0-02d8b82d.js → sankeyDiagram-04a897e0-84415b37.js} +1 -1
  61. rasa/core/channels/inspector/dist/assets/{sequenceDiagram-704730f1-3ca5a92e.js → sequenceDiagram-704730f1-8dec4055.js} +1 -1
  62. rasa/core/channels/inspector/dist/assets/{stateDiagram-587899a1-128ea07c.js → stateDiagram-587899a1-c5431d07.js} +1 -1
  63. rasa/core/channels/inspector/dist/assets/{stateDiagram-v2-d93cdb3a-95f290af.js → stateDiagram-v2-d93cdb3a-274e77d9.js} +1 -1
  64. rasa/core/channels/inspector/dist/assets/{styles-6aaf32cf-4984898a.js → styles-6aaf32cf-e364a1d7.js} +1 -1
  65. rasa/core/channels/inspector/dist/assets/{styles-9a916d00-1bf266ba.js → styles-9a916d00-0dae36f6.js} +1 -1
  66. rasa/core/channels/inspector/dist/assets/{styles-c10674c1-60521c63.js → styles-c10674c1-c4641675.js} +1 -1
  67. rasa/core/channels/inspector/dist/assets/{svgDrawCommon-08f97a94-a25b6e12.js → svgDrawCommon-08f97a94-831fe9a1.js} +1 -1
  68. rasa/core/channels/inspector/dist/assets/{timeline-definition-85554ec2-0fc086bf.js → timeline-definition-85554ec2-c3304b3a.js} +1 -1
  69. rasa/core/channels/inspector/dist/assets/{xychartDiagram-e933f94c-44ee592e.js → xychartDiagram-e933f94c-da799369.js} +1 -1
  70. rasa/core/channels/inspector/dist/index.html +1 -1
  71. rasa/core/channels/inspector/src/components/RecruitmentPanel.tsx +1 -1
  72. rasa/core/channels/mattermost.py +1 -1
  73. rasa/core/channels/rasa_chat.py +2 -4
  74. rasa/core/channels/rest.py +5 -4
  75. rasa/core/channels/socketio.py +56 -41
  76. rasa/core/channels/studio_chat.py +314 -10
  77. rasa/core/channels/vier_cvg.py +1 -2
  78. rasa/core/channels/voice_ready/audiocodes.py +2 -9
  79. rasa/core/channels/voice_stream/audiocodes.py +8 -5
  80. rasa/core/channels/voice_stream/browser_audio.py +1 -1
  81. rasa/core/channels/voice_stream/genesys.py +2 -2
  82. rasa/core/channels/voice_stream/tts/__init__.py +8 -0
  83. rasa/core/channels/voice_stream/twilio_media_streams.py +10 -5
  84. rasa/core/channels/voice_stream/voice_channel.py +39 -23
  85. rasa/core/http_interpreter.py +3 -7
  86. rasa/core/information_retrieval/faiss.py +18 -11
  87. rasa/core/information_retrieval/ingestion/__init__.py +0 -0
  88. rasa/core/information_retrieval/ingestion/faq_parser.py +158 -0
  89. rasa/core/jobs.py +2 -1
  90. rasa/core/nlg/contextual_response_rephraser.py +44 -10
  91. rasa/core/nlg/generator.py +0 -1
  92. rasa/core/nlg/interpolator.py +2 -3
  93. rasa/core/nlg/summarize.py +39 -5
  94. rasa/core/policies/enterprise_search_policy.py +262 -62
  95. rasa/core/policies/enterprise_search_prompt_with_relevancy_check_and_citation_template.jinja2 +63 -0
  96. rasa/core/policies/flow_policy.py +1 -1
  97. rasa/core/policies/flows/flow_executor.py +96 -17
  98. rasa/core/policies/intentless_policy.py +56 -17
  99. rasa/core/processor.py +104 -51
  100. rasa/core/run.py +33 -11
  101. rasa/core/tracker_stores/tracker_store.py +1 -1
  102. rasa/core/training/interactive.py +1 -1
  103. rasa/core/utils.py +24 -97
  104. rasa/dialogue_understanding/coexistence/intent_based_router.py +2 -1
  105. rasa/dialogue_understanding/coexistence/llm_based_router.py +9 -6
  106. rasa/dialogue_understanding/commands/can_not_handle_command.py +2 -0
  107. rasa/dialogue_understanding/commands/cancel_flow_command.py +5 -1
  108. rasa/dialogue_understanding/commands/chit_chat_answer_command.py +2 -0
  109. rasa/dialogue_understanding/commands/clarify_command.py +5 -1
  110. rasa/dialogue_understanding/commands/command_syntax_manager.py +1 -0
  111. rasa/dialogue_understanding/commands/correct_slots_command.py +1 -3
  112. rasa/dialogue_understanding/commands/human_handoff_command.py +2 -0
  113. rasa/dialogue_understanding/commands/knowledge_answer_command.py +4 -2
  114. rasa/dialogue_understanding/commands/repeat_bot_messages_command.py +2 -0
  115. rasa/dialogue_understanding/commands/set_slot_command.py +11 -1
  116. rasa/dialogue_understanding/commands/skip_question_command.py +2 -0
  117. rasa/dialogue_understanding/commands/start_flow_command.py +4 -0
  118. rasa/dialogue_understanding/commands/utils.py +26 -2
  119. rasa/dialogue_understanding/generator/__init__.py +7 -1
  120. rasa/dialogue_understanding/generator/command_generator.py +4 -2
  121. rasa/dialogue_understanding/generator/command_parser.py +2 -2
  122. rasa/dialogue_understanding/generator/command_parser_validator.py +63 -0
  123. rasa/dialogue_understanding/generator/nlu_command_adapter.py +2 -2
  124. rasa/dialogue_understanding/generator/prompt_templates/command_prompt_v2_gpt_4o_2024_11_20_template.jinja2 +12 -33
  125. rasa/dialogue_understanding/generator/prompt_templates/command_prompt_v3_gpt_4o_2024_11_20_template.jinja2 +78 -0
  126. rasa/dialogue_understanding/generator/single_step/compact_llm_command_generator.py +26 -461
  127. rasa/dialogue_understanding/generator/single_step/search_ready_llm_command_generator.py +147 -0
  128. rasa/dialogue_understanding/generator/single_step/single_step_based_llm_command_generator.py +477 -0
  129. rasa/dialogue_understanding/generator/single_step/single_step_llm_command_generator.py +11 -64
  130. rasa/dialogue_understanding/patterns/cancel.py +1 -2
  131. rasa/dialogue_understanding/patterns/clarify.py +1 -1
  132. rasa/dialogue_understanding/patterns/correction.py +2 -2
  133. rasa/dialogue_understanding/patterns/default_flows_for_patterns.yml +37 -25
  134. rasa/dialogue_understanding/patterns/domain_for_patterns.py +190 -0
  135. rasa/dialogue_understanding/processor/command_processor.py +6 -7
  136. rasa/dialogue_understanding/processor/command_processor_component.py +3 -3
  137. rasa/dialogue_understanding/stack/frames/flow_stack_frame.py +17 -4
  138. rasa/dialogue_understanding/stack/utils.py +3 -1
  139. rasa/dialogue_understanding/utils.py +68 -12
  140. rasa/dialogue_understanding_test/du_test_case.py +1 -1
  141. rasa/dialogue_understanding_test/du_test_runner.py +4 -22
  142. rasa/dialogue_understanding_test/test_case_simulation/test_case_tracker_simulator.py +2 -6
  143. rasa/e2e_test/e2e_test_runner.py +1 -1
  144. rasa/engine/constants.py +1 -1
  145. rasa/engine/graph.py +2 -2
  146. rasa/engine/recipes/default_recipe.py +26 -2
  147. rasa/engine/validation.py +3 -2
  148. rasa/hooks.py +0 -28
  149. rasa/llm_fine_tuning/annotation_module.py +39 -9
  150. rasa/llm_fine_tuning/conversations.py +3 -0
  151. rasa/llm_fine_tuning/llm_data_preparation_module.py +66 -49
  152. rasa/llm_fine_tuning/paraphrasing/conversation_rephraser.py +1 -5
  153. rasa/llm_fine_tuning/paraphrasing/rephrase_validator.py +52 -44
  154. rasa/llm_fine_tuning/paraphrasing_module.py +10 -12
  155. rasa/llm_fine_tuning/storage.py +4 -4
  156. rasa/llm_fine_tuning/utils.py +63 -1
  157. rasa/model_manager/model_api.py +88 -0
  158. rasa/model_manager/trainer_service.py +4 -4
  159. rasa/plugin.py +1 -11
  160. rasa/privacy/__init__.py +0 -0
  161. rasa/privacy/constants.py +83 -0
  162. rasa/privacy/event_broker_utils.py +77 -0
  163. rasa/privacy/privacy_config.py +281 -0
  164. rasa/privacy/privacy_config_schema.json +86 -0
  165. rasa/privacy/privacy_filter.py +340 -0
  166. rasa/privacy/privacy_manager.py +576 -0
  167. rasa/server.py +23 -2
  168. rasa/shared/constants.py +14 -0
  169. rasa/shared/core/command_payload_reader.py +1 -5
  170. rasa/shared/core/constants.py +4 -3
  171. rasa/shared/core/domain.py +7 -0
  172. rasa/shared/core/events.py +38 -10
  173. rasa/shared/core/flows/flow.py +1 -2
  174. rasa/shared/core/flows/flows_yaml_schema.json +3 -0
  175. rasa/shared/core/flows/steps/collect.py +46 -2
  176. rasa/shared/core/flows/validation.py +16 -3
  177. rasa/shared/core/slots.py +28 -0
  178. rasa/shared/core/training_data/story_reader/yaml_story_reader.py +1 -4
  179. rasa/shared/exceptions.py +4 -0
  180. rasa/shared/utils/common.py +1 -1
  181. rasa/shared/utils/llm.py +191 -6
  182. rasa/shared/utils/yaml.py +32 -0
  183. rasa/studio/data_handler.py +3 -3
  184. rasa/studio/download/download.py +37 -60
  185. rasa/studio/download/flows.py +23 -31
  186. rasa/studio/link.py +200 -0
  187. rasa/studio/pull.py +94 -0
  188. rasa/studio/push.py +131 -0
  189. rasa/studio/upload.py +117 -67
  190. rasa/telemetry.py +82 -25
  191. rasa/tracing/config.py +3 -4
  192. rasa/tracing/constants.py +19 -1
  193. rasa/tracing/instrumentation/attribute_extractors.py +10 -2
  194. rasa/tracing/instrumentation/instrumentation.py +53 -2
  195. rasa/tracing/instrumentation/metrics.py +98 -15
  196. rasa/tracing/metric_instrument_provider.py +75 -3
  197. rasa/utils/common.py +1 -27
  198. rasa/utils/log_utils.py +1 -45
  199. rasa/validator.py +2 -8
  200. rasa/version.py +1 -1
  201. {rasa_pro-3.13.0.dev7.dist-info → rasa_pro-3.13.0.dev9.dist-info}/METADATA +7 -8
  202. {rasa_pro-3.13.0.dev7.dist-info → rasa_pro-3.13.0.dev9.dist-info}/RECORD +205 -189
  203. rasa/anonymization/__init__.py +0 -2
  204. rasa/anonymization/anonymisation_rule_yaml_reader.py +0 -91
  205. rasa/anonymization/anonymization_pipeline.py +0 -286
  206. rasa/anonymization/anonymization_rule_executor.py +0 -266
  207. rasa/anonymization/anonymization_rule_orchestrator.py +0 -119
  208. rasa/anonymization/schemas/config.yml +0 -47
  209. rasa/anonymization/utils.py +0 -118
  210. rasa/core/channels/inspector/dist/assets/channel-3730f5fd.js +0 -1
  211. rasa/core/channels/inspector/dist/assets/clone-e847561e.js +0 -1
  212. rasa/core/channels/inspector/dist/assets/flowDiagram-v2-96b9c2cf-efbbfe00.js +0 -1
  213. {rasa_pro-3.13.0.dev7.dist-info → rasa_pro-3.13.0.dev9.dist-info}/NOTICE +0 -0
  214. {rasa_pro-3.13.0.dev7.dist-info → rasa_pro-3.13.0.dev9.dist-info}/WHEEL +0 -0
  215. {rasa_pro-3.13.0.dev7.dist-info → rasa_pro-3.13.0.dev9.dist-info}/entry_points.txt +0 -0
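The headline change in this release is in `rasa/core/policies/enterprise_search_policy.py` (entry 94, +262 −62): a new `check_relevancy` option that only takes effect when `use_generative_llm` is enabled. The flag interaction can be sketched in isolation; this is a simplified illustration using the config keys and defaults visible in the diff, not the Rasa API (`relevancy_check_active` is a hypothetical helper):

```python
import logging

logger = logging.getLogger(__name__)


def relevancy_check_active(config: dict) -> bool:
    """Mirror the new flag interaction: the relevancy check only runs
    when generative search ('use_generative_llm') is enabled."""
    check_relevancy = config.get("check_relevancy", False)  # default from the diff
    use_llm = config.get("use_generative_llm", True)        # default from the diff
    if check_relevancy and not use_llm:
        # Matches the warning the policy now emits in __init__
        logger.warning(
            "'check_relevancy' is True but 'use_generative_llm' is False; "
            "the relevancy check will be disabled."
        )
        return False
    return check_relevancy
```

For example, `relevancy_check_active({"check_relevancy": True, "use_generative_llm": False})` returns `False` and logs a warning, which is the same outcome as the policy's new `_check_and_warn_mutual_exclusivity_of_extractive_and_generative_search` guard.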
@@ -1,7 +1,8 @@
1
+ import dataclasses
1
2
  import importlib.resources
2
3
  import json
3
4
  import re
4
- from typing import TYPE_CHECKING, Any, Dict, List, Optional, Text
5
+ from typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Text
5
6
 
6
7
  import dotenv
7
8
  import structlog
@@ -9,6 +10,7 @@ from jinja2 import Template
9
10
  from pydantic import ValidationError
10
11
 
11
12
  import rasa.shared.utils.io
13
+ from rasa.core.available_endpoints import AvailableEndpoints
12
14
  from rasa.core.constants import (
13
15
  POLICY_MAX_HISTORY,
14
16
  POLICY_PRIORITY,
@@ -23,7 +25,6 @@ from rasa.core.information_retrieval import (
23
25
  )
24
26
  from rasa.core.information_retrieval.faiss import FAISS_Store
25
27
  from rasa.core.policies.policy import Policy, PolicyPrediction
26
- from rasa.core.utils import AvailableEndpoints
27
28
  from rasa.dialogue_understanding.generator.constants import (
28
29
  LLM_CONFIG_KEY,
29
30
  )
@@ -53,7 +54,9 @@ from rasa.shared.constants import (
53
54
  MODEL_NAME_CONFIG_KEY,
54
55
  OPENAI_PROVIDER,
55
56
  PROMPT_CONFIG_KEY,
57
+ PROMPT_TEMPLATE_CONFIG_KEY,
56
58
  PROVIDER_CONFIG_KEY,
59
+ RASA_PATTERN_CANNOT_HANDLE_NO_RELEVANT_ANSWER,
57
60
  TEMPERATURE_CONFIG_KEY,
58
61
  TIMEOUT_CONFIG_KEY,
59
62
  )
@@ -78,7 +81,6 @@ from rasa.shared.nlu.training_data.training_data import TrainingData
78
81
  from rasa.shared.providers.embedding._langchain_embedding_client_adapter import (
79
82
  _LangchainEmbeddingClientAdapter,
80
83
  )
81
- from rasa.shared.providers.llm.llm_client import LLMClient
82
84
  from rasa.shared.providers.llm.llm_response import LLMResponse, measure_llm_latency
83
85
  from rasa.shared.utils.cli import print_error_and_exit
84
86
  from rasa.shared.utils.constants import (
@@ -93,6 +95,7 @@ from rasa.shared.utils.io import deep_container_fingerprint
93
95
  from rasa.shared.utils.llm import (
94
96
  DEFAULT_OPENAI_CHAT_MODEL_NAME,
95
97
  DEFAULT_OPENAI_EMBEDDING_MODEL_NAME,
98
+ check_prompt_config_keys_and_warn_if_deprecated,
96
99
  embedder_factory,
97
100
  get_prompt_template,
98
101
  llm_factory,
@@ -113,7 +116,7 @@ if TYPE_CHECKING:
113
116
 
114
117
  from rasa.utils.log_utils import log_llm
115
118
 
116
- logger = structlog.get_logger()
119
+ structlogger = structlog.get_logger()
117
120
 
118
121
  dotenv.load_dotenv("./.env")
119
122
 
@@ -124,6 +127,7 @@ VECTOR_STORE_THRESHOLD_PROPERTY = "threshold"
124
127
  TRACE_TOKENS_PROPERTY = "trace_prompt_tokens"
125
128
  CITATION_ENABLED_PROPERTY = "citation_enabled"
126
129
  USE_LLM_PROPERTY = "use_generative_llm"
130
+ CHECK_RELEVANCY_PROPERTY = "check_relevancy"
127
131
  MAX_MESSAGES_IN_QUERY_KEY = "max_messages_in_query"
128
132
 
129
133
  DEFAULT_VECTOR_STORE_TYPE = "faiss"
@@ -134,6 +138,10 @@ DEFAULT_VECTOR_STORE = {
134
138
  VECTOR_STORE_THRESHOLD_PROPERTY: DEFAULT_VECTOR_STORE_THRESHOLD,
135
139
  }
136
140
 
141
+ DEFAULT_CHECK_RELEVANCY_PROPERTY = False
142
+ DEFAULT_USE_LLM_PROPERTY = True
143
+ DEFAULT_CITATION_ENABLED_PROPERTY = False
144
+
137
145
  DEFAULT_LLM_CONFIG = {
138
146
  PROVIDER_CONFIG_KEY: OPENAI_PROVIDER,
139
147
  MODEL_CONFIG_KEY: DEFAULT_OPENAI_CHAT_MODEL_NAME,
@@ -162,6 +170,18 @@ DEFAULT_ENTERPRISE_SEARCH_PROMPT_WITH_CITATION_TEMPLATE = importlib.resources.re
162
170
  "rasa.core.policies", "enterprise_search_prompt_with_citation_template.jinja2"
163
171
  )
164
172
 
173
+ DEFAULT_ENTERPRISE_SEARCH_PROMPT_WITH_RELEVANCY_CHECK_AND_CITATION_TEMPLATE = (
174
+ importlib.resources.read_text(
175
+ "rasa.core.policies",
176
+ "enterprise_search_prompt_with_relevancy_check_and_citation_template.jinja2",
177
+ )
178
+ )
179
+
180
+ # TODO: Update this pattern once the experiments are done
181
+ _ENTERPRISE_SEARCH_ANSWER_NOT_RELEVANT_PATTERN = re.compile(
182
+ r"\[NO_RELEVANT_ANSWER_FOUND\]"
183
+ )
184
+
165
185
 
166
186
  class VectorStoreConnectionError(RasaException):
167
187
  """Exception raised for errors in connecting to the vector store."""
@@ -171,6 +191,12 @@ class VectorStoreConfigurationError(RasaException):
171
191
  """Exception raised for errors in vector store configuration."""
172
192
 
173
193
 
194
+ @dataclasses.dataclass
195
+ class _RelevancyCheckResponse:
196
+ answer: Optional[str]
197
+ relevant: bool
198
+
199
+
174
200
  @DefaultV1Recipe.register(
175
201
  DefaultV1Recipe.ComponentType.POLICY_WITH_END_TO_END_SUPPORT, is_trainable=True
176
202
  )
@@ -220,6 +246,13 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
220
246
  """Constructs a new Policy object."""
221
247
  super().__init__(config, model_storage, resource, execution_context, featurizer)
222
248
 
249
+ # Check for deprecated keys and issue a warning if those are used
250
+ check_prompt_config_keys_and_warn_if_deprecated(
251
+ config, "enterprise_search_policy"
252
+ )
253
+ # Check for mutual exclusivity of extractive and generative search
254
+ self._check_and_warn_mutual_exclusivity_of_extractive_and_generative_search()
255
+
223
256
  # Resolve LLM config
224
257
  self.config[LLM_CONFIG_KEY] = resolve_model_client_config(
225
258
  self.config.get(LLM_CONFIG_KEY), EnterpriseSearchPolicy.__name__
@@ -234,6 +267,9 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
234
267
  self.vector_store_config = self.config.get(
235
268
  VECTOR_STORE_PROPERTY, DEFAULT_VECTOR_STORE
236
269
  )
270
+ self.vector_search_threshold = self.vector_store_config.get(
271
+ VECTOR_STORE_THRESHOLD_PROPERTY, DEFAULT_VECTOR_STORE_THRESHOLD
272
+ )
237
273
 
238
274
  # Embeddings configuration for encoding the search query
239
275
  self.embeddings_config = (
@@ -249,30 +285,49 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
249
285
  # Maximum number of messages to include in the search query
250
286
  self.max_messages_in_query = self.config.get(MAX_MESSAGES_IN_QUERY_KEY, 2)
251
287
 
252
- # boolean to enable/disable tracing of prompt tokens
288
+ # Boolean to enable/disable tracing of prompt tokens
253
289
  self.trace_prompt_tokens = self.config.get(TRACE_TOKENS_PROPERTY, False)
254
290
 
255
- # boolean to enable/disable the use of LLM for response generation
256
- self.use_llm = self.config.get(USE_LLM_PROPERTY, True)
291
+ # Boolean to enable/disable the use of LLM for response generation
292
+ self.use_llm = self.config.get(USE_LLM_PROPERTY, DEFAULT_USE_LLM_PROPERTY)
257
293
 
258
- # boolean to enable/disable citation generation
259
- self.citation_enabled = self.config.get(CITATION_ENABLED_PROPERTY, False)
294
+ # Boolean to enable/disable citation generation. This flag enables citation
295
+ # logic, but it only takes effect if `use_llm` is True.
296
+ self.citation_enabled = self.config.get(
297
+ CITATION_ENABLED_PROPERTY, DEFAULT_CITATION_ENABLED_PROPERTY
298
+ )
260
299
 
261
- self.prompt_template = prompt_template or get_prompt_template(
262
- self.config.get(PROMPT_CONFIG_KEY),
263
- DEFAULT_ENTERPRISE_SEARCH_PROMPT_TEMPLATE,
264
- log_source_component=EnterpriseSearchPolicy.__name__,
265
- log_source_method=LOG_COMPONENT_SOURCE_METHOD_INIT,
300
+ # Boolean to enable/disable the use of relevancy check alongside answer
301
+ # generation. This flag enables citation logic, but it only takes effect if
302
+ # `use_llm` is True.
303
+ self.relevancy_check_enabled = self.config.get(
304
+ CHECK_RELEVANCY_PROPERTY, DEFAULT_CHECK_RELEVANCY_PROPERTY
266
305
  )
267
- self.citation_prompt_template = get_prompt_template(
268
- self.config.get(PROMPT_CONFIG_KEY),
269
- DEFAULT_ENTERPRISE_SEARCH_PROMPT_WITH_CITATION_TEMPLATE,
270
- log_source_component=EnterpriseSearchPolicy.__name__,
271
- log_source_method=LOG_COMPONENT_SOURCE_METHOD_INIT,
306
+
307
+ # Resolve the prompt template. The prompt will only be used if the 'use_llm' is
308
+ # set to True.
309
+ self.prompt_template = prompt_template or self._resolve_prompt_template(
310
+ self.config, LOG_COMPONENT_SOURCE_METHOD_INIT
272
311
  )
273
- # If citation is enabled, use the citation prompt template
274
- if self.citation_enabled:
275
- self.prompt_template = self.citation_prompt_template
312
+
313
+ def _check_and_warn_mutual_exclusivity_of_extractive_and_generative_search(
314
+ self,
315
+ ) -> None:
316
+ if self.config.get(
317
+ CHECK_RELEVANCY_PROPERTY, DEFAULT_CHECK_RELEVANCY_PROPERTY
318
+ ) and not self.config.get(USE_LLM_PROPERTY, DEFAULT_USE_LLM_PROPERTY):
319
+ structlogger.warning(
320
+ "enterprise_search_policy.init"
321
+ ".relevancy_check_enabled_with_disabled_generative_search",
322
+ event_info=(
323
+ f"The config parameter '{CHECK_RELEVANCY_PROPERTY}' is set to"
324
+ f"'True', but the generative search is disabled (config"
325
+ f"parameter '{USE_LLM_PROPERTY}' is set to 'False'). As a result, "
326
+ "the relevancy check for the generative search will be disabled. "
327
+ f"To use this check, set the config parameter '{USE_LLM_PROPERTY}' "
328
+ f"to `True`."
329
+ ),
330
+ )
276
331
 
277
332
  @classmethod
278
333
  def _create_plain_embedder(cls, config: Dict[Text, Any]) -> "Embeddings":
@@ -366,7 +421,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
366
421
  try:
367
422
  embeddings = self._create_plain_embedder(self.config)
368
423
  except (ValidationError, Exception) as e:
369
- logger.error(
424
+ structlogger.error(
370
425
  "enterprise_search_policy.train.embedder_instantiation_failed",
371
426
  message="Unable to instantiate the embedding client.",
372
427
  error=e,
@@ -377,16 +432,19 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
377
432
  )
378
433
 
379
434
  if store_type == DEFAULT_VECTOR_STORE_TYPE:
380
- logger.info("enterprise_search_policy.train.faiss")
435
+ structlogger.info("enterprise_search_policy.train.faiss")
381
436
  with self._model_storage.write_to(self._resource) as path:
382
437
  self.vector_store = FAISS_Store(
383
438
  docs_folder=self.vector_store_config.get(SOURCE_PROPERTY),
384
439
  embeddings=embeddings,
385
440
  index_path=path,
386
441
  create_index=True,
442
+ parse_as_faq_pairs=not self.use_llm,
387
443
  )
388
444
  else:
389
- logger.info("enterprise_search_policy.train.custom", store_type=store_type)
445
+ structlogger.info(
446
+ "enterprise_search_policy.train.custom", store_type=store_type
447
+ )
390
448
 
391
449
  # telemetry call to track training completion
392
450
  track_enterprise_search_policy_train_completed(
@@ -402,6 +460,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
402
460
  or self.llm_config.get(MODEL_NAME_CONFIG_KEY),
403
461
  llm_model_group_id=self.llm_config.get(MODEL_GROUP_ID_CONFIG_KEY),
404
462
  citation_enabled=self.citation_enabled,
463
+ relevancy_check_enabled=self.relevancy_check_enabled,
405
464
  )
406
465
  self.persist()
407
466
  return self._resource
@@ -454,7 +513,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
454
513
  config = endpoints.vector_store if endpoints else None
455
514
  store_type = self.vector_store_config.get(VECTOR_STORE_TYPE_PROPERTY)
456
515
  if config is None and store_type != DEFAULT_VECTOR_STORE_TYPE:
457
- logger.error(
516
+ structlogger.error(
458
517
  "enterprise_search_policy._connect_vector_store_or_raise.no_config"
459
518
  )
460
519
  raise VectorStoreConfigurationError(
@@ -464,7 +523,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
464
523
  try:
465
524
  self.vector_store.connect(config) # type: ignore
466
525
  except Exception as e:
467
- logger.error(
526
+ structlogger.error(
468
527
  "enterprise_search_policy._connect_vector_store_or_raise.connect_error",
469
528
  error=e,
470
529
  config=config,
@@ -490,14 +549,14 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
490
549
  transcript.append(sanitize_message_for_prompt(event.text))
491
550
 
492
551
  search_query = " ".join(transcript[-history:][::-1])
493
- logger.debug("search_query", search_query=search_query)
552
+ structlogger.debug("search_query", search_query=search_query)
494
553
  return search_query
495
554
 
496
555
  async def predict_action_probabilities( # type: ignore[override]
497
556
  self,
498
557
  tracker: DialogueStateTracker,
499
558
  domain: Domain,
500
- endpoints: Optional[AvailableEndpoints],
559
+ endpoints: Optional[AvailableEndpoints] = None,
501
560
  rule_only_data: Optional[Dict[Text, Any]] = None,
502
561
  **kwargs: Any,
503
562
  ) -> PolicyPrediction:
@@ -516,23 +575,20 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
516
575
  The prediction.
517
576
  """
518
577
  logger_key = "enterprise_search_policy.predict_action_probabilities"
519
- vector_search_threshold = self.vector_store_config.get(
520
- VECTOR_STORE_THRESHOLD_PROPERTY, DEFAULT_VECTOR_STORE_THRESHOLD
521
- )
522
- llm = llm_factory(self.config.get(LLM_CONFIG_KEY), DEFAULT_LLM_CONFIG)
578
+
523
579
  if not self.supports_current_stack_frame(
524
580
  tracker, False, False
525
581
  ) or self.should_abstain_in_coexistence(tracker, True):
526
582
  return self._prediction(self._default_predictions(domain))
527
583
 
528
584
  if not self.vector_store:
529
- logger.error(f"{logger_key}.no_vector_store")
585
+ structlogger.error(f"{logger_key}.no_vector_store")
530
586
  return self._create_prediction_internal_error(domain, tracker)
531
587
 
532
588
  try:
533
589
  self._connect_vector_store_or_raise(endpoints)
534
590
  except (VectorStoreConfigurationError, VectorStoreConnectionError) as e:
535
- logger.error(f"{logger_key}.connection_error", error=e)
591
+ structlogger.error(f"{logger_key}.connection_error", error=e)
536
592
  return self._create_prediction_internal_error(domain, tracker)
537
593
 
538
594
  search_query = self._prepare_search_query(
@@ -544,20 +600,19 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
544
600
  documents = await self.vector_store.search(
545
601
  query=search_query,
546
602
  tracker_state=tracker_state,
547
- threshold=vector_search_threshold,
603
+ threshold=self.vector_search_threshold,
548
604
  )
549
605
  except InformationRetrievalException as e:
550
- logger.error(f"{logger_key}.search_error", error=e)
606
+ structlogger.error(f"{logger_key}.search_error", error=e)
551
607
  return self._create_prediction_internal_error(domain, tracker)
552
608
 
553
609
  if not documents.results:
554
- logger.info(f"{logger_key}.no_documents")
610
+ structlogger.info(f"{logger_key}.no_documents")
555
611
  return self._create_prediction_cannot_handle(domain, tracker)
556
612
 
557
613
  if self.use_llm:
558
614
  prompt = self._render_prompt(tracker, documents.results)
559
- llm_response = await self._generate_llm_answer(llm, prompt)
560
- llm_response = LLMResponse.ensure_llm_response(llm_response)
615
+ llm_response = await self._invoke_llm(prompt)
561
616
 
562
617
  self._add_prompt_and_llm_response_to_latest_message(
563
618
  tracker=tracker,
@@ -567,24 +622,38 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
567
622
  )
568
623
 
569
624
  if llm_response is None or not llm_response.choices:
570
- logger.debug(f"{logger_key}.no_llm_response")
625
+ structlogger.debug(f"{logger_key}.no_llm_response")
571
626
  response = None
572
627
  else:
573
628
  llm_answer = llm_response.choices[0]
574
629
 
630
+ if self.relevancy_check_enabled:
631
+ relevancy_response = self._parse_llm_relevancy_check_response(
632
+ llm_answer
633
+ )
634
+ if not relevancy_response.relevant:
635
+ structlogger.debug(f"{logger_key}.answer_not_relevant")
636
+ return self._create_prediction_cannot_handle(
637
+ domain,
638
+ tracker,
639
+ RASA_PATTERN_CANNOT_HANDLE_NO_RELEVANT_ANSWER,
640
+ )
641
+
575
642
  if self.citation_enabled:
576
643
  llm_answer = self.post_process_citations(llm_answer)
577
644
 
578
- logger.debug(f"{logger_key}.llm_answer", llm_answer=llm_answer)
645
+ structlogger.debug(
646
+ f"{logger_key}.llm_answer", prompt=prompt, llm_answer=llm_answer
647
+ )
579
648
  response = llm_answer
580
649
  else:
581
650
  response = documents.results[0].metadata.get("answer", None)
582
651
  if not response:
583
- logger.error(
652
+ structlogger.error(
584
653
  f"{logger_key}.answer_key_missing_in_metadata",
585
654
  search_results=documents.results,
586
655
  )
587
- logger.debug(
656
+ structlogger.debug(
588
657
  "enterprise_search_policy.predict_action_probabilities.no_llm",
589
658
  search_results=documents,
590
659
  )
@@ -616,6 +685,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
616
685
  or self.llm_config.get(MODEL_NAME_CONFIG_KEY),
617
686
  llm_model_group_id=self.llm_config.get(MODEL_GROUP_ID_CONFIG_KEY),
618
687
  citation_enabled=self.citation_enabled,
688
+ relevancy_check_enabled=self.relevancy_check_enabled,
619
689
  )
620
690
  return self._create_prediction(
621
691
  domain=domain, tracker=tracker, action_metadata=action_metadata
@@ -639,11 +709,12 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
             ),
             "docs": documents,
             "slots": self._prepare_slots_for_template(tracker),
+            "check_relevancy": self.relevancy_check_enabled,
             "citation_enabled": self.citation_enabled,
         }
         prompt = Template(self.prompt_template).render(**inputs)
         log_llm(
-            logger=structlogger,
+            logger=structlogger,
             log_module="EnterpriseSearchPolicy",
             log_event="enterprise_search_policy._render_prompt.prompt_rendered",
             prompt=prompt,
@@ -651,9 +722,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
         return prompt
 
     @measure_llm_latency
-    async def _generate_llm_answer(
-        self, llm: LLMClient, prompt: Text
-    ) -> Optional[LLMResponse]:
+    async def _invoke_llm(self, prompt: Text) -> Optional[LLMResponse]:
         """Fetches an LLM completion for the provided prompt.
 
         Args:
@@ -663,17 +732,32 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
         Returns:
             An LLMResponse object, or None if the call fails.
         """
+        llm = llm_factory(self.config.get(LLM_CONFIG_KEY), DEFAULT_LLM_CONFIG)
         try:
-            return await llm.acompletion(prompt)
+            response = await llm.acompletion(prompt)
+            return LLMResponse.ensure_llm_response(response)
         except Exception as e:
             # unfortunately, langchain does not wrap LLM exceptions which means
             # we have to catch all exceptions here
-            structlogger.error(
+            structlogger.error(
                 "enterprise_search_policy._generate_llm_answer.llm_error",
                 error=e,
             )
             return None
 
+    def _parse_llm_relevancy_check_response(
+        self, llm_answer: str
+    ) -> _RelevancyCheckResponse:
+        """Checks if the LLM response is relevant by parsing it."""
+        answer_relevant = not _ENTERPRISE_SEARCH_ANSWER_NOT_RELEVANT_PATTERN.search(
+            llm_answer
+        )
+        structlogger.debug("")
+        return _RelevancyCheckResponse(
+            answer=llm_answer if answer_relevant else None,
+            relevant=answer_relevant,
+        )
+
     def _create_prediction(
         self,
         domain: Domain,
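The new `_parse_llm_relevancy_check_response` boils down to a sentinel search over the raw LLM answer. A minimal standalone sketch of the same idea, assuming the sentinel string `[NO_RELEVANT_ANSWER_FOUND]` that the shipped relevancy-check prompt instructs the model to emit (the names `_NOT_RELEVANT` and `RelevancyCheckResponse` here are stand-ins for the package's private `_ENTERPRISE_SEARCH_ANSWER_NOT_RELEVANT_PATTERN` and `_RelevancyCheckResponse`):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Stand-in for _ENTERPRISE_SEARCH_ANSWER_NOT_RELEVANT_PATTERN: matches the
# sentinel the relevancy-check prompt asks the LLM to emit when no grounded
# answer can be found in the retrieved documents.
_NOT_RELEVANT = re.compile(r"\[NO_RELEVANT_ANSWER_FOUND\]")


@dataclass
class RelevancyCheckResponse:
    answer: Optional[str]
    relevant: bool


def parse_relevancy(llm_answer: str) -> RelevancyCheckResponse:
    """Treat the answer as irrelevant if the sentinel appears anywhere in it."""
    relevant = _NOT_RELEVANT.search(llm_answer) is None
    return RelevancyCheckResponse(
        answer=llm_answer if relevant else None,
        relevant=relevant,
    )


print(parse_relevancy("Flows model business logic.").relevant)  # True
print(parse_relevancy("[NO_RELEVANT_ANSWER_FOUND]").relevant)   # False
```

When the check fails, the policy above routes the turn to the cannot-handle pattern instead of surfacing the sentinel to the user.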
@@ -708,10 +792,18 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
         )
 
     def _create_prediction_cannot_handle(
-        self, domain: Domain, tracker: DialogueStateTracker
+        self,
+        domain: Domain,
+        tracker: DialogueStateTracker,
+        reason: Optional[str] = None,
     ) -> PolicyPrediction:
+        cannot_handle_stack_frame = (
+            CannotHandlePatternFlowStackFrame(reason=reason)
+            if reason is not None
+            else CannotHandlePatternFlowStackFrame()
+        )
         return self._create_prediction_for_pattern(
-            domain, tracker, CannotHandlePatternFlowStackFrame()
+            domain, tracker, cannot_handle_stack_frame
         )
 
     def _create_prediction_for_pattern(
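The conditional-construction dance in `_create_prediction_cannot_handle` exists so that passing `reason=None` does not clobber whatever default reason the stack frame defines. A sketch of that pattern, with a hypothetical `CannotHandleFrame` dataclass standing in for `CannotHandlePatternFlowStackFrame` (whose actual default is an assumption here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CannotHandleFrame:
    # Hypothetical default; the real frame class supplies its own.
    reason: str = "default_cannot_handle"


def make_frame(reason: Optional[str] = None) -> CannotHandleFrame:
    # Only forward `reason` when the caller actually provided one, so the
    # frame's own default survives a bare call.
    if reason is not None:
        return CannotHandleFrame(reason=reason)
    return CannotHandleFrame()


print(make_frame().reason)                      # default_cannot_handle
print(make_frame("no_relevant_answer").reason)  # no_relevant_answer
```

This is why the relevancy-check path can pass `RASA_PATTERN_CANNOT_HANDLE_NO_RELEVANT_ANSWER` while all pre-existing callers keep their old behavior.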
@@ -780,7 +872,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
                 path / ENTERPRISE_SEARCH_PROMPT_FILE_NAME
             )
         except (FileNotFoundError, FileIOException) as e:
-            structlogger.warning(
+            structlogger.warning(
                 "enterprise_search_policy.load.failed", error=e, resource=resource.name
             )
@@ -790,7 +882,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
 
         embeddings = cls._create_plain_embedder(config)
 
-        structlogger.info("enterprise_search_policy.load", config=config)
+        structlogger.info("enterprise_search_policy.load", config=config)
         if store_type == DEFAULT_VECTOR_STORE_TYPE:
             # if a vector store is not specified,
             # default to using FAISS with the index stored in the model
@@ -801,6 +893,9 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
                 index_path=path,
                 docs_folder=None,
                 create_index=False,
+                parse_as_faq_pairs=not config.get(
+                    USE_LLM_PROPERTY, DEFAULT_USE_LLM_PROPERTY
+                ),
             )
         else:
             vector_store = create_from_endpoint_config(
@@ -849,15 +944,12 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
     @classmethod
     def fingerprint_addon(cls, config: Dict[str, Any]) -> Optional[str]:
         """Add a fingerprint of enterprise search policy for the graph."""
-        local_knowledge_data = cls._get_local_knowledge_data(config)
-
-        prompt_template = get_prompt_template(
-            config.get(PROMPT_CONFIG_KEY),
-            DEFAULT_ENTERPRISE_SEARCH_PROMPT_TEMPLATE,
-            log_source_component=EnterpriseSearchPolicy.__name__,
-            log_source_method=LOG_COMPONENT_SOURCE_METHOD_FINGERPRINT_ADDON,
+        prompt_template = cls._resolve_prompt_template(
+            config, LOG_COMPONENT_SOURCE_METHOD_FINGERPRINT_ADDON
         )
 
+        local_knowledge_data = cls._get_local_knowledge_data(config)
+
         llm_config = resolve_model_client_config(
             config.get(LLM_CONFIG_KEY), EnterpriseSearchPolicy.__name__
         )
@@ -881,7 +973,7 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
         Returns:
             The post-processed LLM answer.
         """
-        structlogger.debug(
+        structlogger.debug(
            "enterprise_search_policy.post_process_citations", llm_answer=llm_answer
         )
@@ -982,3 +1074,111 @@ class EnterpriseSearchPolicy(LLMHealthCheckMixin, EmbeddingsHealthCheckMixin, Po
             log_source_method,
             EnterpriseSearchPolicy.__name__,
         )
+
+    @classmethod
+    def get_system_default_prompt_based_on_config(cls, config: Dict[str, Any]) -> str:
+        """
+        Resolves the default prompt template for the Enterprise Search Policy
+        based on the component's configuration.
+
+        - The old prompt is selected when both citation and relevancy check are
+          either disabled or not set in the configuration.
+        - The citation prompt is used when citation is enabled and relevancy
+          check is either disabled or not set in the configuration.
+        - The relevancy check prompt is only used when relevancy check is enabled.
+
+        Args:
+            config: The component's configuration.
+
+        Returns:
+            The resolved jinja prompt template as a string.
+        """
+
+        # Get the feature flags
+        citation_enabled = config.get(
+            CITATION_ENABLED_PROPERTY, DEFAULT_CITATION_ENABLED_PROPERTY
+        )
+        relevancy_check_enabled = config.get(
+            CHECK_RELEVANCY_PROPERTY, DEFAULT_CHECK_RELEVANCY_PROPERTY
+        )
+
+        # Based on the enabled features (citation, relevancy check) fetch the
+        # appropriate default prompt
+        default_prompt = cls._select_default_prompt_template_based_on_features(
+            relevancy_check_enabled, citation_enabled
+        )
+
+        return default_prompt
+
+    @classmethod
+    def _resolve_prompt_template(
+        cls,
+        config: dict,
+        log_source_method: Literal["init", "fingerprint"],
+    ) -> str:
+        """
+        Resolves the prompt template to use for the Enterprise Search Policy's
+        generative search.
+
+        Checks if a custom template is provided via the component's
+        configuration. If not, it selects the appropriate default template
+        based on the enabled features (citation and relevancy check).
+
+        Args:
+            config: The component's configuration.
+            log_source_method: The name of the method or function emitting the
+                log for better traceability.
+
+        Returns:
+            The resolved jinja prompt template as a string.
+        """
+
+        # Read the template path from the configuration if available.
+        # The deprecated 'prompt' has a lower priority compared to 'prompt_template'
+        config_defined_prompt = (
+            config.get(PROMPT_TEMPLATE_CONFIG_KEY)
+            or config.get(PROMPT_CONFIG_KEY)
+            or None
+        )
+        # Select the default prompt based on the features set in the config.
+        default_prompt = cls.get_system_default_prompt_based_on_config(config)
+
+        return get_prompt_template(
+            config_defined_prompt,
+            default_prompt,
+            log_source_component=EnterpriseSearchPolicy.__name__,
+            log_source_method=log_source_method,
+        )
+
+    @classmethod
+    def _select_default_prompt_template_based_on_features(
+        cls,
+        relevancy_check_enabled: bool,
+        citation_enabled: bool,
+    ) -> str:
+        """
+        Returns the appropriate default prompt template based on the feature flags.
+
+        The selection follows this priority:
+        1. If relevancy check is enabled, return the prompt that includes both
+           relevancy and citation blocks.
+        2. If only citation is enabled, return the prompt with citation blocks.
+        3. Otherwise, fall back to the legacy default prompt template.
+
+        Args:
+            relevancy_check_enabled: Whether the LLM-generated answer should
+                undergo relevancy evaluation.
+            citation_enabled: Whether citations should be included in the
+                generated answer.
+
+        Returns:
+            The default prompt template corresponding to the enabled features.
+        """
+        if relevancy_check_enabled:
+            # ES prompt that has relevancy check and citations blocks
+            return DEFAULT_ENTERPRISE_SEARCH_PROMPT_WITH_RELEVANCY_CHECK_AND_CITATION_TEMPLATE  # noqa: E501
+        elif citation_enabled:
+            # ES prompt with citation's block - backward compatibility
+            return DEFAULT_ENTERPRISE_SEARCH_PROMPT_WITH_CITATION_TEMPLATE
+        else:
+            # Legacy ES prompt - backward compatibility
+            return DEFAULT_ENTERPRISE_SEARCH_PROMPT_TEMPLATE
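The selection ladder in `_select_default_prompt_template_based_on_features` can be exercised in isolation. A sketch with placeholder strings standing in for the three `DEFAULT_ENTERPRISE_SEARCH_*` template constants, showing that the relevancy-check flag takes precedence over the citation flag:

```python
# Placeholder strings stand in for the real template constants.
LEGACY = "legacy prompt"
CITATION = "citation prompt"
RELEVANCY_AND_CITATION = "relevancy + citation prompt"


def select_default_prompt(relevancy_check_enabled: bool, citation_enabled: bool) -> str:
    """Relevancy check wins over citation; the legacy prompt is the fallback."""
    if relevancy_check_enabled:
        return RELEVANCY_AND_CITATION
    elif citation_enabled:
        return CITATION
    return LEGACY


# The relevancy-check prompt is chosen even when citation is disabled,
# because that template already contains the citation blocks as well.
print(select_default_prompt(True, False))   # relevancy + citation prompt
print(select_default_prompt(False, True))   # citation prompt
print(select_default_prompt(False, False))  # legacy prompt
```

A user-supplied `prompt_template` (or the deprecated `prompt`) still overrides all three defaults via `_resolve_prompt_template`.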
@@ -0,0 +1,63 @@
+Given the following information, please provide an answer based on the provided documents and the context of the recent conversation.
+If the answer is not known or cannot be determined from the provided documents or context, please state that you do not know to the user.
+### Relevant Documents
+Use the following documents to answer the question:
+{% for doc in docs %}
+{{ loop.cycle("*")}}. {{ doc.metadata }}
+{{ doc.text }}
+{% endfor %}
+
+{% if citation_enabled %}
+### Citing Sources
+Find the sources from the documents that are most relevant to answering the question.
+The sources must be extracted from the given document metadata source property and not from the conversation context.
+If there are no relevant sources, write "No relevant sources" instead.
+
+For each source you cite, follow a 1-based numbering system for citations.
+Start with [1] for the first source you refer to, regardless of its index in the provided list of documents.
+If you cite another source, use the next number in sequence, [2], and so on.
+Ensure each source is only assigned one number, even if referenced multiple times.
+If you refer back to a previously cited source, use its originally assigned number.
+
+For example, if you first cite the third source in the list, refer to it as [1].
+If you then cite the first source in the list, refer to it as [2].
+If you mention the third source again, still refer to it as [1].
+
+Don't say "According to Source [1]" when answering. Instead, make references to sources relevant to each section of the answer solely by adding the bracketed number at the end of the relevant sentence.
+#### Formatting
+First print the answer with in-text citations which follow a numbered order starting with index 1, then add the sources section.
+The format of your overall answer must look like what's shown between the <example></example> tags.
+Make sure to follow the formatting exactly and remove any line breaks or whitespaces between the answer and the Sources section.
+<example>
+You can use flows to model business logic in Rasa assistants. [1] You can use the Enterprise Search Policy to search vector stores for relevant knowledge base documents. [2]
+Sources:
+[1] https://rasa.com/docs/rasa-pro/concepts/flows
+[2] https://rasa.com/docs/rasa-pro/concepts/policies/enterprise-search-policy
+</example>
+{% endif %}
+
+{% if slots|length > 0 %}
+### Slots or Variables
+Here are the variables of the currently active conversation which may be used to answer the question:
+{% for slot in slots -%}
+- name: {{ slot.name }}, value: {{ slot.value }}, type: {{ slot.type }}
+{% endfor %}
+{% endif %}
+### Current Conversation
+Transcript of the current conversation, use it to determine the context of the question:
+{{ current_conversation }}
+
+
+## Answering the Question
+Based on the above sections, please formulate an answer to the question or request in the user's last message.
+It is important that you ensure the answer is grounded in the provided documents and conversation context.
+Avoid speculating or making assumptions beyond the given information and keep your answers short, 2 to 3 sentences at most.
+
+{% if citation_enabled %}
+If you are unable to find an answer in the given relevant documents, do not cite sources from elsewhere in the conversation context.
+{% endif %}
+
+{% if check_relevancy %}
+If answer is not relevant output: "[NO_RELEVANT_ANSWER_FOUND]"
+{% endif %}
+Your answer:
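The new template file above is rendered by `_render_prompt` via `Template(self.prompt_template).render(**inputs)`, with `check_relevancy` and `citation_enabled` among the inputs. A toy rendering, assuming `jinja2` is installed and using a cut-down stand-in template rather than the full shipped one:

```python
from jinja2 import Template

# Cut-down stand-in for the shipped prompt template: only the final
# relevancy-check block is reproduced here.
template = Template(
    "Answer from the documents.\n"
    "{% if check_relevancy %}"
    'If answer is not relevant output: "[NO_RELEVANT_ANSWER_FOUND]"\n'
    "{% endif %}"
    "Your answer:"
)

# The sentinel instruction only appears when the relevancy check is enabled.
with_check = template.render(check_relevancy=True)
without_check = template.render(check_relevancy=False)

assert "[NO_RELEVANT_ANSWER_FOUND]" in with_check
assert "[NO_RELEVANT_ANSWER_FOUND]" not in without_check
```

This is how a single template file can serve both configurations: disabling `check_relevancy` simply drops the sentinel instruction, and `_parse_llm_relevancy_check_response` is never consulted.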