azure-ai-evaluation 1.8.0__py3-none-any.whl → 1.10.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of azure-ai-evaluation might be problematic.

Files changed (142)
  1. azure/ai/evaluation/__init__.py +51 -6
  2. azure/ai/evaluation/_aoai/__init__.py +1 -1
  3. azure/ai/evaluation/_aoai/aoai_grader.py +21 -11
  4. azure/ai/evaluation/_aoai/label_grader.py +3 -2
  5. azure/ai/evaluation/_aoai/python_grader.py +84 -0
  6. azure/ai/evaluation/_aoai/score_model_grader.py +91 -0
  7. azure/ai/evaluation/_aoai/string_check_grader.py +3 -2
  8. azure/ai/evaluation/_aoai/text_similarity_grader.py +3 -2
  9. azure/ai/evaluation/_azure/_envs.py +9 -10
  10. azure/ai/evaluation/_azure/_token_manager.py +7 -1
  11. azure/ai/evaluation/_common/constants.py +11 -2
  12. azure/ai/evaluation/_common/evaluation_onedp_client.py +32 -26
  13. azure/ai/evaluation/_common/onedp/__init__.py +32 -32
  14. azure/ai/evaluation/_common/onedp/_client.py +136 -139
  15. azure/ai/evaluation/_common/onedp/_configuration.py +70 -73
  16. azure/ai/evaluation/_common/onedp/_patch.py +21 -21
  17. azure/ai/evaluation/_common/onedp/_utils/__init__.py +6 -0
  18. azure/ai/evaluation/_common/onedp/_utils/model_base.py +1232 -0
  19. azure/ai/evaluation/_common/onedp/_utils/serialization.py +2032 -0
  20. azure/ai/evaluation/_common/onedp/_validation.py +50 -50
  21. azure/ai/evaluation/_common/onedp/_version.py +9 -9
  22. azure/ai/evaluation/_common/onedp/aio/__init__.py +29 -29
  23. azure/ai/evaluation/_common/onedp/aio/_client.py +138 -143
  24. azure/ai/evaluation/_common/onedp/aio/_configuration.py +70 -75
  25. azure/ai/evaluation/_common/onedp/aio/_patch.py +21 -21
  26. azure/ai/evaluation/_common/onedp/aio/operations/__init__.py +37 -39
  27. azure/ai/evaluation/_common/onedp/aio/operations/_operations.py +4832 -4494
  28. azure/ai/evaluation/_common/onedp/aio/operations/_patch.py +21 -21
  29. azure/ai/evaluation/_common/onedp/models/__init__.py +168 -142
  30. azure/ai/evaluation/_common/onedp/models/_enums.py +230 -162
  31. azure/ai/evaluation/_common/onedp/models/_models.py +2685 -2228
  32. azure/ai/evaluation/_common/onedp/models/_patch.py +21 -21
  33. azure/ai/evaluation/_common/onedp/operations/__init__.py +37 -39
  34. azure/ai/evaluation/_common/onedp/operations/_operations.py +6106 -5657
  35. azure/ai/evaluation/_common/onedp/operations/_patch.py +21 -21
  36. azure/ai/evaluation/_common/rai_service.py +88 -52
  37. azure/ai/evaluation/_common/raiclient/__init__.py +1 -1
  38. azure/ai/evaluation/_common/raiclient/operations/_operations.py +14 -1
  39. azure/ai/evaluation/_common/utils.py +188 -10
  40. azure/ai/evaluation/_constants.py +2 -1
  41. azure/ai/evaluation/_converters/__init__.py +1 -1
  42. azure/ai/evaluation/_converters/_ai_services.py +9 -8
  43. azure/ai/evaluation/_converters/_models.py +46 -0
  44. azure/ai/evaluation/_converters/_sk_services.py +495 -0
  45. azure/ai/evaluation/_eval_mapping.py +2 -2
  46. azure/ai/evaluation/_evaluate/_batch_run/_run_submitter_client.py +73 -25
  47. azure/ai/evaluation/_evaluate/_batch_run/eval_run_context.py +2 -2
  48. azure/ai/evaluation/_evaluate/_evaluate.py +210 -94
  49. azure/ai/evaluation/_evaluate/_evaluate_aoai.py +132 -89
  50. azure/ai/evaluation/_evaluate/_telemetry/__init__.py +0 -1
  51. azure/ai/evaluation/_evaluate/_utils.py +25 -17
  52. azure/ai/evaluation/_evaluators/_bleu/_bleu.py +4 -4
  53. azure/ai/evaluation/_evaluators/_code_vulnerability/_code_vulnerability.py +20 -12
  54. azure/ai/evaluation/_evaluators/_coherence/_coherence.py +6 -6
  55. azure/ai/evaluation/_evaluators/_common/_base_eval.py +45 -11
  56. azure/ai/evaluation/_evaluators/_common/_base_prompty_eval.py +24 -9
  57. azure/ai/evaluation/_evaluators/_common/_base_rai_svc_eval.py +24 -9
  58. azure/ai/evaluation/_evaluators/_content_safety/_content_safety.py +28 -18
  59. azure/ai/evaluation/_evaluators/_content_safety/_hate_unfairness.py +11 -8
  60. azure/ai/evaluation/_evaluators/_content_safety/_self_harm.py +11 -8
  61. azure/ai/evaluation/_evaluators/_content_safety/_sexual.py +12 -9
  62. azure/ai/evaluation/_evaluators/_content_safety/_violence.py +10 -7
  63. azure/ai/evaluation/_evaluators/_document_retrieval/__init__.py +1 -5
  64. azure/ai/evaluation/_evaluators/_document_retrieval/_document_retrieval.py +37 -64
  65. azure/ai/evaluation/_evaluators/_eci/_eci.py +6 -3
  66. azure/ai/evaluation/_evaluators/_f1_score/_f1_score.py +5 -5
  67. azure/ai/evaluation/_evaluators/_fluency/_fluency.py +3 -3
  68. azure/ai/evaluation/_evaluators/_gleu/_gleu.py +4 -4
  69. azure/ai/evaluation/_evaluators/_groundedness/_groundedness.py +12 -8
  70. azure/ai/evaluation/_evaluators/_intent_resolution/_intent_resolution.py +31 -26
  71. azure/ai/evaluation/_evaluators/_intent_resolution/intent_resolution.prompty +210 -96
  72. azure/ai/evaluation/_evaluators/_meteor/_meteor.py +3 -4
  73. azure/ai/evaluation/_evaluators/_protected_material/_protected_material.py +14 -7
  74. azure/ai/evaluation/_evaluators/_qa/_qa.py +5 -5
  75. azure/ai/evaluation/_evaluators/_relevance/_relevance.py +62 -15
  76. azure/ai/evaluation/_evaluators/_relevance/relevance.prompty +140 -59
  77. azure/ai/evaluation/_evaluators/_response_completeness/_response_completeness.py +21 -26
  78. azure/ai/evaluation/_evaluators/_retrieval/_retrieval.py +5 -5
  79. azure/ai/evaluation/_evaluators/_rouge/_rouge.py +22 -22
  80. azure/ai/evaluation/_evaluators/_service_groundedness/_service_groundedness.py +7 -6
  81. azure/ai/evaluation/_evaluators/_similarity/_similarity.py +4 -4
  82. azure/ai/evaluation/_evaluators/_task_adherence/_task_adherence.py +27 -24
  83. azure/ai/evaluation/_evaluators/_task_adherence/task_adherence.prompty +354 -66
  84. azure/ai/evaluation/_evaluators/_tool_call_accuracy/_tool_call_accuracy.py +175 -183
  85. azure/ai/evaluation/_evaluators/_tool_call_accuracy/tool_call_accuracy.prompty +99 -21
  86. azure/ai/evaluation/_evaluators/_ungrounded_attributes/_ungrounded_attributes.py +20 -12
  87. azure/ai/evaluation/_evaluators/_xpia/xpia.py +10 -7
  88. azure/ai/evaluation/_exceptions.py +10 -0
  89. azure/ai/evaluation/_http_utils.py +3 -3
  90. azure/ai/evaluation/_legacy/_batch_engine/_config.py +6 -3
  91. azure/ai/evaluation/_legacy/_batch_engine/_engine.py +117 -32
  92. azure/ai/evaluation/_legacy/_batch_engine/_openai_injector.py +5 -2
  93. azure/ai/evaluation/_legacy/_batch_engine/_result.py +2 -0
  94. azure/ai/evaluation/_legacy/_batch_engine/_run.py +2 -2
  95. azure/ai/evaluation/_legacy/_batch_engine/_run_submitter.py +33 -41
  96. azure/ai/evaluation/_legacy/_batch_engine/_utils.py +1 -4
  97. azure/ai/evaluation/_legacy/_common/_async_token_provider.py +12 -19
  98. azure/ai/evaluation/_legacy/_common/_thread_pool_executor_with_context.py +2 -0
  99. azure/ai/evaluation/_legacy/prompty/_prompty.py +11 -5
  100. azure/ai/evaluation/_safety_evaluation/__init__.py +1 -1
  101. azure/ai/evaluation/_safety_evaluation/_safety_evaluation.py +195 -111
  102. azure/ai/evaluation/_user_agent.py +32 -1
  103. azure/ai/evaluation/_version.py +1 -1
  104. azure/ai/evaluation/red_team/__init__.py +3 -1
  105. azure/ai/evaluation/red_team/_agent/__init__.py +1 -1
  106. azure/ai/evaluation/red_team/_agent/_agent_functions.py +68 -71
  107. azure/ai/evaluation/red_team/_agent/_agent_tools.py +103 -145
  108. azure/ai/evaluation/red_team/_agent/_agent_utils.py +26 -6
  109. azure/ai/evaluation/red_team/_agent/_semantic_kernel_plugin.py +62 -71
  110. azure/ai/evaluation/red_team/_attack_objective_generator.py +94 -52
  111. azure/ai/evaluation/red_team/_attack_strategy.py +2 -1
  112. azure/ai/evaluation/red_team/_callback_chat_target.py +4 -9
  113. azure/ai/evaluation/red_team/_default_converter.py +1 -1
  114. azure/ai/evaluation/red_team/_red_team.py +1947 -1040
  115. azure/ai/evaluation/red_team/_red_team_result.py +49 -38
  116. azure/ai/evaluation/red_team/_utils/__init__.py +1 -1
  117. azure/ai/evaluation/red_team/_utils/_rai_service_eval_chat_target.py +39 -34
  118. azure/ai/evaluation/red_team/_utils/_rai_service_target.py +163 -138
  119. azure/ai/evaluation/red_team/_utils/_rai_service_true_false_scorer.py +14 -14
  120. azure/ai/evaluation/red_team/_utils/constants.py +1 -13
  121. azure/ai/evaluation/red_team/_utils/formatting_utils.py +41 -44
  122. azure/ai/evaluation/red_team/_utils/logging_utils.py +17 -17
  123. azure/ai/evaluation/red_team/_utils/metric_mapping.py +31 -4
  124. azure/ai/evaluation/red_team/_utils/strategy_utils.py +33 -25
  125. azure/ai/evaluation/simulator/_adversarial_scenario.py +2 -0
  126. azure/ai/evaluation/simulator/_adversarial_simulator.py +31 -17
  127. azure/ai/evaluation/simulator/_conversation/__init__.py +2 -2
  128. azure/ai/evaluation/simulator/_direct_attack_simulator.py +8 -8
  129. azure/ai/evaluation/simulator/_indirect_attack_simulator.py +18 -6
  130. azure/ai/evaluation/simulator/_model_tools/_generated_rai_client.py +54 -24
  131. azure/ai/evaluation/simulator/_model_tools/_identity_manager.py +7 -1
  132. azure/ai/evaluation/simulator/_model_tools/_proxy_completion_model.py +30 -10
  133. azure/ai/evaluation/simulator/_model_tools/_rai_client.py +19 -31
  134. azure/ai/evaluation/simulator/_model_tools/_template_handler.py +20 -6
  135. azure/ai/evaluation/simulator/_model_tools/models.py +1 -1
  136. azure/ai/evaluation/simulator/_simulator.py +21 -8
  137. {azure_ai_evaluation-1.8.0.dist-info → azure_ai_evaluation-1.10.0.dist-info}/METADATA +46 -3
  138. {azure_ai_evaluation-1.8.0.dist-info → azure_ai_evaluation-1.10.0.dist-info}/RECORD +141 -136
  139. azure/ai/evaluation/_common/onedp/aio/_vendor.py +0 -40
  140. {azure_ai_evaluation-1.8.0.dist-info → azure_ai_evaluation-1.10.0.dist-info}/NOTICE.txt +0 -0
  141. {azure_ai_evaluation-1.8.0.dist-info → azure_ai_evaluation-1.10.0.dist-info}/WHEEL +0 -0
  142. {azure_ai_evaluation-1.8.0.dist-info → azure_ai_evaluation-1.10.0.dist-info}/top_level.txt +0 -0

azure/ai/evaluation/_evaluators/_tool_call_accuracy/_tool_call_accuracy.py

@@ -8,25 +8,33 @@ import re
  from typing import Dict, List, Union, TypeVar, cast
  from typing_extensions import overload, override
  from azure.ai.evaluation._evaluators._common import PromptyEvaluatorBase
- from azure.ai.evaluation._common.utils import remove_optional_singletons, parse_quality_evaluator_reason_score
- from azure.ai.evaluation._exceptions import ErrorBlame, ErrorCategory, ErrorTarget, EvaluationException
- from azure.ai.evaluation._common.constants import PROMPT_BASED_REASON_EVALUATORS
+ from azure.ai.evaluation._exceptions import (
+     ErrorBlame,
+     ErrorCategory,
+     ErrorTarget,
+     EvaluationException,
+ )
+ from ..._common.utils import check_score_is_valid
  from azure.ai.evaluation._common._experimental import experimental

  logger = logging.getLogger(__name__)

  T_EvalValue = TypeVar("T_EvalValue")

+
  @experimental
  class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
      """The Tool Call Accuracy evaluator assesses how accurately an AI uses tools by examining:
-     - Relevance to the conversation
-     - Parameter correctness according to tool definitions
-     - Parameter value extraction from the conversation
+     - Relevance to the conversation.
+     - Parameter correctness according to tool definitions.
+     - Parameter value extraction from the conversation.

-     The evaluator uses a binary scoring system (0 or 1):
-     - Score 0: The tool call is irrelevant or contains information not in the conversation/definition
-     - Score 1: The tool call is relevant with properly extracted parameters from the conversation
+     The evaluator uses a scoring rubric of 1 to 5:
+     - Score 1: The tool calls are irrelevant
+     - Score 2: The tool calls are partially relevant, but not enough tools were called or the parameters were not correctly passed.
+     - Score 3: The tool calls are relevant, but there were unnecessary, excessive tool calls made.
+     - Score 4: The tool calls are relevant, but some tools returned errors and agent retried calling them again and succeeded.
+     - Score 5: The tool calls are relevant, and all parameters were correctly passed.

      This evaluation focuses on measuring whether tool calls meaningfully contribute to addressing
      user needs while properly following tool definitions and using information present in the
@@ -46,13 +54,13 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
              :caption: Initialize and call a ToolCallAccuracyEvaluator.

      .. admonition:: Example using Azure AI Project URL:
-
+
          .. literalinclude:: ../samples/evaluation_samples_evaluate_fdp.py
              :start-after: [START tool_call_accuracy_evaluator]
              :end-before: [END tool_call_accuracy_evaluator]
              :language: python
              :dedent: 8
-             :caption: Initialize and call ToolCallAccuracyEvaluator using Azure AI Project URL in the following format
+             :caption: Initialize and call ToolCallAccuracyEvaluator using Azure AI Project URL in the following format
              https://{resource_name}.services.ai.azure.com/api/projects/{project_name}

      .. note::
@@ -63,26 +71,33 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
      """

      _PROMPTY_FILE = "tool_call_accuracy.prompty"
-     _RESULT_KEY = "tool_call_accurate"
-     _AGGREGATE_RESULT_KEY = "tool_call_accuracy"
+     _RESULT_KEY = "tool_call_accuracy"
+
+     _MAX_TOOL_CALL_ACCURACY_SCORE = 5
+     _MIN_TOOL_CALL_ACCURACY_SCORE = 1
+     _DEFAULT_TOOL_CALL_ACCURACY_SCORE = 3
+
+     _NO_TOOL_CALLS_MESSAGE = "No tool calls found in response or provided tool_calls."
+     _NO_TOOL_DEFINITIONS_MESSAGE = "Tool definitions must be provided."
+     _TOOL_DEFINITIONS_MISSING_MESSAGE = "Tool definitions for all tool calls must be provided."
+     _INVALID_SCORE_MESSAGE = "Tool call accuracy score must be between 1 and 5."

-     _MAX_TOOL_CALL_ACCURACY_SCORE = 1.0
-     _MIN_TOOL_CALL_ACCURACY_SCORE = 0.0
-     _DEFAULT_TOOL_CALL_ACCURACY_SCORE = 0.8
+     _LLM_SCORE_KEY = "tool_calls_success_level"

-     id = "id"
+     id = "azureai://built-in/evaluators/tool_call_accuracy"
      """Evaluator identifier, experimental and to be used only with evaluation in cloud."""

      @override
-     def __init__(self, model_config, *,
-                  threshold=_DEFAULT_TOOL_CALL_ACCURACY_SCORE,
-                  **kwargs):
+     def __init__(self, model_config, *, threshold=_DEFAULT_TOOL_CALL_ACCURACY_SCORE, **kwargs):
          current_dir = os.path.dirname(__file__)
          prompty_path = os.path.join(current_dir, self._PROMPTY_FILE)
          self.threshold = threshold
-         super().__init__(model_config=model_config, prompty_file=prompty_path,
-                          result_key=self._RESULT_KEY,
-                          **kwargs)
+         super().__init__(
+             model_config=model_config,
+             prompty_file=prompty_path,
+             result_key=self._RESULT_KEY,
+             **kwargs,
+         )

      @overload
      def __call__(
@@ -90,8 +105,8 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
          *,
          query: Union[str, List[dict]],
          tool_definitions: Union[dict, List[dict]],
-         tool_calls: Union[dict, List[dict]] = None,
-         response: Union[str, List[dict]] = None
+         tool_calls: Union[dict, List[dict]] = None,
+         response: Union[str, List[dict]] = None,
      ) -> Dict[str, Union[str, float]]:
          """
          Evaluate tool call accuracy. Accepts a query, tool definitions, and tool calls for evaluation.
@@ -137,81 +152,43 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
          """
          # TODO add warning that only tool calls of type function are supported
          # Collect inputs
-         tool_calls = kwargs.get("tool_calls", None)
+         tool_calls = kwargs.get("tool_calls")
          tool_definitions = kwargs.get("tool_definitions")
-         query = kwargs.get("query", None)
-         response = kwargs.get("response", None)
-
-         if response is None and tool_calls is None:
-             raise EvaluationException(
-                 message="Either response or tool_calls must be provided.",
-                 blame=ErrorBlame.USER_ERROR,
-                 category=ErrorCategory.MISSING_FIELD,
-                 target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-             )
-
-         if tool_definitions is None:
-             raise EvaluationException(
-                 message="Tool definitions must be provided.",
-                 blame=ErrorBlame.USER_ERROR,
-                 category=ErrorCategory.MISSING_FIELD,
-                 target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-             )
+         query = kwargs.get("query")
+         response = kwargs.get("response")

          # TODO : Support classes that represents tool calls, messages etc once client side definitions are available
-         if tool_calls is None:
-             # Extract tool calls from response if not provided
-             tool_calls = []
-             if isinstance(response, list):
-                 for message in response:
-                     if message.get("role") == "assistant":
-                         tool_calls.extend([content for content in message.get("content")
-                                            if content.get("type") == "tool_call"])
-             if len(tool_calls) == 0:
-                 raise EvaluationException(
-                     message="response does not have tool calls. Either provide tool_calls or response with tool calls.",
-                     blame=ErrorBlame.USER_ERROR,
-                     category=ErrorCategory.MISSING_FIELD,
-                     target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-                 )
+         if response:
+             parsed_tool_calls = self._parse_tools_from_response(response)
+             if parsed_tool_calls:
+                 tool_calls = parsed_tool_calls
+
+         if not tool_calls:
+             return {"error_message": self._NO_TOOL_CALLS_MESSAGE}
+         if not tool_definitions or len(tool_definitions) == 0:
+             return {"error_message": self._NO_TOOL_DEFINITIONS_MESSAGE}

          if not isinstance(tool_calls, list):
              tool_calls = [tool_calls]
-
          if not isinstance(tool_definitions, list):
              tool_definitions = [tool_definitions]

-         eval_inputs = []
-         # TODO : When evaluating an agent tool that depends on the output of a previous tool call,
-         # we need to provide the output of the previous tool call as part of messages.
-         for tool_call in tool_calls:
-             if isinstance(tool_call, dict) and tool_call.get("type") == "tool_call": # TODO assuming dict here but it can be a class
-                 function_name = tool_call.get("name")
-                 tool_definition = [tool for tool in tool_definitions if tool.get("name") == function_name]
-                 if len(tool_definition) > 0:
-                     tool_definition = tool_definition
-                 else:
-                     raise EvaluationException(
-                         message="Tool definition not found",
-                         blame=ErrorBlame.USER_ERROR,
-                         category=ErrorCategory.INVALID_VALUE,
-                         target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-                     )
-                 eval_inputs.append({"query": query, "tool_call": tool_call, "tool_definition": tool_definition})
-             else:
-                 raise EvaluationException(
-                     message="Tool definition not found",
-                     blame=ErrorBlame.USER_ERROR,
-                     category=ErrorCategory.INVALID_VALUE,
-                     target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-                 )
+         try:
+             needed_tool_definitions = self._extract_needed_tool_definitions(tool_calls, tool_definitions)
+         except EvaluationException as e:
+             return {"error_message": self._TOOL_DEFINITIONS_MISSING_MESSAGE}
+         if len(needed_tool_definitions) == 0:
+             return {"error_message": self._TOOL_DEFINITIONS_MISSING_MESSAGE}

-         return eval_inputs
+         return {
+             "query": query,
+             "tool_calls": tool_calls,
+             "tool_definitions": needed_tool_definitions,
+         }

      @override
      async def _do_eval(self, eval_input: Dict) -> Dict[str, Union[float, str]]:  # type: ignore[override]
-         """Do a relevance evaluation.
-
+         """Do a tool call accuracy evaluation.
          :param eval_input: The input to the evaluator. Expected to contain
              whatever inputs are needed for the _flow method, including context
              and other fields depending on the child class.
@@ -219,23 +196,43 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
          :return: The evaluation result.
          :rtype: Dict
          """
+         # Single LLM call for all tool calls
          llm_output = await self._flow(timeout=self._LLM_CALL_TIMEOUT, **eval_input)

-         score = math.nan
-         if llm_output:
-             score, reason = parse_quality_evaluator_reason_score(llm_output, valid_score_range="[0-1]")
-             if score >= 0 and score <= 1:
-                 return {
-                     self._result_key: bool(float(score)),
-                     f"{self._result_key}_reason": reason,
-                     "tool_call_id" : eval_input.get("tool_call").get("tool_call_id"),
-                 }
-         raise EvaluationException(
-             message="Tool call accuracy evaluator: Invalid score returned from LLM.",
-             blame=ErrorBlame.SYSTEM_ERROR,
-             category=ErrorCategory.INVALID_VALUE,
-             target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
-         )
+         if isinstance(llm_output, dict):
+             score = llm_output.get(self._LLM_SCORE_KEY, None)
+             if not score or not check_score_is_valid(
+                 score,
+                 ToolCallAccuracyEvaluator._MIN_TOOL_CALL_ACCURACY_SCORE,
+                 ToolCallAccuracyEvaluator._MAX_TOOL_CALL_ACCURACY_SCORE,
+             ):
+                 raise EvaluationException(
+                     message=f"Invalid score value: {score}. Expected a number in range [{ToolCallAccuracyEvaluator._MIN_TOOL_CALL_ACCURACY_SCORE}, {ToolCallAccuracyEvaluator._MAX_TOOL_CALL_ACCURACY_SCORE}].",
+                     internal_message="Invalid score value.",
+                     category=ErrorCategory.FAILED_EXECUTION,
+                     blame=ErrorBlame.SYSTEM_ERROR,
+                 )
+
+             # Format the output
+             reason = llm_output.get("chain_of_thought", "")
+             score = float(score)
+             score_result = "pass" if score >= self.threshold else "fail"
+             response_dict = {
+                 self._result_key: score,
+                 f"{self._result_key}_result": score_result,
+                 f"{self._result_key}_threshold": self.threshold,
+                 f"{self._result_key}_reason": reason,
+                 "details": llm_output.get("details", {}),
+             }
+             return response_dict
+
+         else:
+             raise EvaluationException(
+                 message="Tool call accuracy evaluator returned invalid output.",
+                 blame=ErrorBlame.SYSTEM_ERROR,
+                 category=ErrorCategory.FAILED_EXECUTION,
+                 target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
+             )

      async def _real_call(self, **kwargs):
          """The asynchronous call where real end-to-end evaluation logic is performed.
@@ -246,97 +243,92 @@ class ToolCallAccuracyEvaluator(PromptyEvaluatorBase[Union[str, float]]):
          :rtype: Union[DoEvalResult[T_EvalValue], AggregateResult[T_EvalValue]]
          """
          # Convert inputs into list of evaluable inputs.
-         eval_input_list = self._convert_kwargs_to_eval_input(**kwargs)
-         if len(eval_input_list) == 0:
-             return {self._AGGREGATE_RESULT_KEY: self._NOT_APPLICABLE_RESULT,
-                     f"{self._AGGREGATE_RESULT_KEY}_result": self._NOT_APPLICABLE_RESULT,
-                     f"{self._AGGREGATE_RESULT_KEY}_threshold": self.threshold,
-                     f"{self._AGGREGATE_RESULT_KEY}_reason":
-                         "No tool calls were made.",
-                     "per_tool_call_details": []
-                     }
-
-         per_turn_results = []
-         # Evaluate all inputs.
-         for eval_input in eval_input_list:
-             if self._is_applicable_tool(eval_input):
-                 per_turn_results.append(await self._do_eval(eval_input))
-             else:
-                 per_turn_results.append(self._not_applicable_result(eval_input))
-
-         return self._aggregate_results(per_turn_results=per_turn_results)
-
-     def _is_applicable_tool(self, eval_input):
-         """Determine if a given tool should be evaluated, since we only evaluate tools that
-         have sufficient context available.
-
-         :type eval_input: Dict
-         :return: True if the tool call should be evaluated
-         :rtype: bool
-         """
-         tool_definition = eval_input.get("tool_definition")
-         if tool_definition is None or len(tool_definition) != 1:
-             return False
-         tool_type = tool_definition[0].get("type")
-         if tool_type is None or tool_type != "function":
-             return False
-         return True
-
-     def _not_applicable_result(self, eval_input):
+         eval_input = self._convert_kwargs_to_eval_input(**kwargs)
+         if isinstance(eval_input, dict) and eval_input.get("error_message"):
+             # If there is an error message, return not applicable result
+             return self._not_applicable_result(eval_input.get("error_message"))
+         # Do the evaluation
+         result = await self._do_eval(eval_input)
+         # Return the result
+         return result
+
+     def _not_applicable_result(self, error_message):
          """Return a result indicating that the tool call is not applicable for evaluation.
-
          :param eval_input: The input to the evaluator.
          :type eval_input: Dict
         :return: A dictionary containing the result of the evaluation.
         :rtype: Dict[str, Union[str, float]]
         """
+         # If no tool calls were made or tool call type is not supported, return not applicable result
          return {
-             f"{self._result_key}": self._NOT_APPLICABLE_RESULT,
-             f"{self._result_key}_reason": "Tool call not supported for evaluation",
-             "tool_call_id" : eval_input.get("tool_call").get("tool_call_id"),
+             self._result_key: self._NOT_APPLICABLE_RESULT,
+             f"{self._result_key}_result": "pass",
+             f"{self._result_key}_threshold": self.threshold,
+             f"{self._result_key}_reason": error_message,
+             "details": {},
          }

-     def _aggregate_results(self, per_turn_results):
-         """Aggregate the evaluation results of each conversation turn into a single result.
-
-         Exact implementation might need to vary slightly depending on the results produced.
-         Default behavior is to average the all number-based outputs.
-
-         :param per_turn_results: List of evaluation results for each turn in the conversation.
-         :type per_turn_results: List[Dict]
-         :return: A dictionary containing aggregated results, with numeric metrics having their
-             means as top-level values in the dictionary, and all original
-             values (including non-numerics) located in under the "evaluation_per_turn" key,
-             which each sub-key being a metric and each sub-value being a the list of that metric's
-             per-turn values.
-         :rtype: AggregateResult[T_EvalValue]
+     def _parse_tools_from_response(self, response):
+         """Parse the response to extract tool calls and results.
+         :param response: The response to parse.
+         :type response: Union[str, List[dict]]
+         :return: List of tool calls extracted from the response.
+         :rtype: List[dict]
          """
-
-         aggregated: Dict[str, Union[float, Dict[str, List[T_EvalValue]]]] = {}
-         evaluation_per_turn: Dict[str, List[T_EvalValue]] = {}
-
-         # Go over each turn, and rotate the results into a
-         # metric: List[values] format for the evals_per_turn dictionary.
-
-         num_evaluated = len([per_turn_result for per_turn_result in per_turn_results
-                              if per_turn_result.get(self._result_key) != self._NOT_APPLICABLE_RESULT])
-         if num_evaluated == 0:
-             # None of the invoked tools were applicable, return not applicable result
-             # (If a tool fails evaluation, we'll throw an exception)
-             return {self._AGGREGATE_RESULT_KEY: self._NOT_APPLICABLE_RESULT,
-                     f"{self._AGGREGATE_RESULT_KEY}_result": self._NOT_APPLICABLE_RESULT,
-                     f"{self._AGGREGATE_RESULT_KEY}_threshold": self.threshold,
-                     f"{self._AGGREGATE_RESULT_KEY}_reason":
-                         "Tool call accuracy evaluation is not yet supported for the invoked tools.",
-                     "per_tool_call_details": []
-                     }
-         # ignore not_applicable results, where the _result_key will be "not applicable"
-         score = sum([per_turn_result.get(self._result_key) == True for per_turn_result in per_turn_results])/num_evaluated
-         aggregated[self._AGGREGATE_RESULT_KEY] = score
-         aggregated[f'{self._AGGREGATE_RESULT_KEY}_result'] = self._PASS_RESULT if score >= self.threshold else self._FAIL_RESULT
-         aggregated[f'{self._AGGREGATE_RESULT_KEY}_threshold'] = self.threshold
-         aggregated["per_tool_call_details"] = per_turn_results
-         return aggregated
+         tool_calls = []
+         tool_results_map = {}
+         if isinstance(response, list):
+             for message in response:
+                 # Extract tool calls from assistant messages
+                 if message.get("role") == "assistant" and isinstance(message.get("content"), list):
+                     for content_item in message.get("content"):
+                         if isinstance(content_item, dict) and content_item.get("type") == "tool_call":
+                             tool_calls.append(content_item)
+
+                 # Extract tool results from tool messages
+                 elif message.get("role") == "tool" and message.get("tool_call_id"):
+                     tool_call_id = message.get("tool_call_id")
+                     if isinstance(message.get("content"), list) and len(message.get("content")) > 0:
+                         result_content = message.get("content")[0]
+                         if isinstance(result_content, dict) and result_content.get("type") == "tool_result":
+                             tool_results_map[tool_call_id] = result_content
+
+         # Attach results to their corresponding calls
+         for tool_call in tool_calls:
+             tool_call_id = tool_call.get("tool_call_id")
+             if tool_call_id in tool_results_map:
+                 tool_call["tool_result"] = tool_results_map[tool_call_id]["tool_result"]
+
+         return tool_calls
+
+     def _extract_needed_tool_definitions(self, tool_calls, tool_definitions):
+         """Extract the tool definitions that are needed for the provided tool calls.
+         :param tool_calls: List of tool calls to evaluate.
+         :type tool_calls: List[dict]
+         :param tool_definitions: List of tool definitions to use for evaluation.
+         :type tool_definitions: List[dict]
+         :return: List of tool definitions that are needed for the provided tool calls.
+         :rtype: List[dict]
+         """
+         needed_tool_definitions = []
+         for tool_call in tool_calls:
+             if isinstance(tool_call, dict) and tool_call.get("type") == "tool_call":
+                 tool_name = tool_call.get("name")
+                 tool_definition = [
+                     tool
+                     for tool in tool_definitions
+                     if tool.get("name") == tool_name and tool.get("type", "function") == "function"
+                 ]
+                 if len(tool_definition) > 0:
+                     needed_tool_definitions.extend(tool_definition)
+                 else:
+                     raise EvaluationException(
+                         message=f"Tool definition for {tool_name} not found",
+                         blame=ErrorBlame.USER_ERROR,
+                         category=ErrorCategory.INVALID_VALUE,
+                         target=ErrorTarget.TOOL_CALL_ACCURACY_EVALUATOR,
+                     )
+         return needed_tool_definitions

      @override
      def __call__(  # pylint: disable=docstring-missing-param
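
Taken together, these changes replace the per-tool-call binary pass/fail flow with a single LLM grading call over all tool calls, scored on the 1 to 5 rubric and returned as one flat result dictionary. The sketch below illustrates how the 1.10.0 surface might be called; the endpoint, deployment, and tool/message payloads are illustrative placeholders inferred from the overload signature and the `_parse_tools_from_response` logic above, not an official sample.

```python
# Hypothetical usage sketch for ToolCallAccuracyEvaluator as changed in 1.10.0.
# Endpoint/deployment/key values are placeholders; the tool_call payload mirrors
# the shapes handled by _parse_tools_from_response above, not an official schema.
from azure.ai.evaluation import AzureOpenAIModelConfiguration, ToolCallAccuracyEvaluator

model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_deployment="<your-deployment>",                       # placeholder
    api_key="<your-api-key>",                                   # placeholder
)

# threshold defaults to _DEFAULT_TOOL_CALL_ACCURACY_SCORE (3) in this version
evaluator = ToolCallAccuracyEvaluator(model_config=model_config, threshold=3)

query = "What is the weather in Seattle?"
tool_definitions = [
    {
        "name": "get_weather",
        "type": "function",
        "description": "Return the current weather for a city.",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    }
]
tool_calls = [
    {
        "type": "tool_call",
        "tool_call_id": "call_1",
        "name": "get_weather",
        "arguments": {"city": "Seattle"},
    }
]

result = evaluator(query=query, tool_definitions=tool_definitions, tool_calls=tool_calls)
# Per _do_eval above, the result is a single flat dict with keys such as:
# tool_call_accuracy, tool_call_accuracy_result, tool_call_accuracy_threshold,
# tool_call_accuracy_reason, details
print(result["tool_call_accuracy"], result["tool_call_accuracy_result"])
```

Under the new flow, `_real_call` returns this flat dictionary directly instead of aggregating per-tool-call results under `per_tool_call_details`.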

azure/ai/evaluation/_evaluators/_tool_call_accuracy/tool_call_accuracy.prompty

@@ -5,19 +5,19 @@ model:
    api: chat
    parameters:
      temperature: 0.0
-     max_tokens: 800
+     max_tokens: 3000
      top_p: 1.0
      presence_penalty: 0
      frequency_penalty: 0
      response_format:
-       type: text
+       type: json_object

  inputs:
    query:
      type: List
-   tool_call:
-     type: Dict
-   tool_definition:
+   tool_calls:
+     type: List
+   tool_definitions:
      type: Dict

  ---
@@ -27,7 +27,7 @@ system:
  ### Your are an expert in evaluating the accuracy of a tool call considering relevance and potential usefulness including syntactic and semantic correctness of a proposed tool call from an intelligent system based on provided definition and data. Your goal will involve answering the questions below using the information provided.
  - **Definition**: You are given a definition of the communication trait that is being evaluated to help guide your Score.
  - **Data**: Your input data include CONVERSATION , TOOL CALL and TOOL DEFINITION.
- - **Tasks**: To complete your evaluation you will be asked to evaluate the Data in different ways.
+ - **Tasks**: To complete your evaluation you will be asked to evaluate the Data in different ways, and you need to be very precise in your evaluation.

  user:
  # Definition
@@ -42,30 +42,108 @@ user:


  # Ratings
- ## [Tool Call Accuracy: 0] (Irrelevant)
+ ## [Tool Call Accuracy: 1] (Irrelevant)
  **Definition:**
- 1. The TOOL CALL is not relevant and will not help resolve the user's need.
- 2. TOOL CALL include parameters values that are not present or inferred from CONVERSATION.
- 3. TOOL CALL has parameters that is not present in TOOL DEFINITION.
+ Tool calls were not relevant to the user's query, resulting in an irrelevant or unhelpful final output.
+ This level is a 'fail'.
+
+ **Example:**
+ The user's query is asking for most popular hotels in New York, but the agent calls a tool that does search in local files on a machine. This tool is not relevant to the user query, so this case is a Level 1 'fail'.
+

- ## [Tool Call Accuracy: 1] (Relevant)
+ ## [Tool Call Accuracy: 2] (Partially Relevant - No correct output)
  **Definition:**
- 1. The TOOL CALL is directly relevant and very likely to help resolve the user's need.
- 2. TOOL CALL include parameters values that are present or inferred from CONVERSATION.
- 3. TOOL CALL has parameters that is present in TOOL DEFINITION.
+ Tool calls were somewhat related to the user's query, but the agent was not able to reach a final output that addresses the user query due to one or more of the following:
+ • Tools returned errors, and no retrials for the tool call were successful.
+ • Parameters passed to the tool were incorrect.
+ • Not enough tools were called to fully address the query (missing tool calls).
+ This level is a 'fail'.
+
+ **Example:**
+ The user asks for the coordinates of Chicago. The agent calls the correct tool that retrieves the coordinates -which is the relevant tool for the user query- but passes 'New York' instead of 'Chicago' as the parameter to the tool. So this is a Level 2 'fail'.
+
+ **Example:**
+ The user asks for the coordinates of Chicago. The agent calls the correct tool that retrieves the coordinates -which is the relevant tool for the user query- and passes 'Chicago' as the parameter to the tool which is also correct, but the tool returns an error so the agent can't reach the correct answer to the user's query. This is a Level 2 'fail'.
+
+ **Example:**
+ The user asks a question that needs 3 tool calls for it to be answered. The agent calls only one of the three required tool calls. So this case is a Level 2 'fail'.
+
+
+ ## [Tool Call Accuracy: 3] (Slightly Correct - Reached Output)
+ **Definition:**
+ Tool calls were relevant, correct and grounded parameters were passed so that led to a correct output. However, multiple excessive, unnecessary tool calls were made.
+ This level is a 'pass'.
+
+ **Example:**
+ The user asked to do a modification in the database. The agent called the tool multiple times, resulting in multiple modifications in the database instead of one. This is a level 3 'pass'.
+
+ **Example:**
+ The user asked for popular hotels in a certain place. The agent calls the same tool with the same parameters multiple times, even though a single tool call that returns an output is sufficient. So there were unnecessary tool calls. This is a Level 3 'pass'.
+
+
+ ## [Tool Call Accuracy: 4] (Mostly Correct - Reached output)
+ **Definition:**
+ Tool calls were fully relevant and efficient:
+ • Correct tools were called with the correct and grounded parameters, whether they are extracted from the conversation history or the current user query.
+ • A tool returned an error, but the agent retried calling the tool and successfully got an output.
+ This level is a 'pass'.
+
+ **Example:**
+ The user asks for the weather forecast in a certain place. The agent calls the correct tool that retrieves the weather forecast with the correct parameters, but the tool returns an error. The agent re-calls the tool once again and it returns the correct output. This is a Level 4 'pass'.
+
+
+ ## [Tool Call Accuracy: 5] (Optimal Solution - Reached output)
+ **Definition:**
+ Tool calls were fully relevant and efficient:
+ • Correct tools were called with the correct and grounded parameters, whether they are extracted from the conversation history or the current user query.
+ • No unnecessary or excessive tool calls were made.
+ • No errors occurred in any of the tools.
+ • The agent was able to reach the final output that addresses the user's query without facing any issues.
+ This level is a 'pass'.
+
+ **Example:**
+ The user asks for the distance between two places. The agent correctly calls the tools that retrieve the coordinates for the two places respectively, then calls the tool that calculates the distance between the two sets of coordinates, passing the correct arguments to all the tools, without calling other tools excessively or unnecessarily. This is the optimal solution for the user's query. This is a Level 5 'pass'.
+
+ **Example:**
+ The user asks for the distance between two places. The agent retrieves the needed coordinates from the outputs of the tool calls in the conversation history, and then correctly passes these coordinates to the tool that calculates the distance to output it to the user. This is also an optimal solution for the user's query. This is a Level 5 'pass'.
+
+
+
+ # IMPORTANT NOTES
+ - There is a clear distinction between 'pass' levels and 'fail' levels. The distinction is that the tools are called correctly in order to reach the required output. If the agent was not able to reach the final output that addresses the user query, it cannot be either of the 'pass' levels, and vice versa. It is crucial that you ensure you are rating the agent's response with the correct level based on the tool calls made to address the user's query.
+ - "Correct output" means correct tool with the correct, grounded parameters. You are NOT concerned with the correctness of the result of the tool. As long as the parameters passed were correct and the tool did not return an error, then the tool output is correct and accurate.
+ - Ensure that every single parameter that is passed to the tools is correct and grounded from the user query or the conversation history. If the agent passes incorrect parameters or completely makes them up, then this is a fail, even if somehow the agent reaches a correct result.

  # Data
  CONVERSATION : {{query}}
- TOOL CALL: {{tool_call}}
+ TOOL CALLS: {{tool_calls}}
  TOOL DEFINITION: {{tool_definition}}


  # Tasks
- ## Please provide your assessment Score for the previous CONVERSATION , TOOL CALL and TOOL DEFINITION based on the Definitions above. Your output should include the following information:
- - **ThoughtChain**: To improve the reasoning process, think step by step and include a step-by-step explanation of your thought process as you analyze the data based on the definitions. Keep it brief and start your ThoughtChain with "Let's think step by step:".
- - **Explanation**: a very short explanation of why you think the input Data should get that Score.
- - **Score**: based on your previous analysis, provide your Score. The Score you give MUST be a integer score (i.e., "0", "1") based on the levels of the definitions.
-
+ ## Please provide your evaluation for the assistant RESPONSE in relation to the user QUERY and tool definitions based on the Definitions and examples above.
+ Your output should consist only of a JSON object, as provided in the examples, that has the following keys:
+ - chain_of_thought: a string that explains your thought process to decide on the tool call accuracy level. Start this string with 'Let's think step by step:', and think deeply and precisely about which level should be chosen based on the agent's tool calls and how they were able to address the user's query.
+ - tool_calls_success_level: a integer value between 1 and 5 that represents the level of tool call success, based on the level definitions mentioned before. You need to be very precise when deciding on this level. Ensure you are correctly following the rating system based on the description of each level.
+ - details: a dictionary that contains the following keys:
+   - tool_calls_made_by_agent: total number of tool calls made by the agent
+   - correct_tool_calls_made_by_agent: total number of correct tool calls made by the agent
+   - per_tool_call_details: a list of dictionaries, each containing:
+     - tool_name: name of the tool
+     - total_calls_required: total number of calls required for the tool
+     - correct_calls_made_by_agent: number of correct calls made by the agent
+     - correct_tool_percentage: percentage of correct calls made by the agent for this tool. It is a value between 0.0 and 1.0
+     - tool_call_errors: number of errors encountered during the tool call
+     - tool_success_result: 'pass' or 'fail' based on the evaluation of the tool call accuracy for this tool
+   - excess_tool_calls: a dictionary with the following keys:
+     - total: total number of excess, unnecessary tool calls made by the agent
+     - details: a list of dictionaries, each containing:
+       - tool_name: name of the tool
+       - excess_count: number of excess calls made for this query
+   - missing_tool_calls: a dictionary with the following keys:
+     - total: total number of missing tool calls that should have been made by the agent to be able to answer the query
+     - details: a list of dictionaries, each containing:
+       - tool_name: name of the tool
+       - missing_count: number of missing calls for this query

- ## Please provide your answers between the tags: <S0>your chain of thoughts</S0>, <S1>your explanation</S1>, <S2>your Score</S2>.
  # Output
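
The Tasks section above now asks the grader for a structured JSON object instead of the old <S0>/<S1>/<S2> tags. As a rough illustration of that contract (values invented for a single, correctly handled tool call, and written as a Python literal for consistency with the rest of this diff; the nesting of excess_tool_calls and missing_tool_calls under details is assumed), the parsed output that `_do_eval` consumes would look something like this:

```python
# Illustrative example of the JSON object the updated prompty asks the model to return
# (response_format is now json_object). Values are made up for demonstration.
# _do_eval reads tool_calls_success_level, chain_of_thought, and details from this dict.
expected_grader_output = {
    "chain_of_thought": "Let's think step by step: the agent called get_weather once with "
                        "the city taken from the user query, and the tool returned a result.",
    "tool_calls_success_level": 5,
    "details": {
        "tool_calls_made_by_agent": 1,
        "correct_tool_calls_made_by_agent": 1,
        "per_tool_call_details": [
            {
                "tool_name": "get_weather",
                "total_calls_required": 1,
                "correct_calls_made_by_agent": 1,
                "correct_tool_percentage": 1.0,
                "tool_call_errors": 0,
                "tool_success_result": "pass",
            }
        ],
        "excess_tool_calls": {"total": 0, "details": []},
        "missing_tool_calls": {"total": 0, "details": []},
    },
}
```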