azure-ai-evaluation 1.0.0b4__py3-none-any.whl → 1.0.0b5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Note: this version of azure-ai-evaluation has been flagged as a potentially problematic release.

Files changed (79)
  1. azure/ai/evaluation/__init__.py +22 -0
  2. azure/ai/evaluation/_common/constants.py +5 -0
  3. azure/ai/evaluation/_common/math.py +11 -0
  4. azure/ai/evaluation/_common/rai_service.py +172 -35
  5. azure/ai/evaluation/_common/utils.py +162 -23
  6. azure/ai/evaluation/_constants.py +6 -6
  7. azure/ai/evaluation/_evaluate/{_batch_run_client → _batch_run}/__init__.py +3 -2
  8. azure/ai/evaluation/_evaluate/{_batch_run_client/batch_run_context.py → _batch_run/eval_run_context.py} +4 -4
  9. azure/ai/evaluation/_evaluate/{_batch_run_client → _batch_run}/proxy_client.py +6 -3
  10. azure/ai/evaluation/_evaluate/_batch_run/target_run_context.py +35 -0
  11. azure/ai/evaluation/_evaluate/_eval_run.py +21 -4
  12. azure/ai/evaluation/_evaluate/_evaluate.py +267 -139
  13. azure/ai/evaluation/_evaluate/_telemetry/__init__.py +5 -5
  14. azure/ai/evaluation/_evaluate/_utils.py +40 -7
  15. azure/ai/evaluation/_evaluators/_bleu/_bleu.py +1 -1
  16. azure/ai/evaluation/_evaluators/_coherence/_coherence.py +14 -9
  17. azure/ai/evaluation/_evaluators/_coherence/coherence.prompty +76 -34
  18. azure/ai/evaluation/_evaluators/_common/_base_eval.py +20 -19
  19. azure/ai/evaluation/_evaluators/_common/_base_prompty_eval.py +18 -8
  20. azure/ai/evaluation/_evaluators/_common/_base_rai_svc_eval.py +48 -9
  21. azure/ai/evaluation/_evaluators/_content_safety/_content_safety.py +56 -19
  22. azure/ai/evaluation/_evaluators/_content_safety/_content_safety_chat.py +5 -5
  23. azure/ai/evaluation/_evaluators/_content_safety/_hate_unfairness.py +30 -1
  24. azure/ai/evaluation/_evaluators/_content_safety/_self_harm.py +30 -1
  25. azure/ai/evaluation/_evaluators/_content_safety/_sexual.py +30 -1
  26. azure/ai/evaluation/_evaluators/_content_safety/_violence.py +30 -1
  27. azure/ai/evaluation/_evaluators/_eci/_eci.py +3 -1
  28. azure/ai/evaluation/_evaluators/_fluency/_fluency.py +20 -20
  29. azure/ai/evaluation/_evaluators/_fluency/fluency.prompty +66 -36
  30. azure/ai/evaluation/_evaluators/_gleu/_gleu.py +1 -1
  31. azure/ai/evaluation/_evaluators/_groundedness/_groundedness.py +49 -15
  32. azure/ai/evaluation/_evaluators/_groundedness/groundedness_with_query.prompty +113 -0
  33. azure/ai/evaluation/_evaluators/_groundedness/groundedness_without_query.prompty +99 -0
  34. azure/ai/evaluation/_evaluators/_meteor/_meteor.py +3 -7
  35. azure/ai/evaluation/_evaluators/_multimodal/__init__.py +20 -0
  36. azure/ai/evaluation/_evaluators/_multimodal/_content_safety_multimodal.py +130 -0
  37. azure/ai/evaluation/_evaluators/_multimodal/_content_safety_multimodal_base.py +57 -0
  38. azure/ai/evaluation/_evaluators/_multimodal/_hate_unfairness.py +96 -0
  39. azure/ai/evaluation/_evaluators/_multimodal/_protected_material.py +120 -0
  40. azure/ai/evaluation/_evaluators/_multimodal/_self_harm.py +96 -0
  41. azure/ai/evaluation/_evaluators/_multimodal/_sexual.py +96 -0
  42. azure/ai/evaluation/_evaluators/_multimodal/_violence.py +96 -0
  43. azure/ai/evaluation/_evaluators/_protected_material/_protected_material.py +44 -11
  44. azure/ai/evaluation/_evaluators/_qa/_qa.py +7 -3
  45. azure/ai/evaluation/_evaluators/_relevance/_relevance.py +21 -19
  46. azure/ai/evaluation/_evaluators/_relevance/relevance.prompty +78 -42
  47. azure/ai/evaluation/_evaluators/_retrieval/_retrieval.py +125 -82
  48. azure/ai/evaluation/_evaluators/_retrieval/retrieval.prompty +74 -24
  49. azure/ai/evaluation/_evaluators/_rouge/_rouge.py +2 -2
  50. azure/ai/evaluation/_evaluators/_service_groundedness/__init__.py +9 -0
  51. azure/ai/evaluation/_evaluators/_service_groundedness/_service_groundedness.py +150 -0
  52. azure/ai/evaluation/_evaluators/_similarity/_similarity.py +17 -14
  53. azure/ai/evaluation/_evaluators/_xpia/xpia.py +32 -5
  54. azure/ai/evaluation/_exceptions.py +17 -0
  55. azure/ai/evaluation/_model_configurations.py +18 -1
  56. azure/ai/evaluation/_version.py +1 -1
  57. azure/ai/evaluation/simulator/__init__.py +2 -1
  58. azure/ai/evaluation/simulator/_adversarial_scenario.py +5 -0
  59. azure/ai/evaluation/simulator/_adversarial_simulator.py +4 -1
  60. azure/ai/evaluation/simulator/_data_sources/__init__.py +3 -0
  61. azure/ai/evaluation/simulator/_data_sources/grounding.json +1150 -0
  62. azure/ai/evaluation/simulator/_direct_attack_simulator.py +1 -1
  63. azure/ai/evaluation/simulator/_helpers/__init__.py +1 -2
  64. azure/ai/evaluation/simulator/_helpers/_simulator_data_classes.py +22 -1
  65. azure/ai/evaluation/simulator/_indirect_attack_simulator.py +79 -34
  66. azure/ai/evaluation/simulator/_model_tools/_identity_manager.py +1 -1
  67. azure/ai/evaluation/simulator/_prompty/task_query_response.prompty +4 -4
  68. azure/ai/evaluation/simulator/_prompty/task_simulate.prompty +6 -1
  69. azure/ai/evaluation/simulator/_simulator.py +115 -61
  70. azure/ai/evaluation/simulator/_utils.py +6 -6
  71. {azure_ai_evaluation-1.0.0b4.dist-info → azure_ai_evaluation-1.0.0b5.dist-info}/METADATA +166 -9
  72. {azure_ai_evaluation-1.0.0b4.dist-info → azure_ai_evaluation-1.0.0b5.dist-info}/NOTICE.txt +20 -0
  73. azure_ai_evaluation-1.0.0b5.dist-info/RECORD +120 -0
  74. {azure_ai_evaluation-1.0.0b4.dist-info → azure_ai_evaluation-1.0.0b5.dist-info}/WHEEL +1 -1
  75. azure/ai/evaluation/_evaluators/_groundedness/groundedness.prompty +0 -49
  76. azure_ai_evaluation-1.0.0b4.dist-info/RECORD +0 -106
  77. /azure/ai/evaluation/{simulator/_helpers → _common}/_experimental.py +0 -0
  78. /azure/ai/evaluation/_evaluate/{_batch_run_client → _batch_run}/code_client.py +0 -0
  79. {azure_ai_evaluation-1.0.0b4.dist-info → azure_ai_evaluation-1.0.0b5.dist-info}/top_level.txt +0 -0
@@ -2,9 +2,13 @@
  # Copyright (c) Microsoft Corporation. All rights reserved.
  # ---------------------------------------------------------
  from concurrent.futures import as_completed
- from typing import Callable, Dict, List, Union
+ from typing import Callable, Dict, List, Optional, Union

  from promptflow.tracing import ThreadPoolExecutorWithContext as ThreadPoolExecutor
+ from typing_extensions import override
+
+ from azure.ai.evaluation._common._experimental import experimental
+ from azure.ai.evaluation._evaluators._common import EvaluatorBase

  from ._hate_unfairness import HateUnfairnessEvaluator
  from ._self_harm import SelfHarmEvaluator
@@ -12,7 +16,8 @@ from ._sexual import SexualEvaluator
  from ._violence import ViolenceEvaluator


- class ContentSafetyEvaluator:
+ @experimental
+ class ContentSafetyEvaluator(EvaluatorBase):
  """
  Initialize a content safety evaluator configured to evaluate content safety metrics for QA scenario.

@@ -21,8 +26,10 @@ class ContentSafetyEvaluator:
  :param azure_ai_project: The scope of the Azure AI project.
  It contains subscription id, resource group, and project name.
  :type azure_ai_project: ~azure.ai.evaluation.AzureAIProject
- :param parallel: If True, use parallel execution for evaluators. Else, use sequential execution.
- Default is True.
+ :param eval_last_turn: Whether to evaluate the last turn of a conversation. Default is False.
+ :type eval_last_turn: bool
+ :param kwargs: Additional arguments to pass to the evaluator.
+ :type kwargs: Any
  :return: A function that evaluates content-safety metrics for "question-answering" scenario.
  :rtype: Callable

@@ -61,8 +68,10 @@ class ContentSafetyEvaluator:
  }
  """

- def __init__(self, credential, azure_ai_project: dict, parallel: bool = True):
- self._parallel = parallel
+ # TODO address 3579092 to re-enable parallel evals.
+ def __init__(self, credential, azure_ai_project, eval_last_turn: bool = False, **kwargs):
+ super().__init__(eval_last_turn=eval_last_turn)
+ self._parallel = kwargs.pop("parallel", False)
  self._evaluators: List[Callable[..., Dict[str, Union[str, float]]]] = [
  ViolenceEvaluator(credential, azure_ai_project),
  SexualEvaluator(credential, azure_ai_project),
@@ -70,24 +79,52 @@ class ContentSafetyEvaluator:
  HateUnfairnessEvaluator(credential, azure_ai_project),
  ]

- def __call__(self, *, query: str, response: str, **kwargs):
+ @override
+ def __call__(
+ self,
+ *,
+ query: Optional[str] = None,
+ response: Optional[str] = None,
+ conversation=None,
+ **kwargs,
+ ):
+ """Evaluate a collection of content safety metrics for the given query/response pair or conversation.
+ The inputs must supply either a query AND response, or a conversation, but not both.
+
+ :keyword query: The query to evaluate.
+ :paramtype query: Optional[str]
+ :keyword response: The response to evaluate.
+ :paramtype response: Optional[str]
+ :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
+ key "messages", and potentially a global context under the key "context". Conversation turns are expected
+ to be dictionaries with keys "content", "role", and possibly "context".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
+ :return: The evaluation result.
+ :rtype: Union[Dict[str, Union[str, float]], Dict[str, Union[str, float, Dict[str, List[Union[str, float]]]]]]
  """
- Evaluates content-safety metrics for "question-answering" scenario.
-
- :keyword query: The query to be evaluated.
- :paramtype query: str
- :keyword response: The response to be evaluated.
- :paramtype response: str
- :keyword parallel: Whether to evaluate in parallel.
- :paramtype parallel: bool
- :return: The scores for content-safety.
- :rtype: Dict[str, Union[str, float]]
+ return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
+
+ @override
+ async def _do_eval(self, eval_input: Dict) -> Dict[str, Union[str, float]]:
+ """Perform the evaluation using the Azure AI RAI service.
+ The exact evaluation performed is determined by the evaluation metric supplied
+ by the child class initializer.
+
+ :param eval_input: The input to the evaluation function.
+ :type eval_input: Dict
+ :return: The evaluation result.
+ :rtype: Dict
  """
+ query = eval_input.get("query", None)
+ response = eval_input.get("response", None)
+ conversation = eval_input.get("conversation", None)
  results: Dict[str, Union[str, float]] = {}
+ # TODO fix this to not explode on empty optional inputs (PF SDK error)
  if self._parallel:
  with ThreadPoolExecutor() as executor:
+ # pylint: disable=no-value-for-parameter
  futures = {
- executor.submit(evaluator, query=query, response=response, **kwargs): evaluator
+ executor.submit(query=query, response=response, conversation=conversation): evaluator
  for evaluator in self._evaluators
  }

@@ -95,7 +132,7 @@ class ContentSafetyEvaluator:
  results.update(future.result())
  else:
  for evaluator in self._evaluators:
- result = evaluator(query=query, response=response, **kwargs)
+ result = evaluator(query=query, response=response, conversation=conversation)
  results.update(result)

  return results
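
The hunks above are from azure/ai/evaluation/_evaluators/_content_safety/_content_safety.py: ContentSafetyEvaluator now derives from EvaluatorBase, is marked @experimental, accepts either a query/response pair or a conversation, and runs its child evaluators sequentially by default (parallel execution is temporarily opt-in via the parallel keyword). A minimal usage sketch of the new call pattern follows; the project identifiers and credential are placeholders, and the exact keys in the result depend on the per-metric evaluators:

    # Hedged example of the 1.0.0b5 call pattern; all project values below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.ai.evaluation import ContentSafetyEvaluator

    azure_ai_project = {
        "subscription_id": "<subscription-id>",
        "resource_group_name": "<resource-group>",
        "project_name": "<project-name>",
    }

    safety_eval = ContentSafetyEvaluator(DefaultAzureCredential(), azure_ai_project)

    # Single-turn mode: pass query and response, but not conversation.
    result = safety_eval(
        query="What is the capital of France?",
        response="Paris is the capital of France.",
    )
    print(result)  # per-metric scores and reasons for violence, sexual, self-harm, hate/unfairness
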
@@ -92,17 +92,17 @@ class ContentSafetyChatEvaluator:
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  parallel: bool = True,
  ):
  self._eval_last_turn = eval_last_turn
  self._parallel = parallel
  self._evaluators: List[Callable[..., Dict[str, Union[str, float]]]] = [
- ViolenceEvaluator(azure_ai_project, credential),
- SexualEvaluator(azure_ai_project, credential),
- SelfHarmEvaluator(azure_ai_project, credential),
- HateUnfairnessEvaluator(azure_ai_project, credential),
+ ViolenceEvaluator(credential, azure_ai_project),
+ SexualEvaluator(credential, azure_ai_project),
+ SelfHarmEvaluator(credential, azure_ai_project),
+ HateUnfairnessEvaluator(credential, azure_ai_project),
  ]

  def __call__(self, *, conversation: list, **kwargs):
@@ -1,12 +1,16 @@
  # ---------------------------------------------------------
  # Copyright (c) Microsoft Corporation. All rights reserved.
  # ---------------------------------------------------------
+ from typing import Optional
+
  from typing_extensions import override

+ from azure.ai.evaluation._common._experimental import experimental
  from azure.ai.evaluation._common.constants import EvaluationMetrics
  from azure.ai.evaluation._evaluators._common import RaiServiceEvaluatorBase


+ @experimental
  class HateUnfairnessEvaluator(RaiServiceEvaluatorBase):
  """
  Initialize a hate-unfairness evaluator for hate unfairness score.
@@ -44,7 +48,7 @@ class HateUnfairnessEvaluator(RaiServiceEvaluatorBase):
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  ):
  super().__init__(
@@ -53,3 +57,28 @@ class HateUnfairnessEvaluator(RaiServiceEvaluatorBase):
  credential=credential,
  eval_last_turn=eval_last_turn,
  )
+
+ @override
+ def __call__(
+ self,
+ *,
+ query: Optional[str] = None,
+ response: Optional[str] = None,
+ conversation=None,
+ **kwargs,
+ ):
+ """
+ Evaluate whether hateful content is present in your AI system's response.
+
+ :keyword query: The query to be evaluated.
+ :paramtype query: str
+ :keyword response: The response to be evaluated.
+ :paramtype response: str
+ :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
+ key "messages". Conversation turns are expected
+ to be dictionaries with keys "content" and "role".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
+ :return: The hate/unfairness score.
+ :rtype: Union[Dict[str, Union[str, float]], Dict[str, Union[str, float, Dict[str, List[Union[str, float]]]]]]
+ """
+ return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
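
The preceding hunks modify azure/ai/evaluation/_evaluators/_content_safety/_hate_unfairness.py; the same pattern repeats below for the self-harm, sexual, and violence evaluators. Each per-metric evaluator is now marked @experimental and exposes an explicit __call__ that accepts either a query/response pair or a conversation. A hedged single-metric sketch, with the same placeholder project values as above:

    # Hypothetical example; project identifiers are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.ai.evaluation import HateUnfairnessEvaluator

    azure_ai_project = {
        "subscription_id": "<subscription-id>",
        "resource_group_name": "<resource-group>",
        "project_name": "<project-name>",
    }

    hate_eval = HateUnfairnessEvaluator(DefaultAzureCredential(), azure_ai_project)
    result = hate_eval(
        query="Tell me about my new coworkers.",
        response="Your coworkers seem friendly and helpful.",
    )
    print(result)  # expected keys along the lines of hate_unfairness, hate_unfairness_score, hate_unfairness_reason
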
@@ -1,12 +1,16 @@
  # ---------------------------------------------------------
  # Copyright (c) Microsoft Corporation. All rights reserved.
  # ---------------------------------------------------------
+ from typing import Optional
+
  from typing_extensions import override

+ from azure.ai.evaluation._common._experimental import experimental
  from azure.ai.evaluation._common.constants import EvaluationMetrics
  from azure.ai.evaluation._evaluators._common import RaiServiceEvaluatorBase


+ @experimental
  class SelfHarmEvaluator(RaiServiceEvaluatorBase):
  """
  Initialize a self harm evaluator for self harm score.
@@ -44,7 +48,7 @@ class SelfHarmEvaluator(RaiServiceEvaluatorBase):
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  ):
  super().__init__(
@@ -53,3 +57,28 @@ class SelfHarmEvaluator(RaiServiceEvaluatorBase):
  credential=credential,
  eval_last_turn=eval_last_turn,
  )
+
+ @override
+ def __call__(
+ self,
+ *,
+ query: Optional[str] = None,
+ response: Optional[str] = None,
+ conversation=None,
+ **kwargs,
+ ):
+ """
+ Evaluate whether self-harm content is present in your AI system's response.
+
+ :keyword query: The query to be evaluated.
+ :paramtype query: str
+ :keyword response: The response to be evaluated.
+ :paramtype response: str
+ :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
+ key "messages". Conversation turns are expected
+ to be dictionaries with keys "content" and "role".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
+ :return: The self-harm score.
+ :rtype: Union[Dict[str, Union[str, float]], Dict[str, Union[str, float, Dict[str, List[Union[str, float]]]]]]
+ """
+ return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
@@ -1,12 +1,16 @@
  # ---------------------------------------------------------
  # Copyright (c) Microsoft Corporation. All rights reserved.
  # ---------------------------------------------------------
+ from typing import Optional
+
  from typing_extensions import override

+ from azure.ai.evaluation._common._experimental import experimental
  from azure.ai.evaluation._common.constants import EvaluationMetrics
  from azure.ai.evaluation._evaluators._common import RaiServiceEvaluatorBase


+ @experimental
  class SexualEvaluator(RaiServiceEvaluatorBase):
  """
  Initialize a sexual evaluator for sexual score.
@@ -44,7 +48,7 @@ class SexualEvaluator(RaiServiceEvaluatorBase):
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  ):
  super().__init__(
@@ -53,3 +57,28 @@ class SexualEvaluator(RaiServiceEvaluatorBase):
  credential=credential,
  eval_last_turn=eval_last_turn,
  )
+
+ @override
+ def __call__(
+ self,
+ *,
+ query: Optional[str] = None,
+ response: Optional[str] = None,
+ conversation=None,
+ **kwargs,
+ ):
+ """
+ Evaluate whether sexual content is present in your AI system's response.
+
+ :keyword query: The query to be evaluated.
+ :paramtype query: str
+ :keyword response: The response to be evaluated.
+ :paramtype response: str
+ :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
+ key "messages". Conversation turns are expected
+ to be dictionaries with keys "content" and "role".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
+ :return: The sexual content score.
+ :rtype: Union[Dict[str, Union[str, float]], Dict[str, Union[str, float, Dict[str, List[Union[str, float]]]]]]
+ """
+ return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
@@ -1,12 +1,16 @@
  # ---------------------------------------------------------
  # Copyright (c) Microsoft Corporation. All rights reserved.
  # ---------------------------------------------------------
+ from typing import Optional
+
  from typing_extensions import override

+ from azure.ai.evaluation._common._experimental import experimental
  from azure.ai.evaluation._common.constants import EvaluationMetrics
  from azure.ai.evaluation._evaluators._common import RaiServiceEvaluatorBase


+ @experimental
  class ViolenceEvaluator(RaiServiceEvaluatorBase):
  """
  Initialize a violence evaluator for violence score.
@@ -44,7 +48,7 @@ class ViolenceEvaluator(RaiServiceEvaluatorBase):
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  ):
  super().__init__(
@@ -53,3 +57,28 @@ class ViolenceEvaluator(RaiServiceEvaluatorBase):
  credential=credential,
  eval_last_turn=eval_last_turn,
  )
+
+ @override
+ def __call__(
+ self,
+ *,
+ query: Optional[str] = None,
+ response: Optional[str] = None,
+ conversation=None,
+ **kwargs,
+ ):
+ """
+ Evaluate whether violent content is present in your AI system's response.
+
+ :keyword query: The query to be evaluated.
+ :paramtype query: str
+ :keyword response: The response to be evaluated.
+ :paramtype response: str
+ :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
+ key "messages". Conversation turns are expected
+ to be dictionaries with keys "content" and "role".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
+ :return: The violence score.
+ :rtype: Union[Dict[str, Union[str, float]], Dict[str, Union[str, float, Dict[str, List[Union[str, float]]]]]]
+ """
+ return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
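
All four per-metric evaluators (and ContentSafetyEvaluator above) can alternatively be called in conversation mode. The docstrings only sketch the expected payload, so the shape below is inferred rather than taken from the package; the actual Conversation type lives in azure/ai/evaluation/_model_configurations.py, which also changed in this release:

    # Illustrative conversation payload inferred from the docstrings in the hunks above.
    conversation = {
        "messages": [
            {"role": "user", "content": "How do I bandage a small cut?"},
            {"role": "assistant", "content": "Clean the wound, apply gentle pressure, and cover it with a sterile bandage."},
        ]
    }

    # Any of the safety evaluators accepts this instead of query/response, e.g.:
    # result = violence_eval(conversation=conversation)
    # Multi-turn results are aggregated, with per-turn details nested in the returned dict.
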
@@ -3,10 +3,12 @@
  # ---------------------------------------------------------
  from typing_extensions import override

+ from azure.ai.evaluation._common._experimental import experimental
  from azure.ai.evaluation._common.constants import _InternalEvaluationMetrics
  from azure.ai.evaluation._evaluators._common import RaiServiceEvaluatorBase


+ @experimental
  class ECIEvaluator(RaiServiceEvaluatorBase):
  """
  Initialize an ECI evaluator to evaluate ECI based on the following guidelines:
@@ -51,7 +53,7 @@ class ECIEvaluator(RaiServiceEvaluatorBase):
  def __init__(
  self,
  credential,
- azure_ai_project: dict,
+ azure_ai_project,
  eval_last_turn: bool = False,
  ):
  super().__init__(
@@ -23,51 +23,51 @@ class FluencyEvaluator(PromptyEvaluatorBase):
  .. code-block:: python

  eval_fn = FluencyEvaluator(model_config)
- result = eval_fn(
- query="What is the capital of Japan?",
- response="The capital of Japan is Tokyo.")
+ result = eval_fn(response="The capital of Japan is Tokyo.")

  **Output format**

  .. code-block:: python

  {
- "gpt_fluency": 4.0
+ "fluency": 4.0,
+ "gpt_fluency": 4.0,
  }
+
+ Note: To align with our support of a diverse set of models, a key without the `gpt_` prefix has been added.
+ To maintain backwards compatibility, the old key with the `gpt_` prefix is still present in the output;
+ however, it is recommended to use the new key moving forward as the old key will be deprecated in the future.
  """

- PROMPTY_FILE = "fluency.prompty"
- RESULT_KEY = "gpt_fluency"
+ _PROMPTY_FILE = "fluency.prompty"
+ _RESULT_KEY = "fluency"

  @override
- def __init__(self, model_config: dict):
+ def __init__(self, model_config):
  current_dir = os.path.dirname(__file__)
- prompty_path = os.path.join(current_dir, self.PROMPTY_FILE)
- super().__init__(model_config=model_config, prompty_file=prompty_path, result_key=self.RESULT_KEY)
+ prompty_path = os.path.join(current_dir, self._PROMPTY_FILE)
+ super().__init__(model_config=model_config, prompty_file=prompty_path, result_key=self._RESULT_KEY)

  @override
  def __call__(
  self,
  *,
- query: Optional[str] = None,
  response: Optional[str] = None,
- conversation: Optional[dict] = None,
+ conversation=None,
  **kwargs,
  ):
  """
- Evaluate fluency. Accepts either a query and response for a single evaluation,
+ Evaluate fluency. Accepts either a response for a single evaluation,
  or a conversation for a multi-turn evaluation. If the conversation has more than one turn,
  the evaluator will aggregate the results of each turn.

- :keyword query: The query to be evaluated.
- :paramtype query: str
- :keyword response: The response to be evaluated.
+ :keyword response: The response to be evaluated. Mutually exclusive with the "conversation" parameter.
  :paramtype response: str
  :keyword conversation: The conversation to evaluate. Expected to contain a list of conversation turns under the
- key "messages". Conversation turns are expected
- to be dictionaries with keys "content" and "role".
- :paramtype conversation: Optional[Dict]
+ key "messages". Conversation turns are expected to be dictionaries with keys "content" and "role".
+ :paramtype conversation: Optional[~azure.ai.evaluation.Conversation]
  :return: The fluency score.
- :rtype: Dict[str, float]
+ :rtype: Union[Dict[str, float], Dict[str, Union[float, Dict[str, List[float]]]]]
  """
- return super().__call__(query=query, response=response, conversation=conversation, **kwargs)
+
+ return super().__call__(response=response, conversation=conversation, **kwargs)
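
The hunks above are from azure/ai/evaluation/_evaluators/_fluency/_fluency.py: fluency is now judged from the response alone (the query parameter was removed), the primary result key is "fluency" with "gpt_fluency" retained for backwards compatibility, and the class-level constants were made private. A short sketch of both call styles; the model configuration fields below are placeholders and assume an Azure OpenAI deployment:

    # Hedged example; endpoint, deployment, and key are placeholders.
    from azure.ai.evaluation import FluencyEvaluator

    model_config = {
        "azure_endpoint": "https://<your-resource>.openai.azure.com",
        "azure_deployment": "<deployment-name>",
        "api_key": "<api-key>",
    }

    fluency_eval = FluencyEvaluator(model_config)

    # Single response
    print(fluency_eval(response="The capital of Japan is Tokyo."))
    # -> {"fluency": 4.0, "gpt_fluency": 4.0}  (values illustrative)

    # Multi-turn conversation; per-turn scores are aggregated
    conversation = {
        "messages": [
            {"role": "user", "content": "What is the capital of Japan?"},
            {"role": "assistant", "content": "The capital of Japan is Tokyo."},
        ]
    }
    print(fluency_eval(conversation=conversation))
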
@@ -5,7 +5,7 @@ model:
  api: chat
  parameters:
  temperature: 0.0
- max_tokens: 1
+ max_tokens: 800
  top_p: 1.0
  presence_penalty: 0
  frequency_penalty: 0
@@ -13,44 +13,74 @@ model:
  type: text

  inputs:
- query:
- type: string
  response:
  type: string

  ---
  system:
- You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. You should return a single integer value between 1 to 5 representing the evaluation metric. You will include no other text or information.
+ # Instruction
+ ## Goal
+ ### You are an expert in evaluating the quality of a RESPONSE from an intelligent system based on provided definition and data. Your goal will involve answering the questions below using the information provided.
+ - **Definition**: You are given a definition of the communication trait that is being evaluated to help guide your Score.
+ - **Data**: Your input data include a RESPONSE.
+ - **Tasks**: To complete your evaluation you will be asked to evaluate the Data in different ways.
+
  user:
- Fluency measures the quality of individual sentences in the answer, and whether they are well-written and grammatically correct. Consider the quality of individual sentences when evaluating fluency. Given the question and answer, score the fluency of the answer between one to five stars using the following rating scale:
- One star: the answer completely lacks fluency
- Two stars: the answer mostly lacks fluency
- Three stars: the answer is partially fluent
- Four stars: the answer is mostly fluent
- Five stars: the answer has perfect fluency
-
- This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
-
- question: What did you have for breakfast today?
- answer: Breakfast today, me eating cereal and orange juice very good.
- stars: 1
-
- question: How do you feel when you travel alone?
- answer: Alone travel, nervous, but excited also. I feel adventure and like its time.
- stars: 2
-
- question: When was the last time you went on a family vacation?
- answer: Last family vacation, it took place in last summer. We traveled to a beach destination, very fun.
- stars: 3
-
- question: What is your favorite thing about your job?
- answer: My favorite aspect of my job is the chance to interact with diverse people. I am constantly learning from their experiences and stories.
- stars: 4
-
- question: Can you describe your morning routine?
- answer: Every morning, I wake up at 6 am, drink a glass of water, and do some light stretching. After that, I take a shower and get dressed for work. Then, I have a healthy breakfast, usually consisting of oatmeal and fruits, before leaving the house around 7:30 am.
- stars: 5
-
- question: {{query}}
- answer: {{response}}
- stars:
+ # Definition
+ **Fluency** refers to the effectiveness and clarity of written communication, focusing on grammatical accuracy, vocabulary range, sentence complexity, coherence, and overall readability. It assesses how smoothly ideas are conveyed and how easily the text can be understood by the reader.
+
+ # Ratings
+ ## [Fluency: 1] (Emergent Fluency)
+ **Definition:** The response shows minimal command of the language. It contains pervasive grammatical errors, extremely limited vocabulary, and fragmented or incoherent sentences. The message is largely incomprehensible, making understanding very difficult.
+
+ **Examples:**
+ **Response:** Free time I. Go park. Not fun. Alone.
+
+ **Response:** Like food pizza. Good cheese eat.
+
+ ## [Fluency: 2] (Basic Fluency)
+ **Definition:** The response communicates simple ideas but has frequent grammatical errors and limited vocabulary. Sentences are short and may be improperly constructed, leading to partial understanding. Repetition and awkward phrasing are common.
+
+ **Examples:**
+ **Response:** I like play soccer. I watch movie. It fun.
+
+ **Response:** My town small. Many people. We have market.
+
+ ## [Fluency: 3] (Competent Fluency)
+ **Definition:** The response clearly conveys ideas with occasional grammatical errors. Vocabulary is adequate but not extensive. Sentences are generally correct but may lack complexity and variety. The text is coherent, and the message is easily understood with minimal effort.
+
+ **Examples:**
+ **Response:** I'm planning to visit friends and maybe see a movie together.
+
+ **Response:** I try to eat healthy food and exercise regularly by jogging.
+
+ ## [Fluency: 4] (Proficient Fluency)
+ **Definition:** The response is well-articulated with good control of grammar and a varied vocabulary. Sentences are complex and well-structured, demonstrating coherence and cohesion. Minor errors may occur but do not affect overall understanding. The text flows smoothly, and ideas are connected logically.
+
+ **Examples:**
+ **Response:** My interest in mathematics and problem-solving inspired me to become an engineer, as I enjoy designing solutions that improve people's lives.
+
+ **Response:** Environmental conservation is crucial because it protects ecosystems, preserves biodiversity, and ensures natural resources are available for future generations.
+
+ ## [Fluency: 5] (Exceptional Fluency)
+ **Definition:** The response demonstrates an exceptional command of language with sophisticated vocabulary and complex, varied sentence structures. It is coherent, cohesive, and engaging, with precise and nuanced expression. Grammar is flawless, and the text reflects a high level of eloquence and style.
+
+ **Examples:**
+ **Response:** Globalization exerts a profound influence on cultural diversity by facilitating unprecedented cultural exchange while simultaneously risking the homogenization of distinct cultural identities, which can diminish the richness of global heritage.
+
+ **Response:** Technology revolutionizes modern education by providing interactive learning platforms, enabling personalized learning experiences, and connecting students worldwide, thereby transforming how knowledge is acquired and shared.
+
+
+ # Data
+ RESPONSE: {{response}}
+
+
+ # Tasks
+ ## Please provide your assessment Score for the previous RESPONSE based on the Definitions above. Your output should include the following information:
+ - **ThoughtChain**: To improve the reasoning process, think step by step and include a step-by-step explanation of your thought process as you analyze the data based on the definitions. Keep it brief and start your ThoughtChain with "Let's think step by step:".
+ - **Explanation**: a very short explanation of why you think the input Data should get that Score.
+ - **Score**: based on your previous analysis, provide your Score. The Score you give MUST be an integer score (i.e., "1", "2"...) based on the levels of the definitions.
+
+
+ ## Please provide your answers between the tags: <S0>your chain of thoughts</S0>, <S1>your explanation</S1>, <S2>your Score</S2>.
+ # Output
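
The rewritten fluency.prompty raises max_tokens from 1 to 800 because the model no longer returns a bare 1-5 rating; it now emits a chain of thought, an explanation, and a score wrapped in <S0>/<S1>/<S2> tags. How the SDK extracts the score is not part of this diff, but a stand-alone parser for that tag format could look like the sketch below (the regex and fallback behavior are assumptions, not the library's implementation):

    import re
    from typing import Optional

    def parse_tagged_output(llm_output: str) -> dict:
        """Extract the chain of thought (<S0>), explanation (<S1>), and score (<S2>) from a prompty reply."""

        def _tag(name: str) -> Optional[str]:
            match = re.search(rf"<{name}>(.*?)</{name}>", llm_output, re.DOTALL)
            return match.group(1).strip() if match else None

        score_text = _tag("S2")
        return {
            "thought_chain": _tag("S0"),
            "explanation": _tag("S1"),
            # The prompt demands an integer between 1 and 5; fall back to None if the model misbehaves.
            "score": float(score_text) if score_text and score_text.isdigit() else None,
        }

    sample = "<S0>Let's think step by step: ...</S0><S1>Clear, grammatical, varied sentences.</S1><S2>4</S2>"
    print(parse_tagged_output(sample))  # {'thought_chain': ..., 'explanation': ..., 'score': 4.0}
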
@@ -61,7 +61,7 @@ class GleuScoreEvaluator:
  :keyword ground_truth: The ground truth to be compared against.
  :paramtype ground_truth: str
  :return: The GLEU score.
- :rtype: dict
+ :rtype: Dict[str, float]
  """
  return async_run_allowing_running_loop(
  self._async_evaluator, ground_truth=ground_truth, response=response, **kwargs