azure-ai-textanalytics 5.3.0b1__py3-none-any.whl → 6.0.0b1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of azure-ai-textanalytics might be problematic.

Files changed (128)
  1. azure/ai/textanalytics/__init__.py +26 -197
  2. azure/ai/textanalytics/_client.py +111 -0
  3. azure/ai/textanalytics/_configuration.py +73 -0
  4. azure/ai/textanalytics/{_generated/v2022_05_01/operations → _operations}/__init__.py +13 -8
  5. azure/ai/textanalytics/_operations/_operations.py +716 -0
  6. azure/ai/textanalytics/{_generated/v2022_05_01/models → _operations}/_patch.py +8 -6
  7. azure/ai/textanalytics/_patch.py +350 -0
  8. azure/ai/textanalytics/{_generated/aio → _utils}/__init__.py +1 -5
  9. azure/ai/textanalytics/_utils/model_base.py +1237 -0
  10. azure/ai/textanalytics/{_generated/_serialization.py → _utils/serialization.py} +640 -616
  11. azure/ai/textanalytics/{_generated/v2022_05_01/aio/_vendor.py → _utils/utils.py} +10 -12
  12. azure/ai/textanalytics/_version.py +8 -7
  13. azure/ai/textanalytics/aio/__init__.py +25 -14
  14. azure/ai/textanalytics/aio/_client.py +115 -0
  15. azure/ai/textanalytics/aio/_configuration.py +75 -0
  16. azure/ai/textanalytics/{_generated/v2022_10_01_preview/aio/operations → aio/_operations}/__init__.py +13 -8
  17. azure/ai/textanalytics/aio/_operations/_operations.py +623 -0
  18. azure/ai/textanalytics/{_generated/v2022_05_01 → aio/_operations}/_patch.py +8 -6
  19. azure/ai/textanalytics/aio/_patch.py +344 -0
  20. azure/ai/textanalytics/models/__init__.py +402 -0
  21. azure/ai/textanalytics/models/_enums.py +1979 -0
  22. azure/ai/textanalytics/models/_models.py +6641 -0
  23. azure/ai/textanalytics/{_generated/v2022_05_01/aio → models}/_patch.py +8 -6
  24. azure/ai/textanalytics/py.typed +1 -0
  25. {azure_ai_textanalytics-5.3.0b1.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/METADATA +755 -319
  26. azure_ai_textanalytics-6.0.0b1.dist-info/RECORD +29 -0
  27. {azure_ai_textanalytics-5.3.0b1.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/WHEEL +1 -1
  28. azure/ai/textanalytics/_base_client.py +0 -111
  29. azure/ai/textanalytics/_check.py +0 -22
  30. azure/ai/textanalytics/_dict_mixin.py +0 -54
  31. azure/ai/textanalytics/_generated/__init__.py +0 -16
  32. azure/ai/textanalytics/_generated/_configuration.py +0 -70
  33. azure/ai/textanalytics/_generated/_operations_mixin.py +0 -795
  34. azure/ai/textanalytics/_generated/_text_analytics_client.py +0 -126
  35. azure/ai/textanalytics/_generated/_version.py +0 -8
  36. azure/ai/textanalytics/_generated/aio/_configuration.py +0 -66
  37. azure/ai/textanalytics/_generated/aio/_operations_mixin.py +0 -776
  38. azure/ai/textanalytics/_generated/aio/_text_analytics_client.py +0 -124
  39. azure/ai/textanalytics/_generated/models.py +0 -8
  40. azure/ai/textanalytics/_generated/v2022_05_01/__init__.py +0 -20
  41. azure/ai/textanalytics/_generated/v2022_05_01/_configuration.py +0 -72
  42. azure/ai/textanalytics/_generated/v2022_05_01/_text_analytics_client.py +0 -100
  43. azure/ai/textanalytics/_generated/v2022_05_01/_vendor.py +0 -45
  44. azure/ai/textanalytics/_generated/v2022_05_01/aio/__init__.py +0 -20
  45. azure/ai/textanalytics/_generated/v2022_05_01/aio/_configuration.py +0 -71
  46. azure/ai/textanalytics/_generated/v2022_05_01/aio/_text_analytics_client.py +0 -97
  47. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/__init__.py +0 -18
  48. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/_patch.py +0 -121
  49. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/_text_analytics_client_operations.py +0 -603
  50. azure/ai/textanalytics/_generated/v2022_05_01/models/__init__.py +0 -281
  51. azure/ai/textanalytics/_generated/v2022_05_01/models/_models_py3.py +0 -5722
  52. azure/ai/textanalytics/_generated/v2022_05_01/models/_text_analytics_client_enums.py +0 -439
  53. azure/ai/textanalytics/_generated/v2022_05_01/operations/_patch.py +0 -120
  54. azure/ai/textanalytics/_generated/v2022_05_01/operations/_text_analytics_client_operations.py +0 -744
  55. azure/ai/textanalytics/_generated/v2022_10_01_preview/__init__.py +0 -20
  56. azure/ai/textanalytics/_generated/v2022_10_01_preview/_configuration.py +0 -72
  57. azure/ai/textanalytics/_generated/v2022_10_01_preview/_patch.py +0 -19
  58. azure/ai/textanalytics/_generated/v2022_10_01_preview/_text_analytics_client.py +0 -100
  59. azure/ai/textanalytics/_generated/v2022_10_01_preview/_vendor.py +0 -45
  60. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/__init__.py +0 -20
  61. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_configuration.py +0 -71
  62. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_patch.py +0 -19
  63. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_text_analytics_client.py +0 -97
  64. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_vendor.py +0 -27
  65. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/operations/_patch.py +0 -121
  66. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/operations/_text_analytics_client_operations.py +0 -603
  67. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/__init__.py +0 -407
  68. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_models_py3.py +0 -8462
  69. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_patch.py +0 -72
  70. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_text_analytics_client_enums.py +0 -730
  71. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/__init__.py +0 -18
  72. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/_patch.py +0 -120
  73. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/_text_analytics_client_operations.py +0 -744
  74. azure/ai/textanalytics/_generated/v3_0/__init__.py +0 -20
  75. azure/ai/textanalytics/_generated/v3_0/_configuration.py +0 -66
  76. azure/ai/textanalytics/_generated/v3_0/_patch.py +0 -31
  77. azure/ai/textanalytics/_generated/v3_0/_text_analytics_client.py +0 -96
  78. azure/ai/textanalytics/_generated/v3_0/_vendor.py +0 -33
  79. azure/ai/textanalytics/_generated/v3_0/aio/__init__.py +0 -20
  80. azure/ai/textanalytics/_generated/v3_0/aio/_configuration.py +0 -65
  81. azure/ai/textanalytics/_generated/v3_0/aio/_patch.py +0 -31
  82. azure/ai/textanalytics/_generated/v3_0/aio/_text_analytics_client.py +0 -93
  83. azure/ai/textanalytics/_generated/v3_0/aio/_vendor.py +0 -27
  84. azure/ai/textanalytics/_generated/v3_0/aio/operations/__init__.py +0 -18
  85. azure/ai/textanalytics/_generated/v3_0/aio/operations/_patch.py +0 -19
  86. azure/ai/textanalytics/_generated/v3_0/aio/operations/_text_analytics_client_operations.py +0 -428
  87. azure/ai/textanalytics/_generated/v3_0/models/__init__.py +0 -81
  88. azure/ai/textanalytics/_generated/v3_0/models/_models_py3.py +0 -1467
  89. azure/ai/textanalytics/_generated/v3_0/models/_patch.py +0 -19
  90. azure/ai/textanalytics/_generated/v3_0/models/_text_analytics_client_enums.py +0 -58
  91. azure/ai/textanalytics/_generated/v3_0/operations/__init__.py +0 -18
  92. azure/ai/textanalytics/_generated/v3_0/operations/_patch.py +0 -19
  93. azure/ai/textanalytics/_generated/v3_0/operations/_text_analytics_client_operations.py +0 -604
  94. azure/ai/textanalytics/_generated/v3_1/__init__.py +0 -20
  95. azure/ai/textanalytics/_generated/v3_1/_configuration.py +0 -66
  96. azure/ai/textanalytics/_generated/v3_1/_patch.py +0 -31
  97. azure/ai/textanalytics/_generated/v3_1/_text_analytics_client.py +0 -98
  98. azure/ai/textanalytics/_generated/v3_1/_vendor.py +0 -45
  99. azure/ai/textanalytics/_generated/v3_1/aio/__init__.py +0 -20
  100. azure/ai/textanalytics/_generated/v3_1/aio/_configuration.py +0 -65
  101. azure/ai/textanalytics/_generated/v3_1/aio/_patch.py +0 -31
  102. azure/ai/textanalytics/_generated/v3_1/aio/_text_analytics_client.py +0 -95
  103. azure/ai/textanalytics/_generated/v3_1/aio/_vendor.py +0 -27
  104. azure/ai/textanalytics/_generated/v3_1/aio/operations/__init__.py +0 -18
  105. azure/ai/textanalytics/_generated/v3_1/aio/operations/_patch.py +0 -19
  106. azure/ai/textanalytics/_generated/v3_1/aio/operations/_text_analytics_client_operations.py +0 -1291
  107. azure/ai/textanalytics/_generated/v3_1/models/__init__.py +0 -205
  108. azure/ai/textanalytics/_generated/v3_1/models/_models_py3.py +0 -3976
  109. azure/ai/textanalytics/_generated/v3_1/models/_patch.py +0 -19
  110. azure/ai/textanalytics/_generated/v3_1/models/_text_analytics_client_enums.py +0 -367
  111. azure/ai/textanalytics/_generated/v3_1/operations/__init__.py +0 -18
  112. azure/ai/textanalytics/_generated/v3_1/operations/_patch.py +0 -19
  113. azure/ai/textanalytics/_generated/v3_1/operations/_text_analytics_client_operations.py +0 -1709
  114. azure/ai/textanalytics/_lro.py +0 -552
  115. azure/ai/textanalytics/_models.py +0 -3142
  116. azure/ai/textanalytics/_policies.py +0 -66
  117. azure/ai/textanalytics/_request_handlers.py +0 -104
  118. azure/ai/textanalytics/_response_handlers.py +0 -580
  119. azure/ai/textanalytics/_text_analytics_client.py +0 -1802
  120. azure/ai/textanalytics/_user_agent.py +0 -8
  121. azure/ai/textanalytics/_validate.py +0 -113
  122. azure/ai/textanalytics/aio/_base_client_async.py +0 -95
  123. azure/ai/textanalytics/aio/_lro_async.py +0 -501
  124. azure/ai/textanalytics/aio/_response_handlers_async.py +0 -94
  125. azure/ai/textanalytics/aio/_text_analytics_client_async.py +0 -1800
  126. azure_ai_textanalytics-5.3.0b1.dist-info/RECORD +0 -115
  127. {azure_ai_textanalytics-5.3.0b1.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info/licenses}/LICENSE +0 -0
  128. {azure_ai_textanalytics-5.3.0b1.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/top_level.txt +0 -0
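The per-file `+added -removed` counts in the listing above can be tallied mechanically. A stdlib-only sketch (the helper name and sample entries are illustrative, not part of the package):

```python
import re

# Matches the trailing "+added -removed" stats on a "Files changed" entry.
STAT_RE = re.compile(r"\+(\d+)\s+-(\d+)")

def total_churn(entries):
    """Sum added/removed line counts from strings like
    'azure/ai/textanalytics/_version.py +8 -7'."""
    added = removed = 0
    for entry in entries:
        m = STAT_RE.search(entry)
        if m:  # entries without stats are skipped
            added += int(m.group(1))
            removed += int(m.group(2))
    return added, removed

sample = [
    "azure/ai/textanalytics/__init__.py +26 -197",
    "azure/ai/textanalytics/_client.py +111 -0",
    "azure/ai/textanalytics/_version.py +8 -7",
]
print(total_churn(sample))  # (145, 204)
```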
@@ -1,29 +1,27 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.4
  Name: azure-ai-textanalytics
- Version: 5.3.0b1
- Summary: Microsoft Azure Text Analytics Client Library for Python
- Home-page: https://github.com/Azure/azure-sdk-for-python
- Author: Microsoft Corporation
- Author-email: azpysdkhelp@microsoft.com
- License: MIT License
- Keywords: azure,azure sdk,text analytics,cognitive services,natural language processing
+ Version: 6.0.0b1
+ Summary: Microsoft Corporation Azure Ai Textanalytics Client Library for Python
+ Author-email: Microsoft Corporation <azpysdkhelp@microsoft.com>
+ License-Expression: MIT
+ Project-URL: repository, https://github.com/Azure/azure-sdk-for-python
+ Keywords: azure,azure sdk
  Classifier: Development Status :: 4 - Beta
  Classifier: Programming Language :: Python
  Classifier: Programming Language :: Python :: 3 :: Only
  Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.7
- Classifier: Programming Language :: Python :: 3.8
  Classifier: Programming Language :: Python :: 3.9
  Classifier: Programming Language :: Python :: 3.10
  Classifier: Programming Language :: Python :: 3.11
- Classifier: License :: OSI Approved :: MIT License
- Requires-Python: >=3.7
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.9
  Description-Content-Type: text/markdown
  License-File: LICENSE
- Requires-Dist: azure-core (<2.0.0,>=1.24.0)
- Requires-Dist: azure-common (~=1.1)
- Requires-Dist: isodate (<1.0.0,>=0.6.1)
- Requires-Dist: typing-extensions (>=4.0.1)
+ Requires-Dist: isodate>=0.6.1
+ Requires-Dist: azure-core>=1.35.0
+ Requires-Dist: typing-extensions>=4.6.0
+ Dynamic: license-file
 
  # Azure Text Analytics client library for Python
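The hunk above also shows `Requires-Dist` moving from the older parenthesized spelling to the bare PEP 508 style. A small normalizer for comparing the two spellings (assumed formats, illustrative only):

```python
import re

def parse_requires_dist(line):
    """Turn 'Requires-Dist: name (spec)' or 'Requires-Dist: namespec'
    into a (name, specifier) pair."""
    value = line.split(":", 1)[1].strip()
    # Name chars stop at the first operator or '('; spec is what remains.
    m = re.match(r"([A-Za-z0-9._-]+)\s*\(?([^)]*)\)?", value)
    return m.group(1), m.group(2).strip()

old = parse_requires_dist("Requires-Dist: azure-core (<2.0.0,>=1.24.0)")
new = parse_requires_dist("Requires-Dist: azure-core>=1.35.0")
print(old, new)
```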
 
@@ -41,9 +39,13 @@ The Azure Cognitive Service for Language is a cloud-based service that provides
  - Custom Text Classification
  - Extractive Text Summarization
  - Abstractive Text Summarization
- - Dynamic Classification
 
- [Source code][source_code] | [Package (PyPI)][ta_pypi] | [API reference documentation][ta_ref_docs] | [Product documentation][language_product_documentation] | [Samples][ta_samples]
+ [Source code][source_code]
+ | [Package (PyPI)][ta_pypi]
+ | [Package (Conda)](https://anaconda.org/microsoft/azure-ai-textanalytics/)
+ | [API reference documentation][ta_ref_docs]
+ | [Product documentation][language_product_documentation]
+ | [Samples][ta_samples]
 
  ## Getting started
 
@@ -57,30 +59,7 @@ The Azure Cognitive Service for Language is a cloud-based service that provides
 
  The Language service supports both [multi-service and single-service access][multi_and_single_service].
  Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Language service access only, create a Language service resource.
-
- You can create the resource using
-
- **Option 1:** [Azure Portal][azure_portal_create_ta_resource]
-
- **Option 2:** [Azure CLI][azure_cli_create_ta_resource].
- Below is an example of how you can create a Language service resource using the CLI:
-
- ```bash
- # Create a new resource group to hold the Language service resource -
- # if using an existing resource group, skip this step
- az group create --name my-resource-group --location westus2
- ```
-
- ```bash
- # Create text analytics
- az cognitiveservices account create \
-     --name text-analytics-resource \
-     --resource-group my-resource-group \
-     --kind TextAnalytics \
-     --sku F0 \
-     --location westus2 \
-     --yes
- ```
+ You can create the resource using the [Azure Portal][azure_portal_create_ta_resource] or [Azure CLI][azure_cli] following the steps in [this document][azure_cli_create_ta_resource].
 
  Interaction with the service using the client library begins with a [client](#textanalyticsclient "TextAnalyticsClient").
  To create a client object, you will need the Cognitive Services or Language service `endpoint` to
@@ -102,18 +81,34 @@ For example, `https://<region>.api.cognitive.microsoft.com/`.
  Install the Azure Text Analytics client library for Python with [pip][pip]:
 
  ```bash
- pip install azure-ai-textanalytics --pre
+ pip install azure-ai-textanalytics
  ```
 
+ <!-- SNIPPET:sample_authentication.create_ta_client_with_key -->
+
+ ```python
+ import os
+ from azure.core.credentials import AzureKeyCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+ key = os.environ["AZURE_TEXT_KEY"]
+
+ text_client = TextAnalysisClient(endpoint, AzureKeyCredential(key))
+ ```
+
+ <!-- END SNIPPET -->
+
  > Note that `5.2.X` and newer targets the Azure Cognitive Service for Language APIs. These APIs include the text analysis and natural language processing features found in the previous versions of the Text Analytics client library.
- In addition, the service API has changed from semantic to date-based versioning. This version of the client library defaults to the latest supported API version, which currently is `2022-10-01-preview`.
+ In addition, the service API has changed from semantic to date-based versioning. This version of the client library defaults to the latest supported API version, which currently is `2023-04-01`.
 
  This table shows the relationship between SDK versions and supported API versions of the service
 
  | SDK version | Supported API version of service |
  | ------------ | --------------------------------- |
- | 5.3.0b1 - Latest beta release | 3.0, 3.1, 2022-05-01, 2022-10-01-preview (default) |
- | 5.2.X - Latest stable release | 3.0, 3.1, 2022-05-01 (default) |
+ | 6.0.0b1 - Latest preview release | 3.0, 3.1, 2022-05-01, 2023-04-01, 2024-11-01, 2024-11-15-preview, 2025-05-15-preview (default) |
+ | 5.3.X - Latest stable release | 3.0, 3.1, 2022-05-01, 2023-04-01 (default) |
+ | 5.2.X | 3.0, 3.1, 2022-05-01 (default) |
  | 5.1.0 | 3.0, 3.1 (default) |
  | 5.0.0 | 3.0 |
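The defaults in the table above can be captured as a lookup for scripting version checks. The values are transcribed from the table; treat this as a sketch, not an authoritative mapping:

```python
# Default service API version per SDK release line, per the table above.
DEFAULT_API_VERSION = {
    "6.0.0b1": "2025-05-15-preview",
    "5.3": "2023-04-01",
    "5.2": "2022-05-01",
    "5.1.0": "3.1",
    "5.0.0": "3.0",
}

def default_api_version(sdk_version):
    # Try the exact version first, then fall back to a major.minor prefix
    # (the table lists 5.3.X / 5.2.X as release lines).
    if sdk_version in DEFAULT_API_VERSION:
        return DEFAULT_API_VERSION[sdk_version]
    major_minor = ".".join(sdk_version.split(".")[:2])
    return DEFAULT_API_VERSION.get(major_minor)

print(default_api_version("5.3.1"))  # 2023-04-01
```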
 
@@ -145,14 +140,21 @@ Alternatively, you can use [Azure CLI][azure_cli_endpoint_lookup] snippet below
  Once you have the value for the API key, you can pass it as a string into an instance of [AzureKeyCredential][azure-key-credential]. Use the key as the credential parameter
  to authenticate the client:
 
+ <!-- SNIPPET:sample_authentication.create_ta_client_with_key -->
+
  ```python
+ import os
  from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ from azure.ai.textanalytics import TextAnalysisClient
 
- credential = AzureKeyCredential("<api_key>")
- text_analytics_client = TextAnalyticsClient(endpoint="https://<resource-name>.cognitiveservices.azure.com/", credential=credential)
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+ key = os.environ["AZURE_TEXT_KEY"]
+
+ text_client = TextAnalysisClient(endpoint, AzureKeyCredential(key))
  ```
 
+ <!-- END SNIPPET -->
+
  #### Create a TextAnalyticsClient with an Azure Active Directory Credential
 
  To use an [Azure Active Directory (AAD) token credential][cognitive_authentication_aad],
@@ -176,14 +178,21 @@ AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET
 
  Use the returned token credential to authenticate the client:
 
+ <!-- SNIPPET:sample_authentication.create_ta_client_with_aad -->
+
  ```python
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os
+ from azure.ai.textanalytics import TextAnalysisClient
  from azure.identity import DefaultAzureCredential
 
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
  credential = DefaultAzureCredential()
- text_analytics_client = TextAnalyticsClient(endpoint="https://<resource-name>.cognitiveservices.azure.com/", credential=credential)
+
+ text_client = TextAnalysisClient(endpoint, credential=credential)
  ```
 
+ <!-- END SNIPPET -->
+
  ## Key concepts
 
  ### TextAnalyticsClient
@@ -254,7 +263,6 @@ for result in response:
      print(f"Document error: {result.code}, {result.message}")
  ```
 
-
  ### Long-Running Operations
 
  Long-running operations are operations which consist of an initial request sent to the service to start an operation,
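The begin/poll/result flow described above can be sketched independently of the Azure SDK. Every name below is hypothetical; the real `azure-core` poller adds polling strategies, continuation tokens, and error models on top of this basic loop:

```python
import time

class MiniPoller:
    """Toy long-running-operation poller: an initial request has already
    started the operation; check_status polls until a terminal state."""

    def __init__(self, check_status, interval=0.01):
        self._check_status = check_status  # returns (state, value)
        self._interval = interval

    def result(self):
        # Block until the operation reaches a terminal state.
        while True:
            state, value = self._check_status()
            if state == "succeeded":
                return value
            if state == "failed":
                raise RuntimeError("operation failed")
            time.sleep(self._interval)

# Simulated service: done after three polls.
states = iter([("running", None), ("running", None), ("succeeded", 42)])
poller = MiniPoller(lambda: next(states))
print(poller.result())  # 42
```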
@@ -283,221 +291,447 @@ The following section provides several code snippets covering some of the most c
  - [Custom Multi Label Classification][multi_label_classify_sample]
  - [Extractive Summarization][extract_summary_sample]
  - [Abstractive Summarization][abstract_summary_sample]
- - [Dynamic Classification][dynamic_classification_sample]
 
- ### Analyze sentiment
+ ### Analyze Sentiment
 
  [analyze_sentiment][analyze_sentiment] looks at its input text and determines whether its sentiment is positive, negative, neutral or mixed. Its response includes per-sentence sentiment analysis and confidence scores.
 
+ <!-- SNIPPET:sample_analyze_sentiment.analyze_sentiment -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os
 
- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextSentimentAnalysisInput,
+     AnalyzeTextSentimentResult,
+ )
 
- text_analytics_client = TextAnalyticsClient(endpoint, credential)
 
- documents = [
-     "I did not like the restaurant. The food was somehow both too spicy and underseasoned. Additionally, I thought the location was too far away from the playhouse.",
-     "The restaurant was decorated beautifully. The atmosphere was unlike any other restaurant I've been to.",
-     "The food was yummy. :)"
- ]
+ def sample_analyze_sentiment():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
 
- response = text_analytics_client.analyze_sentiment(documents, language="en")
- result = [doc for doc in response if not doc.is_error]
+     # input
+     text_a = (
+         "The food and service were unacceptable, but the concierge were nice. "
+         "After talking to them about the quality of the food and the process to get room service "
+         "they refunded the money we spent at the restaurant and gave us a voucher for nearby restaurants."
+     )
 
- for doc in result:
-     print(f"Overall sentiment: {doc.sentiment}")
-     print(
-         f"Scores: positive={doc.confidence_scores.positive}; "
-         f"neutral={doc.confidence_scores.neutral}; "
-         f"negative={doc.confidence_scores.negative}\n"
+     body = TextSentimentAnalysisInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         )
      )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextSentimentResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             print(f"Overall sentiment: {doc.sentiment}")
+             if doc.confidence_scores:
+                 print("Confidence scores:")
+                 print(f"  positive={doc.confidence_scores.positive}")
+                 print(f"  neutral={doc.confidence_scores.neutral}")
+                 print(f"  negative={doc.confidence_scores.negative}")
+
+             if doc.sentences:
+                 print("\nSentence sentiments:")
+                 for s in doc.sentences:
+                     print(f"  Text: {s.text}")
+                     print(f"  Sentiment: {s.sentiment}")
+                     if s.confidence_scores:
+                         print(
+                             "    Scores: "
+                             f"pos={s.confidence_scores.positive}, "
+                             f"neu={s.confidence_scores.neutral}, "
+                             f"neg={s.confidence_scores.negative}"
+                         )
+                     print(f"  Offset: {s.offset}, Length: {s.length}\n")
+             else:
+                 print("No sentence-level results returned.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```
 
+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[AnalyzeSentimentResult][analyze_sentiment_result], [DocumentError][document_error]]
 
  Please refer to the service documentation for a conceptual discussion of [sentiment analysis][sentiment_analysis]. To see how to conduct more granular analysis into the opinions related to individual aspects (such as attributes of a product or service) in a text, see [here][opinion_mining_sample].
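A naive way to reduce the per-label confidence scores shown in this sample to a single label is an arg-max; note this is only an illustration, since the service itself can also report `mixed`:

```python
def dominant_sentiment(scores):
    # scores: mapping like {"positive": 0.1, "neutral": 0.2, "negative": 0.7}
    return max(scores, key=scores.get)

print(dominant_sentiment({"positive": 0.05, "neutral": 0.15, "negative": 0.80}))  # negative
```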
 
- ### Recognize entities
+ ### Recognize Entities
 
  [recognize_entities][recognize_entities] recognizes and categorizes entities in its input text as people, places, organizations, date/time, quantities, percentages, currencies, and more.
 
+ <!-- SNIPPET:sample_recognize_entities.recognize_entities -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os
 
- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextEntityRecognitionInput,
+     EntitiesActionContent,
+     AnalyzeTextEntitiesResult,
+ )
 
- text_analytics_client = TextAnalyticsClient(endpoint, credential)
 
- documents = [
-     """
-     Microsoft was founded by Bill Gates and Paul Allen. Its headquarters are located in Redmond. Redmond is a
-     city in King County, Washington, United States, located 15 miles east of Seattle.
-     """,
-     "Jeff bought three dozen eggs because there was a 50% discount."
- ]
+ def sample_recognize_entities():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike! "
+         "Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was "
+         "amazing. Everyone in my family liked the trail although it was too challenging for the less "
+         "athletic among us. Not necessarily recommended for small children. A hotel close to the trail "
+         "offers services for childcare in case you want that."
+     )
 
- response = text_analytics_client.recognize_entities(documents, language="en")
- result = [doc for doc in response if not doc.is_error]
+     body = TextEntityRecognitionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=EntitiesActionContent(model_version="latest"),
+     )
 
- for doc in result:
-     for entity in doc.entities:
-         print(f"Entity: {entity.text}")
-         print(f"...Category: {entity.category}")
-         print(f"...Confidence Score: {entity.confidence_score}")
-         print(f"...Offset: {entity.offset}")
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextEntitiesResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if doc.entities:
+                 print("Entities:")
+                 for entity in doc.entities:
+                     print(f"  Text: {entity.text}")
+                     print(f"  Category: {entity.category}")
+                     if entity.subcategory:
+                         print(f"  Subcategory: {entity.subcategory}")
+                     print(f"  Offset: {entity.offset}")
+                     print(f"  Length: {entity.length}")
+                     print(f"  Confidence score: {entity.confidence_score}\n")
+             else:
+                 print("No entities found for this document.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```
 
+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[RecognizeEntitiesResult][recognize_entities_result], [DocumentError][document_error]]
 
  Please refer to the service documentation for a conceptual discussion of [named entity recognition][named_entity_recognition]
  and [supported types][named_entity_categories].
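The `Offset`/`Length` values printed by the sample above index into the document text; for plain ASCII text a direct slice recovers the span (for text with non-BMP characters the arithmetic depends on the request's string index type, so treat this as a sketch):

```python
def entity_span(text, offset, length):
    # Recover the substring an entity refers to from its offset/length.
    return text[offset : offset + length]

print(entity_span("Microsoft was founded by Bill Gates.", 0, 9))  # Microsoft
```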
 
360
- ### Recognize linked entities
+ ### Recognize Linked Entities
361
446
 
362
447
  [recognize_linked_entities][recognize_linked_entities] recognizes and disambiguates the identity of each entity found in its input text (for example,
  determining whether an occurrence of the word Mars refers to the planet, or to the
  Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.
365
450
 
451
+ <!-- SNIPPET:sample_recognize_linked_entities.recognize_linked_entities -->
452
+
366
453
  ```python
367
- from azure.core.credentials import AzureKeyCredential
368
- from azure.ai.textanalytics import TextAnalyticsClient
454
+ import os
369
455
 
370
- credential = AzureKeyCredential("<api_key>")
371
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
456
+ from azure.identity import DefaultAzureCredential
457
+ from azure.ai.textanalytics import TextAnalysisClient
458
+ from azure.ai.textanalytics.models import (
459
+ MultiLanguageTextInput,
460
+ MultiLanguageInput,
461
+ TextEntityLinkingInput,
462
+ EntityLinkingActionContent,
463
+ AnalyzeTextEntityLinkingResult,
464
+ )
372
465
 
373
- text_analytics_client = TextAnalyticsClient(endpoint, credential)
374
466
 
375
- documents = [
376
- "Microsoft was founded by Bill Gates and Paul Allen. Its headquarters are located in Redmond.",
377
- "Easter Island, a Chilean territory, is a remote volcanic island in Polynesia."
378
- ]
467
+ def sample_recognize_linked_entities():
468
+ # settings
469
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
470
+ credential = DefaultAzureCredential()
471
+
472
+ client = TextAnalysisClient(endpoint, credential=credential)
379
473
 
380
- response = text_analytics_client.recognize_linked_entities(documents, language="en")
381
- result = [doc for doc in response if not doc.is_error]
382
-
383
- for doc in result:
384
- for entity in doc.entities:
385
- print(f"Entity: {entity.name}")
386
- print(f"...URL: {entity.url}")
387
- print(f"...Data Source: {entity.data_source}")
388
- print("...Entity matches:")
389
- for match in entity.matches:
390
- print(f"......Entity match text: {match.text}")
391
- print(f"......Confidence Score: {match.confidence_score}")
392
-         print(f"......Offset: {match.offset}")
+     # input
+     text_a = (
+         "Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends, Steve "
+         "Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped down as "
+         "CEO of Microsoft, and was succeeded by Satya Nadella. Microsoft originally moved its headquarters "
+         "to Bellevue, Washington in January 1979, but is now headquartered in Redmond"
+     )
+
+     body = TextEntityLinkingInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=EntityLinkingActionContent(model_version="latest"),
+     )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextEntityLinkingResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if not doc.entities:
+                 print("No linked entities found for this document.")
+                 continue
+
+             print("Linked Entities:")
+             for linked in doc.entities:
+                 print(f"  Name: {linked.name}")
+                 print(f"  Language: {linked.language}")
+                 print(f"  Data source: {linked.data_source}")
+                 print(f"  URL: {linked.url}")
+                 print(f"  ID: {linked.id}")
+
+                 if linked.matches:
+                     print("  Matches:")
+                     for match in linked.matches:
+                         print(f"    Text: {match.text}")
+                         print(f"    Confidence score: {match.confidence_score}")
+                         print(f"    Offset: {match.offset}")
+                         print(f"    Length: {match.length}")
+                         print()
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[RecognizeLinkedEntitiesResult][recognize_linked_entities_result], [DocumentError][document_error]]

  Please refer to the service documentation for a conceptual discussion of [entity linking][linked_entity_recognition]
  and [supported types][linked_entities_categories].

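The `is_error` flag is what lets callers split that heterogeneous list. A minimal sketch of the pattern with stand-in classes (only `id` and `is_error` mirror the SDK types; the class bodies are illustrative, and nothing here calls the service):

```python
from dataclasses import dataclass, field

@dataclass
class RecognizeLinkedEntitiesResult:  # stand-in for the SDK result type
    id: str
    entities: list = field(default_factory=list)
    is_error: bool = False

@dataclass
class DocumentError:  # stand-in for the SDK error type
    id: str
    error: str = "InvalidDocument"
    is_error: bool = True

# A response mixing one result and one error, in input-document order
response = [
    RecognizeLinkedEntitiesResult(id="0", entities=["Microsoft"]),
    DocumentError(id="1"),
]

# Order is preserved, so results still line up with the input documents
results = [doc for doc in response if not doc.is_error]
errors = [doc for doc in response if doc.is_error]
print([doc.id for doc in results])  # ['0']
print([doc.id for doc in errors])   # ['1']
```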
- ### Recognize PII entities
+ ### Recognize PII Entities

  [recognize_pii_entities][recognize_pii_entities] recognizes and categorizes Personally Identifiable Information (PII) entities in its input text, such as
  Social Security Numbers, bank account information, credit card numbers, and more.

+ <!-- SNIPPET:sample_recognize_pii_entities.recognize_pii_entities -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os

- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextPiiEntitiesRecognitionInput,
+     AnalyzeTextPiiResult,
+ )

- text_analytics_client = TextAnalyticsClient(endpoint, credential)

- documents = [
-     """
-     We have an employee called Parker who cleans up after customers. The employee's
-     SSN is 859-98-0987, and their phone number is 555-555-5555.
-     """
- ]
- response = text_analytics_client.recognize_pii_entities(documents, language="en")
- result = [doc for doc in response if not doc.is_error]
- for idx, doc in enumerate(result):
-     print(f"Document text: {documents[idx]}")
-     print(f"Redacted document text: {doc.redacted_text}")
-     for entity in doc.entities:
-         print(f"...Entity: {entity.text}")
-         print(f"......Category: {entity.category}")
-         print(f"......Confidence Score: {entity.confidence_score}")
-         print(f"......Offset: {entity.offset}")
+ def sample_recognize_pii_entities():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "Parker Doe has repaid all of their loans as of 2020-04-25. Their SSN is 859-98-0987. "
+         "To contact them, use their phone number 800-102-1100. They are originally from Brazil and "
+         "have document ID number 998.214.865-68."
+     )
+
+     body = TextPiiEntitiesRecognitionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         )
+     )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextPiiResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if doc.entities:
+                 print("PII Entities:")
+                 for entity in doc.entities:
+                     print(f"  Text: {entity.text}")
+                     print(f"  Category: {entity.category}")
+                     # subcategory may be optional
+                     if entity.subcategory:
+                         print(f"  Subcategory: {entity.subcategory}")
+                     print(f"  Offset: {entity.offset}")
+                     print(f"  Length: {entity.length}")
+                     print(f"  Confidence score: {entity.confidence_score}\n")
+             else:
+                 print("No PII entities found for this document.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[RecognizePiiEntitiesResult][recognize_pii_entities_result], [DocumentError][document_error]]

  Please refer to the service documentation for [supported PII entity types][pii_entity_categories].

  Note: The Recognize PII Entities service is available in API version v3.1 and newer.

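The `offset` and `length` fields printed above are character spans into the input, which is enough to redact PII locally. A small sketch over hypothetical entity data (plain dicts standing in for the SDK entity objects; the field names mirror the attributes printed in the snippet, and nothing here calls the service):

```python
text = "Their SSN is 859-98-0987."

# Hypothetical recognized entities; offset/length are character spans
entities = [{"offset": 13, "length": 11, "category": "USSocialSecurityNumber"}]

# Replace each span with '*', working right-to-left so that earlier
# offsets remain valid after every substitution
redacted = text
for ent in sorted(entities, key=lambda e: e["offset"], reverse=True):
    start, end = ent["offset"], ent["offset"] + ent["length"]
    redacted = redacted[:start] + "*" * ent["length"] + redacted[end:]

print(redacted)  # Their SSN is ***********.
```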
- ### Extract key phrases
+ ### Extract Key Phrases

  [extract_key_phrases][extract_key_phrases] determines the main talking points in its input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns: "food" and "wonderful staff".

+ <!-- SNIPPET:sample_extract_key_phrases.extract_key_phrases -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os

- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextKeyPhraseExtractionInput,
+     KeyPhraseActionContent,
+     AnalyzeTextKeyPhraseResult,
+ )

- text_analytics_client = TextAnalyticsClient(endpoint, credential)

- documents = [
-     "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
-     """
-     I need to take my cat to the veterinarian. He has been sick recently, and I need to take him
-     before I travel to South America for the summer.
-     """,
- ]
+ def sample_extract_key_phrases():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)

- response = text_analytics_client.extract_key_phrases(documents, language="en")
- result = [doc for doc in response if not doc.is_error]
+     # Build input
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike! "
+         "Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was "
+         "amazing. Everyone in my family liked the trail although it was too challenging for the less "
+         "athletic among us. Not necessarily recommended for small children. A hotel close to the trail "
+         "offers services for childcare in case you want that."
+     )
+
+     body = TextKeyPhraseExtractionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=KeyPhraseActionContent(model_version="latest"),
+     )

- for doc in result:
-     print(doc.key_phrases)
+     result = client.analyze_text(body=body)
+
+     # Validate and print results
+     if not isinstance(result, AnalyzeTextKeyPhraseResult):
+         print("Unexpected result type.")
+         return
+
+     if result.results is None:
+         print("No results returned.")
+         return
+
+     if result.results.documents is None or len(result.results.documents) == 0:
+         print("No documents in the response.")
+         return
+
+     for doc in result.results.documents:
+         print(f"\nDocument ID: {doc.id}")
+         if doc.key_phrases:
+             print("Key Phrases:")
+             for phrase in doc.key_phrases:
+                 print(f"  - {phrase}")
+         else:
+             print("No key phrases found for this document.")
  ```

+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[ExtractKeyPhrasesResult][extract_key_phrases_result], [DocumentError][document_error]]

  Please refer to the service documentation for a conceptual discussion of [key phrase extraction][key_phrase_extraction].

- ### Detect language
+ ### Detect Language

  [detect_language][detect_language] determines the language of its input text, including the confidence score of the predicted language.

+ <!-- SNIPPET:sample_detect_language.detect_language -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os

- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     TextLanguageDetectionInput,
+     LanguageDetectionTextInput,
+     LanguageInput,
+     AnalyzeTextLanguageDetectionResult,
+ )

- text_analytics_client = TextAnalyticsClient(endpoint, credential)

- documents = [
-     """
-     This whole document is written in English. In order for the whole document to be written
-     in English, every sentence also has to be written in English, which it is.
-     """,
-     "Il documento scritto in italiano.",
-     "Dies ist in deutsche Sprache verfasst."
- ]
+ def sample_detect_language():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()

- response = text_analytics_client.detect_language(documents)
- result = [doc for doc in response if not doc.is_error]
+     client = TextAnalysisClient(endpoint, credential=credential)

- for doc in result:
-     print(f"Language detected: {doc.primary_language.name}")
-     print(f"ISO6391 name: {doc.primary_language.iso6391_name}")
-     print(f"Confidence score: {doc.primary_language.confidence_score}\n")
+     # Build input
+     text_a = (
+         "Sentences in different languages."
+     )
+
+     body = TextLanguageDetectionInput(
+         text_input=LanguageDetectionTextInput(
+             language_inputs=[LanguageInput(id="A", text=text_a)]
+         )
+     )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Validate and print results
+     if not isinstance(result, AnalyzeTextLanguageDetectionResult):
+         print("Unexpected result type.")
+         return
+
+     if not result.results or not result.results.documents:
+         print("No documents in the response.")
+         return
+
+     for doc in result.results.documents:
+         print(f"\nDocument ID: {doc.id}")
+         if doc.detected_language:
+             dl = doc.detected_language
+             print(f"Detected language: {dl.name} ({dl.iso6391_name})")
+             print(f"Confidence score: {dl.confidence_score}")
+         else:
+             print("No detected language returned for this document.")
  ```

+ <!-- END SNIPPET -->
+
  The returned response is a heterogeneous list of result and error objects: list[[DetectLanguageResult][detect_language_result], [DocumentError][document_error]]

  Please refer to the service documentation for a conceptual discussion of [language detection][language_detection]
@@ -507,48 +741,121 @@ and [language and regional support][language_and_regional_support].

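Since each detection carries a `confidence_score`, a common follow-up is to act on a predicted language only above some cutoff. A sketch over hypothetical detection data (plain dicts mirroring the `detected_language` fields printed in the snippet above; the 0.8 threshold is an arbitrary example value, and nothing here calls the service):

```python
detections = [
    {"id": "A", "name": "English", "iso6391_name": "en", "confidence_score": 0.99},
    {"id": "B", "name": "Spanish", "iso6391_name": "es", "confidence_score": 0.43},
]

THRESHOLD = 0.8  # arbitrary cutoff; tune per application
accepted = []
for d in detections:
    if d["confidence_score"] >= THRESHOLD:
        accepted.append((d["id"], d["iso6391_name"]))
    else:
        # Low-confidence detections can be routed for manual review instead
        print(f'{d["id"]}: low confidence ({d["confidence_score"]}), flag for review')

print(accepted)  # [('A', 'en')]
```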
  [Long-running operation](#long-running-operations) [begin_analyze_healthcare_entities][analyze_healthcare_entities] extracts entities recognized within the healthcare domain, and identifies relationships between entities within the input document and links to known sources of information in various well known databases, such as UMLS, CHV, MSH, etc.

+ <!-- SNIPPET:sample_analyze_healthcare_entities.analyze_healthcare_entities -->
+
  ```python
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
+ import os

- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"
-
- text_analytics_client = TextAnalyticsClient(endpoint, credential)
-
- documents = ["Subject is taking 100mg of ibuprofen twice daily"]
-
- poller = text_analytics_client.begin_analyze_healthcare_entities(documents)
- result = poller.result()
-
- docs = [doc for doc in result if not doc.is_error]
-
- print("Results of Healthcare Entities Analysis:")
- for idx, doc in enumerate(docs):
-     for entity in doc.entities:
-         print(f"Entity: {entity.text}")
-         print(f"...Normalized Text: {entity.normalized_text}")
-         print(f"...Category: {entity.category}")
-         print(f"...Subcategory: {entity.subcategory}")
-         print(f"...Offset: {entity.offset}")
-         print(f"...Confidence score: {entity.confidence_score}")
-         if entity.data_sources is not None:
-             print("...Data Sources:")
-             for data_source in entity.data_sources:
-                 print(f"......Entity ID: {data_source.entity_id}")
-                 print(f"......Name: {data_source.name}")
-         if entity.assertion is not None:
-             print("...Assertion:")
-             print(f"......Conditionality: {entity.assertion.conditionality}")
-             print(f"......Certainty: {entity.assertion.certainty}")
-             print(f"......Association: {entity.assertion.association}")
-     for relation in doc.entity_relations:
-         print(f"Relation of type: {relation.relation_type} has the following roles")
-         for role in relation.roles:
-             print(f"...Role '{role.name}' with entity '{role.entity.text}'")
-     print("------------------------------------------")
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     AnalyzeTextOperationAction,
+     HealthcareLROTask,
+     HealthcareLROResult,
+ )
+
+
+ def sample_analyze_healthcare_entities():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # Build input
+     text_a = "Prescribed 100mg ibuprofen, taken twice daily."
+
+     text_input = MultiLanguageTextInput(
+         multi_language_inputs=[
+             MultiLanguageInput(id="A", text=text_a, language="en"),
+         ]
+     )
+
+     actions: list[AnalyzeTextOperationAction] = [
+         HealthcareLROTask(
+             name="Healthcare Operation",
+         ),
+     ]
+
+     # Start long-running operation (sync) – poller returns ItemPaged[TextActions]
+     poller = client.begin_analyze_text_job(
+         text_input=text_input,
+         actions=actions,
+     )
+
+     # Operation metadata (pre-final)
+     print(f"Operation ID: {poller.details.get('operation_id')}")
+
+     # Wait for completion and get pageable of TextActions
+     paged_actions = poller.result()
+
+     # Final-state metadata
+     d = poller.details
+     print(f"Job ID: {d.get('job_id')}")
+     print(f"Status: {d.get('status')}")
+     print(f"Created: {d.get('created_date_time')}")
+     print(f"Last Updated: {d.get('last_updated_date_time')}")
+     if d.get("expiration_date_time"):
+         print(f"Expires: {d.get('expiration_date_time')}")
+     if d.get("display_name"):
+         print(f"Display Name: {d.get('display_name')}")
+
+     # Iterate results (sync pageable)
+     for actions_page in paged_actions:
+         print(
+             f"Completed: {actions_page.completed}, "
+             f"In Progress: {actions_page.in_progress}, "
+             f"Failed: {actions_page.failed}, "
+             f"Total: {actions_page.total}"
+         )
+
+         for op_result in actions_page.items_property or []:
+             if isinstance(op_result, HealthcareLROResult):
+                 print(f"\nAction Name: {op_result.task_name}")
+                 print(f"Action Status: {op_result.status}")
+                 print(f"Kind: {op_result.kind}")
+
+                 hc_result = op_result.results
+                 for doc in (hc_result.documents or []):
+                     print(f"\nDocument ID: {doc.id}")
+
+                     # Entities
+                     print("Entities:")
+                     for entity in (doc.entities or []):
+                         print(f"  Text: {entity.text}")
+                         print(f"  Category: {entity.category}")
+                         print(f"  Offset: {entity.offset}")
+                         print(f"  Length: {entity.length}")
+                         print(f"  Confidence score: {entity.confidence_score}")
+                         if entity.links:
+                             for link in entity.links:
+                                 print(f"  Link ID: {link.id}")
+                                 print(f"  Data source: {link.data_source}")
+                         print()
+
+                     # Relations
+                     print("Relations:")
+                     for relation in (doc.relations or []):
+                         print(f"  Relation type: {relation.relation_type}")
+                         for rel_entity in (relation.entities or []):
+                             print(f"    Role: {rel_entity.role}")
+                             print(f"    Ref: {rel_entity.ref}")
+                         print()
+             else:
+                 # Other action kinds, if present
+                 try:
+                     print(
+                         f"\n[Non-healthcare action] name={op_result.task_name}, "
+                         f"status={op_result.status}, kind={op_result.kind}"
+                     )
+                 except Exception:
+                     print("\n[Non-healthcare action present]")
  ```

+ <!-- END SNIPPET -->
+
  Note: Healthcare Entities Analysis is only available with API version v3.1 and newer.

  ### Multiple Analysis
@@ -564,60 +871,121 @@ Note: Healthcare Entities Analysis is only available with API version v3.1 and n
  - Custom Single Label Classification (API version 2022-05-01 and newer)
  - Custom Multi Label Classification (API version 2022-05-01 and newer)
  - Healthcare Entities Analysis (API version 2022-05-01 and newer)
- - Extractive Summarization (API version 2022-10-01-preview and newer)
- - Abstractive Summarization (API version 2022-10-01-preview and newer)
+ - Extractive Summarization (API version 2023-04-01 and newer)
+ - Abstractive Summarization (API version 2023-04-01 and newer)
+
+ <!-- SNIPPET:sample_analyze_actions.analyze -->

  ```python
+ import os
+
+ from azure.identity import DefaultAzureCredential
  from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import (
-     TextAnalyticsClient,
-     RecognizeEntitiesAction,
-     AnalyzeSentimentAction,
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     EntitiesLROTask,
+     KeyPhraseLROTask,
+     EntityRecognitionOperationResult,
+     KeyPhraseExtractionOperationResult,
+     EntityTag,
  )

- credential = AzureKeyCredential("<api_key>")
- endpoint="https://<resource-name>.cognitiveservices.azure.com/"

- text_analytics_client = TextAnalyticsClient(endpoint, credential)
+ def sample_analyze():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike!"
+         " Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was"
+         " amazing. Everyone in my family liked the trail although it was too challenging for the less"
+         " athletic among us. Not necessarily recommended for small children. A hotel close to the trail"
+         " offers services for childcare in case you want that."
+     )
+
+     text_b = (
+         "Sentences in different languages."
+     )
+
+     text_c = (
+         "That was the best day of my life! We went on a 4 day trip where we stayed at Hotel Foo. They had"
+         " great amenities that included an indoor pool, a spa, and a bar. The spa offered couples massages"
+         " which were really good. The spa was clean and felt very peaceful. Overall the whole experience was"
+         " great. We will definitely come back."
+     )
+
+     text_d = ""

- documents = ["Microsoft was founded by Bill Gates and Paul Allen."]
+     # Prepare documents (you can batch multiple docs)
+     text_input = MultiLanguageTextInput(
+         multi_language_inputs=[
+             MultiLanguageInput(id="A", text=text_a, language="en"),
+             MultiLanguageInput(id="B", text=text_b, language="es"),
+             MultiLanguageInput(id="C", text=text_c, language="en"),
+             MultiLanguageInput(id="D", text=text_d),
+         ]
+     )

- poller = text_analytics_client.begin_analyze_actions(
-     documents,
-     display_name="Sample Text Analysis",
-     actions=[
-         RecognizeEntitiesAction(),
-         AnalyzeSentimentAction()
+     actions = [
+         EntitiesLROTask(name="EntitiesOperationActionSample"),
+         KeyPhraseLROTask(name="KeyPhraseOperationActionSample"),
      ]
- )

- # returns multiple actions results in the same order as the inputted actions
- document_results = poller.result()
- for doc, action_results in zip(documents, document_results):
-     print(f"\nDocument text: {doc}")
-     for result in action_results:
-         if result.kind == "EntityRecognition":
-             print("...Results of Recognize Entities Action:")
-             for entity in result.entities:
-                 print(f"......Entity: {entity.text}")
-                 print(f".........Category: {entity.category}")
-                 print(f".........Confidence Score: {entity.confidence_score}")
-                 print(f".........Offset: {entity.offset}")
-
-         elif result.kind == "SentimentAnalysis":
-             print("...Results of Analyze Sentiment action:")
-             print(f"......Overall sentiment: {result.sentiment}")
-             print(f"......Scores: positive={result.confidence_scores.positive}; "
-                   f"neutral={result.confidence_scores.neutral}; "
-                   f"negative={result.confidence_scores.negative}\n")
-
-         elif result.is_error is True:
-             print(f"......Is an error with code '{result.code}' "
-                   f"and message '{result.message}'")
-
-     print("------------------------------------------")
+     # Submit a multi-action analysis job (LRO)
+     poller = client.begin_analyze_text_job(text_input=text_input, actions=actions)
+     paged_actions = poller.result()
+
+     # Iterate through each action's results
+     for action_result in paged_actions:
+         print()  # spacing between action blocks
+
+         # --- Entities ---
+         if isinstance(action_result, EntityRecognitionOperationResult):
+             print("=== Entity Recognition Results ===")
+             for ent_doc in action_result.results.documents:
+                 print(f'Result for document with Id = "{ent_doc.id}":')
+                 print(f"  Recognized {len(ent_doc.entities)} entities:")
+                 for entity in ent_doc.entities:
+                     print(f"  Text: {entity.text}")
+                     print(f"  Offset: {entity.offset}")
+                     print(f"  Length: {entity.length}")
+                     print(f"  Category: {entity.category}")
+                     if hasattr(entity, "type") and entity.type is not None:
+                         print(f"  Type: {entity.type}")
+                     if hasattr(entity, "subcategory") and entity.subcategory:
+                         print(f"  Subcategory: {entity.subcategory}")
+                     if hasattr(entity, "tags") and entity.tags:
+                         print("  Tags:")
+                         for tag in entity.tags:
+                             if isinstance(tag, EntityTag):
+                                 print(f"    TagName: {tag.name}")
+                                 print(f"    TagConfidenceScore: {tag.confidence_score}")
+                     print(f"  Confidence score: {entity.confidence_score}")
+                     print()
+             for err in action_result.results.errors:
+                 print(f'  Error in document: {err.id}!')
+                 print(f"  Document error: {err.error}")
+
+         # --- Key Phrases ---
+         elif isinstance(action_result, KeyPhraseExtractionOperationResult):
+             print("=== Key Phrase Extraction Results ===")
+             for kp_doc in action_result.results.documents:
+                 print(f'Result for document with Id = "{kp_doc.id}":')
+                 for kp in kp_doc.key_phrases:
+                     print(f"  {kp}")
+                 print()
+             for err in action_result.results.errors:
+                 print(f'  Error in document: {err.id}!')
+                 print(f"  Document error: {err.error}")
  ```

+ <!-- END SNIPPET -->
+
  The returned response is an object encapsulating multiple iterables, each representing results of individual analyses.

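Because the pageable yields one result object per submitted action, calling code typically dispatches on the concrete result type, as the snippet does with `isinstance`. The shape of that dispatch, reduced to stand-in classes (the real types are `EntityRecognitionOperationResult` and `KeyPhraseExtractionOperationResult`; these minimal classes are illustrative only, and nothing here calls the service):

```python
class EntityResult:  # stand-in for EntityRecognitionOperationResult
    def __init__(self, entities):
        self.entities = entities

class KeyPhraseResult:  # stand-in for KeyPhraseExtractionOperationResult
    def __init__(self, key_phrases):
        self.key_phrases = key_phrases

# One result object per submitted action
paged_actions = [EntityResult(["Bill Gates"]), KeyPhraseResult(["trail", "views"])]

# Dispatch on the result's concrete type
for action_result in paged_actions:
    if isinstance(action_result, EntityResult):
        print("entities:", action_result.entities)
    elif isinstance(action_result, KeyPhraseResult):
        print("key phrases:", action_result.key_phrases)
```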
623
991
  Note: Multiple analysis is available in API version v3.1 and newer.
@@ -698,15 +1066,10 @@ Common scenarios
698
1066
  - Custom Multi Label Classification: [sample_multi_label_classify.py][multi_label_classify_sample] ([async_version][multi_label_classify_sample_async])
699
1067
  - Extractive text summarization: [sample_extract_summary.py][extract_summary_sample] ([async version][extract_summary_sample_async])
700
1068
  - Abstractive text summarization: [sample_abstract_summary.py][abstract_summary_sample] ([async version][abstract_summary_sample_async])
701
- - Dynamic Classification: [sample_dynamic_classification.py][dynamic_classification_sample] ([async_version][dynamic_classification_sample_async])
702
-
703
- Advanced scenarios
704
-
705
- - Opinion Mining: [sample_analyze_sentiment_with_opinion_mining.py][opinion_mining_sample] ([async_version][opinion_mining_sample_async])
706
1069
 
707
1070
  ### Additional documentation
708
1071
 
709
- For more extensive documentation on Azure Cognitive Service for Language, see the [Language Service documentation][language_product_documentation] on docs.microsoft.com.
1072
+ For more extensive documentation on Azure Cognitive Service for Language, see the [Language Service documentation][language_product_documentation] on learn.microsoft.com.
710
1073
 
711
1074
  ## Contributing
712
1075
 
@@ -718,27 +1081,28 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
718
1081
 
719
1082
  <!-- LINKS -->
720
1083
 
721
- [source_code]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/azure/ai/textanalytics
1084
+ [source_code]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-textanalytics/azure/ai/textanalytics
722
1085
  [ta_pypi]: https://pypi.org/project/azure-ai-textanalytics/
723
1086
  [ta_ref_docs]: https://aka.ms/azsdk-python-textanalytics-ref-docs
724
- [ta_samples]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples
725
- [language_product_documentation]: https://docs.microsoft.com/azure/cognitive-services/language-service
1087
+ [ta_samples]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples
1088
+ [language_product_documentation]: https://learn.microsoft.com/azure/cognitive-services/language-service
726
1089
  [azure_subscription]: https://azure.microsoft.com/free/
727
- [ta_or_cs_resource]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
1090
+ [ta_or_cs_resource]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
728
1091
  [pip]: https://pypi.org/project/pip/
729
1092
  [azure_portal_create_ta_resource]: https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics
730
- [azure_cli_create_ta_resource]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows
731
- [multi_and_single_service]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
732
- [azure_cli_endpoint_lookup]: https://docs.microsoft.com/cli/azure/cognitiveservices/account?view=azure-cli-latest#az-cognitiveservices-account-show
733
- [azure_portal_get_endpoint]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
734
- [cognitive_authentication]: https://docs.microsoft.com/azure/cognitive-services/authentication
735
- [cognitive_authentication_api_key]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
1093
+ [azure_cli]: https://learn.microsoft.com/cli/azure
1094
+ [azure_cli_create_ta_resource]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli
1095
+ [multi_and_single_service]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
1096
+ [azure_cli_endpoint_lookup]: https://learn.microsoft.com/cli/azure/cognitiveservices/account?view=azure-cli-latest#az-cognitiveservices-account-show
1097
+ [azure_portal_get_endpoint]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
1098
+ [cognitive_authentication]: https://learn.microsoft.com/azure/cognitive-services/authentication
1099
+ [cognitive_authentication_api_key]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
736
1100
  [install_azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#install-the-package
737
- [register_aad_app]: https://docs.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
738
- [grant_role_access]: https://docs.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
739
- [cognitive_custom_subdomain]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains
740
- [custom_subdomain]: https://docs.microsoft.com/azure/cognitive-services/authentication#create-a-resource-with-a-custom-subdomain
741
- [cognitive_authentication_aad]: https://docs.microsoft.com/azure/cognitive-services/authentication#authenticate-with-azure-active-directory
1101
+ [register_aad_app]: https://learn.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
1102
+ [grant_role_access]: https://learn.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
1103
+ [cognitive_custom_subdomain]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains
1104
+ [custom_subdomain]: https://learn.microsoft.com/azure/cognitive-services/authentication#create-a-resource-with-a-custom-subdomain
1105
+ [cognitive_authentication_aad]: https://learn.microsoft.com/azure/cognitive-services/authentication#authenticate-with-azure-active-directory
742
1106
  [azure_identity_credentials]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#credentials
743
1107
  [default_azure_credential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential
744
1108
  [service_limits]: https://aka.ms/azsdk/textanalytics/data-limits
@@ -761,59 +1125,131 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
  [recognize_linked_entities]: https://aka.ms/azsdk-python-textanalytics-recognizelinkedentities
  [extract_key_phrases]: https://aka.ms/azsdk-python-textanalytics-extractkeyphrases
  [detect_language]: https://aka.ms/azsdk-python-textanalytics-detectlanguage
- [language_detection]: https://docs.microsoft.com/azure/cognitive-services/language-service/language-detection/overview
- [language_and_regional_support]: https://docs.microsoft.com/azure/cognitive-services/language-service/language-detection/language-support
- [sentiment_analysis]: https://docs.microsoft.com/azure/cognitive-services/language-service/sentiment-opinion-mining/overview
- [key_phrase_extraction]: https://docs.microsoft.com/azure/cognitive-services/language-service/key-phrase-extraction/overview
+ [language_detection]: https://learn.microsoft.com/azure/cognitive-services/language-service/language-detection/overview
+ [language_and_regional_support]: https://learn.microsoft.com/azure/cognitive-services/language-service/language-detection/language-support
+ [sentiment_analysis]: https://learn.microsoft.com/azure/cognitive-services/language-service/sentiment-opinion-mining/overview
+ [key_phrase_extraction]: https://learn.microsoft.com/azure/cognitive-services/language-service/key-phrase-extraction/overview
  [linked_entities_categories]: https://aka.ms/taner
- [linked_entity_recognition]: https://docs.microsoft.com/azure/cognitive-services/language-service/entity-linking/overview
- [pii_entity_categories]: https://aka.ms/tanerpii
- [named_entity_recognition]: https://docs.microsoft.com/azure/cognitive-services/language-service/named-entity-recognition/overview
+ [linked_entity_recognition]: https://learn.microsoft.com/azure/cognitive-services/language-service/entity-linking/overview
+ [pii_entity_categories]: https://aka.ms/azsdk/language/pii
+ [named_entity_recognition]: https://learn.microsoft.com/azure/cognitive-services/language-service/named-entity-recognition/overview
  [named_entity_categories]: https://aka.ms/taner
  [azure_core_ref_docs]: https://aka.ms/azsdk-python-core-policies
  [azure_core]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md
  [azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity
  [python_logging]: https://docs.python.org/3/library/logging.html
- [sample_authentication]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py
- [sample_authentication_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py
- [detect_language_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py
- [detect_language_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py
- [analyze_sentiment_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py
- [analyze_sentiment_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py
- [extract_key_phrases_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
- [extract_key_phrases_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py
- [recognize_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py
- [recognize_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py
- [recognize_linked_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py
- [recognize_linked_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py
- [recognize_pii_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py
- [recognize_pii_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py
- [analyze_healthcare_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_healthcare_entities.py
- [analyze_healthcare_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_healthcare_entities_async.py
- [analyze_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_actions.py
- [analyze_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_actions_async.py
- [opinion_mining_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment_with_opinion_mining.py
- [opinion_mining_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_with_opinion_mining_async.py
- [recognize_custom_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py
- [recognize_custom_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_custom_entities_async.py
- [single_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py
- [single_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_single_label_classify_async.py
- [multi_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py
- [multi_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_multi_label_classify_async.py
- [healthcare_action_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_healthcare_action.py
- [extract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py
- [extract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_summary_async.py
- [abstract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_abstract_summary.py
- [abstract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_abstract_summary_async.py
- [dynamic_classification_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_dynamic_classification.py
- [dynamic_classification_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_dynamic_classification_async.py
+ [sample_authentication]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_authentication.py
+ [sample_authentication_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py
+ [detect_language_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_detect_language.py
+ [detect_language_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py
+ [analyze_sentiment_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_sentiment.py
+ [analyze_sentiment_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py
+ [extract_key_phrases_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
+ [extract_key_phrases_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py
+ [recognize_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_entities.py
+ [recognize_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py
+ [recognize_linked_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py
+ [recognize_linked_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py
+ [recognize_pii_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py
+ [recognize_pii_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py
+ [analyze_healthcare_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_healthcare_entities.py
+ [analyze_healthcare_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_healthcare_entities_async.py
+ [analyze_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_actions.py
+ [analyze_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_actions_async.py
+ [recognize_custom_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py
+ [recognize_custom_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_custom_entities_async.py
+ [single_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_single_label_classify.py
+ [single_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_single_label_classify_async.py
+ [multi_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_multi_label_classify.py
+ [multi_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_multi_label_classify_async.py
+ [healthcare_action_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_healthcare_action.py
+ [extract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_extract_summary.py
+ [extract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_extract_summary_async.py
+ [abstract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_abstract_summary.py
+ [abstract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_abstract_summary_async.py
  [cla]: https://cla.microsoft.com
  [code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
  [coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
  [coc_contact]: mailto:opencode@microsoft.com
+ # Release History

+ ## 6.0.0b1 (2025-09-11)

- # Release History
+ This version of the client library defaults to the service API version `2025-05-15-preview`.
+
+ ### Features Added
+
+ - Added Value Exclusion, synonyms, and new entity types to the detection of Personally Identifiable Information (PII).
+
+ ### Breaking Changes
+
+ - Removed `begin_abstract_summary` for abstractive text summarization; added function `begin_analyze_text_job` with `AbstractiveSummarizationOperationAction` for this purpose.
+ - Removed `begin_analyze_healthcare_entities` for analyzing healthcare entities; added function `begin_analyze_text_job` with `HealthcareLROTask` for this purpose.
+ - Removed `analyze_sentiment` for analyzing sentiment; added function `analyze_text` with `TextSentimentAnalysisInput` for this purpose.
+ - Removed `detect_language` for detecting language; added function `analyze_text` with `LanguageDetectionTextInput` for this purpose.
+ - Removed `extract_key_phrases` for extracting key phrases; added function `analyze_text` with `TextKeyPhraseExtractionInput` for this purpose.
+ - Removed `begin_multi_label_classify` for classifying documents into multiple custom categories; added function `begin_analyze_text_job` with `CustomMultiLabelClassificationActionContent` for this purpose.
+ - Removed `begin_recognize_custom_entities` for recognizing custom entities in documents; added function `begin_analyze_text_job` with `CustomEntitiesLROTask` for this purpose.
+ - Removed `recognize_entities` for recognizing named entities in a batch of documents; added function `analyze_text` with `TextEntityRecognitionInput` for this purpose.
+ - Removed `recognize_linked_entities` for detecting linked entities in a batch of documents; added function `analyze_text` with `TextEntityLinkingInput` for this purpose.
+ - Removed `recognize_pii_entities` for recognizing personally identifiable information in a batch of documents; added function `analyze_text` with `TextPiiEntitiesRecognitionInput` for this purpose.
+ - Removed `begin_single_label_classify` for classifying documents into a single custom category; added function `begin_analyze_text_job` with `CustomSingleLabelClassificationOperationAction` for this purpose.
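The bullets above all follow one pattern: each removed convenience method maps to a single entry point (`analyze_text` for single-request actions, `begin_analyze_text_job` for long-running jobs) parameterized by an action-specific input type. As an untested illustration of that shape for sentiment analysis: only `analyze_text` and `TextSentimentAnalysisInput` come from the notes above; the client class and the document-wrapper models shown here (`TextAnalysisClient`, `MultiLanguageTextInput`, `MultiLanguageInput`) are assumptions, and the real constructor parameters may differ.

```python
# Hypothetical 5.x -> 6.x migration sketch. Names not listed in the
# release notes (client class, wrapper models) are assumptions.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalysisClient  # assumed client name
from azure.ai.textanalytics.models import (
    TextSentimentAnalysisInput,  # named in the release notes above
    MultiLanguageTextInput,      # assumed batch wrapper
    MultiLanguageInput,          # assumed per-document input
)

client = TextAnalysisClient("<endpoint>", AzureKeyCredential("<api-key>"))

# 5.x: result = client.analyze_sentiment(["The food was great!"])
# 6.x: wrap the documents in the action-specific input type and call analyze_text.
body = TextSentimentAnalysisInput(
    text_input=MultiLanguageTextInput(
        multi_language_inputs=[MultiLanguageInput(id="1", text="The food was great!")]
    )
)
result = client.analyze_text(body=body)
```

If this sketch holds, the same pattern covers the other removed single-request methods by substituting `LanguageDetectionTextInput`, `TextKeyPhraseExtractionInput`, `TextEntityRecognitionInput`, `TextEntityLinkingInput`, or `TextPiiEntitiesRecognitionInput` as the body, while the long-running methods move to `begin_analyze_text_job` with the corresponding action object.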
+
+ ### Other Changes
+
+ - Added custom pollers `AnalyzeTextLROPoller` and `AnalyzeTextAsyncLROPoller`, which customize the return type of `begin_analyze_text_job` to `AnalyzeTextLROPoller[ItemPaged["TextActions"]]` and `AnalyzeTextAsyncLROPoller[AsyncItemPaged["TextActions"]]`, respectively.
+
+ ## 5.3.0 (2023-06-15)
+
+ This version of the client library defaults to the service API version `2023-04-01`.
+
+ ### Breaking Changes
+
+ > Note: The following changes are only breaking from the previous beta. They are not breaking against previous stable versions.
+
+ - Renamed model `ExtractSummaryAction` to `ExtractiveSummaryAction`.
+ - Renamed model `ExtractSummaryResult` to `ExtractiveSummaryResult`.
+ - Renamed client method `begin_abstractive_summary` to `begin_abstract_summary`.
+ - Removed `dynamic_classification` client method and related types: `DynamicClassificationResult` and `ClassificationType`.
+ - Removed keyword arguments `fhir_version` and `document_type` from `begin_analyze_healthcare_entities` and `AnalyzeHealthcareEntitiesAction`.
+ - Removed property `fhir_bundle` from `AnalyzeHealthcareEntitiesResult`.
+ - Removed enum `HealthcareDocumentType`.
+ - Removed property `resolutions` from `CategorizedEntity`.
+ - Removed models and enums related to resolutions: `ResolutionKind`, `AgeResolution`, `AreaResolution`,
+ `CurrencyResolution`, `DateTimeResolution`, `InformationResolution`, `LengthResolution`,
+ `NumberResolution`, `NumericRangeResolution`, `OrdinalResolution`, `SpeedResolution`, `TemperatureResolution`,
+ `TemporalSpanResolution`, `VolumeResolution`, `WeightResolution`, `AgeUnit`, `AreaUnit`, `TemporalModifier`,
+ `InformationUnit`, `LengthUnit`, `NumberKind`, `RangeKind`, `RelativeTo`, `SpeedUnit`, `TemperatureUnit`,
+ `VolumeUnit`, `DateTimeSubKind`, and `WeightUnit`.
+ - Removed property `detected_language` from `RecognizeEntitiesResult`, `RecognizePiiEntitiesResult`, `AnalyzeHealthcareEntitiesResult`,
+ `ExtractKeyPhrasesResult`, `RecognizeLinkedEntitiesResult`, `AnalyzeSentimentResult`, `RecognizeCustomEntitiesResult`,
+ `ClassifyDocumentResult`, `ExtractSummaryResult`, and `AbstractSummaryResult`.
+ - Removed property `script` from `DetectedLanguage`.
+
+ ### Features Added
+
+ - New enum values added for `HealthcareEntityCategory` and `HealthcareEntityRelation`.
+
+ ## 5.3.0b2 (2023-03-07)
+
+ This version of the client library defaults to the service API version `2022-10-01-preview`.
+
+ ### Features Added
+
+ - Added `begin_extract_summary` client method to perform extractive summarization on documents.
+ - Added `begin_abstractive_summary` client method to perform abstractive summarization on documents.
+
+ ### Breaking Changes
+
+ - Removed models `BaseResolution` and `BooleanResolution`.
+ - Removed enum value `BooleanResolution` from `ResolutionKind`.
+ - Renamed model `AbstractSummaryAction` to `AbstractiveSummaryAction`.
+ - Renamed model `AbstractSummaryResult` to `AbstractiveSummaryResult`.
+ - Removed keyword argument `autodetect_default_language` from long-running operation APIs.
+
+ ### Other Changes
+
+ - Improved static typing in the client library.

  ## 5.3.0b1 (2022-11-17)

@@ -1004,7 +1440,7 @@ is this diagnosis conditional on a symptom?

  **Known Issues**

- - `begin_analyze_healthcare_entities` is currently in gated preview and can not be used with AAD credentials. For more information, see [the Text Analytics for Health documentation](https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
+ - `begin_analyze_healthcare_entities` is currently in gated preview and cannot be used with AAD credentials. For more information, see [the Text Analytics for Health documentation](https://learn.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
  - At time of this SDK release, the service is not respecting the value passed through `model_version` to `begin_analyze_healthcare_entities`, it only uses the latest model.

  ## 5.1.0b5 (2021-02-10)
@@ -1042,7 +1478,7 @@ the service client to the poller object returned from `begin_analyze_healthcare_

  **New Features**
  - We have added method `begin_analyze`, which supports long-running batch process of Named Entity Recognition, Personally identifiable Information, and Key Phrase Extraction. To use, you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client.
- - We have added method `begin_analyze_healthcare`, which supports the service's Health API. Since the Health API is currently only available in a gated preview, you need to have your subscription on the service's allow list, and you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client. Note that since this is a gated preview, AAD is not supported. More information [here](https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
+ - We have added method `begin_analyze_healthcare`, which supports the service's Health API. Since the Health API is currently only available in a gated preview, you need to have your subscription on the service's allow list, and you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client. Note that since this is a gated preview, AAD is not supported. More information [here](https://learn.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).


  ## 5.1.0b2 (2020-10-06)