azure-ai-textanalytics 5.3.0b2__py3-none-any.whl → 6.0.0b1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

Files changed (128)
  1. azure/ai/textanalytics/__init__.py +26 -193
  2. azure/ai/textanalytics/_client.py +111 -0
  3. azure/ai/textanalytics/_configuration.py +73 -0
  4. azure/ai/textanalytics/{_generated/v2022_05_01/operations → _operations}/__init__.py +13 -8
  5. azure/ai/textanalytics/_operations/_operations.py +716 -0
  6. azure/ai/textanalytics/{_generated/v2022_05_01/models → _operations}/_patch.py +8 -6
  7. azure/ai/textanalytics/_patch.py +350 -0
  8. azure/ai/textanalytics/{_generated/aio → _utils}/__init__.py +1 -5
  9. azure/ai/textanalytics/_utils/model_base.py +1237 -0
  10. azure/ai/textanalytics/{_generated/_serialization.py → _utils/serialization.py} +640 -616
  11. azure/ai/textanalytics/{_generated/v2022_05_01/aio/_vendor.py → _utils/utils.py} +10 -12
  12. azure/ai/textanalytics/_version.py +8 -7
  13. azure/ai/textanalytics/aio/__init__.py +25 -14
  14. azure/ai/textanalytics/aio/_client.py +115 -0
  15. azure/ai/textanalytics/aio/_configuration.py +75 -0
  16. azure/ai/textanalytics/{_generated/v2022_10_01_preview/aio/operations → aio/_operations}/__init__.py +13 -8
  17. azure/ai/textanalytics/aio/_operations/_operations.py +623 -0
  18. azure/ai/textanalytics/{_generated/v2022_05_01 → aio/_operations}/_patch.py +8 -6
  19. azure/ai/textanalytics/aio/_patch.py +344 -0
  20. azure/ai/textanalytics/models/__init__.py +402 -0
  21. azure/ai/textanalytics/models/_enums.py +1979 -0
  22. azure/ai/textanalytics/models/_models.py +6641 -0
  23. azure/ai/textanalytics/{_generated/v2022_05_01/aio → models}/_patch.py +8 -6
  24. azure/ai/textanalytics/py.typed +1 -0
  25. {azure_ai_textanalytics-5.3.0b2.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/METADATA +668 -403
  26. azure_ai_textanalytics-6.0.0b1.dist-info/RECORD +29 -0
  27. {azure_ai_textanalytics-5.3.0b2.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/WHEEL +1 -1
  28. azure/ai/textanalytics/_base_client.py +0 -113
  29. azure/ai/textanalytics/_check.py +0 -22
  30. azure/ai/textanalytics/_dict_mixin.py +0 -57
  31. azure/ai/textanalytics/_generated/__init__.py +0 -16
  32. azure/ai/textanalytics/_generated/_configuration.py +0 -70
  33. azure/ai/textanalytics/_generated/_operations_mixin.py +0 -795
  34. azure/ai/textanalytics/_generated/_text_analytics_client.py +0 -126
  35. azure/ai/textanalytics/_generated/_version.py +0 -8
  36. azure/ai/textanalytics/_generated/aio/_configuration.py +0 -66
  37. azure/ai/textanalytics/_generated/aio/_operations_mixin.py +0 -776
  38. azure/ai/textanalytics/_generated/aio/_text_analytics_client.py +0 -124
  39. azure/ai/textanalytics/_generated/models.py +0 -8
  40. azure/ai/textanalytics/_generated/v2022_05_01/__init__.py +0 -20
  41. azure/ai/textanalytics/_generated/v2022_05_01/_configuration.py +0 -72
  42. azure/ai/textanalytics/_generated/v2022_05_01/_text_analytics_client.py +0 -100
  43. azure/ai/textanalytics/_generated/v2022_05_01/_vendor.py +0 -45
  44. azure/ai/textanalytics/_generated/v2022_05_01/aio/__init__.py +0 -20
  45. azure/ai/textanalytics/_generated/v2022_05_01/aio/_configuration.py +0 -71
  46. azure/ai/textanalytics/_generated/v2022_05_01/aio/_text_analytics_client.py +0 -97
  47. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/__init__.py +0 -18
  48. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/_patch.py +0 -121
  49. azure/ai/textanalytics/_generated/v2022_05_01/aio/operations/_text_analytics_client_operations.py +0 -603
  50. azure/ai/textanalytics/_generated/v2022_05_01/models/__init__.py +0 -281
  51. azure/ai/textanalytics/_generated/v2022_05_01/models/_models_py3.py +0 -5722
  52. azure/ai/textanalytics/_generated/v2022_05_01/models/_text_analytics_client_enums.py +0 -439
  53. azure/ai/textanalytics/_generated/v2022_05_01/operations/_patch.py +0 -120
  54. azure/ai/textanalytics/_generated/v2022_05_01/operations/_text_analytics_client_operations.py +0 -744
  55. azure/ai/textanalytics/_generated/v2022_10_01_preview/__init__.py +0 -20
  56. azure/ai/textanalytics/_generated/v2022_10_01_preview/_configuration.py +0 -72
  57. azure/ai/textanalytics/_generated/v2022_10_01_preview/_patch.py +0 -19
  58. azure/ai/textanalytics/_generated/v2022_10_01_preview/_text_analytics_client.py +0 -100
  59. azure/ai/textanalytics/_generated/v2022_10_01_preview/_vendor.py +0 -45
  60. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/__init__.py +0 -20
  61. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_configuration.py +0 -71
  62. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_patch.py +0 -19
  63. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_text_analytics_client.py +0 -97
  64. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/_vendor.py +0 -27
  65. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/operations/_patch.py +0 -121
  66. azure/ai/textanalytics/_generated/v2022_10_01_preview/aio/operations/_text_analytics_client_operations.py +0 -603
  67. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/__init__.py +0 -405
  68. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_models_py3.py +0 -8420
  69. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_patch.py +0 -486
  70. azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_text_analytics_client_enums.py +0 -729
  71. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/__init__.py +0 -18
  72. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/_patch.py +0 -120
  73. azure/ai/textanalytics/_generated/v2022_10_01_preview/operations/_text_analytics_client_operations.py +0 -744
  74. azure/ai/textanalytics/_generated/v3_0/__init__.py +0 -20
  75. azure/ai/textanalytics/_generated/v3_0/_configuration.py +0 -66
  76. azure/ai/textanalytics/_generated/v3_0/_patch.py +0 -31
  77. azure/ai/textanalytics/_generated/v3_0/_text_analytics_client.py +0 -96
  78. azure/ai/textanalytics/_generated/v3_0/_vendor.py +0 -33
  79. azure/ai/textanalytics/_generated/v3_0/aio/__init__.py +0 -20
  80. azure/ai/textanalytics/_generated/v3_0/aio/_configuration.py +0 -65
  81. azure/ai/textanalytics/_generated/v3_0/aio/_patch.py +0 -31
  82. azure/ai/textanalytics/_generated/v3_0/aio/_text_analytics_client.py +0 -93
  83. azure/ai/textanalytics/_generated/v3_0/aio/_vendor.py +0 -27
  84. azure/ai/textanalytics/_generated/v3_0/aio/operations/__init__.py +0 -18
  85. azure/ai/textanalytics/_generated/v3_0/aio/operations/_patch.py +0 -19
  86. azure/ai/textanalytics/_generated/v3_0/aio/operations/_text_analytics_client_operations.py +0 -428
  87. azure/ai/textanalytics/_generated/v3_0/models/__init__.py +0 -81
  88. azure/ai/textanalytics/_generated/v3_0/models/_models_py3.py +0 -1467
  89. azure/ai/textanalytics/_generated/v3_0/models/_patch.py +0 -19
  90. azure/ai/textanalytics/_generated/v3_0/models/_text_analytics_client_enums.py +0 -58
  91. azure/ai/textanalytics/_generated/v3_0/operations/__init__.py +0 -18
  92. azure/ai/textanalytics/_generated/v3_0/operations/_patch.py +0 -19
  93. azure/ai/textanalytics/_generated/v3_0/operations/_text_analytics_client_operations.py +0 -604
  94. azure/ai/textanalytics/_generated/v3_1/__init__.py +0 -20
  95. azure/ai/textanalytics/_generated/v3_1/_configuration.py +0 -66
  96. azure/ai/textanalytics/_generated/v3_1/_patch.py +0 -31
  97. azure/ai/textanalytics/_generated/v3_1/_text_analytics_client.py +0 -98
  98. azure/ai/textanalytics/_generated/v3_1/_vendor.py +0 -45
  99. azure/ai/textanalytics/_generated/v3_1/aio/__init__.py +0 -20
  100. azure/ai/textanalytics/_generated/v3_1/aio/_configuration.py +0 -65
  101. azure/ai/textanalytics/_generated/v3_1/aio/_patch.py +0 -31
  102. azure/ai/textanalytics/_generated/v3_1/aio/_text_analytics_client.py +0 -95
  103. azure/ai/textanalytics/_generated/v3_1/aio/_vendor.py +0 -27
  104. azure/ai/textanalytics/_generated/v3_1/aio/operations/__init__.py +0 -18
  105. azure/ai/textanalytics/_generated/v3_1/aio/operations/_patch.py +0 -19
  106. azure/ai/textanalytics/_generated/v3_1/aio/operations/_text_analytics_client_operations.py +0 -1291
  107. azure/ai/textanalytics/_generated/v3_1/models/__init__.py +0 -205
  108. azure/ai/textanalytics/_generated/v3_1/models/_models_py3.py +0 -3976
  109. azure/ai/textanalytics/_generated/v3_1/models/_patch.py +0 -19
  110. azure/ai/textanalytics/_generated/v3_1/models/_text_analytics_client_enums.py +0 -367
  111. azure/ai/textanalytics/_generated/v3_1/operations/__init__.py +0 -18
  112. azure/ai/textanalytics/_generated/v3_1/operations/_patch.py +0 -19
  113. azure/ai/textanalytics/_generated/v3_1/operations/_text_analytics_client_operations.py +0 -1709
  114. azure/ai/textanalytics/_lro.py +0 -553
  115. azure/ai/textanalytics/_models.py +0 -3158
  116. azure/ai/textanalytics/_policies.py +0 -66
  117. azure/ai/textanalytics/_request_handlers.py +0 -104
  118. azure/ai/textanalytics/_response_handlers.py +0 -583
  119. azure/ai/textanalytics/_text_analytics_client.py +0 -2081
  120. azure/ai/textanalytics/_user_agent.py +0 -8
  121. azure/ai/textanalytics/_validate.py +0 -113
  122. azure/ai/textanalytics/aio/_base_client_async.py +0 -98
  123. azure/ai/textanalytics/aio/_lro_async.py +0 -503
  124. azure/ai/textanalytics/aio/_response_handlers_async.py +0 -94
  125. azure/ai/textanalytics/aio/_text_analytics_client_async.py +0 -2077
  126. azure_ai_textanalytics-5.3.0b2.dist-info/RECORD +0 -115
  127. {azure_ai_textanalytics-5.3.0b2.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info/licenses}/LICENSE +0 -0
  128. {azure_ai_textanalytics-5.3.0b2.dist-info → azure_ai_textanalytics-6.0.0b1.dist-info}/top_level.txt +0 -0
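Tallying the per-file +/- counts in the table above shows the shape of the release: a new flat client, operations, and models surface is added while the versioned `_generated` tree is removed wholesale. A small sketch using four representative rows transcribed from the table (a subset, not the full 128 files):

```python
# (added_lines, removed_lines) per file, transcribed from the table above.
changes = {
    "azure/ai/textanalytics/models/_models.py": (6641, 0),
    "azure/ai/textanalytics/_operations/_operations.py": (716, 0),
    "azure/ai/textanalytics/_generated/v2022_10_01_preview/models/_models_py3.py": (0, 8420),
    "azure/ai/textanalytics/_text_analytics_client.py": (0, 2081),
}

# Sum the additions and removals to see the net churn of this subset.
added = sum(a for a, _ in changes.values())
removed = sum(r for _, r in changes.values())
print(f"+{added} -{removed}")  # +7357 -10501
```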
@@ -1,29 +1,27 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.4
  Name: azure-ai-textanalytics
- Version: 5.3.0b2
- Summary: Microsoft Azure Text Analytics Client Library for Python
- Home-page: https://github.com/Azure/azure-sdk-for-python
- Author: Microsoft Corporation
- Author-email: azpysdkhelp@microsoft.com
- License: MIT License
- Keywords: azure,azure sdk,text analytics,cognitive services,natural language processing
+ Version: 6.0.0b1
+ Summary: Microsoft Corporation Azure Ai Textanalytics Client Library for Python
+ Author-email: Microsoft Corporation <azpysdkhelp@microsoft.com>
+ License-Expression: MIT
+ Project-URL: repository, https://github.com/Azure/azure-sdk-for-python
+ Keywords: azure,azure sdk
  Classifier: Development Status :: 4 - Beta
  Classifier: Programming Language :: Python
  Classifier: Programming Language :: Python :: 3 :: Only
  Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.7
- Classifier: Programming Language :: Python :: 3.8
  Classifier: Programming Language :: Python :: 3.9
  Classifier: Programming Language :: Python :: 3.10
  Classifier: Programming Language :: Python :: 3.11
- Classifier: License :: OSI Approved :: MIT License
- Requires-Python: >=3.7
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.9
  Description-Content-Type: text/markdown
  License-File: LICENSE
- Requires-Dist: azure-core (<2.0.0,>=1.24.0)
- Requires-Dist: azure-common (~=1.1)
- Requires-Dist: isodate (<1.0.0,>=0.6.1)
- Requires-Dist: typing-extensions (>=4.0.1)
+ Requires-Dist: isodate>=0.6.1
+ Requires-Dist: azure-core>=1.35.0
+ Requires-Dist: typing-extensions>=4.6.0
+ Dynamic: license-file

  # Azure Text Analytics client library for Python

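The metadata above raises the Python floor from `>=3.7` to `>=3.9` and drops the upper pins on `azure-core` and `isodate`. A minimal stdlib sketch of how such a floor plays out, illustrating only the `Requires-Python` comparison, not pip's full PEP 440 specifier handling:

```python
import sys

def meets_python_floor(version, floor=(3, 9)):
    """Compare a (major, minor) tuple against the Requires-Python
    floor declared in the 6.0.0b1 metadata (>=3.9)."""
    return version >= floor

# 5.3.0b2 still advertised 3.7/3.8 classifiers; 6.0.0b1 does not.
print(meets_python_floor((3, 8)))   # False
print(meets_python_floor((3, 13)))  # True
print(meets_python_floor(sys.version_info[:2]))
```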
@@ -41,9 +39,13 @@ The Azure Cognitive Service for Language is a cloud-based service that provides
  - Custom Text Classification
  - Extractive Text Summarization
  - Abstractive Text Summarization
- - Dynamic Classification

- [Source code][source_code] | [Package (PyPI)][ta_pypi] | [API reference documentation][ta_ref_docs] | [Product documentation][language_product_documentation] | [Samples][ta_samples]
+ [Source code][source_code]
+ | [Package (PyPI)][ta_pypi]
+ | [Package (Conda)](https://anaconda.org/microsoft/azure-ai-textanalytics/)
+ | [API reference documentation][ta_ref_docs]
+ | [Product documentation][language_product_documentation]
+ | [Samples][ta_samples]

  ## Getting started

@@ -79,7 +81,7 @@ For example, `https://<region>.api.cognitive.microsoft.com/`.
  Install the Azure Text Analytics client library for Python with [pip][pip]:

  ```bash
- pip install azure-ai-textanalytics --pre
+ pip install azure-ai-textanalytics
  ```

  <!-- SNIPPET:sample_authentication.create_ta_client_with_key -->
@@ -87,24 +89,26 @@ pip install azure-ai-textanalytics --pre
  ```python
  import os
  from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
+ from azure.ai.textanalytics import TextAnalysisClient
+
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+ key = os.environ["AZURE_TEXT_KEY"]

- text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
+ text_client = TextAnalysisClient(endpoint, AzureKeyCredential(key))
  ```

  <!-- END SNIPPET -->

  > Note that `5.2.X` and newer targets the Azure Cognitive Service for Language APIs. These APIs include the text analysis and natural language processing features found in the previous versions of the Text Analytics client library.
- In addition, the service API has changed from semantic to date-based versioning. This version of the client library defaults to the latest supported API version, which currently is `2022-10-01-preview`.
+ In addition, the service API has changed from semantic to date-based versioning. This version of the client library defaults to the latest supported API version, which currently is `2023-04-01`.

  This table shows the relationship between SDK versions and supported API versions of the service

  | SDK version | Supported API version of service |
  | ------------ | --------------------------------- |
- | 5.3.0b2 - Latest beta release | 3.0, 3.1, 2022-05-01, 2022-10-01-preview (default) |
- | 5.2.X - Latest stable release | 3.0, 3.1, 2022-05-01 (default) |
+ | 6.0.0b1 - Latest preview release | 3.0, 3.1, 2022-05-01, 2023-04-01, 2024-11-01, 2024-11-15-preview, 2025-05-15-preview (default) |
+ | 5.3.X - Latest stable release | 3.0, 3.1, 2022-05-01, 2023-04-01 (default) |
+ | 5.2.X | 3.0, 3.1, 2022-05-01 (default) |
  | 5.1.0 | 3.0, 3.1 (default) |
  | 5.0.0 | 3.0 |

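The compatibility table above reads as a simple lookup from SDK line to the default service API version it targets; a small sketch, with version strings transcribed exactly as the table states them:

```python
# Default service API version per SDK line, transcribed from the table above.
DEFAULT_API_VERSION = {
    "6.0.0b1": "2025-05-15-preview",
    "5.3.X": "2023-04-01",
    "5.2.X": "2022-05-01",
    "5.1.0": "3.1",
    "5.0.0": "3.0",
}

def default_api_version(sdk_line: str) -> str:
    """Return the default API version the given SDK line targets."""
    return DEFAULT_API_VERSION[sdk_line]

print(default_api_version("6.0.0b1"))  # 2025-05-15-preview
```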
@@ -141,11 +145,12 @@ to authenticate the client:
  ```python
  import os
  from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient
- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
+ from azure.ai.textanalytics import TextAnalysisClient

- text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+ key = os.environ["AZURE_TEXT_KEY"]
+
+ text_client = TextAnalysisClient(endpoint, AzureKeyCredential(key))
  ```

  <!-- END SNIPPET -->
@@ -177,13 +182,13 @@ Use the returned token credential to authenticate the client:

  ```python
  import os
- from azure.ai.textanalytics import TextAnalyticsClient
+ from azure.ai.textanalytics import TextAnalysisClient
  from azure.identity import DefaultAzureCredential

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
+ endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
  credential = DefaultAzureCredential()

- text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
+ text_client = TextAnalysisClient(endpoint, credential=credential)
  ```

  <!-- END SNIPPET -->
@@ -285,8 +290,7 @@ The following section provides several code snippets covering some of the most c
  - [Custom Single Label Classification][single_label_classify_sample]
  - [Custom Multi Label Classification][multi_label_classify_sample]
  - [Extractive Summarization][extract_summary_sample]
- - [Abstractive Summarization][abstractive_summary_sample]
- - [Dynamic Classification][dynamic_classification_sample]
+ - [Abstractive Summarization][abstract_summary_sample]

  ### Analyze Sentiment

@@ -296,33 +300,68 @@ The following section provides several code snippets covering some of the most c

  ```python
  import os
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextSentimentAnalysisInput,
+     AnalyzeTextSentimentResult,
+ )

- text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

- documents = [
-     """I had the best day of my life. I decided to go sky-diving and it made me appreciate my whole life so much more.
-     I developed a deep-connection with my instructor as well, and I feel as if I've made a life-long friend in her.""",
-     """This was a waste of my time. All of the views on this drop are extremely boring, all I saw was grass. 0/10 would
-     not recommend to any divers, even first timers.""",
-     """This was pretty good! The sights were ok, and I had fun with my instructors! Can't complain too much about my experience""",
-     """I only have one word for my experience: WOW!!! I can't believe I have had such a wonderful skydiving company right
-     in my backyard this whole time! I will definitely be a repeat customer, and I want to take my grandmother skydiving too,
-     I know she'll love it!"""
- ]
+ def sample_analyze_sentiment():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()

+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "The food and service were unacceptable, but the concierge were nice. "
+         "After talking to them about the quality of the food and the process to get room service "
+         "they refunded the money we spent at the restaurant and gave us a voucher for nearby restaurants."
+     )

- result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True)
- docs = [doc for doc in result if not doc.is_error]
+     body = TextSentimentAnalysisInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         )
+     )

- print("Let's visualize the sentiment of each of these documents")
- for idx, doc in enumerate(docs):
-     print(f"Document text: {documents[idx]}")
-     print(f"Overall sentiment: {doc.sentiment}")
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextSentimentResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             print(f"Overall sentiment: {doc.sentiment}")
+             if doc.confidence_scores:
+                 print("Confidence scores:")
+                 print(f" positive={doc.confidence_scores.positive}")
+                 print(f" neutral={doc.confidence_scores.neutral}")
+                 print(f" negative={doc.confidence_scores.negative}")
+
+             if doc.sentences:
+                 print("\nSentence sentiments:")
+                 for s in doc.sentences:
+                     print(f" Text: {s.text}")
+                     print(f" Sentiment: {s.sentiment}")
+                     if s.confidence_scores:
+                         print(
+                             " Scores: "
+                             f"pos={s.confidence_scores.positive}, "
+                             f"neu={s.confidence_scores.neutral}, "
+                             f"neg={s.confidence_scores.negative}"
+                         )
+                     print(f" Offset: {s.offset}, Length: {s.length}\n")
+             else:
+                 print("No sentence-level results returned.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

  <!-- END SNIPPET -->
@@ -339,40 +378,61 @@ Please refer to the service documentation for a conceptual discussion of [sentim

  ```python
  import os
- import typing
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
-
- text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
- reviews = [
-     """I work for Foo Company, and we hired Contoso for our annual founding ceremony. The food
-     was amazing and we all can't say enough good words about the quality and the level of service.""",
-     """We at the Foo Company re-hired Contoso after all of our past successes with the company.
-     Though the food was still great, I feel there has been a quality drop since their last time
-     catering for us. Is anyone else running into the same problem?""",
-     """Bar Company is over the moon about the service we received from Contoso, the best sliders ever!!!!"""
- ]
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextEntityRecognitionInput,
+     EntitiesActionContent,
+     AnalyzeTextEntitiesResult,
+ )

- result = text_analytics_client.recognize_entities(reviews)
- result = [review for review in result if not review.is_error]
- organization_to_reviews: typing.Dict[str, typing.List[str]] = {}
-
- for idx, review in enumerate(result):
-     for entity in review.entities:
-         print(f"Entity '{entity.text}' has category '{entity.category}'")
-         if entity.category == 'Organization':
-             organization_to_reviews.setdefault(entity.text, [])
-             organization_to_reviews[entity.text].append(reviews[idx])
-
- for organization, reviews in organization_to_reviews.items():
-     print(
-         "\n\nOrganization '{}' has left us the following review(s): {}".format(
-             organization, "\n\n".join(reviews)
-         )
+
+ def sample_recognize_entities():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike! "
+         "Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was "
+         "amazing. Everyone in my family liked the trail although it was too challenging for the less "
+         "athletic among us. Not necessarily recommended for small children. A hotel close to the trail "
+         "offers services for childcare in case you want that."
+     )
+
+     body = TextEntityRecognitionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=EntitiesActionContent(model_version="latest"),
      )
+
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextEntitiesResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if doc.entities:
+                 print("Entities:")
+                 for entity in doc.entities:
+                     print(f" Text: {entity.text}")
+                     print(f" Category: {entity.category}")
+                     if entity.subcategory:
+                         print(f" Subcategory: {entity.subcategory}")
+                     print(f" Offset: {entity.offset}")
+                     print(f" Length: {entity.length}")
+                     print(f" Confidence score: {entity.confidence_score}\n")
+             else:
+                 print("No entities found for this document.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

  <!-- END SNIPPET -->
@@ -392,38 +452,69 @@ Roman god of war). Recognized entities are associated with URLs to a well-known

  ```python
  import os
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextEntityLinkingInput,
+     EntityLinkingActionContent,
+     AnalyzeTextEntityLinkingResult,
+ )

- text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
- documents = [
-     """
-     Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends,
-     Steve Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped
-     down as CEO of Microsoft, and was succeeded by Satya Nadella.
-     Microsoft originally moved its headquarters to Bellevue, Washington in January 1979, but is now
-     headquartered in Redmond.
-     """
- ]

- result = text_analytics_client.recognize_linked_entities(documents)
- docs = [doc for doc in result if not doc.is_error]
+ def sample_recognize_linked_entities():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()

- print(
-     "Let's map each entity to it's Wikipedia article. I also want to see how many times each "
-     "entity is mentioned in a document\n\n"
- )
- entity_to_url = {}
- for doc in docs:
-     for entity in doc.entities:
-         print("Entity '{}' has been mentioned '{}' time(s)".format(
-             entity.name, len(entity.matches)
-         ))
-         if entity.data_source == "Wikipedia":
-             entity_to_url[entity.name] = entity.url
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends, Steve "
+         "Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped down as "
+         "CEO of Microsoft, and was succeeded by Satya Nadella. Microsoft originally moved its headquarters "
+         "to Bellevue, Washington in January 1979, but is now headquartered in Redmond"
+     )
+
+     body = TextEntityLinkingInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=EntityLinkingActionContent(model_version="latest"),
+     )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextEntityLinkingResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if not doc.entities:
+                 print("No linked entities found for this document.")
+                 continue
+
+             print("Linked Entities:")
+             for linked in doc.entities:
+                 print(f" Name: {linked.name}")
+                 print(f" Language: {linked.language}")
+                 print(f" Data source: {linked.data_source}")
+                 print(f" URL: {linked.url}")
+                 print(f" ID: {linked.id}")
+
+                 if linked.matches:
+                     print(" Matches:")
+                     for match in linked.matches:
+                         print(f" Text: {match.text}")
+                         print(f" Confidence score: {match.confidence_score}")
+                         print(f" Offset: {match.offset}")
+                         print(f" Length: {match.length}")
+             print()
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

  <!-- END SNIPPET -->
@@ -442,35 +533,59 @@ Social Security Numbers, bank account information, credit card numbers, and more

  ```python
  import os
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
-
- text_analytics_client = TextAnalyticsClient(
-     endpoint=endpoint, credential=AzureKeyCredential(key)
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextPiiEntitiesRecognitionInput,
+     AnalyzeTextPiiResult,
  )
- documents = [
-     """Parker Doe has repaid all of their loans as of 2020-04-25.
-     Their SSN is 859-98-0987. To contact them, use their phone number
-     555-555-5555. They are originally from Brazil and have Brazilian CPF number 998.214.865-68"""
- ]

- result = text_analytics_client.recognize_pii_entities(documents)
- docs = [doc for doc in result if not doc.is_error]

- print(
-     "Let's compare the original document with the documents after redaction. "
-     "I also want to comb through all of the entities that got redacted"
- )
- for idx, doc in enumerate(docs):
-     print(f"Document text: {documents[idx]}")
-     print(f"Redacted document text: {doc.redacted_text}")
-     for entity in doc.entities:
-         print("...Entity '{}' with category '{}' got redacted".format(
-             entity.text, entity.category
-         ))
+ def sample_recognize_pii_entities():
+     # settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # input
+     text_a = (
+         "Parker Doe has repaid all of their loans as of 2020-04-25. Their SSN is 859-98-0987. "
+         "To contact them, use their phone number 800-102-1100. They are originally from Brazil and "
+         "have document ID number 998.214.865-68."
+     )
+
+     body = TextPiiEntitiesRecognitionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         )
+     )
+
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)
+
+     # Print results
+     if isinstance(result, AnalyzeTextPiiResult) and result.results and result.results.documents:
+         for doc in result.results.documents:
+             print(f"\nDocument ID: {doc.id}")
+             if doc.entities:
+                 print("PII Entities:")
+                 for entity in doc.entities:
+                     print(f" Text: {entity.text}")
+                     print(f" Category: {entity.category}")
+                     # subcategory may be optional
+                     if entity.subcategory:
+                         print(f" Subcategory: {entity.subcategory}")
+                     print(f" Offset: {entity.offset}")
+                     print(f" Length: {entity.length}")
+                     print(f" Confidence score: {entity.confidence_score}\n")
+             else:
+                 print("No PII entities found for this document.")
+     else:
+         print("No documents in the response or unexpected result type.")
  ```

  <!-- END SNIPPET -->
@@ -489,36 +604,64 @@ Note: The Recognize PII Entities service is available in API version v3.1 and ne

  ```python
  import os
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
-
- text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
- articles = [
-     """
-     Washington, D.C. Autumn in DC is a uniquely beautiful season. The leaves fall from the trees
-     in a city chock-full of forests, leaving yellow leaves on the ground and a clearer view of the
-     blue sky above...
-     """,
-     """
-     Redmond, WA. In the past few days, Microsoft has decided to further postpone the start date of
-     its United States workers, due to the pandemic that rages with no end in sight...
-     """,
-     """
-     Redmond, WA. Employees at Microsoft can be excited about the new coffee shop that will open on campus
-     once workers no longer have to work remotely...
-     """
- ]
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     TextKeyPhraseExtractionInput,
+     KeyPhraseActionContent,
+     AnalyzeTextKeyPhraseResult,
+ )
+
+
+ def sample_extract_key_phrases():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()

- result = text_analytics_client.extract_key_phrases(articles)
- for idx, doc in enumerate(result):
-     if not doc.is_error:
-         print("Key phrases in article #{}: {}".format(
-             idx + 1,
-             ", ".join(doc.key_phrases)
-         ))
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # Build input
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike! "
+         "Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was "
+         "amazing. Everyone in my family liked the trail although it was too challenging for the less "
+         "athletic among us. Not necessarily recommended for small children. A hotel close to the trail "
+         "offers services for childcare in case you want that."
+     )
+
+     body = TextKeyPhraseExtractionInput(
+         text_input=MultiLanguageTextInput(
+             multi_language_inputs=[MultiLanguageInput(id="A", text=text_a, language="en")]
+         ),
+         action_content=KeyPhraseActionContent(model_version="latest"),
+     )
+
+     result = client.analyze_text(body=body)
+
+     # Validate and print results
+     if not isinstance(result, AnalyzeTextKeyPhraseResult):
+         print("Unexpected result type.")
+         return
+
+     if result.results is None:
+         print("No results returned.")
+         return
+
+     if result.results.documents is None or len(result.results.documents) == 0:
+         print("No documents in the response.")
+         return
+
+     for doc in result.results.documents:
+         print(f"\nDocument ID: {doc.id}")
+         if doc.key_phrases:
+             print("Key Phrases:")
+             for phrase in doc.key_phrases:
+                 print(f" - {phrase}")
+         else:
+             print("No key phrases found for this document.")
  ```

  <!-- END SNIPPET -->
@@ -535,33 +678,56 @@ Please refer to the service documentation for a conceptual discussion of [key ph

  ```python
  import os
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     TextLanguageDetectionInput,
+     LanguageDetectionTextInput,
+     LanguageInput,
+     AnalyzeTextLanguageDetectionResult,
+ )
+

- text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
- documents = [
-     """
-     The concierge Paulette was extremely helpful. Sadly when we arrived the elevator was broken, but with Paulette's help we barely noticed this inconvenience.
-     She arranged for our baggage to be brought up to our room with no extra charge and gave us a free meal to refurbish all of the calories we lost from
-     walking up the stairs :). Can't say enough good things about my experience!
-     """,
-     """
-     最近由于工作压力太大,我们决定去富酒店度假。那儿的温泉实在太舒服了,我跟我丈夫都完全恢复了工作前的青春精神!加油!
-     """
- ]
+ def sample_detect_language():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)

- result = text_analytics_client.detect_language(documents)
- reviewed_docs = [doc for doc in result if not doc.is_error]
+     # Build input
+     text_a = (
+         "Sentences in different languages."
+     )
+
+     body = TextLanguageDetectionInput(
+         text_input=LanguageDetectionTextInput(
+             language_inputs=[LanguageInput(id="A", text=text_a)]
+         )
+     )

- print("Let's see what language each review is in!")
+     # Sync (non-LRO) call
+     result = client.analyze_text(body=body)

- for idx, doc in enumerate(reviewed_docs):
-     print("Review #{} is in '{}', which has ISO639-1 name '{}'\n".format(
-         idx, doc.primary_language.name, doc.primary_language.iso6391_name
-     ))
+     # Validate and print results
+     if not isinstance(result, AnalyzeTextLanguageDetectionResult):
+         print("Unexpected result type.")
+         return
+
+     if not result.results or not result.results.documents:
+         print("No documents in the response.")
+         return
+
+     for doc in result.results.documents:
+
+         print(f"\nDocument ID: {doc.id}")
+         if doc.detected_language:
+             dl = doc.detected_language
+             print(f"Detected language: {dl.name} ({dl.iso6391_name})")
+             print(f"Confidence score: {dl.confidence_score}")
+         else:
+             print("No detected language returned for this document.")
  ```

  <!-- END SNIPPET -->
@@ -579,64 +745,113 @@ and [language and regional support][language_and_regional_support].

  ```python
  import os
- import typing
- from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import TextAnalyticsClient, HealthcareEntityRelation

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]
-
- text_analytics_client = TextAnalyticsClient(
-     endpoint=endpoint,
-     credential=AzureKeyCredential(key),
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     AnalyzeTextOperationAction,
+     HealthcareLROTask,
+     HealthcareLROResult,
  )

- documents = [
-     """
-     Patient needs to take 100 mg of ibuprofen, and 3 mg of potassium. Also needs to take
-     10 mg of Zocor.
-     """,
-     """
-     Patient needs to take 50 mg of ibuprofen, and 2 mg of Coumadin.
-     """
- ]

- poller = text_analytics_client.begin_analyze_healthcare_entities(documents)
- result = poller.result()
-
- docs = [doc for doc in result if not doc.is_error]
-
- print("Let's first visualize the outputted healthcare result:")
- for doc in docs:
-     for entity in doc.entities:
-         print(f"Entity: {entity.text}")
-         print(f"...Normalized Text: {entity.normalized_text}")
-         print(f"...Category: {entity.category}")
-         print(f"...Subcategory: {entity.subcategory}")
-         print(f"...Offset: {entity.offset}")
-         print(f"...Confidence score: {entity.confidence_score}")
-         if entity.data_sources is not None:
-             print("...Data Sources:")
-             for data_source in entity.data_sources:
-                 print(f"......Entity ID: {data_source.entity_id}")
-                 print(f"......Name: {data_source.name}")
-         if entity.assertion is not None:
-             print("...Assertion:")
-             print(f"......Conditionality: {entity.assertion.conditionality}")
-             print(f"......Certainty: {entity.assertion.certainty}")
-             print(f"......Association: {entity.assertion.association}")
-     for relation in doc.entity_relations:
-         print(f"Relation of type: {relation.relation_type} has the following roles")
-         for role in relation.roles:
-             print(f"...Role '{role.name}' with entity '{role.entity.text}'")
-     print("------------------------------------------")
-
- print("Now, let's get all of medication dosage relations from the documents")
- dosage_of_medication_relations = [
-     entity_relation
-     for doc in docs
-     for entity_relation in doc.entity_relations if entity_relation.relation_type == HealthcareEntityRelation.DOSAGE_OF_MEDICATION
- ]
+ def sample_analyze_healthcare_entities():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()
+
+     client = TextAnalysisClient(endpoint, credential=credential)
+
+     # Build input
+     text_a = "Prescribed 100mg ibuprofen, taken twice daily."
+
+     text_input = MultiLanguageTextInput(
+         multi_language_inputs=[
+             MultiLanguageInput(id="A", text=text_a, language="en"),
+         ]
+     )
+
+     actions: list[AnalyzeTextOperationAction] = [
+         HealthcareLROTask(
+             name="Healthcare Operation",
+         ),
+     ]
+
+     # Start long-running operation (sync) – poller returns ItemPaged[TextActions]
+     poller = client.begin_analyze_text_job(
+         text_input=text_input,
+         actions=actions,
+     )
+
+     # Operation metadata (pre-final)
+     print(f"Operation ID: {poller.details.get('operation_id')}")
+
+     # Wait for completion and get pageable of TextActions
+     paged_actions = poller.result()
+
+     # Final-state metadata
+     d = poller.details
+     print(f"Job ID: {d.get('job_id')}")
+     print(f"Status: {d.get('status')}")
+     print(f"Created: {d.get('created_date_time')}")
+     print(f"Last Updated: {d.get('last_updated_date_time')}")
+     if d.get("expiration_date_time"):
+         print(f"Expires: {d.get('expiration_date_time')}")
+     if d.get("display_name"):
+         print(f"Display Name: {d.get('display_name')}")
+
+     # Iterate results (sync pageable)
+     for actions_page in paged_actions:
+         print(
+             f"Completed: {actions_page.completed}, "
+             f"In Progress: {actions_page.in_progress}, "
+             f"Failed: {actions_page.failed}, "
+             f"Total: {actions_page.total}"
+         )
+
+         for op_result in actions_page.items_property or []:
+             if isinstance(op_result, HealthcareLROResult):
+                 print(f"\nAction Name: {op_result.task_name}")
+                 print(f"Action Status: {op_result.status}")
+                 print(f"Kind: {op_result.kind}")
+
+                 hc_result = op_result.results
+                 for doc in (hc_result.documents or []):
+                     print(f"\nDocument ID: {doc.id}")
+
+                     # Entities
+                     print("Entities:")
+                     for entity in (doc.entities or []):
+                         print(f" Text: {entity.text}")
+                         print(f" Category: {entity.category}")
+                         print(f" Offset: {entity.offset}")
+                         print(f" Length: {entity.length}")
+                         print(f" Confidence score: {entity.confidence_score}")
+                         if entity.links:
+                             for link in entity.links:
+                                 print(f" Link ID: {link.id}")
+                                 print(f" Data source: {link.data_source}")
+                         print()
+
+                     # Relations
+                     print("Relations:")
+                     for relation in (doc.relations or []):
+                         print(f" Relation type: {relation.relation_type}")
+                         for rel_entity in (relation.entities or []):
+                             print(f" Role: {rel_entity.role}")
+                             print(f" Ref: {rel_entity.ref}")
+                         print()
+             else:
+                 # Other action kinds, if present
+                 try:
+                     print(
+                         f"\n[Non-healthcare action] name={op_result.task_name}, "
+                         f"status={op_result.status}, kind={op_result.kind}"
+                     )
+                 except Exception:
+                     print("\n[Non-healthcare action present]")
  ```

  <!-- END SNIPPET -->
@@ -656,110 +871,117 @@ Note: Healthcare Entities Analysis is only available with API version v3.1 and n
  - Custom Single Label Classification (API version 2022-05-01 and newer)
  - Custom Multi Label Classification (API version 2022-05-01 and newer)
  - Healthcare Entities Analysis (API version 2022-05-01 and newer)
- - Extractive Summarization (API version 2022-10-01-preview and newer)
- - Abstractive Summarization (API version 2022-10-01-preview and newer)
+ - Extractive Summarization (API version 2023-04-01 and newer)
+ - Abstractive Summarization (API version 2023-04-01 and newer)

  <!-- SNIPPET:sample_analyze_actions.analyze -->

  ```python
  import os
+
+ from azure.identity import DefaultAzureCredential
  from azure.core.credentials import AzureKeyCredential
- from azure.ai.textanalytics import (
-     TextAnalyticsClient,
-     RecognizeEntitiesAction,
-     RecognizeLinkedEntitiesAction,
-     RecognizePiiEntitiesAction,
-     ExtractKeyPhrasesAction,
-     AnalyzeSentimentAction,
+ from azure.ai.textanalytics import TextAnalysisClient
+ from azure.ai.textanalytics.models import (
+     MultiLanguageTextInput,
+     MultiLanguageInput,
+     EntitiesLROTask,
+     KeyPhraseLROTask,
+     EntityRecognitionOperationResult,
+     KeyPhraseExtractionOperationResult,
+     EntityTag,
  )

- endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
- key = os.environ["AZURE_LANGUAGE_KEY"]

- text_analytics_client = TextAnalyticsClient(
-     endpoint=endpoint,
-     credential=AzureKeyCredential(key),
- )
+ def sample_analyze():
+     # get settings
+     endpoint = os.environ["AZURE_TEXT_ENDPOINT"]
+     credential = DefaultAzureCredential()

- documents = [
-     'We went to Contoso Steakhouse located at midtown NYC last week for a dinner party, and we adore the spot! '
-     'They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) '
-     'and he is super nice, coming out of the kitchen and greeted us all.'
-     ,
-
-     'We enjoyed very much dining in the place! '
-     'The Sirloin steak I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their '
-     'online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com! '
-     'The only complaint I have is the food didn\'t come fast enough. Overall I highly recommend it!'
- ]
+     client = TextAnalysisClient(endpoint, credential=credential)

- poller = text_analytics_client.begin_analyze_actions(
-     documents,
-     display_name="Sample Text Analysis",
-     actions=[
-         RecognizeEntitiesAction(),
-         RecognizePiiEntitiesAction(),
-         ExtractKeyPhrasesAction(),
-         RecognizeLinkedEntitiesAction(),
-         AnalyzeSentimentAction(),
-     ],
- )
+     text_a = (
+         "We love this trail and make the trip every year. The views are breathtaking and well worth the hike!"
+         " Yesterday was foggy though, so we missed the spectacular views. We tried again today and it was"
+         " amazing. Everyone in my family liked the trail although it was too challenging for the less"
+         " athletic among us. Not necessarily recommended for small children. A hotel close to the trail"
+         " offers services for childcare in case you want that."
+     )
+
+     text_b = (
+         "Sentences in different languages."
+     )
+
+     text_c = (
+         "That was the best day of my life! We went on a 4 day trip where we stayed at Hotel Foo. They had"
+         " great amenities that included an indoor pool, a spa, and a bar. The spa offered couples massages"
+         " which were really good. The spa was clean and felt very peaceful. Overall the whole experience was"
+         " great. We will definitely come back."
+     )

- document_results = poller.result()
- for doc, action_results in zip(documents, document_results):
-     print(f"\nDocument text: {doc}")
-     for result in action_results:
-         if result.kind == "EntityRecognition":
-             print("...Results of Recognize Entities Action:")
-             for entity in result.entities:
-                 print(f"......Entity: {entity.text}")
-                 print(f".........Category: {entity.category}")
-                 print(f".........Confidence Score: {entity.confidence_score}")
-                 print(f".........Offset: {entity.offset}")
-
-         elif result.kind == "PiiEntityRecognition":
-             print("...Results of Recognize PII Entities action:")
-             for pii_entity in result.entities:
-                 print(f"......Entity: {pii_entity.text}")
-                 print(f".........Category: {pii_entity.category}")
-                 print(f".........Confidence Score: {pii_entity.confidence_score}")
-
-         elif result.kind == "KeyPhraseExtraction":
-             print("...Results of Extract Key Phrases action:")
-             print(f"......Key Phrases: {result.key_phrases}")
-
-         elif result.kind == "EntityLinking":
-             print("...Results of Recognize Linked Entities action:")
-             for linked_entity in result.entities:
-                 print(f"......Entity name: {linked_entity.name}")
-                 print(f".........Data source: {linked_entity.data_source}")
-                 print(f".........Data source language: {linked_entity.language}")
-                 print(
-                     f".........Data source entity ID: {linked_entity.data_source_entity_id}"
-                 )
-                 print(f".........Data source URL: {linked_entity.url}")
-                 print(".........Document matches:")
-                 for match in linked_entity.matches:
-                     print(f"............Match text: {match.text}")
-                     print(f"............Confidence Score: {match.confidence_score}")
-                     print(f"............Offset: {match.offset}")
-                     print(f"............Length: {match.length}")
-
-         elif result.kind == "SentimentAnalysis":
-             print("...Results of Analyze Sentiment action:")
-             print(f"......Overall sentiment: {result.sentiment}")
-             print(
-                 f"......Scores: positive={result.confidence_scores.positive}; \
-                 neutral={result.confidence_scores.neutral}; \
-                 negative={result.confidence_scores.negative} \n"
-             )
-
-         elif result.is_error is True:
-             print(
-                 f"...Is an error with code '{result.error.code}' and message '{result.error.message}'"
-             )
-
-         print("------------------------------------------")
+     text_d = ""
+
+     # Prepare documents (you can batch multiple docs)
+     text_input = MultiLanguageTextInput(
+         multi_language_inputs=[
+             MultiLanguageInput(id="A", text=text_a, language="en"),
+             MultiLanguageInput(id="B", text=text_b, language="es"),
+             MultiLanguageInput(id="C", text=text_c, language="en"),
+             MultiLanguageInput(id="D", text=text_d),
+         ]
+     )
+
+     actions = [
+         EntitiesLROTask(name="EntitiesOperationActionSample"),
+         KeyPhraseLROTask(name="KeyPhraseOperationActionSample"),
+     ]
+
+     # Submit a multi-action analysis job (LRO)
+     poller = client.begin_analyze_text_job(text_input=text_input, actions=actions)
+     paged_actions = poller.result()
+
+     # Iterate through each action's results
+     for action_result in paged_actions:
+         print()  # spacing between action blocks
+
+         # --- Entities ---
+         if isinstance(action_result, EntityRecognitionOperationResult):
+             print("=== Entity Recognition Results ===")
+             for ent_doc in action_result.results.documents:
+                 print(f'Result for document with Id = "{ent_doc.id}":')
+                 print(f" Recognized {len(ent_doc.entities)} entities:")
+                 for entity in ent_doc.entities:
+                     print(f" Text: {entity.text}")
+                     print(f" Offset: {entity.offset}")
+                     print(f" Length: {entity.length}")
+                     print(f" Category: {entity.category}")
+                     if hasattr(entity, "type") and entity.type is not None:
+                         print(f" Type: {entity.type}")
+                     if hasattr(entity, "subcategory") and entity.subcategory:
+                         print(f" Subcategory: {entity.subcategory}")
+                     if hasattr(entity, "tags") and entity.tags:
+                         print(" Tags:")
+                         for tag in entity.tags:
+                             if isinstance(tag, EntityTag):
+                                 print(f" TagName: {tag.name}")
+                                 print(f" TagConfidenceScore: {tag.confidence_score}")
+                     print(f" Confidence score: {entity.confidence_score}")
+                     print()
+             for err in action_result.results.errors:
+                 print(f' Error in document: {err.id}!')
+                 print(f" Document error: {err.error}")
+
+         # --- Key Phrases ---
+         elif isinstance(action_result, KeyPhraseExtractionOperationResult):
+             print("=== Key Phrase Extraction Results ===")
+             for kp_doc in action_result.results.documents:
+                 print(f'Result for document with Id = "{kp_doc.id}":')
+                 for kp in kp_doc.key_phrases:
+                     print(f" {kp}")
+                 print()
+             for err in action_result.results.errors:
+                 print(f' Error in document: {err.id}!')
+                 print(f" Document error: {err.error}")
  ```

  <!-- END SNIPPET -->
@@ -843,17 +1065,11 @@ Common scenarios
843
1065
  - Custom Single Label Classification: [sample_single_label_classify.py][single_label_classify_sample] ([async_version][single_label_classify_sample_async])
844
1066
  - Custom Multi Label Classification: [sample_multi_label_classify.py][multi_label_classify_sample] ([async_version][multi_label_classify_sample_async])
845
1067
  - Extractive text summarization: [sample_extract_summary.py][extract_summary_sample] ([async version][extract_summary_sample_async])
846
- - Abstractive text summarization: [sample_abstractive_summary.py][abstractive_summary_sample] ([async version][abstractive_summary_sample_async])
847
- - Dynamic Classification: [sample_dynamic_classification.py][dynamic_classification_sample] ([async_version][dynamic_classification_sample_async])
848
-
849
- Advanced scenarios
850
-
851
- - Opinion Mining: [sample_analyze_sentiment_with_opinion_mining.py][opinion_mining_sample] ([async_version][opinion_mining_sample_async])
852
- - NER resolutions: [sample_recognize_entity_resolutions.py][recognize_entity_resolutions_sample] ([async_version][recognize_entity_resolutions_sample_async])
1068
+ - Abstractive text summarization: [sample_abstract_summary.py][abstract_summary_sample] ([async version][abstract_summary_sample_async])
853
1069
 
854
1070
  ### Additional documentation
855
1071
 
856
- For more extensive documentation on Azure Cognitive Service for Language, see the [Language Service documentation][language_product_documentation] on docs.microsoft.com.
1072
+ For more extensive documentation on Azure Cognitive Service for Language, see the [Language Service documentation][language_product_documentation] on learn.microsoft.com.
857
1073
 
858
1074
  ## Contributing
859
1075
 
@@ -865,28 +1081,28 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
865
1081
 
866
1082
  <!-- LINKS -->
867
1083
 
868
- [source_code]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/azure/ai/textanalytics
1084
+ [source_code]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-textanalytics/azure/ai/textanalytics
869
1085
  [ta_pypi]: https://pypi.org/project/azure-ai-textanalytics/
870
1086
  [ta_ref_docs]: https://aka.ms/azsdk-python-textanalytics-ref-docs
871
- [ta_samples]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples
872
- [language_product_documentation]: https://docs.microsoft.com/azure/cognitive-services/language-service
1087
+ [ta_samples]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples
1088
+ [language_product_documentation]: https://learn.microsoft.com/azure/cognitive-services/language-service
873
1089
  [azure_subscription]: https://azure.microsoft.com/free/
874
- [ta_or_cs_resource]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
1090
+ [ta_or_cs_resource]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
875
1091
  [pip]: https://pypi.org/project/pip/
876
1092
  [azure_portal_create_ta_resource]: https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics
877
- [azure_cli]: https://docs.microsoft.com/cli/azure
1093
+ [azure_cli]: https://learn.microsoft.com/cli/azure
878
1094
  [azure_cli_create_ta_resource]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli
879
- [multi_and_single_service]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
880
- [azure_cli_endpoint_lookup]: https://docs.microsoft.com/cli/azure/cognitiveservices/account?view=azure-cli-latest#az-cognitiveservices-account-show
881
- [azure_portal_get_endpoint]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
882
- [cognitive_authentication]: https://docs.microsoft.com/azure/cognitive-services/authentication
883
- [cognitive_authentication_api_key]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
1095
+ [multi_and_single_service]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows
1096
+ [azure_cli_endpoint_lookup]: https://learn.microsoft.com/cli/azure/cognitiveservices/account?view=azure-cli-latest#az-cognitiveservices-account-show
1097
+ [azure_portal_get_endpoint]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
1098
+ [cognitive_authentication]: https://learn.microsoft.com/azure/cognitive-services/authentication
1099
+ [cognitive_authentication_api_key]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource
884
1100
  [install_azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#install-the-package
885
- [register_aad_app]: https://docs.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
886
- [grant_role_access]: https://docs.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
887
- [cognitive_custom_subdomain]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains
888
- [custom_subdomain]: https://docs.microsoft.com/azure/cognitive-services/authentication#create-a-resource-with-a-custom-subdomain
889
- [cognitive_authentication_aad]: https://docs.microsoft.com/azure/cognitive-services/authentication#authenticate-with-azure-active-directory
1101
+ [register_aad_app]: https://learn.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
1102
+ [grant_role_access]: https://learn.microsoft.com/azure/cognitive-services/authentication#assign-a-role-to-a-service-principal
1103
+ [cognitive_custom_subdomain]: https://learn.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains
1104
+ [custom_subdomain]: https://learn.microsoft.com/azure/cognitive-services/authentication#create-a-resource-with-a-custom-subdomain
1105
+ [cognitive_authentication_aad]: https://learn.microsoft.com/azure/cognitive-services/authentication#authenticate-with-azure-active-directory
890
1106
  [azure_identity_credentials]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#credentials
891
1107
  [default_azure_credential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential
892
1108
  [service_limits]: https://aka.ms/azsdk/textanalytics/data-limits
@@ -909,61 +1125,110 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
909
1125
  [recognize_linked_entities]: https://aka.ms/azsdk-python-textanalytics-recognizelinkedentities
910
1126
  [extract_key_phrases]: https://aka.ms/azsdk-python-textanalytics-extractkeyphrases
911
1127
  [detect_language]: https://aka.ms/azsdk-python-textanalytics-detectlanguage
912
- [language_detection]: https://docs.microsoft.com/azure/cognitive-services/language-service/language-detection/overview
913
- [language_and_regional_support]: https://docs.microsoft.com/azure/cognitive-services/language-service/language-detection/language-support
914
- [sentiment_analysis]: https://docs.microsoft.com/azure/cognitive-services/language-service/sentiment-opinion-mining/overview
915
- [key_phrase_extraction]: https://docs.microsoft.com/azure/cognitive-services/language-service/key-phrase-extraction/overview
1128
+ [language_detection]: https://learn.microsoft.com/azure/cognitive-services/language-service/language-detection/overview
1129
+ [language_and_regional_support]: https://learn.microsoft.com/azure/cognitive-services/language-service/language-detection/language-support
1130
+ [sentiment_analysis]: https://learn.microsoft.com/azure/cognitive-services/language-service/sentiment-opinion-mining/overview
1131
+ [key_phrase_extraction]: https://learn.microsoft.com/azure/cognitive-services/language-service/key-phrase-extraction/overview
916
1132
  [linked_entities_categories]: https://aka.ms/taner
917
- [linked_entity_recognition]: https://docs.microsoft.com/azure/cognitive-services/language-service/entity-linking/overview
1133
+ [linked_entity_recognition]: https://learn.microsoft.com/azure/cognitive-services/language-service/entity-linking/overview
918
1134
  [pii_entity_categories]: https://aka.ms/azsdk/language/pii
919
- [named_entity_recognition]: https://docs.microsoft.com/azure/cognitive-services/language-service/named-entity-recognition/overview
1135
+ [named_entity_recognition]: https://learn.microsoft.com/azure/cognitive-services/language-service/named-entity-recognition/overview
920
1136
  [named_entity_categories]: https://aka.ms/taner
921
1137
  [azure_core_ref_docs]: https://aka.ms/azsdk-python-core-policies
922
1138
  [azure_core]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md
923
1139
  [azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity
924
1140
  [python_logging]: https://docs.python.org/3/library/logging.html
925
- [sample_authentication]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py
- [sample_authentication_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py
- [detect_language_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py
- [detect_language_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py
- [analyze_sentiment_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py
- [analyze_sentiment_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py
- [extract_key_phrases_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
- [extract_key_phrases_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py
- [recognize_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py
- [recognize_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py
- [recognize_linked_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py
- [recognize_linked_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py
- [recognize_pii_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py
- [recognize_pii_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py
- [analyze_healthcare_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_healthcare_entities.py
- [analyze_healthcare_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_healthcare_entities_async.py
- [analyze_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_actions.py
- [analyze_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_actions_async.py
- [opinion_mining_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment_with_opinion_mining.py
- [opinion_mining_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_with_opinion_mining_async.py
- [recognize_custom_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py
- [recognize_custom_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_custom_entities_async.py
- [single_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py
- [single_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_single_label_classify_async.py
- [multi_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py
- [multi_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_multi_label_classify_async.py
- [healthcare_action_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_healthcare_action.py
- [extract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py
- [extract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_summary_async.py
- [abstractive_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_abstractive_summary.py
- [abstractive_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_abstractive_summary_async.py
- [dynamic_classification_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_dynamic_classification.py
- [dynamic_classification_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_dynamic_classification_async.py
- [recognize_entity_resolutions_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entity_resolutions.py
- [recognize_entity_resolutions_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entity_resolutions_async.py
+ [sample_authentication]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_authentication.py
+ [sample_authentication_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py
+ [detect_language_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_detect_language.py
+ [detect_language_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py
+ [analyze_sentiment_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_sentiment.py
+ [analyze_sentiment_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py
+ [extract_key_phrases_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
+ [extract_key_phrases_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py
+ [recognize_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_entities.py
+ [recognize_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py
+ [recognize_linked_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py
+ [recognize_linked_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py
+ [recognize_pii_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py
+ [recognize_pii_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py
+ [analyze_healthcare_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_healthcare_entities.py
+ [analyze_healthcare_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_healthcare_entities_async.py
+ [analyze_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_actions.py
+ [analyze_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_analyze_actions_async.py
+ [recognize_custom_entities_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py
+ [recognize_custom_entities_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_recognize_custom_entities_async.py
+ [single_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_single_label_classify.py
+ [single_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_single_label_classify_async.py
+ [multi_label_classify_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_multi_label_classify.py
+ [multi_label_classify_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_multi_label_classify_async.py
+ [healthcare_action_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_analyze_healthcare_action.py
+ [extract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_extract_summary.py
+ [extract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_extract_summary_async.py
+ [abstract_summary_sample]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/sample_abstract_summary.py
+ [abstract_summary_sample_async]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-textanalytics/samples/async_samples/sample_abstract_summary_async.py
  [cla]: https://cla.microsoft.com
  [code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
  [coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
  [coc_contact]: mailto:opencode@microsoft.com
+ # Release History

+ ## 6.0.0b1 (2025-09-11)

- # Release History
+ This version of the client library defaults to the service API version `2025-05-15-preview`.
+
+ ### Features Added
+
+ - Added Value Exclusion, synonyms, and new entity types to the detection of Personally Identifiable Information (PII).
+
+ ### Breaking Changes
+
+ - Removed `begin_abstract_summary` for abstractive text summarization; added function `begin_analyze_text_job` with `AbstractiveSummarizationOperationAction` for this purpose.
+ - Removed `begin_analyze_healthcare_entities` for analyzing healthcare entities; added function `begin_analyze_text_job` with `HealthcareLROTask` for this purpose.
+ - Removed `analyze_sentiment` for analyzing sentiment; added function `analyze_text` with `TextSentimentAnalysisInput` for this purpose.
+ - Removed `detect_language` for detecting language; added function `analyze_text` with `LanguageDetectionTextInput` for this purpose.
+ - Removed `extract_key_phrases` for extracting key phrases; added function `analyze_text` with `TextKeyPhraseExtractionInput` for this purpose.
+ - Removed `begin_multi_label_classify` for classifying documents into multiple custom categories; added function `begin_analyze_text_job` with `CustomMultiLabelClassificationActionContent` for this purpose.
+ - Removed `begin_recognize_custom_entities` for recognizing custom entities in documents; added function `begin_analyze_text_job` with `CustomEntitiesLROTask` for this purpose.
+ - Removed `recognize_entities` for recognizing named entities in a batch of documents; added function `analyze_text` with `TextEntityRecognitionInput` for this purpose.
+ - Removed `recognize_linked_entities` for detecting linked entities in a batch of documents; added function `analyze_text` with `TextEntityLinkingInput` for this purpose.
+ - Removed `recognize_pii_entities` for recognizing personally identifiable information in a batch of documents; added function `analyze_text` with `TextPiiEntitiesRecognitionInput` for this purpose.
+ - Removed `begin_single_label_classify` for classifying documents into a single custom category; added function `begin_analyze_text_job` with `CustomSingleLabelClassificationOperationAction` for this purpose.
+
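+ The bullets above all follow one pattern: a task-specific client method is replaced by a generic call that takes a task-specific input model. A rough migration sketch for sentiment analysis is below. Only `analyze_text` and `TextSentimentAnalysisInput` come from these notes; the client class name `TextAnalysisClient` and the wrapper models `MultiLanguageTextInput`/`MultiLanguageInput` are assumptions that may differ from the shipped API, and the endpoint/key are placeholders.

```python
# Hypothetical migration sketch (not the verbatim 6.0.0b1 API):
# the removed analyze_sentiment method becomes analyze_text with a
# TextSentimentAnalysisInput describing the task and its documents.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalysisClient  # client name assumed
from azure.ai.textanalytics.models import (  # wrapper model names assumed
    MultiLanguageInput,
    MultiLanguageTextInput,
    TextSentimentAnalysisInput,
)

client = TextAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

# Previously: client.analyze_sentiment(["The rooms were beautiful."])
result = client.analyze_text(
    body=TextSentimentAnalysisInput(
        text_input=MultiLanguageTextInput(
            multi_language_inputs=[
                MultiLanguageInput(id="1", text="The rooms were beautiful.")
            ]
        )
    )
)
```

+ The same shape applies to the other one-shot tasks (`LanguageDetectionTextInput`, `TextKeyPhraseExtractionInput`, etc.), while the long-running tasks move to `begin_analyze_text_job` with the corresponding action objects.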
1198
+ ### Other Changes
1199
+
1200
+ - Added custom poller `AnalyzeTextLROPoller` and `AnalyzeTextAsyncLROPoller` to customize the return type of `begin_analyze_text_job` to be `AnalyzeTextLROPoller[ItemPaged["TextActions"]]` and `AnalyzeTextAsyncLROPoller[AsyncItemPaged["TextActions"]]`
1201
+
1202
+ ## 5.3.0 (2023-06-15)
1203
+
1204
+ This version of the client library defaults to the service API version `2023-04-01`.
1205
+
1206
+ ### Breaking Changes
1207
+
1208
+ > Note: The following changes are only breaking from the previous beta. They are not breaking against previous stable versions.
1209
+
1210
+ - Renamed model `ExtractSummaryAction` to `ExtractiveSummaryAction`.
1211
+ - Renamed model `ExtractSummaryResult` to `ExtractiveSummaryResult`.
1212
+ - Renamed client method `begin_abstractive_summary` to `begin_abstract_summary`.
1213
+ - Removed `dynamic_classification` client method and related types: `DynamicClassificationResult` and `ClassificationType`.
1214
+ - Removed keyword arguments `fhir_version` and `document_type` from `begin_analyze_healthcare_entities` and `AnalyzeHealthcareEntitiesAction`.
1215
+ - Removed property `fhir_bundle` from `AnalyzeHealthcareEntitiesResult`.
1216
+ - Removed enum `HealthcareDocumentType`.
1217
+ - Removed property `resolutions` from `CategorizedEntity`.
1218
+ - Removed models and enums related to resolutions: `ResolutionKind`, `AgeResolution`, `AreaResolution`,
1219
+ `CurrencyResolution`, `DateTimeResolution`, `InformationResolution`, `LengthResolution`,
1220
+ `NumberResolution`, `NumericRangeResolution`, `OrdinalResolution`, `SpeedResolution`, `TemperatureResolution`,
1221
+ `TemporalSpanResolution`, `VolumeResolution`, `WeightResolution`, `AgeUnit`, `AreaUnit`, `TemporalModifier`,
1222
+ `InformationUnit`, `LengthUnit`, `NumberKind`, `RangeKind`, `RelativeTo`, `SpeedUnit`, `TemperatureUnit`,
1223
+ `VolumeUnit`, `DateTimeSubKind`, and `WeightUnit`.
1224
+ - Removed property `detected_language` from `RecognizeEntitiesResult`, `RecognizePiiEntitiesResult`, `AnalyzeHealthcareEntitiesResult`,
1225
+ `ExtractKeyPhrasesResult`, `RecognizeLinkedEntitiesResult`, `AnalyzeSentimentResult`, `RecognizeCustomEntitiesResult`,
1226
+ `ClassifyDocumentResult`, `ExtractSummaryResult`, and `AbstractSummaryResult`.
1227
+ - Removed property `script` from `DetectedLanguage`.
1228
+
1229
+ ### Features Added
1230
+
1231
+ - New enum values added for `HealthcareEntityCategory` and `HealthcareEntityRelation`.
967
1232
 
968
1233
  ## 5.3.0b2 (2023-03-07)
969
1234
 
@@ -1175,7 +1440,7 @@ is this diagnosis conditional on a symptom?

  **Known Issues**

- - `begin_analyze_healthcare_entities` is currently in gated preview and can not be used with AAD credentials. For more information, see [the Text Analytics for Health documentation](https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
+ - `begin_analyze_healthcare_entities` is currently in gated preview and can not be used with AAD credentials. For more information, see [the Text Analytics for Health documentation](https://learn.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
  - At time of this SDK release, the service is not respecting the value passed through `model_version` to `begin_analyze_healthcare_entities`, it only uses the latest model.

  ## 5.1.0b5 (2021-02-10)
@@ -1213,7 +1478,7 @@ the service client to the poller object returned from `begin_analyze_healthcare_

  **New Features**
  - We have added method `begin_analyze`, which supports long-running batch process of Named Entity Recognition, Personally identifiable Information, and Key Phrase Extraction. To use, you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client.
- - We have added method `begin_analyze_healthcare`, which supports the service's Health API. Since the Health API is currently only available in a gated preview, you need to have your subscription on the service's allow list, and you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client. Note that since this is a gated preview, AAD is not supported. More information [here](https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).
+ - We have added method `begin_analyze_healthcare`, which supports the service's Health API. Since the Health API is currently only available in a gated preview, you need to have your subscription on the service's allow list, and you must specify `api_version=TextAnalyticsApiVersion.V3_1_PREVIEW_3` when creating your client. Note that since this is a gated preview, AAD is not supported. More information [here](https://learn.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner#request-access-to-the-public-preview).


  ## 5.1.0b2 (2020-10-06)