elasticsearch 9.0.2__py3-none-any.whl → 9.0.3__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- elasticsearch/_async/client/__init__.py +42 -198
- elasticsearch/_async/client/cat.py +393 -25
- elasticsearch/_async/client/cluster.py +14 -4
- elasticsearch/_async/client/eql.py +10 -2
- elasticsearch/_async/client/esql.py +17 -4
- elasticsearch/_async/client/indices.py +87 -43
- elasticsearch/_async/client/inference.py +108 -3
- elasticsearch/_async/client/ingest.py +0 -7
- elasticsearch/_async/client/license.py +4 -4
- elasticsearch/_async/client/ml.py +6 -17
- elasticsearch/_async/client/monitoring.py +1 -1
- elasticsearch/_async/client/rollup.py +1 -22
- elasticsearch/_async/client/security.py +11 -17
- elasticsearch/_async/client/snapshot.py +6 -0
- elasticsearch/_async/client/synonyms.py +1 -0
- elasticsearch/_async/client/watcher.py +4 -2
- elasticsearch/_sync/client/__init__.py +42 -198
- elasticsearch/_sync/client/cat.py +393 -25
- elasticsearch/_sync/client/cluster.py +14 -4
- elasticsearch/_sync/client/eql.py +10 -2
- elasticsearch/_sync/client/esql.py +17 -4
- elasticsearch/_sync/client/indices.py +87 -43
- elasticsearch/_sync/client/inference.py +108 -3
- elasticsearch/_sync/client/ingest.py +0 -7
- elasticsearch/_sync/client/license.py +4 -4
- elasticsearch/_sync/client/ml.py +6 -17
- elasticsearch/_sync/client/monitoring.py +1 -1
- elasticsearch/_sync/client/rollup.py +1 -22
- elasticsearch/_sync/client/security.py +11 -17
- elasticsearch/_sync/client/snapshot.py +6 -0
- elasticsearch/_sync/client/synonyms.py +1 -0
- elasticsearch/_sync/client/watcher.py +4 -2
- elasticsearch/_version.py +1 -1
- elasticsearch/compat.py +5 -0
- elasticsearch/dsl/__init__.py +2 -1
- elasticsearch/dsl/document_base.py +176 -16
- elasticsearch/dsl/field.py +222 -47
- elasticsearch/dsl/query.py +7 -4
- elasticsearch/dsl/types.py +105 -80
- elasticsearch/dsl/utils.py +1 -1
- elasticsearch/{dsl/_sync/_sync_check → esql}/__init__.py +2 -0
- elasticsearch/esql/esql.py +1105 -0
- elasticsearch/esql/functions.py +1738 -0
- {elasticsearch-9.0.2.dist-info → elasticsearch-9.0.3.dist-info}/METADATA +1 -1
- {elasticsearch-9.0.2.dist-info → elasticsearch-9.0.3.dist-info}/RECORD +48 -52
- elasticsearch/dsl/_sync/_sync_check/document.py +0 -514
- elasticsearch/dsl/_sync/_sync_check/faceted_search.py +0 -50
- elasticsearch/dsl/_sync/_sync_check/index.py +0 -597
- elasticsearch/dsl/_sync/_sync_check/mapping.py +0 -49
- elasticsearch/dsl/_sync/_sync_check/search.py +0 -230
- elasticsearch/dsl/_sync/_sync_check/update_by_query.py +0 -45
- {elasticsearch-9.0.2.dist-info → elasticsearch-9.0.3.dist-info}/WHEEL +0 -0
- {elasticsearch-9.0.2.dist-info → elasticsearch-9.0.3.dist-info}/licenses/LICENSE +0 -0
- {elasticsearch-9.0.2.dist-info → elasticsearch-9.0.3.dist-info}/licenses/NOTICE +0 -0
@@ -366,6 +366,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
         .. raw:: html
@@ -374,13 +375,35 @@ class InferenceClient(NamespacedClient):
           <p>IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
           For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models.
           However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.</p>
+          <p>The following integrations are available through the inference API. You can find the available task types next to the integration name:</p>
+          <ul>
+          <li>AlibabaCloud AI Search (<code>completion</code>, <code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code>)</li>
+          <li>Amazon Bedrock (<code>completion</code>, <code>text_embedding</code>)</li>
+          <li>Anthropic (<code>completion</code>)</li>
+          <li>Azure AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
+          <li>Azure OpenAI (<code>completion</code>, <code>text_embedding</code>)</li>
+          <li>Cohere (<code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Elasticsearch (<code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code> - this service is for built-in models and models uploaded through Eland)</li>
+          <li>ELSER (<code>sparse_embedding</code>)</li>
+          <li>Google AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
+          <li>Google Vertex AI (<code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Hugging Face (<code>text_embedding</code>)</li>
+          <li>Mistral (<code>text_embedding</code>)</li>
+          <li>OpenAI (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
+          <li>VoyageAI (<code>text_embedding</code>, <code>rerank</code>)</li>
+          <li>Watsonx inference integration (<code>text_embedding</code>)</li>
+          <li>JinaAI (<code>text_embedding</code>, <code>rerank</code>)</li>
+          </ul>


         `<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put>`_

         :param inference_id: The inference Id
         :param inference_config:
-        :param task_type: The task type
+        :param task_type: The task type. Refer to the integration list in the API description
+            for the available task types.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if inference_id in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'inference_id'")
@@ -411,6 +434,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         __body = inference_config if inference_config is not None else body
         __headers = {"accept": "application/json", "content-type": "application/json"}
         return await self.perform_request(  # type: ignore[return-value]
@@ -446,6 +471,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -466,6 +492,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -492,6 +520,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -537,13 +567,14 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
         .. raw:: html

           <p>Create an Amazon Bedrock inference endpoint.</p>
-          <p>
+          <p>Create an inference endpoint to perform an inference task with the <code>amazonbedrock</code> service.</p>
           <blockquote>
           <p>info
           You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.</p>
@@ -561,6 +592,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -587,6 +620,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -632,6 +667,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -653,6 +689,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -679,6 +717,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -724,6 +764,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -744,6 +785,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -770,6 +813,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -815,6 +860,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -843,6 +889,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -869,6 +917,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -914,6 +964,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -934,6 +985,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -958,6 +1011,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1005,6 +1060,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1039,6 +1095,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1065,6 +1123,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1104,6 +1164,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1136,6 +1197,8 @@ class InferenceClient(NamespacedClient):
         :param service_settings: Settings used to install the inference model. These
             settings are specific to the `elser` service.
         :param chunking_settings: The chunking configuration object.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1160,6 +1223,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1197,6 +1262,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1215,6 +1281,8 @@ class InferenceClient(NamespacedClient):
         :param service_settings: Settings used to install the inference model. These
             settings are specific to the `googleaistudio` service.
         :param chunking_settings: The chunking configuration object.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1241,6 +1309,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1284,6 +1354,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1304,6 +1375,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1330,6 +1403,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1369,6 +1444,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1400,6 +1476,8 @@ class InferenceClient(NamespacedClient):
         :param service_settings: Settings used to install the inference model. These
             settings are specific to the `hugging_face` service.
         :param chunking_settings: The chunking configuration object.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1426,6 +1504,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1469,6 +1549,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1491,6 +1572,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1515,6 +1598,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1554,6 +1639,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1573,6 +1659,8 @@ class InferenceClient(NamespacedClient):
         :param service_settings: Settings used to install the inference model. These
             settings are specific to the `mistral` service.
         :param chunking_settings: The chunking configuration object.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1597,6 +1685,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1642,6 +1732,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1664,6 +1755,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
        """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1688,6 +1781,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1733,6 +1828,7 @@ class InferenceClient(NamespacedClient):
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
         task_settings: t.Optional[t.Mapping[str, t.Any]] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1754,6 +1850,8 @@ class InferenceClient(NamespacedClient):
         :param chunking_settings: The chunking configuration object.
         :param task_settings: Settings to configure the inference task. These settings
             are specific to the task type you specified.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1778,6 +1876,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1816,6 +1916,7 @@ class InferenceClient(NamespacedClient):
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         body: t.Optional[t.Dict[str, t.Any]] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -1836,6 +1937,8 @@ class InferenceClient(NamespacedClient):
             this case, `watsonxai`.
         :param service_settings: Settings used to install the inference model. These
             settings are specific to the `watsonxai` service.
+        :param timeout: Specifies the amount of time to wait for the inference endpoint
+            to be created.
         """
         if task_type in SKIP_IN_PATH:
             raise ValueError("Empty value passed for parameter 'task_type'")
@@ -1860,6 +1963,8 @@ class InferenceClient(NamespacedClient):
             __query["human"] = human
         if pretty is not None:
             __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
         if not __body:
             if service is not None:
                 __body["service"] = service
@@ -1900,7 +2005,7 @@ class InferenceClient(NamespacedClient):
         """
         .. raw:: html

-          <p>Perform
+          <p>Perform reranking inference on the service</p>


         `<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-inference>`_
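Every `put_*` hunk above makes the same three-part change: a new optional `timeout` keyword in the signature, a `:param timeout:` docstring entry, and a guard that copies the value into the query string when it is set. A minimal standalone sketch of that query-building pattern (the function name is illustrative, not the real client internals):

```python
import typing as t


def build_query(
    pretty: t.Optional[bool] = None,
    timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
) -> t.Dict[str, t.Any]:
    """Collect only the keyword arguments that were actually set."""
    query: t.Dict[str, t.Any] = {}
    if pretty is not None:
        query["pretty"] = pretty
    if timeout is not None:
        # Accepts a duration string like "30s", or the sentinel values -1 / 0.
        query["timeout"] = timeout
    return query


print(build_query(timeout="30s"))  # {'timeout': '30s'}
```

Unset keywords simply never appear in the query string, which is why the generated code checks `is not None` rather than truthiness (so `timeout=0` is still forwarded).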
@@ -288,7 +288,6 @@ class IngestClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
-        master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         pretty: t.Optional[bool] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
@@ -302,10 +301,6 @@ class IngestClient(NamespacedClient):
         :param id: Comma-separated list of database configuration IDs to retrieve. Wildcard
             (`*`) expressions are supported. To get all database configurations, omit
             this parameter or use `*`.
-        :param master_timeout: The period to wait for a connection to the master node.
-            If no response is received before the timeout expires, the request fails
-            and returns an error. A value of `-1` indicates that the request should never
-            time out.
         """
         __path_parts: t.Dict[str, str]
         if id not in SKIP_IN_PATH:
@@ -321,8 +316,6 @@ class IngestClient(NamespacedClient):
             __query["filter_path"] = filter_path
         if human is not None:
             __query["human"] = human
-        if master_timeout is not None:
-            __query["master_timeout"] = master_timeout
         if pretty is not None:
             __query["pretty"] = pretty
         __headers = {"accept": "application/json"}
@@ -353,7 +353,7 @@ class LicenseClient(NamespacedClient):
         human: t.Optional[bool] = None,
         master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         pretty: t.Optional[bool] = None,
-
+        type: t.Optional[str] = None,
     ) -> ObjectApiResponse[t.Any]:
         """
         .. raw:: html
@@ -370,7 +370,7 @@ class LicenseClient(NamespacedClient):
         :param acknowledge: whether the user has acknowledged acknowledge messages (default:
             false)
         :param master_timeout: Period to wait for a connection to the master node.
-        :param
+        :param type: The type of trial license to generate (default: "trial")
         """
         __path_parts: t.Dict[str, str] = {}
         __path = "/_license/start_trial"
@@ -387,8 +387,8 @@ class LicenseClient(NamespacedClient):
             __query["master_timeout"] = master_timeout
         if pretty is not None:
             __query["pretty"] = pretty
-        if
-        __query["
+        if type is not None:
+            __query["type"] = type
         __headers = {"accept": "application/json"}
         return await self.perform_request(  # type: ignore[return-value]
             "POST",
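The start_trial hunks above rename the trial-type keyword to `type` and forward it as the `type` query parameter. A minimal sketch of that query-string assembly, under the same only-when-not-None rule the diff shows (the `urlencode` serialization here is an approximation, not the client's exact encoding):

```python
from urllib.parse import urlencode

# Simplified sketch of how POST /_license/start_trial parameters are
# assembled after this change: each keyword is forwarded only when set.
def start_trial_query(acknowledge=None, master_timeout=None, type=None):
    query = {}
    if acknowledge is not None:
        query["acknowledge"] = acknowledge
    if master_timeout is not None:
        query["master_timeout"] = master_timeout
    if type is not None:
        query["type"] = type
    return urlencode(query)

print(start_trial_query(acknowledge=True, type="trial"))  # acknowledge=True&type=trial
```

Omitting `type` leaves the parameter off the request entirely, matching the `if type is not None` guard in the hunk.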
@@ -3549,7 +3549,8 @@ class MlClient(NamespacedClient):
          Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job.
          You can associate only one datafeed with each anomaly detection job.
          The datafeed contains a query that runs at a defined interval (<code>frequency</code>).
-          If you are concerned about delayed data, you can add a delay (<code>query_delay
+          If you are concerned about delayed data, you can add a delay (<code>query_delay</code>) at each interval.
+          By default, the datafeed uses the following query: <code>{"match_all": {"boost": 1}}</code>.</p>
          <p>When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had
          at the time of creation and runs the query using those same roles. If you provide secondary authorization headers,
          those credentials are used instead.
@@ -3871,13 +3872,7 @@ class MlClient(NamespacedClient):
         :param description: A description of the job.
         :param expand_wildcards: Type of index that wildcard patterns can match. If the
             request can target data streams, this argument determines whether wildcard
-            expressions match hidden data streams. Supports comma-separated values.
-            values are: * `all`: Match any data stream or index, including hidden ones.
-            * `closed`: Match closed, non-hidden indices. Also matches any non-hidden
-            data stream. Data streams cannot be closed. * `hidden`: Match hidden data
-            streams and hidden indices. Must be combined with `open`, `closed`, or both.
-            * `none`: Wildcard patterns are not accepted. * `open`: Match open, non-hidden
-            indices. Also matches any non-hidden data stream.
+            expressions match hidden data streams. Supports comma-separated values.
         :param groups: A list of job groups. A job can belong to no groups or many.
         :param ignore_throttled: If `true`, concrete, expanded or aliased indices are
             ignored when frozen.
@@ -4999,7 +4994,7 @@ class MlClient(NamespacedClient):
          <p>Update a data frame analytics job.</p>


-        `<https://www.elastic.co/docs/api/doc/elasticsearch/
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-update-data-frame-analytics>`_

         :param id: Identifier for the data frame analytics job. This identifier can contain
             lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores.
@@ -5140,13 +5135,7 @@ class MlClient(NamespacedClient):
             check runs only on real-time datafeeds.
         :param expand_wildcards: Type of index that wildcard patterns can match. If the
             request can target data streams, this argument determines whether wildcard
-            expressions match hidden data streams. Supports comma-separated values.
-            values are: * `all`: Match any data stream or index, including hidden ones.
-            * `closed`: Match closed, non-hidden indices. Also matches any non-hidden
-            data stream. Data streams cannot be closed. * `hidden`: Match hidden data
-            streams and hidden indices. Must be combined with `open`, `closed`, or both.
-            * `none`: Wildcard patterns are not accepted. * `open`: Match open, non-hidden
-            indices. Also matches any non-hidden data stream.
+            expressions match hidden data streams. Supports comma-separated values.
         :param frequency: The interval at which scheduled queries are made while the
             datafeed runs in real time. The default value is either the bucket span for
             short bucket spans, or, for longer bucket spans, a sensible fraction of the
@@ -5801,7 +5790,7 @@ class MlClient(NamespacedClient):
          <p>Validate an anomaly detection job.</p>


-        `<https://www.elastic.co/docs/api/doc/elasticsearch
+        `<https://www.elastic.co/docs/api/doc/elasticsearch>`_

         :param detector:
         """
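The first ml.py hunk above completes the datafeed docstring: a datafeed runs its query every `frequency`, can be delayed by `query_delay` to tolerate late-arriving data, and defaults to `{"match_all": {"boost": 1}}` when no query is given. A sketch of a put-datafeed body built on that default (the job id and index pattern are hypothetical):

```python
# Default query from the docstring above, applied when none is supplied.
DEFAULT_DATAFEED_QUERY = {"match_all": {"boost": 1}}

def datafeed_body(job_id, indices, query=None, frequency=None, query_delay=None):
    body = {
        "job_id": job_id,
        "indices": indices,
        "query": query if query is not None else DEFAULT_DATAFEED_QUERY,
    }
    if frequency is not None:  # interval between scheduled query runs
        body["frequency"] = frequency
    if query_delay is not None:  # delay to tolerate late-arriving data
        body["query_delay"] = query_delay
    return body

body = datafeed_body("my-job", ["sensor-*"], query_delay="60s")
```

Passing an explicit `query` replaces the default rather than being combined with it.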
@@ -48,7 +48,7 @@ class MonitoringClient(NamespacedClient):
          This API is used by the monitoring features to send monitoring data.</p>


-        `<https://www.elastic.co/docs/api/doc/elasticsearch
+        `<https://www.elastic.co/docs/api/doc/elasticsearch>`_

         :param interval: Collection interval (e.g., '10s' or '10000ms') of the payload
         :param operations:
@@ -419,28 +419,7 @@ class RollupClient(NamespacedClient):
          The following functionality is not available:</p>
          <p><code>size</code>: Because rollups work on pre-aggregated data, no search hits can be returned and so size must be set to zero or omitted entirely.
          <code>highlighter</code>, <code>suggestors</code>, <code>post_filter</code>, <code>profile</code>, <code>explain</code>: These are similarly disallowed.</p>
-          <p
-          <p>The rollup search API has the capability to search across both "live" non-rollup data and the aggregated rollup data.
-          This is done by simply adding the live indices to the URI. For example:</p>
-          <pre><code>GET sensor-1,sensor_rollup/_rollup_search
-          {
-            "size": 0,
-            "aggregations": {
-              "max_temperature": {
-                "max": {
-                  "field": "temperature"
-                }
-              }
-            }
-          }
-          </code></pre>
-          <p>The rollup search endpoint does two things when the search runs:</p>
-          <ul>
-          <li>The original request is sent to the non-rollup index unaltered.</li>
-          <li>A rewritten version of the original request is sent to the rollup index.</li>
-          </ul>
-          <p>When the two responses are received, the endpoint rewrites the rollup response and merges the two together.
-          During the merging process, if there is any overlap in buckets between the two responses, the buckets from the non-rollup index are used.</p>
+          <p>For more detailed examples of using the rollup search API, including querying rolled-up data only or combining rolled-up and live data, refer to the External documentation.</p>


         `<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-rollup-rollup-search>`_
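The worked example that this hunk drops from the docstring is still a useful illustration: a single rollup search over both a live index and its rollup index, with `size` forced to 0 and one max aggregation. Restated as the index string and request body a client call would send (index and field names taken from the removed example):

```python
# The removed docstring example as request data: search sensor-1 (live) and
# sensor_rollup (rolled up) together via GET {index}/_rollup_search.
rollup_index = "sensor-1,sensor_rollup"
rollup_body = {
    "size": 0,  # rollups are pre-aggregated, so no hits may be requested
    "aggregations": {
        "max_temperature": {"max": {"field": "temperature"}}
    },
}
```

Per the removed text, the endpoint sends the original request to the live index, a rewritten version to the rollup index, and merges the responses, preferring live buckets on overlap.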