databricks-sdk 0.42.0.tar.gz → 0.44.0.tar.gz

This diff compares the contents of two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.

Potentially problematic release: this version of databricks-sdk might be problematic.

Files changed (100)
  1. {databricks-sdk-0.42.0/databricks_sdk.egg-info → databricks-sdk-0.44.0}/PKG-INFO +1 -1
  2. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/__init__.py +23 -2
  3. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/apps.py +6 -0
  4. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/billing.py +31 -1
  5. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/catalog.py +26 -1
  6. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/cleanrooms.py +4 -73
  7. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/compute.py +50 -42
  8. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/dashboards.py +438 -0
  9. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/jobs.py +7 -0
  10. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/serving.py +1 -1
  11. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/settings.py +254 -84
  12. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/sharing.py +15 -6
  13. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/sql.py +145 -18
  14. databricks-sdk-0.44.0/databricks/sdk/version.py +1 -0
  15. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0/databricks_sdk.egg-info}/PKG-INFO +1 -1
  16. databricks-sdk-0.42.0/databricks/sdk/version.py +0 -1
  17. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/LICENSE +0 -0
  18. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/NOTICE +0 -0
  19. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/README.md +0 -0
  20. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/__init__.py +0 -0
  21. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/_base_client.py +0 -0
  22. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/_property.py +0 -0
  23. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/_widgets/__init__.py +0 -0
  24. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/_widgets/default_widgets_utils.py +0 -0
  25. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/_widgets/ipywidgets_utils.py +0 -0
  26. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/azure.py +0 -0
  27. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/casing.py +0 -0
  28. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/clock.py +0 -0
  29. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/config.py +0 -0
  30. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/core.py +0 -0
  31. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/credentials_provider.py +0 -0
  32. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/data_plane.py +0 -0
  33. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/dbutils.py +0 -0
  34. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/environments.py +0 -0
  35. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/__init__.py +0 -0
  36. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/base.py +0 -0
  37. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/customizer.py +0 -0
  38. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/deserializer.py +0 -0
  39. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/mapper.py +0 -0
  40. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/overrides.py +0 -0
  41. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/parser.py +0 -0
  42. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/platform.py +0 -0
  43. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/private_link.py +0 -0
  44. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/errors/sdk.py +0 -0
  45. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/logger/__init__.py +0 -0
  46. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/logger/round_trip_logger.py +0 -0
  47. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/__init__.py +0 -0
  48. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/compute.py +0 -0
  49. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/files.py +0 -0
  50. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/jobs.py +0 -0
  51. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/open_ai_client.py +0 -0
  52. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/mixins/workspace.py +0 -0
  53. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/oauth.py +0 -0
  54. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/py.typed +0 -0
  55. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/retries.py +0 -0
  56. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/runtime/__init__.py +0 -0
  57. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/runtime/dbutils_stub.py +0 -0
  58. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/__init__.py +0 -0
  59. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/_internal.py +0 -0
  60. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/files.py +0 -0
  61. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/iam.py +0 -0
  62. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/marketplace.py +0 -0
  63. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/ml.py +0 -0
  64. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/oauth2.py +0 -0
  65. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/pipelines.py +0 -0
  66. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/provisioning.py +0 -0
  67. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/vectorsearch.py +0 -0
  68. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/service/workspace.py +0 -0
  69. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks/sdk/useragent.py +0 -0
  70. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks_sdk.egg-info/SOURCES.txt +0 -0
  71. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks_sdk.egg-info/dependency_links.txt +0 -0
  72. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks_sdk.egg-info/requires.txt +0 -0
  73. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/databricks_sdk.egg-info/top_level.txt +0 -0
  74. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/setup.cfg +0 -0
  75. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/setup.py +0 -0
  76. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_auth.py +0 -0
  77. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_auth_manual_tests.py +0 -0
  78. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_base_client.py +0 -0
  79. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_client.py +0 -0
  80. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_compute_mixins.py +0 -0
  81. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_config.py +0 -0
  82. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_core.py +0 -0
  83. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_credentials_provider.py +0 -0
  84. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_data_plane.py +0 -0
  85. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_dbfs_mixins.py +0 -0
  86. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_dbutils.py +0 -0
  87. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_environments.py +0 -0
  88. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_errors.py +0 -0
  89. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_files.py +0 -0
  90. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_init_file.py +0 -0
  91. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_internal.py +0 -0
  92. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_jobs.py +0 -0
  93. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_jobs_mixin.py +0 -0
  94. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_metadata_service_auth.py +0 -0
  95. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_misc.py +0 -0
  96. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_model_serving_auth.py +0 -0
  97. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_oauth.py +0 -0
  98. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_open_ai_mixin.py +0 -0
  99. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_retries.py +0 -0
  100. {databricks-sdk-0.42.0 → databricks-sdk-0.44.0}/tests/test_user_agent.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: databricks-sdk
- Version: 0.42.0
+ Version: 0.44.0
  Summary: Databricks SDK for Python (Beta)
  Home-page: https://databricks-sdk-py.readthedocs.io
  Author: Serge Smertin
@@ -43,7 +43,9 @@ from databricks.sdk.service.compute import (ClusterPoliciesAPI, ClustersAPI,
  InstanceProfilesAPI, LibrariesAPI,
  PolicyComplianceForClustersAPI,
  PolicyFamiliesAPI)
- from databricks.sdk.service.dashboards import GenieAPI, LakeviewAPI
+ from databricks.sdk.service.dashboards import (GenieAPI, LakeviewAPI,
+ LakeviewEmbeddedAPI,
+ QueryExecutionAPI)
  from databricks.sdk.service.files import DbfsAPI, FilesAPI
  from databricks.sdk.service.iam import (AccessControlAPI,
  AccountAccessControlAPI,
@@ -97,7 +99,8 @@ from databricks.sdk.service.sql import (AlertsAPI, AlertsLegacyAPI,
  QueryHistoryAPI,
  QueryVisualizationsAPI,
  QueryVisualizationsLegacyAPI,
- StatementExecutionAPI, WarehousesAPI)
+ RedashConfigAPI, StatementExecutionAPI,
+ WarehousesAPI)
  from databricks.sdk.service.vectorsearch import (VectorSearchEndpointsAPI,
  VectorSearchIndexesAPI)
  from databricks.sdk.service.workspace import (GitCredentialsAPI, ReposAPI,
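These import changes double as the quickest upgrade check: the three new names resolve only after upgrading past 0.42.0. A minimal smoke test using only classes named in this hunk:

    from databricks.sdk.service.dashboards import (GenieAPI, LakeviewAPI,
                                                   LakeviewEmbeddedAPI,
                                                   QueryExecutionAPI)
    from databricks.sdk.service.sql import RedashConfigAPI

    # An ImportError here means the environment still has the old SDK installed.
    print(LakeviewEmbeddedAPI, QueryExecutionAPI, RedashConfigAPI)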
@@ -233,6 +236,7 @@ class WorkspaceClient:
  self._ip_access_lists = service.settings.IpAccessListsAPI(self._api_client)
  self._jobs = JobsExt(self._api_client)
  self._lakeview = service.dashboards.LakeviewAPI(self._api_client)
+ self._lakeview_embedded = service.dashboards.LakeviewEmbeddedAPI(self._api_client)
  self._libraries = service.compute.LibrariesAPI(self._api_client)
  self._metastores = service.catalog.MetastoresAPI(self._api_client)
  self._model_registry = service.ml.ModelRegistryAPI(self._api_client)
@@ -259,11 +263,13 @@ class WorkspaceClient:
  self._quality_monitors = service.catalog.QualityMonitorsAPI(self._api_client)
  self._queries = service.sql.QueriesAPI(self._api_client)
  self._queries_legacy = service.sql.QueriesLegacyAPI(self._api_client)
+ self._query_execution = service.dashboards.QueryExecutionAPI(self._api_client)
  self._query_history = service.sql.QueryHistoryAPI(self._api_client)
  self._query_visualizations = service.sql.QueryVisualizationsAPI(self._api_client)
  self._query_visualizations_legacy = service.sql.QueryVisualizationsLegacyAPI(self._api_client)
  self._recipient_activation = service.sharing.RecipientActivationAPI(self._api_client)
  self._recipients = service.sharing.RecipientsAPI(self._api_client)
+ self._redash_config = service.sql.RedashConfigAPI(self._api_client)
  self._registered_models = service.catalog.RegisteredModelsAPI(self._api_client)
  self._repos = service.workspace.ReposAPI(self._api_client)
  self._resource_quotas = service.catalog.ResourceQuotasAPI(self._api_client)
@@ -509,6 +515,11 @@ class WorkspaceClient:
  """These APIs provide specific management operations for Lakeview dashboards."""
  return self._lakeview

+ @property
+ def lakeview_embedded(self) -> service.dashboards.LakeviewEmbeddedAPI:
+ """Token-based Lakeview APIs for embedding dashboards in external applications."""
+ return self._lakeview_embedded
+
  @property
  def libraries(self) -> service.compute.LibrariesAPI:
  """The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster."""
@@ -625,6 +636,11 @@ class WorkspaceClient:
  """These endpoints are used for CRUD operations on query definitions."""
  return self._queries_legacy

+ @property
+ def query_execution(self) -> service.dashboards.QueryExecutionAPI:
+ """Query execution APIs for AI / BI Dashboards."""
+ return self._query_execution
+
  @property
  def query_history(self) -> service.sql.QueryHistoryAPI:
  """A service responsible for storing and retrieving the list of queries run against SQL endpoints and serverless compute."""
@@ -650,6 +666,11 @@ class WorkspaceClient:
  """A recipient is an object you create using :method:recipients/create to represent an organization which you want to allow access shares."""
  return self._recipients

+ @property
+ def redash_config(self) -> service.sql.RedashConfigAPI:
+ """Redash V2 service for workspace configurations (internal)."""
+ return self._redash_config
+
  @property
  def registered_models(self) -> service.catalog.RegisteredModelsAPI:
  """Databricks provides a hosted version of MLflow Model Registry in Unity Catalog."""
@@ -45,6 +45,9 @@ class App:
  description: Optional[str] = None
  """The description of the app."""

+ id: Optional[str] = None
+ """The unique identifier of the app."""
+
  pending_deployment: Optional[AppDeployment] = None
  """The pending deployment of the app. A deployment is considered pending when it is being prepared
  for deployment to the app compute."""
@@ -78,6 +81,7 @@ class App:
  if self.default_source_code_path is not None:
  body['default_source_code_path'] = self.default_source_code_path
  if self.description is not None: body['description'] = self.description
+ if self.id is not None: body['id'] = self.id
  if self.name is not None: body['name'] = self.name
  if self.pending_deployment: body['pending_deployment'] = self.pending_deployment.as_dict()
  if self.resources: body['resources'] = [v.as_dict() for v in self.resources]
@@ -102,6 +106,7 @@ class App:
  if self.default_source_code_path is not None:
  body['default_source_code_path'] = self.default_source_code_path
  if self.description is not None: body['description'] = self.description
+ if self.id is not None: body['id'] = self.id
  if self.name is not None: body['name'] = self.name
  if self.pending_deployment: body['pending_deployment'] = self.pending_deployment
  if self.resources: body['resources'] = self.resources
@@ -125,6 +130,7 @@ class App:
  creator=d.get('creator', None),
  default_source_code_path=d.get('default_source_code_path', None),
  description=d.get('description', None),
+ id=d.get('id', None),
  name=d.get('name', None),
  pending_deployment=_from_dict(d, 'pending_deployment', AppDeployment),
  resources=_repeated_dict(d, 'resources', AppResource),
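The new id field round-trips through the generated serializers like any other optional attribute. A small sketch using only names visible in this hunk:

    from databricks.sdk.service.apps import App

    # Responses that carry the new field deserialize it...
    app = App.from_dict({'name': 'my-app', 'id': '1234-abcd'})
    assert app.id == '1234-abcd'

    # ...and it is emitted again when building a request body.
    assert app.as_dict()['id'] == '1234-abcd'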
@@ -894,6 +894,27 @@ class GetBudgetConfigurationResponse:
  return cls(budget=_from_dict(d, 'budget', BudgetConfiguration))


+ @dataclass
+ class LimitConfig:
+ """The limit configuration of the policy. Limit configuration provide a budget policy level cost
+ control by enforcing the limit."""
+
+ def as_dict(self) -> dict:
+ """Serializes the LimitConfig into a dictionary suitable for use as a JSON request body."""
+ body = {}
+ return body
+
+ def as_shallow_dict(self) -> dict:
+ """Serializes the LimitConfig into a shallow dictionary of its immediate attributes."""
+ body = {}
+ return body
+
+ @classmethod
+ def from_dict(cls, d: Dict[str, any]) -> LimitConfig:
+ """Deserializes the LimitConfig from a dictionary."""
+ return cls()
+
+
  @dataclass
  class ListBudgetConfigurationsResponse:
  budgets: Optional[List[BudgetConfiguration]] = None
@@ -1641,23 +1662,32 @@ class BudgetPolicyAPI:
  return
  query['page_token'] = json['next_page_token']

- def update(self, policy_id: str, *, policy: Optional[BudgetPolicy] = None) -> BudgetPolicy:
+ def update(self,
+ policy_id: str,
+ *,
+ limit_config: Optional[LimitConfig] = None,
+ policy: Optional[BudgetPolicy] = None) -> BudgetPolicy:
  """Update a budget policy.

  Updates a policy

  :param policy_id: str
  The Id of the policy. This field is generated by Databricks and globally unique.
+ :param limit_config: :class:`LimitConfig` (optional)
+ DEPRECATED. This is redundant field as LimitConfig is part of the BudgetPolicy
  :param policy: :class:`BudgetPolicy` (optional)
  Contains the BudgetPolicy details.

  :returns: :class:`BudgetPolicy`
  """
  body = policy.as_dict()
+ query = {}
+ if limit_config is not None: query['limit_config'] = limit_config.as_dict()
  headers = {'Accept': 'application/json', 'Content-Type': 'application/json', }

  res = self._api.do('PATCH',
  f'/api/2.1/accounts/{self._api.account_id}/budget-policies/{policy_id}',
+ query=query,
  body=body,
  headers=headers)
  return BudgetPolicy.from_dict(res)
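Because the new limit_config argument is documented as DEPRECATED on arrival, new code should carry the limit inside the BudgetPolicy itself. A hedged sketch of the updated call; the budget_policy accessor on AccountClient and the policy_name field are assumptions not shown in this hunk:

    from databricks.sdk import AccountClient
    from databricks.sdk.service.billing import BudgetPolicy

    a = AccountClient()

    # Accessor and field names are assumptions; only update()'s signature is in the diff.
    updated = a.budget_policy.update(
        policy_id='<policy-id>',
        policy=BudgetPolicy(policy_name='team-budget'),
    )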
@@ -6913,12 +6913,17 @@ class TemporaryCredentials:
  """Server time when the credential will expire, in epoch milliseconds. The API client is advised to
  cache the credential given this expiration time."""

+ gcp_oauth_token: Optional[GcpOauthToken] = None
+ """GCP temporary credentials for API authentication. Read more at
+ https://developers.google.com/identity/protocols/oauth2/service-account"""
+
  def as_dict(self) -> dict:
  """Serializes the TemporaryCredentials into a dictionary suitable for use as a JSON request body."""
  body = {}
  if self.aws_temp_credentials: body['aws_temp_credentials'] = self.aws_temp_credentials.as_dict()
  if self.azure_aad: body['azure_aad'] = self.azure_aad.as_dict()
  if self.expiration_time is not None: body['expiration_time'] = self.expiration_time
+ if self.gcp_oauth_token: body['gcp_oauth_token'] = self.gcp_oauth_token.as_dict()
  return body

  def as_shallow_dict(self) -> dict:
@@ -6927,6 +6932,7 @@ class TemporaryCredentials:
  if self.aws_temp_credentials: body['aws_temp_credentials'] = self.aws_temp_credentials
  if self.azure_aad: body['azure_aad'] = self.azure_aad
  if self.expiration_time is not None: body['expiration_time'] = self.expiration_time
+ if self.gcp_oauth_token: body['gcp_oauth_token'] = self.gcp_oauth_token
  return body

  @classmethod
@@ -6934,7 +6940,8 @@ class TemporaryCredentials:
  """Deserializes the TemporaryCredentials from a dictionary."""
  return cls(aws_temp_credentials=_from_dict(d, 'aws_temp_credentials', AwsCredentials),
  azure_aad=_from_dict(d, 'azure_aad', AzureActiveDirectoryToken),
- expiration_time=d.get('expiration_time', None))
+ expiration_time=d.get('expiration_time', None),
+ gcp_oauth_token=_from_dict(d, 'gcp_oauth_token', GcpOauthToken))


  @dataclass
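With this field, GCP joins AWS and Azure among the clouds whose short-lived credentials the temporary-credentials response can carry. A deserialization sketch; the key inside gcp_oauth_token is an assumption, since GcpOauthToken's own fields are not part of this hunk:

    from databricks.sdk.service.catalog import TemporaryCredentials

    payload = {
        'expiration_time': 1735689600000,
        # Inner field name is assumed; see GcpOauthToken in catalog.py.
        'gcp_oauth_token': {'oauth_token': 'ya29.example-token'},
    }
    creds = TemporaryCredentials.from_dict(payload)
    print(creds.gcp_oauth_token, creds.expiration_time)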
@@ -7043,6 +7050,9 @@ class UpdateCatalog:
  new_name: Optional[str] = None
  """New name for the catalog."""

+ options: Optional[Dict[str, str]] = None
+ """A map of key-value properties attached to the securable."""
+
  owner: Optional[str] = None
  """Username of current owner of catalog."""

@@ -7058,6 +7068,7 @@ class UpdateCatalog:
  if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value
  if self.name is not None: body['name'] = self.name
  if self.new_name is not None: body['new_name'] = self.new_name
+ if self.options: body['options'] = self.options
  if self.owner is not None: body['owner'] = self.owner
  if self.properties: body['properties'] = self.properties
  return body
@@ -7071,6 +7082,7 @@ class UpdateCatalog:
  if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode
  if self.name is not None: body['name'] = self.name
  if self.new_name is not None: body['new_name'] = self.new_name
+ if self.options: body['options'] = self.options
  if self.owner is not None: body['owner'] = self.owner
  if self.properties: body['properties'] = self.properties
  return body
@@ -7084,6 +7096,7 @@ class UpdateCatalog:
  isolation_mode=_enum(d, 'isolation_mode', CatalogIsolationMode),
  name=d.get('name', None),
  new_name=d.get('new_name', None),
+ options=d.get('options', None),
  owner=d.get('owner', None),
  properties=d.get('properties', None))

@@ -8970,6 +8983,7 @@ class CatalogsAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET', '/api/2.1/unity-catalog/catalogs', query=query, headers=headers)
  if 'catalogs' in json:
@@ -8986,6 +9000,7 @@ class CatalogsAPI:
  enable_predictive_optimization: Optional[EnablePredictiveOptimization] = None,
  isolation_mode: Optional[CatalogIsolationMode] = None,
  new_name: Optional[str] = None,
+ options: Optional[Dict[str, str]] = None,
  owner: Optional[str] = None,
  properties: Optional[Dict[str, str]] = None) -> CatalogInfo:
  """Update a catalog.
@@ -9003,6 +9018,8 @@ class CatalogsAPI:
  Whether the current securable is accessible from all workspaces or a specific set of workspaces.
  :param new_name: str (optional)
  New name for the catalog.
+ :param options: Dict[str,str] (optional)
+ A map of key-value properties attached to the securable.
  :param owner: str (optional)
  Username of current owner of catalog.
  :param properties: Dict[str,str] (optional)
@@ -9016,6 +9033,7 @@ class CatalogsAPI:
  body['enable_predictive_optimization'] = enable_predictive_optimization.value
  if isolation_mode is not None: body['isolation_mode'] = isolation_mode.value
  if new_name is not None: body['new_name'] = new_name
+ if options is not None: body['options'] = options
  if owner is not None: body['owner'] = owner
  if properties is not None: body['properties'] = properties
  headers = {'Accept': 'application/json', 'Content-Type': 'application/json', }
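Since options flows straight into the PATCH body, attaching options to a catalog becomes a one-line call; the catalog name and option keys below are illustrative:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Catalog name and option keys are illustrative.
    info = w.catalogs.update(name='main', options={'my_option': 'my_value'})
    print(info.name)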
@@ -9134,6 +9152,7 @@ class ConnectionsAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET', '/api/2.1/unity-catalog/connections', query=query, headers=headers)
  if 'connections' in json:
@@ -9639,6 +9658,7 @@ class ExternalLocationsAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET',
  '/api/2.1/unity-catalog/external-locations',
@@ -11372,6 +11392,7 @@ class SchemasAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET', '/api/2.1/unity-catalog/schemas', query=query, headers=headers)
  if 'schemas' in json:
@@ -11561,6 +11582,7 @@ class StorageCredentialsAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET',
  '/api/2.1/unity-catalog/storage-credentials',
@@ -11785,6 +11807,7 @@ class SystemSchemasAPI:
  if page_token is not None: query['page_token'] = page_token
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET',
  f'/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas',
@@ -12027,6 +12050,7 @@ class TablesAPI:
  if schema_name is not None: query['schema_name'] = schema_name
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET', '/api/2.1/unity-catalog/tables', query=query, headers=headers)
  if 'tables' in json:
@@ -12087,6 +12111,7 @@ class TablesAPI:
  if table_name_pattern is not None: query['table_name_pattern'] = table_name_pattern
  headers = {'Accept': 'application/json', }

+ if "max_results" not in query: query['max_results'] = 0
  while True:
  json = self._api.do('GET', '/api/2.1/unity-catalog/table-summaries', query=query, headers=headers)
  if 'tables' in json:
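The same guard appears in every Unity Catalog list method above: when the caller does not pass max_results, the SDK now sends max_results=0, apparently to let the server pick the page size rather than relying on an implicit default. Existing callers need no changes, because the generated iterators already follow next_page_token:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # max_results=0 is now sent implicitly; pagination stays inside the generator.
    for schema in w.schemas.list(catalog_name='main'):
        print(schema.full_name)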
@@ -289,24 +289,11 @@ class CleanRoomAssetNotebook:
  """Base 64 representation of the notebook contents. This is the same format as returned by
  :method:workspace/export with the format of **HTML**."""

- review_state: Optional[CleanRoomNotebookReviewNotebookReviewState] = None
- """top-level status derived from all reviews"""
-
- reviews: Optional[List[CleanRoomNotebookReview]] = None
- """All existing approvals or rejections"""
-
- runner_collaborators: Optional[List[CleanRoomCollaborator]] = None
- """collaborators that can run the notebook"""
-
  def as_dict(self) -> dict:
  """Serializes the CleanRoomAssetNotebook into a dictionary suitable for use as a JSON request body."""
  body = {}
  if self.etag is not None: body['etag'] = self.etag
  if self.notebook_content is not None: body['notebook_content'] = self.notebook_content
- if self.review_state is not None: body['review_state'] = self.review_state.value
- if self.reviews: body['reviews'] = [v.as_dict() for v in self.reviews]
- if self.runner_collaborators:
- body['runner_collaborators'] = [v.as_dict() for v in self.runner_collaborators]
  return body

  def as_shallow_dict(self) -> dict:
@@ -314,19 +301,12 @@ class CleanRoomAssetNotebook:
  body = {}
  if self.etag is not None: body['etag'] = self.etag
  if self.notebook_content is not None: body['notebook_content'] = self.notebook_content
- if self.review_state is not None: body['review_state'] = self.review_state
- if self.reviews: body['reviews'] = self.reviews
- if self.runner_collaborators: body['runner_collaborators'] = self.runner_collaborators
  return body

  @classmethod
  def from_dict(cls, d: Dict[str, any]) -> CleanRoomAssetNotebook:
  """Deserializes the CleanRoomAssetNotebook from a dictionary."""
- return cls(etag=d.get('etag', None),
- notebook_content=d.get('notebook_content', None),
- review_state=_enum(d, 'review_state', CleanRoomNotebookReviewNotebookReviewState),
- reviews=_repeated_dict(d, 'reviews', CleanRoomNotebookReview),
- runner_collaborators=_repeated_dict(d, 'runner_collaborators', CleanRoomCollaborator))
+ return cls(etag=d.get('etag', None), notebook_content=d.get('notebook_content', None))


  class CleanRoomAssetStatusEnum(Enum):
@@ -531,56 +511,6 @@ class CleanRoomCollaborator:
  organization_name=d.get('organization_name', None))


- @dataclass
- class CleanRoomNotebookReview:
- comment: Optional[str] = None
- """review comment"""
-
- created_at_millis: Optional[int] = None
- """timestamp of when the review was submitted"""
-
- review_state: Optional[CleanRoomNotebookReviewNotebookReviewState] = None
- """review outcome"""
-
- reviewer_collaborator_alias: Optional[str] = None
- """collaborator alias of the reviewer"""
-
- def as_dict(self) -> dict:
- """Serializes the CleanRoomNotebookReview into a dictionary suitable for use as a JSON request body."""
- body = {}
- if self.comment is not None: body['comment'] = self.comment
- if self.created_at_millis is not None: body['created_at_millis'] = self.created_at_millis
- if self.review_state is not None: body['review_state'] = self.review_state.value
- if self.reviewer_collaborator_alias is not None:
- body['reviewer_collaborator_alias'] = self.reviewer_collaborator_alias
- return body
-
- def as_shallow_dict(self) -> dict:
- """Serializes the CleanRoomNotebookReview into a shallow dictionary of its immediate attributes."""
- body = {}
- if self.comment is not None: body['comment'] = self.comment
- if self.created_at_millis is not None: body['created_at_millis'] = self.created_at_millis
- if self.review_state is not None: body['review_state'] = self.review_state
- if self.reviewer_collaborator_alias is not None:
- body['reviewer_collaborator_alias'] = self.reviewer_collaborator_alias
- return body
-
- @classmethod
- def from_dict(cls, d: Dict[str, any]) -> CleanRoomNotebookReview:
- """Deserializes the CleanRoomNotebookReview from a dictionary."""
- return cls(comment=d.get('comment', None),
- created_at_millis=d.get('created_at_millis', None),
- review_state=_enum(d, 'review_state', CleanRoomNotebookReviewNotebookReviewState),
- reviewer_collaborator_alias=d.get('reviewer_collaborator_alias', None))
-
-
- class CleanRoomNotebookReviewNotebookReviewState(Enum):
-
- APPROVED = 'APPROVED'
- PENDING = 'PENDING'
- REJECTED = 'REJECTED'
-
-
  @dataclass
  class CleanRoomNotebookTaskRun:
  """Stores information about a single task run."""
@@ -1228,8 +1158,9 @@ class CleanRoomsAPI:

  Create a new clean room with the specified collaborators. This method is asynchronous; the returned
  name field inside the clean_room field can be used to poll the clean room status, using the
- :method:cleanrooms/get method. When this method returns, the cluster will be in a PROVISIONING state.
- The cluster will be usable once it enters an ACTIVE state.
+ :method:cleanrooms/get method. When this method returns, the clean room will be in a PROVISIONING
+ state, with only name, owner, comment, created_at and status populated. The clean room will be usable
+ once it enters an ACTIVE state.

  The caller must be a metastore admin or have the **CREATE_CLEAN_ROOM** privilege on the metastore.
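The corrected docstring describes a create-then-poll flow. A hedged sketch, assuming the service is exposed as w.clean_rooms and that the returned CleanRoom carries name and status fields (the accessor and field names are not shown in this hunk; the payload is trimmed for brevity):

    import time

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.cleanrooms import CleanRoom

    w = WorkspaceClient()

    # Accessor and field names are assumptions; only the PROVISIONING -> ACTIVE
    # lifecycle comes from the docstring above.
    created = w.clean_rooms.create(clean_room=CleanRoom(name='demo-clean-room'))
    room = w.clean_rooms.get(created.name)
    while room.status is not None and room.status.value != 'ACTIVE':
        time.sleep(10)
        room = w.clean_rooms.get(created.name)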
@@ -637,11 +637,11 @@ class ClusterAttributes:
  a set of default values will be used."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_name: Optional[str] = None
  """Cluster name requested by the user. This doesn't have to be unique. If not specified at
@@ -947,11 +947,11 @@ class ClusterDetails:
  while each new cluster has a globally unique id."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_log_status: Optional[LogSyncStatus] = None
  """Cluster log delivery status."""
@@ -1428,11 +1428,16 @@ class ClusterLogConf:
  access s3, please make sure the cluster iam role in `instance_profile_arn` has permission to
  write data to the s3 destination."""

+ volumes: Optional[VolumesStorageInfo] = None
+ """destination needs to be provided. e.g. `{ "volumes" : { "destination" :
+ "/Volumes/catalog/schema/volume/cluster_log" } }`"""
+
  def as_dict(self) -> dict:
  """Serializes the ClusterLogConf into a dictionary suitable for use as a JSON request body."""
  body = {}
  if self.dbfs: body['dbfs'] = self.dbfs.as_dict()
  if self.s3: body['s3'] = self.s3.as_dict()
+ if self.volumes: body['volumes'] = self.volumes.as_dict()
  return body

  def as_shallow_dict(self) -> dict:
@@ -1440,12 +1445,15 @@ class ClusterLogConf:
  body = {}
  if self.dbfs: body['dbfs'] = self.dbfs
  if self.s3: body['s3'] = self.s3
+ if self.volumes: body['volumes'] = self.volumes
  return body

  @classmethod
  def from_dict(cls, d: Dict[str, any]) -> ClusterLogConf:
  """Deserializes the ClusterLogConf from a dictionary."""
- return cls(dbfs=_from_dict(d, 'dbfs', DbfsStorageInfo), s3=_from_dict(d, 's3', S3StorageInfo))
+ return cls(dbfs=_from_dict(d, 'dbfs', DbfsStorageInfo),
+ s3=_from_dict(d, 's3', S3StorageInfo),
+ volumes=_from_dict(d, 'volumes', VolumesStorageInfo))


  @dataclass
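Everything needed to build the new volumes-based log configuration is already in this file; the volume path below is illustrative:

    from databricks.sdk.service.compute import ClusterLogConf, VolumesStorageInfo

    # Deliver driver and executor logs to a Unity Catalog volume (path illustrative).
    log_conf = ClusterLogConf(
        volumes=VolumesStorageInfo(destination='/Volumes/main/ops/cluster_logs'))

    # -> {'volumes': {'destination': '/Volumes/main/ops/cluster_logs'}}
    print(log_conf.as_dict())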
@@ -1918,11 +1926,11 @@ class ClusterSpec:
  a set of default values will be used."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_name: Optional[str] = None
  """Cluster name requested by the user. This doesn't have to be unique. If not specified at
@@ -2334,11 +2342,11 @@ class CreateCluster:
  cluster."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_name: Optional[str] = None
  """Cluster name requested by the user. This doesn't have to be unique. If not specified at
@@ -3469,11 +3477,11 @@ class EditCluster:
  a set of default values will be used."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_name: Optional[str] = None
  """Cluster name requested by the user. This doesn't have to be unique. If not specified at
@@ -7773,11 +7781,11 @@ class UpdateClusterResource:
  a set of default values will be used."""

  cluster_log_conf: Optional[ClusterLogConf] = None
- """The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster.
- If the conf is given, the logs will be delivered to the destination every `5 mins`. The
- destination of driver logs is `$destination/$clusterId/driver`, while the destination of
- executor logs is `$destination/$clusterId/executor`."""
+ """The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination
+ every `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the
+ destination of executor logs is `$destination/$clusterId/executor`."""

  cluster_name: Optional[str] = None
  """Cluster name requested by the user. This doesn't have to be unique. If not specified at
@@ -8077,7 +8085,7 @@ class UpdateResponse:
  @dataclass
  class VolumesStorageInfo:
  destination: str
- """Unity Catalog Volumes file destination, e.g. `/Volumes/my-init.sh`"""
+ """Unity Catalog volumes file destination, e.g. `/Volumes/catalog/schema/volume/dir/file`"""

  def as_dict(self) -> dict:
  """Serializes the VolumesStorageInfo into a dictionary suitable for use as a JSON request body."""
@@ -8619,11 +8627,11 @@ class ClustersAPI:
  :param clone_from: :class:`CloneCluster` (optional)
  When specified, this clones libraries from a source cluster during the creation of a new cluster.
  :param cluster_log_conf: :class:`ClusterLogConf` (optional)
- The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster. If
- the conf is given, the logs will be delivered to the destination every `5 mins`. The destination of
- driver logs is `$destination/$clusterId/driver`, while the destination of executor logs is
- `$destination/$clusterId/executor`.
+ The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination every
+ `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the destination
+ of executor logs is `$destination/$clusterId/executor`.
  :param cluster_name: str (optional)
  Cluster name requested by the user. This doesn't have to be unique. If not specified at creation,
  the cluster name will be an empty string.
@@ -8952,11 +8960,11 @@ class ClustersAPI:
  Attributes related to clusters running on Microsoft Azure. If not specified at cluster creation, a
  set of default values will be used.
  :param cluster_log_conf: :class:`ClusterLogConf` (optional)
- The configuration for delivering spark logs to a long-term storage destination. Two kinds of
- destinations (dbfs and s3) are supported. Only one destination can be specified for one cluster. If
- the conf is given, the logs will be delivered to the destination every `5 mins`. The destination of
- driver logs is `$destination/$clusterId/driver`, while the destination of executor logs is
- `$destination/$clusterId/executor`.
+ The configuration for delivering spark logs to a long-term storage destination. Three kinds of
+ destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be
+ specified for one cluster. If the conf is given, the logs will be delivered to the destination every
+ `5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while the destination
+ of executor logs is `$destination/$clusterId/executor`.
  :param cluster_name: str (optional)
  Cluster name requested by the user. This doesn't have to be unique. If not specified at creation,
  the cluster name will be an empty string.