databricks-sdk 0.32.3__py3-none-any.whl → 0.33.0__py3-none-any.whl

This diff shows the changes between two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the package contents as they appear in their respective public registries.

This version of databricks-sdk might be problematic.

@@ -31,7 +31,11 @@ class CreateDashboardRequest:

  serialized_dashboard: Optional[str] = None
  """The contents of the dashboard in serialized string form. This field is excluded in List
- Dashboards responses."""
+ Dashboards responses. Use the [get dashboard API] to retrieve an example response, which
+ includes the `serialized_dashboard` field. This field provides the structure of the JSON string
+ that represents the dashboard's layout and components.
+
+ [get dashboard API]: https://docs.databricks.com/api/workspace/lakeview/get"""

  warehouse_id: Optional[str] = None
  """The warehouse ID used to run the dashboard."""
@@ -170,7 +174,11 @@ class Dashboard:

  serialized_dashboard: Optional[str] = None
  """The contents of the dashboard in serialized string form. This field is excluded in List
- Dashboards responses."""
+ Dashboards responses. Use the [get dashboard API] to retrieve an example response, which
+ includes the `serialized_dashboard` field. This field provides the structure of the JSON string
+ that represents the dashboard's layout and components.
+
+ [get dashboard API]: https://docs.databricks.com/api/workspace/lakeview/get"""

  update_time: Optional[str] = None
  """The timestamp of when the dashboard was last updated by the user. This field is excluded in List
@@ -382,8 +390,9 @@ class GenieMessage:

  status: Optional[MessageStatus] = None
  """MesssageStatus. The possible values are: * `FETCHING_METADATA`: Fetching metadata from the data
- sources. * `ASKING_AI`: Waiting for the LLM to respond to the users question. *
- `EXECUTING_QUERY`: Executing AI provided SQL query. Get the SQL query result by calling
+ sources. * `FILTERING_CONTEXT`: Running smart context step to determine relevant context. *
+ `ASKING_AI`: Waiting for the LLM to respond to the users question. * `EXECUTING_QUERY`:
+ Executing AI provided SQL query. Get the SQL query result by calling
  [getMessageQueryResult](:method:genie/getMessageQueryResult) API. **Important: The message
  status will stay in the `EXECUTING_QUERY` until a client calls
  [getMessageQueryResult](:method:genie/getMessageQueryResult)**. * `FAILED`: Generating a
@@ -615,8 +624,9 @@ class MessageErrorType(Enum):

  class MessageStatus(Enum):
  """MesssageStatus. The possible values are: * `FETCHING_METADATA`: Fetching metadata from the data
- sources. * `ASKING_AI`: Waiting for the LLM to respond to the users question. *
- `EXECUTING_QUERY`: Executing AI provided SQL query. Get the SQL query result by calling
+ sources. * `FILTERING_CONTEXT`: Running smart context step to determine relevant context. *
+ `ASKING_AI`: Waiting for the LLM to respond to the users question. * `EXECUTING_QUERY`:
+ Executing AI provided SQL query. Get the SQL query result by calling
  [getMessageQueryResult](:method:genie/getMessageQueryResult) API. **Important: The message
  status will stay in the `EXECUTING_QUERY` until a client calls
  [getMessageQueryResult](:method:genie/getMessageQueryResult)**. * `FAILED`: Generating a
@@ -632,6 +642,7 @@ class MessageStatus(Enum):
  EXECUTING_QUERY = 'EXECUTING_QUERY'
  FAILED = 'FAILED'
  FETCHING_METADATA = 'FETCHING_METADATA'
+ FILTERING_CONTEXT = 'FILTERING_CONTEXT'
  QUERY_RESULT_EXPIRED = 'QUERY_RESULT_EXPIRED'
  SUBMITTED = 'SUBMITTED'
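
With `FILTERING_CONTEXT` added, a client polling a Genie message has one more non-terminal state to wait through before `EXECUTING_QUERY`. A rough polling sketch, assuming the Genie API surface of this SDK; all IDs are placeholders:

    import time

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.dashboards import MessageStatus

    w = WorkspaceClient()
    ids = dict(space_id="<space-id>", conversation_id="<conversation-id>",
               message_id="<message-id>")  # placeholders

    # FILTERING_CONTEXT now precedes ASKING_AI; per the docstring, the status
    # stays EXECUTING_QUERY until the client calls getMessageQueryResult.
    message = w.genie.get_message(**ids)
    while message.status in (MessageStatus.SUBMITTED, MessageStatus.FETCHING_METADATA,
                             MessageStatus.FILTERING_CONTEXT, MessageStatus.ASKING_AI):
        time.sleep(5)
        message = w.genie.get_message(**ids)

    if message.status == MessageStatus.EXECUTING_QUERY:
        result = w.genie.get_message_query_result(**ids)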
@@ -1028,7 +1039,11 @@ class UpdateDashboardRequest:

  serialized_dashboard: Optional[str] = None
  """The contents of the dashboard in serialized string form. This field is excluded in List
- Dashboards responses."""
+ Dashboards responses. Use the [get dashboard API] to retrieve an example response, which
+ includes the `serialized_dashboard` field. This field provides the structure of the JSON string
+ that represents the dashboard's layout and components.
+
+ [get dashboard API]: https://docs.databricks.com/api/workspace/lakeview/get"""

  warehouse_id: Optional[str] = None
  """The warehouse ID used to run the dashboard."""
@@ -1308,7 +1323,11 @@ class LakeviewAPI:
  slash. This field is excluded in List Dashboards responses.
  :param serialized_dashboard: str (optional)
  The contents of the dashboard in serialized string form. This field is excluded in List Dashboards
- responses.
+ responses. Use the [get dashboard API] to retrieve an example response, which includes the
+ `serialized_dashboard` field. This field provides the structure of the JSON string that represents
+ the dashboard's layout and components.
+
+ [get dashboard API]: https://docs.databricks.com/api/workspace/lakeview/get
  :param warehouse_id: str (optional)
  The warehouse ID used to run the dashboard.
@@ -1723,7 +1742,11 @@ class LakeviewAPI:
  not been modified since the last read. This field is excluded in List Dashboards responses.
  :param serialized_dashboard: str (optional)
  The contents of the dashboard in serialized string form. This field is excluded in List Dashboards
- responses.
+ responses. Use the [get dashboard API] to retrieve an example response, which includes the
+ `serialized_dashboard` field. This field provides the structure of the JSON string that represents
+ the dashboard's layout and components.
+
+ [get dashboard API]: https://docs.databricks.com/api/workspace/lakeview/get
  :param warehouse_id: str (optional)
  The warehouse ID used to run the dashboard.
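
The same link is added on the create and update paths. A sketch of a get-modify-update round trip through `serialized_dashboard`, assuming the `lakeview.update` signature documented in this hunk; the dashboard ID is a placeholder:

    import json

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Fetch the serialized form, adjust the layout, and write it back.
    dashboard = w.lakeview.get(dashboard_id="<dashboard-id>")  # placeholder ID
    layout = json.loads(dashboard.serialized_dashboard)
    # ... modify `layout` (pages, widgets, datasets) here ...
    w.lakeview.update(dashboard_id="<dashboard-id>",
                      serialized_dashboard=json.dumps(layout))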
@@ -505,7 +505,11 @@ class CreateJob:
  well as when this job is deleted."""

  environments: Optional[List[JobEnvironment]] = None
- """A list of task execution environment specifications that can be referenced by tasks of this job."""
+ """A list of task execution environment specifications that can be referenced by serverless tasks
+ of this job. An environment is required to be present for serverless tasks. For serverless
+ notebook tasks, the environment is accessible in the notebook environment panel. For other
+ serverless tasks, the task environment is required to be specified using environment_key in the
+ task settings."""

  format: Optional[Format] = None
  """Used to tell what is the format of the job. This field is ignored in Create/Update/Reset calls.
@@ -553,12 +557,11 @@ class CreateJob:
  """The queue settings of the job."""

  run_as: Optional[JobRunAs] = None
- """Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user
- or service principal that the job runs as. If not specified, the job runs as the user who
- created the job.
+ """Write-only setting. Specifies the user, service principal or group that the job/pipeline runs
+ as. If not specified, the job/pipeline runs as the user who created the job/pipeline.

- Only `user_name` or `service_principal_name` can be specified. If both are specified, an error
- is thrown."""
+ Exactly one of `user_name`, `service_principal_name`, `group_name` should be specified. If not,
+ an error is thrown."""

  schedule: Optional[CronSchedule] = None
  """An optional periodic schedule for this job. The default behavior is that the job only runs when
@@ -1462,7 +1465,8 @@ class JobEditMode(Enum):
  @dataclass
  class JobEmailNotifications:
  no_alert_for_skipped_runs: Optional[bool] = None
- """If true, do not send email to recipients specified in `on_failure` if the run is skipped."""
+ """If true, do not send email to recipients specified in `on_failure` if the run is skipped. This
+ field is `deprecated`. Please use the `notification_settings.no_alert_for_skipped_runs` field."""

  on_duration_warning_threshold_exceeded: Optional[List[str]] = None
  """A list of email addresses to be notified when the duration of a run exceeds the threshold
@@ -1720,12 +1724,11 @@ class JobPermissionsRequest:

  @dataclass
  class JobRunAs:
- """Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user
- or service principal that the job runs as. If not specified, the job runs as the user who
- created the job.
+ """Write-only setting. Specifies the user, service principal or group that the job/pipeline runs
+ as. If not specified, the job/pipeline runs as the user who created the job/pipeline.

- Only `user_name` or `service_principal_name` can be specified. If both are specified, an error
- is thrown."""
+ Exactly one of `user_name`, `service_principal_name`, `group_name` should be specified. If not,
+ an error is thrown."""

  service_principal_name: Optional[str] = None
  """Application ID of an active service principal. Setting this field requires the
@@ -1773,7 +1776,11 @@ class JobSettings:
  well as when this job is deleted."""

  environments: Optional[List[JobEnvironment]] = None
- """A list of task execution environment specifications that can be referenced by tasks of this job."""
+ """A list of task execution environment specifications that can be referenced by serverless tasks
+ of this job. An environment is required to be present for serverless tasks. For serverless
+ notebook tasks, the environment is accessible in the notebook environment panel. For other
+ serverless tasks, the task environment is required to be specified using environment_key in the
+ task settings."""

  format: Optional[Format] = None
  """Used to tell what is the format of the job. This field is ignored in Create/Update/Reset calls.
@@ -1821,12 +1828,11 @@ class JobSettings:
  """The queue settings of the job."""

  run_as: Optional[JobRunAs] = None
- """Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user
- or service principal that the job runs as. If not specified, the job runs as the user who
- created the job.
+ """Write-only setting. Specifies the user, service principal or group that the job/pipeline runs
+ as. If not specified, the job/pipeline runs as the user who created the job/pipeline.

- Only `user_name` or `service_principal_name` can be specified. If both are specified, an error
- is thrown."""
+ Exactly one of `user_name`, `service_principal_name`, `group_name` should be specified. If not,
+ an error is thrown."""

  schedule: Optional[CronSchedule] = None
  """An optional periodic schedule for this job. The default behavior is that the job only runs when
@@ -3617,9 +3623,11 @@ class RunResultState(Enum):
  reached. * `EXCLUDED`: The run was skipped because the necessary conditions were not met. *
  `SUCCESS_WITH_FAILURES`: The job run completed successfully with some failures; leaf tasks were
  successful. * `UPSTREAM_FAILED`: The run was skipped because of an upstream failure. *
- `UPSTREAM_CANCELED`: The run was skipped because an upstream task was canceled."""
+ `UPSTREAM_CANCELED`: The run was skipped because an upstream task was canceled. * `DISABLED`:
+ The run was skipped because it was disabled explicitly by the user."""

  CANCELED = 'CANCELED'
+ DISABLED = 'DISABLED'
  EXCLUDED = 'EXCLUDED'
  FAILED = 'FAILED'
  MAXIMUM_CONCURRENT_RUNS_REACHED = 'MAXIMUM_CONCURRENT_RUNS_REACHED'
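
A sketch of checking for the new result state on a fetched run; the run ID is a placeholder:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.jobs import RunResultState

    w = WorkspaceClient()

    run = w.jobs.get_run(run_id=123)  # placeholder run ID
    if run.state and run.state.result_state == RunResultState.DISABLED:
        print("Run was skipped: explicitly disabled by the user")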
@@ -5034,7 +5042,8 @@ class TaskDependency:
  @dataclass
  class TaskEmailNotifications:
  no_alert_for_skipped_runs: Optional[bool] = None
- """If true, do not send email to recipients specified in `on_failure` if the run is skipped."""
+ """If true, do not send email to recipients specified in `on_failure` if the run is skipped. This
+ field is `deprecated`. Please use the `notification_settings.no_alert_for_skipped_runs` field."""

  on_duration_warning_threshold_exceeded: Optional[List[str]] = None
  """A list of email addresses to be notified when the duration of a run exceeds the threshold
@@ -5128,36 +5137,36 @@ class TaskNotificationSettings:

  class TerminationCodeCode(Enum):
  """The code indicates why the run was terminated. Additional codes might be introduced in future
- releases. * `SUCCESS`: The run was completed successfully. * `CANCELED`: The run was canceled
- during execution by the Databricks platform; for example, if the maximum run duration was
- exceeded. * `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the
- dependency type condition was not met, or there were no material tasks to execute. *
- `INTERNAL_ERROR`: The run encountered an unexpected error. Refer to the state message for
- further details. * `DRIVER_ERROR`: The run encountered an error while communicating with the
- Spark Driver. * `CLUSTER_ERROR`: The run failed due to a cluster error. Refer to the state
- message for further details. * `REPOSITORY_CHECKOUT_FAILED`: Failed to complete the checkout due
- to an error when communicating with the third party service. * `INVALID_CLUSTER_REQUEST`: The
- run failed because it issued an invalid request to start the cluster. *
- `WORKSPACE_RUN_LIMIT_EXCEEDED`: The workspace has reached the quota for the maximum number of
- concurrent active runs. Consider scheduling the runs over a larger time frame. *
- `FEATURE_DISABLED`: The run failed because it tried to access a feature unavailable for the
- workspace. * `CLUSTER_REQUEST_LIMIT_EXCEEDED`: The number of cluster creation, start, and upsize
- requests have exceeded the allotted rate limit. Consider spreading the run execution over a
- larger time frame. * `STORAGE_ACCESS_ERROR`: The run failed due to an error when accessing the
- customer blob storage. Refer to the state message for further details. * `RUN_EXECUTION_ERROR`:
- The run was completed with task failures. For more details, refer to the state message or run
- output. * `UNAUTHORIZED_ERROR`: The run failed due to a permission issue while accessing a
- resource. Refer to the state message for further details. * `LIBRARY_INSTALLATION_ERROR`: The
- run failed while installing the user-requested library. Refer to the state message for further
- details. The causes might include, but are not limited to: The provided library is invalid,
- there are insufficient permissions to install the library, and so forth. *
- `MAX_CONCURRENT_RUNS_EXCEEDED`: The scheduled run exceeds the limit of maximum concurrent runs
- set for the job. * `MAX_SPARK_CONTEXTS_EXCEEDED`: The run is scheduled on a cluster that has
- already reached the maximum number of contexts it is configured to create. See: [Link]. *
- `RESOURCE_NOT_FOUND`: A resource necessary for run execution does not exist. Refer to the state
- message for further details. * `INVALID_RUN_CONFIGURATION`: The run failed due to an invalid
- configuration. Refer to the state message for further details. * `CLOUD_FAILURE`: The run failed
- due to a cloud provider issue. Refer to the state message for further details. *
+ releases. * `SUCCESS`: The run was completed successfully. * `USER_CANCELED`: The run was
+ successfully canceled during execution by a user. * `CANCELED`: The run was canceled during
+ execution by the Databricks platform; for example, if the maximum run duration was exceeded. *
+ `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the dependency
+ type condition was not met, or there were no material tasks to execute. * `INTERNAL_ERROR`: The
+ run encountered an unexpected error. Refer to the state message for further details. *
+ `DRIVER_ERROR`: The run encountered an error while communicating with the Spark Driver. *
+ `CLUSTER_ERROR`: The run failed due to a cluster error. Refer to the state message for further
+ details. * `REPOSITORY_CHECKOUT_FAILED`: Failed to complete the checkout due to an error when
+ communicating with the third party service. * `INVALID_CLUSTER_REQUEST`: The run failed because
+ it issued an invalid request to start the cluster. * `WORKSPACE_RUN_LIMIT_EXCEEDED`: The
+ workspace has reached the quota for the maximum number of concurrent active runs. Consider
+ scheduling the runs over a larger time frame. * `FEATURE_DISABLED`: The run failed because it
+ tried to access a feature unavailable for the workspace. * `CLUSTER_REQUEST_LIMIT_EXCEEDED`: The
+ number of cluster creation, start, and upsize requests have exceeded the allotted rate limit.
+ Consider spreading the run execution over a larger time frame. * `STORAGE_ACCESS_ERROR`: The run
+ failed due to an error when accessing the customer blob storage. Refer to the state message for
+ further details. * `RUN_EXECUTION_ERROR`: The run was completed with task failures. For more
+ details, refer to the state message or run output. * `UNAUTHORIZED_ERROR`: The run failed due to
+ a permission issue while accessing a resource. Refer to the state message for further details. *
+ `LIBRARY_INSTALLATION_ERROR`: The run failed while installing the user-requested library. Refer
+ to the state message for further details. The causes might include, but are not limited to: The
+ provided library is invalid, there are insufficient permissions to install the library, and so
+ forth. * `MAX_CONCURRENT_RUNS_EXCEEDED`: The scheduled run exceeds the limit of maximum
+ concurrent runs set for the job. * `MAX_SPARK_CONTEXTS_EXCEEDED`: The run is scheduled on a
+ cluster that has already reached the maximum number of contexts it is configured to create. See:
+ [Link]. * `RESOURCE_NOT_FOUND`: A resource necessary for run execution does not exist. Refer to
+ the state message for further details. * `INVALID_RUN_CONFIGURATION`: The run failed due to an
+ invalid configuration. Refer to the state message for further details. * `CLOUD_FAILURE`: The
+ run failed due to a cloud provider issue. Refer to the state message for further details. *
  `MAX_JOB_QUEUE_SIZE_EXCEEDED`: The run was skipped due to reaching the job level queue size
  limit.
@@ -5183,6 +5192,7 @@ class TerminationCodeCode(Enum):
  STORAGE_ACCESS_ERROR = 'STORAGE_ACCESS_ERROR'
  SUCCESS = 'SUCCESS'
  UNAUTHORIZED_ERROR = 'UNAUTHORIZED_ERROR'
+ USER_CANCELED = 'USER_CANCELED'
  WORKSPACE_RUN_LIMIT_EXCEEDED = 'WORKSPACE_RUN_LIMIT_EXCEEDED'
@@ -5190,36 +5200,36 @@ class TerminationCodeCode(Enum):
  class TerminationDetails:
  code: Optional[TerminationCodeCode] = None
  """The code indicates why the run was terminated. Additional codes might be introduced in future
- releases. * `SUCCESS`: The run was completed successfully. * `CANCELED`: The run was canceled
- during execution by the Databricks platform; for example, if the maximum run duration was
- exceeded. * `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the
- dependency type condition was not met, or there were no material tasks to execute. *
- `INTERNAL_ERROR`: The run encountered an unexpected error. Refer to the state message for
- further details. * `DRIVER_ERROR`: The run encountered an error while communicating with the
- Spark Driver. * `CLUSTER_ERROR`: The run failed due to a cluster error. Refer to the state
- message for further details. * `REPOSITORY_CHECKOUT_FAILED`: Failed to complete the checkout due
- to an error when communicating with the third party service. * `INVALID_CLUSTER_REQUEST`: The
- run failed because it issued an invalid request to start the cluster. *
- `WORKSPACE_RUN_LIMIT_EXCEEDED`: The workspace has reached the quota for the maximum number of
- concurrent active runs. Consider scheduling the runs over a larger time frame. *
- `FEATURE_DISABLED`: The run failed because it tried to access a feature unavailable for the
- workspace. * `CLUSTER_REQUEST_LIMIT_EXCEEDED`: The number of cluster creation, start, and upsize
- requests have exceeded the allotted rate limit. Consider spreading the run execution over a
- larger time frame. * `STORAGE_ACCESS_ERROR`: The run failed due to an error when accessing the
- customer blob storage. Refer to the state message for further details. * `RUN_EXECUTION_ERROR`:
- The run was completed with task failures. For more details, refer to the state message or run
- output. * `UNAUTHORIZED_ERROR`: The run failed due to a permission issue while accessing a
- resource. Refer to the state message for further details. * `LIBRARY_INSTALLATION_ERROR`: The
- run failed while installing the user-requested library. Refer to the state message for further
- details. The causes might include, but are not limited to: The provided library is invalid,
- there are insufficient permissions to install the library, and so forth. *
- `MAX_CONCURRENT_RUNS_EXCEEDED`: The scheduled run exceeds the limit of maximum concurrent runs
- set for the job. * `MAX_SPARK_CONTEXTS_EXCEEDED`: The run is scheduled on a cluster that has
- already reached the maximum number of contexts it is configured to create. See: [Link]. *
- `RESOURCE_NOT_FOUND`: A resource necessary for run execution does not exist. Refer to the state
- message for further details. * `INVALID_RUN_CONFIGURATION`: The run failed due to an invalid
- configuration. Refer to the state message for further details. * `CLOUD_FAILURE`: The run failed
- due to a cloud provider issue. Refer to the state message for further details. *
+ releases. * `SUCCESS`: The run was completed successfully. * `USER_CANCELED`: The run was
+ successfully canceled during execution by a user. * `CANCELED`: The run was canceled during
+ execution by the Databricks platform; for example, if the maximum run duration was exceeded. *
+ `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the dependency
+ type condition was not met, or there were no material tasks to execute. * `INTERNAL_ERROR`: The
+ run encountered an unexpected error. Refer to the state message for further details. *
+ `DRIVER_ERROR`: The run encountered an error while communicating with the Spark Driver. *
+ `CLUSTER_ERROR`: The run failed due to a cluster error. Refer to the state message for further
+ details. * `REPOSITORY_CHECKOUT_FAILED`: Failed to complete the checkout due to an error when
+ communicating with the third party service. * `INVALID_CLUSTER_REQUEST`: The run failed because
+ it issued an invalid request to start the cluster. * `WORKSPACE_RUN_LIMIT_EXCEEDED`: The
+ workspace has reached the quota for the maximum number of concurrent active runs. Consider
+ scheduling the runs over a larger time frame. * `FEATURE_DISABLED`: The run failed because it
+ tried to access a feature unavailable for the workspace. * `CLUSTER_REQUEST_LIMIT_EXCEEDED`: The
+ number of cluster creation, start, and upsize requests have exceeded the allotted rate limit.
+ Consider spreading the run execution over a larger time frame. * `STORAGE_ACCESS_ERROR`: The run
+ failed due to an error when accessing the customer blob storage. Refer to the state message for
+ further details. * `RUN_EXECUTION_ERROR`: The run was completed with task failures. For more
+ details, refer to the state message or run output. * `UNAUTHORIZED_ERROR`: The run failed due to
+ a permission issue while accessing a resource. Refer to the state message for further details. *
+ `LIBRARY_INSTALLATION_ERROR`: The run failed while installing the user-requested library. Refer
+ to the state message for further details. The causes might include, but are not limited to: The
+ provided library is invalid, there are insufficient permissions to install the library, and so
+ forth. * `MAX_CONCURRENT_RUNS_EXCEEDED`: The scheduled run exceeds the limit of maximum
+ concurrent runs set for the job. * `MAX_SPARK_CONTEXTS_EXCEEDED`: The run is scheduled on a
+ cluster that has already reached the maximum number of contexts it is configured to create. See:
+ [Link]. * `RESOURCE_NOT_FOUND`: A resource necessary for run execution does not exist. Refer to
+ the state message for further details. * `INVALID_RUN_CONFIGURATION`: The run failed due to an
+ invalid configuration. Refer to the state message for further details. * `CLOUD_FAILURE`: The
+ run failed due to a cloud provider issue. Refer to the state message for further details. *
  `MAX_JOB_QUEUE_SIZE_EXCEEDED`: The run was skipped due to reaching the job level queue size
  limit.
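
A sketch of distinguishing the new user-initiated cancellation code when inspecting a run's termination details, assuming the run exposes them via `status.termination_details` as elsewhere in this module; the run ID is a placeholder:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.jobs import TerminationCodeCode

    w = WorkspaceClient()

    run = w.jobs.get_run(run_id=123)  # placeholder run ID
    details = run.status.termination_details if run.status else None
    if details and details.code == TerminationCodeCode.USER_CANCELED:
        print(f"Canceled by a user: {details.message}")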
@@ -5649,7 +5659,10 @@ class JobsAPI:
  An optional set of email addresses that is notified when runs of this job begin or complete as well
  as when this job is deleted.
  :param environments: List[:class:`JobEnvironment`] (optional)
- A list of task execution environment specifications that can be referenced by tasks of this job.
+ A list of task execution environment specifications that can be referenced by serverless tasks of
+ this job. An environment is required to be present for serverless tasks. For serverless notebook
+ tasks, the environment is accessible in the notebook environment panel. For other serverless tasks,
+ the task environment is required to be specified using environment_key in the task settings.
  :param format: :class:`Format` (optional)
  Used to tell what is the format of the job. This field is ignored in Create/Update/Reset calls. When
  using the Jobs API 2.1 this value is always set to `"MULTI_TASK"`.
@@ -5686,12 +5699,11 @@ class JobsAPI:
  :param queue: :class:`QueueSettings` (optional)
  The queue settings of the job.
  :param run_as: :class:`JobRunAs` (optional)
- Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user or
- service principal that the job runs as. If not specified, the job runs as the user who created the
- job.
+ Write-only setting. Specifies the user, service principal or group that the job/pipeline runs as. If
+ not specified, the job/pipeline runs as the user who created the job/pipeline.

- Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is
- thrown.
+ Exactly one of `user_name`, `service_principal_name`, `group_name` should be specified. If not, an
+ error is thrown.
  :param schedule: :class:`CronSchedule` (optional)
  An optional periodic schedule for this job. The default behavior is that the job only runs when
  triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.