pulumi_kubernetes-4.23.0a1746131759-py3-none-any.whl → pulumi_kubernetes-4.23.0a1746153578-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of pulumi-kubernetes has been flagged as potentially problematic.
- pulumi_kubernetes/__init__.py +36 -2
- pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfiguration.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfigurationList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfigurationPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicy.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBinding.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBindingList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfiguration.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfigurationList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfigurationPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicy.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBinding.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBindingList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBindingPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicy.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBinding.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBindingList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1alpha1/_inputs.py +30 -30
- pulumi_kubernetes/admissionregistration/v1alpha1/outputs.py +20 -20
- pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfiguration.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfigurationList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfigurationPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicy.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBinding.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBindingList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyPatch.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfiguration.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfigurationList.py +1 -3
- pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfigurationPatch.py +1 -3
- pulumi_kubernetes/apiextensions/v1/CustomResourceDefinition.py +1 -3
- pulumi_kubernetes/apiextensions/v1/CustomResourceDefinitionList.py +1 -3
- pulumi_kubernetes/apiextensions/v1/CustomResourceDefinitionPatch.py +1 -3
- pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinition.py +1 -3
- pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinitionList.py +1 -3
- pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinitionPatch.py +1 -3
- pulumi_kubernetes/apiregistration/v1/APIService.py +1 -3
- pulumi_kubernetes/apiregistration/v1/APIServiceList.py +1 -3
- pulumi_kubernetes/apiregistration/v1/APIServicePatch.py +1 -3
- pulumi_kubernetes/apiregistration/v1beta1/APIService.py +1 -3
- pulumi_kubernetes/apiregistration/v1beta1/APIServiceList.py +1 -3
- pulumi_kubernetes/apiregistration/v1beta1/APIServicePatch.py +1 -3
- pulumi_kubernetes/apps/v1/ControllerRevision.py +1 -3
- pulumi_kubernetes/apps/v1/ControllerRevisionList.py +1 -3
- pulumi_kubernetes/apps/v1/ControllerRevisionPatch.py +1 -3
- pulumi_kubernetes/apps/v1/DaemonSet.py +1 -3
- pulumi_kubernetes/apps/v1/DaemonSetList.py +1 -3
- pulumi_kubernetes/apps/v1/DaemonSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1/Deployment.py +1 -3
- pulumi_kubernetes/apps/v1/DeploymentList.py +1 -3
- pulumi_kubernetes/apps/v1/DeploymentPatch.py +1 -3
- pulumi_kubernetes/apps/v1/ReplicaSet.py +1 -3
- pulumi_kubernetes/apps/v1/ReplicaSetList.py +5 -7
- pulumi_kubernetes/apps/v1/ReplicaSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1/StatefulSet.py +1 -3
- pulumi_kubernetes/apps/v1/StatefulSetList.py +1 -3
- pulumi_kubernetes/apps/v1/StatefulSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1/_inputs.py +109 -56
- pulumi_kubernetes/apps/v1/outputs.py +129 -56
- pulumi_kubernetes/apps/v1beta1/ControllerRevision.py +1 -3
- pulumi_kubernetes/apps/v1beta1/ControllerRevisionList.py +1 -3
- pulumi_kubernetes/apps/v1beta1/ControllerRevisionPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta1/Deployment.py +1 -3
- pulumi_kubernetes/apps/v1beta1/DeploymentList.py +1 -3
- pulumi_kubernetes/apps/v1beta1/DeploymentPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta1/StatefulSet.py +1 -3
- pulumi_kubernetes/apps/v1beta1/StatefulSetList.py +1 -3
- pulumi_kubernetes/apps/v1beta1/StatefulSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ControllerRevision.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ControllerRevisionList.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ControllerRevisionPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta2/DaemonSet.py +1 -3
- pulumi_kubernetes/apps/v1beta2/DaemonSetList.py +1 -3
- pulumi_kubernetes/apps/v1beta2/DaemonSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta2/Deployment.py +1 -3
- pulumi_kubernetes/apps/v1beta2/DeploymentList.py +1 -3
- pulumi_kubernetes/apps/v1beta2/DeploymentPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ReplicaSet.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ReplicaSetList.py +1 -3
- pulumi_kubernetes/apps/v1beta2/ReplicaSetPatch.py +1 -3
- pulumi_kubernetes/apps/v1beta2/StatefulSet.py +1 -3
- pulumi_kubernetes/apps/v1beta2/StatefulSetList.py +1 -3
- pulumi_kubernetes/apps/v1beta2/StatefulSetPatch.py +1 -3
- pulumi_kubernetes/auditregistration/v1alpha1/AuditSink.py +1 -3
- pulumi_kubernetes/auditregistration/v1alpha1/AuditSinkList.py +1 -3
- pulumi_kubernetes/auditregistration/v1alpha1/AuditSinkPatch.py +1 -3
- pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscaler.py +1 -3
- pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscalerList.py +1 -3
- pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscalerPatch.py +1 -3
- pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscaler.py +1 -3
- pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscalerList.py +1 -3
- pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscalerPatch.py +1 -3
- pulumi_kubernetes/autoscaling/v2/_inputs.py +92 -12
- pulumi_kubernetes/autoscaling/v2/outputs.py +66 -10
- pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscaler.py +1 -3
- pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscalerList.py +1 -3
- pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscalerPatch.py +1 -3
- pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscaler.py +1 -3
- pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscalerList.py +1 -3
- pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscalerPatch.py +1 -3
- pulumi_kubernetes/batch/v1/CronJob.py +1 -3
- pulumi_kubernetes/batch/v1/CronJobList.py +1 -3
- pulumi_kubernetes/batch/v1/CronJobPatch.py +1 -3
- pulumi_kubernetes/batch/v1/Job.py +1 -3
- pulumi_kubernetes/batch/v1/JobList.py +1 -3
- pulumi_kubernetes/batch/v1/JobPatch.py +1 -3
- pulumi_kubernetes/batch/v1/_inputs.py +12 -42
- pulumi_kubernetes/batch/v1/outputs.py +8 -32
- pulumi_kubernetes/batch/v1beta1/CronJob.py +1 -3
- pulumi_kubernetes/batch/v1beta1/CronJobList.py +1 -3
- pulumi_kubernetes/batch/v1beta1/CronJobPatch.py +1 -3
- pulumi_kubernetes/batch/v2alpha1/CronJob.py +1 -3
- pulumi_kubernetes/batch/v2alpha1/CronJobList.py +1 -3
- pulumi_kubernetes/batch/v2alpha1/CronJobPatch.py +1 -3
- pulumi_kubernetes/certificates/v1/CertificateSigningRequest.py +1 -3
- pulumi_kubernetes/certificates/v1/CertificateSigningRequestList.py +1 -3
- pulumi_kubernetes/certificates/v1/CertificateSigningRequestPatch.py +1 -3
- pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundle.py +3 -3
- pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundleList.py +1 -3
- pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundlePatch.py +3 -3
- pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequest.py +1 -3
- pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequestList.py +1 -3
- pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequestPatch.py +1 -3
- pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundle.py +227 -0
- pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundleList.py +217 -0
- pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundlePatch.py +238 -0
- pulumi_kubernetes/certificates/v1beta1/__init__.py +3 -0
- pulumi_kubernetes/certificates/v1beta1/_inputs.py +292 -0
- pulumi_kubernetes/certificates/v1beta1/outputs.py +241 -0
- pulumi_kubernetes/coordination/v1/Lease.py +1 -3
- pulumi_kubernetes/coordination/v1/LeaseList.py +1 -3
- pulumi_kubernetes/coordination/v1/LeasePatch.py +1 -3
- pulumi_kubernetes/coordination/v1alpha1/LeaseCandidate.py +2 -4
- pulumi_kubernetes/coordination/v1alpha1/LeaseCandidateList.py +1 -3
- pulumi_kubernetes/coordination/v1alpha1/LeaseCandidatePatch.py +2 -4
- pulumi_kubernetes/coordination/v1alpha2/LeaseCandidate.py +2 -4
- pulumi_kubernetes/coordination/v1alpha2/LeaseCandidateList.py +1 -3
- pulumi_kubernetes/coordination/v1alpha2/LeaseCandidatePatch.py +2 -4
- pulumi_kubernetes/coordination/v1alpha2/_inputs.py +6 -6
- pulumi_kubernetes/coordination/v1alpha2/outputs.py +4 -4
- pulumi_kubernetes/coordination/v1beta1/Lease.py +1 -3
- pulumi_kubernetes/coordination/v1beta1/LeaseCandidate.py +218 -0
- pulumi_kubernetes/coordination/v1beta1/LeaseCandidateList.py +217 -0
- pulumi_kubernetes/coordination/v1beta1/LeaseCandidatePatch.py +230 -0
- pulumi_kubernetes/coordination/v1beta1/LeaseList.py +1 -3
- pulumi_kubernetes/coordination/v1beta1/LeasePatch.py +1 -3
- pulumi_kubernetes/coordination/v1beta1/__init__.py +3 -0
- pulumi_kubernetes/coordination/v1beta1/_inputs.py +371 -0
- pulumi_kubernetes/coordination/v1beta1/outputs.py +292 -0
- pulumi_kubernetes/core/v1/Binding.py +1 -3
- pulumi_kubernetes/core/v1/BindingPatch.py +1 -3
- pulumi_kubernetes/core/v1/ConfigMap.py +1 -3
- pulumi_kubernetes/core/v1/ConfigMapList.py +1 -3
- pulumi_kubernetes/core/v1/ConfigMapPatch.py +1 -3
- pulumi_kubernetes/core/v1/Endpoints.py +9 -3
- pulumi_kubernetes/core/v1/EndpointsList.py +3 -5
- pulumi_kubernetes/core/v1/EndpointsPatch.py +9 -3
- pulumi_kubernetes/core/v1/Event.py +1 -3
- pulumi_kubernetes/core/v1/EventList.py +1 -3
- pulumi_kubernetes/core/v1/EventPatch.py +1 -3
- pulumi_kubernetes/core/v1/LimitRange.py +1 -3
- pulumi_kubernetes/core/v1/LimitRangeList.py +1 -3
- pulumi_kubernetes/core/v1/LimitRangePatch.py +1 -3
- pulumi_kubernetes/core/v1/Namespace.py +1 -3
- pulumi_kubernetes/core/v1/NamespaceList.py +1 -3
- pulumi_kubernetes/core/v1/NamespacePatch.py +1 -3
- pulumi_kubernetes/core/v1/Node.py +1 -3
- pulumi_kubernetes/core/v1/NodeList.py +1 -3
- pulumi_kubernetes/core/v1/NodePatch.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolume.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolumeClaim.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolumeClaimList.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolumeClaimPatch.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolumeList.py +1 -3
- pulumi_kubernetes/core/v1/PersistentVolumePatch.py +1 -3
- pulumi_kubernetes/core/v1/Pod.py +1 -3
- pulumi_kubernetes/core/v1/PodList.py +1 -3
- pulumi_kubernetes/core/v1/PodPatch.py +1 -3
- pulumi_kubernetes/core/v1/PodTemplate.py +1 -3
- pulumi_kubernetes/core/v1/PodTemplateList.py +1 -3
- pulumi_kubernetes/core/v1/PodTemplatePatch.py +1 -3
- pulumi_kubernetes/core/v1/ReplicationController.py +1 -3
- pulumi_kubernetes/core/v1/ReplicationControllerList.py +1 -3
- pulumi_kubernetes/core/v1/ReplicationControllerPatch.py +1 -3
- pulumi_kubernetes/core/v1/ResourceQuota.py +1 -3
- pulumi_kubernetes/core/v1/ResourceQuotaList.py +1 -3
- pulumi_kubernetes/core/v1/ResourceQuotaPatch.py +1 -3
- pulumi_kubernetes/core/v1/Secret.py +1 -3
- pulumi_kubernetes/core/v1/SecretList.py +1 -3
- pulumi_kubernetes/core/v1/SecretPatch.py +1 -3
- pulumi_kubernetes/core/v1/Service.py +1 -3
- pulumi_kubernetes/core/v1/ServiceAccount.py +1 -3
- pulumi_kubernetes/core/v1/ServiceAccountList.py +1 -3
- pulumi_kubernetes/core/v1/ServiceAccountPatch.py +1 -3
- pulumi_kubernetes/core/v1/ServiceList.py +1 -3
- pulumi_kubernetes/core/v1/ServicePatch.py +1 -3
- pulumi_kubernetes/core/v1/_enums.py +2 -1
- pulumi_kubernetes/core/v1/_inputs.py +240 -66
- pulumi_kubernetes/core/v1/outputs.py +251 -51
- pulumi_kubernetes/discovery/v1/EndpointSlice.py +11 -13
- pulumi_kubernetes/discovery/v1/EndpointSliceList.py +1 -3
- pulumi_kubernetes/discovery/v1/EndpointSlicePatch.py +11 -13
- pulumi_kubernetes/discovery/v1/_inputs.py +159 -44
- pulumi_kubernetes/discovery/v1/outputs.py +107 -32
- pulumi_kubernetes/discovery/v1beta1/EndpointSlice.py +1 -3
- pulumi_kubernetes/discovery/v1beta1/EndpointSliceList.py +1 -3
- pulumi_kubernetes/discovery/v1beta1/EndpointSlicePatch.py +1 -3
- pulumi_kubernetes/events/v1/Event.py +1 -3
- pulumi_kubernetes/events/v1/EventList.py +1 -3
- pulumi_kubernetes/events/v1/EventPatch.py +1 -3
- pulumi_kubernetes/events/v1beta1/Event.py +1 -3
- pulumi_kubernetes/events/v1beta1/EventList.py +1 -3
- pulumi_kubernetes/events/v1beta1/EventPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/DaemonSet.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/DaemonSetList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/DaemonSetPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/Deployment.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/DeploymentList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/DeploymentPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/Ingress.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/IngressList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/IngressPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/NetworkPolicy.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/NetworkPolicyList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/NetworkPolicyPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicy.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicyList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicyPatch.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/ReplicaSet.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/ReplicaSetList.py +1 -3
- pulumi_kubernetes/extensions/v1beta1/ReplicaSetPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/FlowSchema.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/FlowSchemaList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/FlowSchemaPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfiguration.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfigurationList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfigurationPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchema.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchemaList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchemaPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfiguration.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfigurationList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfigurationPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/FlowSchema.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/FlowSchemaList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/FlowSchemaPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfiguration.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfigurationList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfigurationPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/FlowSchema.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/FlowSchemaList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/FlowSchemaPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfiguration.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfigurationList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfigurationPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/FlowSchema.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/FlowSchemaList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/FlowSchemaPatch.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfiguration.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfigurationList.py +1 -3
- pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfigurationPatch.py +1 -3
- pulumi_kubernetes/helm/v3/Release.py +1 -3
- pulumi_kubernetes/helm/v4/Chart.py +1 -3
- pulumi_kubernetes/kustomize/v2/Directory.py +1 -3
- pulumi_kubernetes/meta/v1/Status.py +1 -3
- pulumi_kubernetes/meta/v1/StatusPatch.py +1 -3
- pulumi_kubernetes/networking/v1/IPAddress.py +218 -0
- pulumi_kubernetes/networking/v1/IPAddressList.py +217 -0
- pulumi_kubernetes/networking/v1/IPAddressPatch.py +230 -0
- pulumi_kubernetes/networking/v1/Ingress.py +1 -3
- pulumi_kubernetes/networking/v1/IngressClass.py +1 -3
- pulumi_kubernetes/networking/v1/IngressClassList.py +1 -3
- pulumi_kubernetes/networking/v1/IngressClassPatch.py +1 -3
- pulumi_kubernetes/networking/v1/IngressList.py +1 -3
- pulumi_kubernetes/networking/v1/IngressPatch.py +1 -3
- pulumi_kubernetes/networking/v1/NetworkPolicy.py +1 -3
- pulumi_kubernetes/networking/v1/NetworkPolicyList.py +1 -3
- pulumi_kubernetes/networking/v1/NetworkPolicyPatch.py +1 -3
- pulumi_kubernetes/networking/v1/ServiceCIDR.py +228 -0
- pulumi_kubernetes/networking/v1/ServiceCIDRList.py +217 -0
- pulumi_kubernetes/networking/v1/ServiceCIDRPatch.py +240 -0
- pulumi_kubernetes/networking/v1/__init__.py +6 -0
- pulumi_kubernetes/networking/v1/_inputs.py +599 -0
- pulumi_kubernetes/networking/v1/outputs.py +461 -0
- pulumi_kubernetes/networking/v1alpha1/ClusterCIDR.py +1 -3
- pulumi_kubernetes/networking/v1alpha1/ClusterCIDRList.py +1 -3
- pulumi_kubernetes/networking/v1alpha1/ClusterCIDRPatch.py +1 -3
- pulumi_kubernetes/networking/v1alpha1/IPAddress.py +2 -4
- pulumi_kubernetes/networking/v1alpha1/IPAddressList.py +1 -3
- pulumi_kubernetes/networking/v1alpha1/IPAddressPatch.py +2 -4
- pulumi_kubernetes/networking/v1alpha1/ServiceCIDR.py +2 -4
- pulumi_kubernetes/networking/v1alpha1/ServiceCIDRList.py +1 -3
- pulumi_kubernetes/networking/v1alpha1/ServiceCIDRPatch.py +2 -4
- pulumi_kubernetes/networking/v1beta1/IPAddress.py +2 -4
- pulumi_kubernetes/networking/v1beta1/IPAddressList.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IPAddressPatch.py +2 -4
- pulumi_kubernetes/networking/v1beta1/Ingress.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IngressClass.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IngressClassList.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IngressClassPatch.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IngressList.py +1 -3
- pulumi_kubernetes/networking/v1beta1/IngressPatch.py +1 -3
- pulumi_kubernetes/networking/v1beta1/ServiceCIDR.py +2 -4
- pulumi_kubernetes/networking/v1beta1/ServiceCIDRList.py +1 -3
- pulumi_kubernetes/networking/v1beta1/ServiceCIDRPatch.py +2 -4
- pulumi_kubernetes/node/v1/RuntimeClass.py +1 -3
- pulumi_kubernetes/node/v1/RuntimeClassList.py +1 -3
- pulumi_kubernetes/node/v1/RuntimeClassPatch.py +1 -3
- pulumi_kubernetes/node/v1alpha1/RuntimeClass.py +1 -3
- pulumi_kubernetes/node/v1alpha1/RuntimeClassList.py +1 -3
- pulumi_kubernetes/node/v1alpha1/RuntimeClassPatch.py +1 -3
- pulumi_kubernetes/node/v1beta1/RuntimeClass.py +1 -3
- pulumi_kubernetes/node/v1beta1/RuntimeClassList.py +1 -3
- pulumi_kubernetes/node/v1beta1/RuntimeClassPatch.py +1 -3
- pulumi_kubernetes/policy/v1/PodDisruptionBudget.py +1 -3
- pulumi_kubernetes/policy/v1/PodDisruptionBudgetList.py +1 -3
- pulumi_kubernetes/policy/v1/PodDisruptionBudgetPatch.py +1 -3
- pulumi_kubernetes/policy/v1/_inputs.py +0 -12
- pulumi_kubernetes/policy/v1/outputs.py +0 -8
- pulumi_kubernetes/policy/v1beta1/PodDisruptionBudget.py +1 -3
- pulumi_kubernetes/policy/v1beta1/PodDisruptionBudgetList.py +1 -3
- pulumi_kubernetes/policy/v1beta1/PodDisruptionBudgetPatch.py +1 -3
- pulumi_kubernetes/policy/v1beta1/PodSecurityPolicy.py +1 -3
- pulumi_kubernetes/policy/v1beta1/PodSecurityPolicyList.py +1 -3
- pulumi_kubernetes/policy/v1beta1/PodSecurityPolicyPatch.py +1 -3
- pulumi_kubernetes/provider.py +1 -3
- pulumi_kubernetes/pulumi-plugin.json +1 -1
- pulumi_kubernetes/rbac/v1/ClusterRole.py +1 -3
- pulumi_kubernetes/rbac/v1/ClusterRoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1/ClusterRoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1/ClusterRoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1/ClusterRoleList.py +1 -3
- pulumi_kubernetes/rbac/v1/ClusterRolePatch.py +1 -3
- pulumi_kubernetes/rbac/v1/Role.py +1 -3
- pulumi_kubernetes/rbac/v1/RoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1/RoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1/RoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1/RoleList.py +1 -3
- pulumi_kubernetes/rbac/v1/RolePatch.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRole.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRoleList.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/ClusterRolePatch.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/Role.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/RoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/RoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/RoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/RoleList.py +1 -3
- pulumi_kubernetes/rbac/v1alpha1/RolePatch.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRole.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRoleList.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/ClusterRolePatch.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/Role.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/RoleBinding.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/RoleBindingList.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/RoleBindingPatch.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/RoleList.py +1 -3
- pulumi_kubernetes/rbac/v1beta1/RolePatch.py +1 -3
- pulumi_kubernetes/resource/__init__.py +3 -0
- pulumi_kubernetes/resource/v1alpha1/PodScheduling.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/PodSchedulingList.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/PodSchedulingPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/ResourceClaim.py +2 -4
- pulumi_kubernetes/resource/v1alpha1/ResourceClaimList.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/ResourceClaimPatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplate.py +2 -4
- pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplateList.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplatePatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha1/ResourceClass.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/ResourceClassList.py +1 -3
- pulumi_kubernetes/resource/v1alpha1/ResourceClassPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/PodSchedulingContext.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/PodSchedulingContextList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/PodSchedulingContextPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaim.py +2 -4
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimParameters.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimParametersList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimParametersPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimPatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplate.py +2 -4
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplateList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplatePatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha2/ResourceClass.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClassList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClassParameters.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClassParametersList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClassParametersPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceClassPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceSlice.py +2 -4
- pulumi_kubernetes/resource/v1alpha2/ResourceSliceList.py +1 -3
- pulumi_kubernetes/resource/v1alpha2/ResourceSlicePatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/DeviceClass.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/DeviceClassList.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/DeviceClassPatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/DeviceTaintRule.py +225 -0
- pulumi_kubernetes/resource/v1alpha3/DeviceTaintRuleList.py +217 -0
- pulumi_kubernetes/resource/v1alpha3/DeviceTaintRulePatch.py +236 -0
- pulumi_kubernetes/resource/v1alpha3/PodSchedulingContext.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/PodSchedulingContextList.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/PodSchedulingContextPatch.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/ResourceClaim.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/ResourceClaimList.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/ResourceClaimPatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplate.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplateList.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplatePatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/ResourceSlice.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/ResourceSliceList.py +1 -3
- pulumi_kubernetes/resource/v1alpha3/ResourceSlicePatch.py +2 -4
- pulumi_kubernetes/resource/v1alpha3/__init__.py +3 -0
- pulumi_kubernetes/resource/v1alpha3/_inputs.py +2559 -213
- pulumi_kubernetes/resource/v1alpha3/outputs.py +2037 -256
- pulumi_kubernetes/resource/v1beta1/DeviceClass.py +2 -4
- pulumi_kubernetes/resource/v1beta1/DeviceClassList.py +1 -3
- pulumi_kubernetes/resource/v1beta1/DeviceClassPatch.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceClaim.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceClaimList.py +1 -3
- pulumi_kubernetes/resource/v1beta1/ResourceClaimPatch.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplate.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplateList.py +1 -3
- pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplatePatch.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceSlice.py +2 -4
- pulumi_kubernetes/resource/v1beta1/ResourceSliceList.py +1 -3
- pulumi_kubernetes/resource/v1beta1/ResourceSlicePatch.py +2 -4
- pulumi_kubernetes/resource/v1beta1/_inputs.py +2044 -176
- pulumi_kubernetes/resource/v1beta1/outputs.py +1536 -134
- pulumi_kubernetes/resource/v1beta2/DeviceClass.py +239 -0
- pulumi_kubernetes/resource/v1beta2/DeviceClassList.py +217 -0
- pulumi_kubernetes/resource/v1beta2/DeviceClassPatch.py +250 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaim.py +234 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaimList.py +218 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaimPatch.py +245 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplate.py +231 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplateList.py +217 -0
- pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplatePatch.py +242 -0
- pulumi_kubernetes/resource/v1beta2/ResourceSlice.py +248 -0
- pulumi_kubernetes/resource/v1beta2/ResourceSliceList.py +218 -0
- pulumi_kubernetes/resource/v1beta2/ResourceSlicePatch.py +259 -0
- pulumi_kubernetes/resource/v1beta2/__init__.py +22 -0
- pulumi_kubernetes/resource/v1beta2/_inputs.py +5681 -0
- pulumi_kubernetes/resource/v1beta2/outputs.py +4726 -0
- pulumi_kubernetes/scheduling/v1/PriorityClass.py +1 -3
- pulumi_kubernetes/scheduling/v1/PriorityClassList.py +1 -3
- pulumi_kubernetes/scheduling/v1/PriorityClassPatch.py +1 -3
- pulumi_kubernetes/scheduling/v1alpha1/PriorityClass.py +1 -3
- pulumi_kubernetes/scheduling/v1alpha1/PriorityClassList.py +1 -3
- pulumi_kubernetes/scheduling/v1alpha1/PriorityClassPatch.py +1 -3
- pulumi_kubernetes/scheduling/v1beta1/PriorityClass.py +1 -3
- pulumi_kubernetes/scheduling/v1beta1/PriorityClassList.py +1 -3
- pulumi_kubernetes/scheduling/v1beta1/PriorityClassPatch.py +1 -3
- pulumi_kubernetes/settings/v1alpha1/PodPreset.py +1 -3
- pulumi_kubernetes/settings/v1alpha1/PodPresetList.py +1 -3
- pulumi_kubernetes/settings/v1alpha1/PodPresetPatch.py +1 -3
- pulumi_kubernetes/storage/v1/CSIDriver.py +1 -3
- pulumi_kubernetes/storage/v1/CSIDriverList.py +1 -3
- pulumi_kubernetes/storage/v1/CSIDriverPatch.py +1 -3
- pulumi_kubernetes/storage/v1/CSINode.py +1 -3
- pulumi_kubernetes/storage/v1/CSINodeList.py +1 -3
- pulumi_kubernetes/storage/v1/CSINodePatch.py +1 -3
- pulumi_kubernetes/storage/v1/CSIStorageCapacity.py +1 -3
- pulumi_kubernetes/storage/v1/CSIStorageCapacityList.py +1 -3
- pulumi_kubernetes/storage/v1/CSIStorageCapacityPatch.py +1 -3
- pulumi_kubernetes/storage/v1/StorageClass.py +1 -3
- pulumi_kubernetes/storage/v1/StorageClassList.py +1 -3
- pulumi_kubernetes/storage/v1/StorageClassPatch.py +1 -3
- pulumi_kubernetes/storage/v1/VolumeAttachment.py +1 -3
- pulumi_kubernetes/storage/v1/VolumeAttachmentList.py +1 -3
- pulumi_kubernetes/storage/v1/VolumeAttachmentPatch.py +1 -3
- pulumi_kubernetes/storage/v1/_inputs.py +90 -0
- pulumi_kubernetes/storage/v1/outputs.py +110 -0
- pulumi_kubernetes/storage/v1alpha1/VolumeAttachment.py +1 -3
- pulumi_kubernetes/storage/v1alpha1/VolumeAttachmentList.py +1 -3
- pulumi_kubernetes/storage/v1alpha1/VolumeAttachmentPatch.py +1 -3
- pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClass.py +1 -3
- pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClassList.py +1 -3
- pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClassPatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIDriver.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIDriverList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIDriverPatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSINode.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSINodeList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSINodePatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIStorageCapacity.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIStorageCapacityList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/CSIStorageCapacityPatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/StorageClass.py +1 -3
- pulumi_kubernetes/storage/v1beta1/StorageClassList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/StorageClassPatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttachment.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttachmentList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttachmentPatch.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttributesClass.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttributesClassList.py +1 -3
- pulumi_kubernetes/storage/v1beta1/VolumeAttributesClassPatch.py +1 -3
- pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigration.py +1 -3
- pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigrationList.py +1 -3
- pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigrationPatch.py +1 -3
- pulumi_kubernetes/yaml/v2/ConfigFile.py +1 -3
- pulumi_kubernetes/yaml/v2/ConfigGroup.py +1 -3
- pulumi_kubernetes/yaml/yaml.py +108 -0
- {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/METADATA +2 -2
- pulumi_kubernetes-4.23.0a1746153578.dist-info/RECORD +709 -0
- pulumi_kubernetes-4.23.0a1746131759.dist-info/RECORD +0 -679
- {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/WHEEL +0 -0
- {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/top_level.txt +0 -0
@@ -213,6 +213,8 @@ __all__ = [
     'NodeSpecPatch',
     'NodeStatus',
     'NodeStatusPatch',
+    'NodeSwapStatus',
+    'NodeSwapStatusPatch',
     'NodeSystemInfo',
     'NodeSystemInfoPatch',
     'ObjectFieldSelector',
@@ -5431,6 +5433,8 @@ class ContainerStatus(dict):
             suggest = "container_id"
         elif key == "lastState":
             suggest = "last_state"
+        elif key == "stopSignal":
+            suggest = "stop_signal"
         elif key == "volumeMounts":
             suggest = "volume_mounts"
 
@@ -5458,6 +5462,7 @@ class ContainerStatus(dict):
                  resources: Optional['outputs.ResourceRequirements'] = None,
                  started: Optional[builtins.bool] = None,
                  state: Optional['outputs.ContainerState'] = None,
+                 stop_signal: Optional[builtins.str] = None,
                  user: Optional['outputs.ContainerUser'] = None,
                  volume_mounts: Optional[Sequence['outputs.VolumeMountStatus']] = None):
         """
@@ -5476,6 +5481,7 @@ class ContainerStatus(dict):
         :param 'ResourceRequirementsArgs' resources: Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.
         :param builtins.bool started: Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.
         :param 'ContainerStateArgs' state: State holds details about the container's current condition.
+        :param builtins.str stop_signal: StopSignal reports the effective stop signal for this container
         :param 'ContainerUserArgs' user: User represents user identity information initially attached to the first process of the container
         :param Sequence['VolumeMountStatusArgs'] volume_mounts: Status of volume mounts.
         """
@@ -5498,6 +5504,8 @@ class ContainerStatus(dict):
             pulumi.set(__self__, "started", started)
         if state is not None:
             pulumi.set(__self__, "state", state)
+        if stop_signal is not None:
+            pulumi.set(__self__, "stop_signal", stop_signal)
         if user is not None:
             pulumi.set(__self__, "user", user)
         if volume_mounts is not None:
@@ -5601,6 +5609,14 @@ class ContainerStatus(dict):
         """
         return pulumi.get(self, "state")
 
+    @property
+    @pulumi.getter(name="stopSignal")
+    def stop_signal(self) -> Optional[builtins.str]:
+        """
+        StopSignal reports the effective stop signal for this container
+        """
+        return pulumi.get(self, "stop_signal")
+
     @property
     @pulumi.getter
     def user(self) -> Optional['outputs.ContainerUser']:
@@ -5638,6 +5654,8 @@ class ContainerStatusPatch(dict):
             suggest = "last_state"
         elif key == "restartCount":
             suggest = "restart_count"
+        elif key == "stopSignal":
+            suggest = "stop_signal"
         elif key == "volumeMounts":
             suggest = "volume_mounts"
 
@@ -5665,6 +5683,7 @@ class ContainerStatusPatch(dict):
                  restart_count: Optional[builtins.int] = None,
                  started: Optional[builtins.bool] = None,
                  state: Optional['outputs.ContainerStatePatch'] = None,
+                 stop_signal: Optional[builtins.str] = None,
                  user: Optional['outputs.ContainerUserPatch'] = None,
                  volume_mounts: Optional[Sequence['outputs.VolumeMountStatusPatch']] = None):
         """
@@ -5683,6 +5702,7 @@ class ContainerStatusPatch(dict):
         :param builtins.int restart_count: RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative.
         :param builtins.bool started: Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.
         :param 'ContainerStatePatchArgs' state: State holds details about the container's current condition.
+        :param builtins.str stop_signal: StopSignal reports the effective stop signal for this container
         :param 'ContainerUserPatchArgs' user: User represents user identity information initially attached to the first process of the container
         :param Sequence['VolumeMountStatusPatchArgs'] volume_mounts: Status of volume mounts.
         """
@@ -5710,6 +5730,8 @@ class ContainerStatusPatch(dict):
             pulumi.set(__self__, "started", started)
         if state is not None:
             pulumi.set(__self__, "state", state)
+        if stop_signal is not None:
+            pulumi.set(__self__, "stop_signal", stop_signal)
         if user is not None:
             pulumi.set(__self__, "user", user)
         if volume_mounts is not None:
@@ -5813,6 +5835,14 @@ class ContainerStatusPatch(dict):
         """
         return pulumi.get(self, "state")
 
+    @property
+    @pulumi.getter(name="stopSignal")
+    def stop_signal(self) -> Optional[builtins.str]:
+        """
+        StopSignal reports the effective stop signal for this container
+        """
+        return pulumi.get(self, "stop_signal")
+
     @property
     @pulumi.getter
     def user(self) -> Optional['outputs.ContainerUserPatch']:
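The hunks above (apparently from pulumi_kubernetes/core/v1/outputs.py) add a read-only stop_signal property to ContainerStatus and ContainerStatusPatch, tracking the stopSignal field that Kubernetes began reporting in container status in v1.33. A minimal sketch of reading it through this SDK, assuming a Pulumi program with pulumi and pulumi_kubernetes installed (the pod name and image are placeholders):

import pulumi
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod(
    "web",
    spec=k8s.core.v1.PodSpecArgs(
        containers=[k8s.core.v1.ContainerArgs(name="web", image="nginx:1.27")],
    ),
)

# stop_signal is Optional[str]; it stays None on clusters that do not report it.
stop_signals = pod.status.apply(
    lambda s: [c.stop_signal for c in (s.container_statuses or [])] if s else []
)
pulumi.export("stopSignals", stop_signals)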
@@ -6367,7 +6397,7 @@ class EmptyDirVolumeSourcePatch(dict):
 @pulumi.output_type
 class EndpointAddress(dict):
     """
-    EndpointAddress is a tuple that describes single IP address.
+    EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6394,7 +6424,7 @@ class EndpointAddress(dict):
                  node_name: Optional[builtins.str] = None,
                  target_ref: Optional['outputs.ObjectReference'] = None):
         """
-        EndpointAddress is a tuple that describes single IP address.
+        EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
         :param builtins.str ip: The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16).
         :param builtins.str hostname: The Hostname of this endpoint
         :param builtins.str node_name: Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node.
@@ -6444,7 +6474,7 @@ class EndpointAddress(dict):
 @pulumi.output_type
 class EndpointAddressPatch(dict):
     """
-    EndpointAddress is a tuple that describes single IP address.
+    EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6471,7 +6501,7 @@ class EndpointAddressPatch(dict):
                  node_name: Optional[builtins.str] = None,
                  target_ref: Optional['outputs.ObjectReferencePatch'] = None):
         """
-        EndpointAddress is a tuple that describes single IP address.
+        EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
         :param builtins.str hostname: The Hostname of this endpoint
         :param builtins.str ip: The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16).
         :param builtins.str node_name: Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node.
@@ -6522,7 +6552,7 @@ class EndpointAddressPatch(dict):
 @pulumi.output_type
 class EndpointPort(dict):
     """
-    EndpointPort is a tuple that describes a single port.
+    EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6547,7 +6577,7 @@ class EndpointPort(dict):
                  name: Optional[builtins.str] = None,
                  protocol: Optional[builtins.str] = None):
         """
-        EndpointPort is a tuple that describes a single port.
+        EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
         :param builtins.int port: The port number of the endpoint.
         :param builtins.str app_protocol: The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:
 
@@ -6615,7 +6645,7 @@ class EndpointPort(dict):
 @pulumi.output_type
 class EndpointPortPatch(dict):
     """
-    EndpointPort is a tuple that describes a single port.
+    EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6640,7 +6670,7 @@ class EndpointPortPatch(dict):
                  port: Optional[builtins.int] = None,
                  protocol: Optional[builtins.str] = None):
         """
-        EndpointPort is a tuple that describes a single port.
+        EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
         :param builtins.str app_protocol: The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:
 
         * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).
@@ -6720,6 +6750,8 @@ class EndpointSubset(dict):
 
       a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
       b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+    Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6754,6 +6786,8 @@ class EndpointSubset(dict):
 
           a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
           b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+        Deprecated: This API is deprecated in v1.33+.
         :param Sequence['EndpointAddressArgs'] addresses: IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.
         :param Sequence['EndpointAddressArgs'] not_ready_addresses: IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.
         :param Sequence['EndpointPortArgs'] ports: Port numbers available on the related IP addresses.
@@ -6804,6 +6838,8 @@ class EndpointSubsetPatch(dict):
 
       a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
       b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+    Deprecated: This API is deprecated in v1.33+.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6838,6 +6874,8 @@ class EndpointSubsetPatch(dict):
 
           a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
           b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+        Deprecated: This API is deprecated in v1.33+.
         :param Sequence['EndpointAddressPatchArgs'] addresses: IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.
         :param Sequence['EndpointAddressPatchArgs'] not_ready_addresses: IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.
         :param Sequence['EndpointPortPatchArgs'] ports: Port numbers available on the related IP addresses.
@@ -6890,6 +6928,10 @@ class Endpoints(dict):
         Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
       },
     ]
+
+    Endpoints is a legacy API and does not contain information about all Service features. Use discoveryv1.EndpointSlice for complete information about Service endpoints.
+
+    Deprecated: This API is deprecated in v1.33+. Use discoveryv1.EndpointSlice.
     """
     @staticmethod
     def __key_warning(key: str):
@@ -6927,6 +6969,10 @@ class Endpoints(dict):
             Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
           },
         ]
+
+        Endpoints is a legacy API and does not contain information about all Service features. Use discoveryv1.EndpointSlice for complete information about Service endpoints.
+
+        Deprecated: This API is deprecated in v1.33+. Use discoveryv1.EndpointSlice.
         :param builtins.str api_version: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
         :param builtins.str kind: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
         :param '_meta.v1.ObjectMetaArgs' metadata: Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
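The deprecation notices above steer users from core/v1 Endpoints to discovery.v1 EndpointSlice, which this package already exposes. A minimal sketch of the suggested replacement, assuming pulumi_kubernetes is installed (the name, label, addresses, and port are placeholders):

import pulumi_kubernetes as k8s

endpoint_slice = k8s.discovery.v1.EndpointSlice(
    "example-slice",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        # Ties the slice to its Service, as the endpoint-slice controller would.
        labels={"kubernetes.io/service-name": "example"},
    ),
    address_type="IPv4",
    endpoints=[k8s.discovery.v1.EndpointArgs(
        addresses=["10.10.1.1"],
        conditions=k8s.discovery.v1.EndpointConditionsArgs(ready=True),
    )],
    ports=[k8s.discovery.v1.EndpointPortArgs(name="http", port=8675, protocol="TCP")],
)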
@@ -6977,7 +7023,7 @@ class Endpoints(dict):
 @pulumi.output_type
 class EnvFromSource(dict):
     """
-    EnvFromSource represents the source of a set of ConfigMaps
+    EnvFromSource represents the source of a set of ConfigMaps or Secrets
     """
     @staticmethod
     def __key_warning(key: str):
@@ -7003,9 +7049,9 @@ class EnvFromSource(dict):
                  prefix: Optional[builtins.str] = None,
                  secret_ref: Optional['outputs.SecretEnvSource'] = None):
         """
-        EnvFromSource represents the source of a set of ConfigMaps
+        EnvFromSource represents the source of a set of ConfigMaps or Secrets
         :param 'ConfigMapEnvSourceArgs' config_map_ref: The ConfigMap to select from
-        :param builtins.str prefix:
+        :param builtins.str prefix: Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
         :param 'SecretEnvSourceArgs' secret_ref: The Secret to select from
         """
         if config_map_ref is not None:
@@ -7027,7 +7073,7 @@ class EnvFromSource(dict):
     @pulumi.getter
     def prefix(self) -> Optional[builtins.str]:
         """
-
+        Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
         """
         return pulumi.get(self, "prefix")
 
@@ -7043,7 +7089,7 @@ class EnvFromSource(dict):
 @pulumi.output_type
 class EnvFromSourcePatch(dict):
     """
-    EnvFromSource represents the source of a set of ConfigMaps
+    EnvFromSource represents the source of a set of ConfigMaps or Secrets
     """
     @staticmethod
     def __key_warning(key: str):
@@ -7069,9 +7115,9 @@ class EnvFromSourcePatch(dict):
                  prefix: Optional[builtins.str] = None,
                  secret_ref: Optional['outputs.SecretEnvSourcePatch'] = None):
         """
-        EnvFromSource represents the source of a set of ConfigMaps
+        EnvFromSource represents the source of a set of ConfigMaps or Secrets
         :param 'ConfigMapEnvSourcePatchArgs' config_map_ref: The ConfigMap to select from
-        :param builtins.str prefix:
+        :param builtins.str prefix: Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
         :param 'SecretEnvSourcePatchArgs' secret_ref: The Secret to select from
         """
         if config_map_ref is not None:
@@ -7093,7 +7139,7 @@ class EnvFromSourcePatch(dict):
     @pulumi.getter
     def prefix(self) -> Optional[builtins.str]:
         """
-
+        Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
         """
         return pulumi.get(self, "prefix")
 
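The newly documented prefix field on EnvFromSource prepends text to every variable imported from a ConfigMap or Secret. A minimal sketch of using it on a container, assuming pulumi_kubernetes is installed (the ConfigMap name, prefix, and image are placeholders):

import pulumi_kubernetes as k8s

container = k8s.core.v1.ContainerArgs(
    name="app",
    image="nginx:1.27",
    env_from=[k8s.core.v1.EnvFromSourceArgs(
        prefix="APP_",  # must be a C_IDENTIFIER; key FOO becomes APP_FOO
        config_map_ref=k8s.core.v1.ConfigMapEnvSourceArgs(name="app-config"),
    )],
)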
@@ -11349,6 +11395,8 @@ class Lifecycle(dict):
|
|
|
11349
11395
|
suggest = "post_start"
|
|
11350
11396
|
elif key == "preStop":
|
|
11351
11397
|
suggest = "pre_stop"
|
|
11398
|
+
elif key == "stopSignal":
|
|
11399
|
+
suggest = "stop_signal"
|
|
11352
11400
|
|
|
11353
11401
|
if suggest:
|
|
11354
11402
|
pulumi.log.warn(f"Key '{key}' not found in Lifecycle. Access the value via the '{suggest}' property getter instead.")
|
|
@@ -11363,16 +11411,20 @@ class Lifecycle(dict):
|
|
|
11363
11411
|
|
|
11364
11412
|
def __init__(__self__, *,
|
|
11365
11413
|
post_start: Optional['outputs.LifecycleHandler'] = None,
|
|
11366
|
-
pre_stop: Optional['outputs.LifecycleHandler'] = None
|
|
11414
|
+
pre_stop: Optional['outputs.LifecycleHandler'] = None,
|
|
11415
|
+
stop_signal: Optional[builtins.str] = None):
|
|
11367
11416
|
"""
|
|
11368
11417
|
Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.
|
|
11369
11418
|
:param 'LifecycleHandlerArgs' post_start: PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
|
|
11370
11419
|
:param 'LifecycleHandlerArgs' pre_stop: PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
|
|
11420
|
+
:param builtins.str stop_signal: StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
|
|
11371
11421
|
"""
|
|
11372
11422
|
if post_start is not None:
|
|
11373
11423
|
pulumi.set(__self__, "post_start", post_start)
|
|
11374
11424
|
if pre_stop is not None:
|
|
11375
11425
|
pulumi.set(__self__, "pre_stop", pre_stop)
|
|
11426
|
+
if stop_signal is not None:
|
|
11427
|
+
pulumi.set(__self__, "stop_signal", stop_signal)
|
|
11376
11428
|
|
|
11377
11429
|
@property
|
|
11378
11430
|
@pulumi.getter(name="postStart")
|
|
@@ -11390,6 +11442,14 @@ class Lifecycle(dict):
|
|
|
11390
11442
|
"""
|
|
11391
11443
|
return pulumi.get(self, "pre_stop")
|
|
11392
11444
|
|
|
11445
|
+
@property
|
|
11446
|
+
@pulumi.getter(name="stopSignal")
|
|
11447
|
+
def stop_signal(self) -> Optional[builtins.str]:
|
|
11448
|
+
"""
|
|
11449
|
+
StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
|
|
11450
|
+
"""
|
|
11451
|
+
return pulumi.get(self, "stop_signal")
|
|
11452
|
+
|
|
11393
11453
|
|
|
11394
11454
|
@pulumi.output_type
|
|
11395
11455
|
class LifecycleHandler(dict):
|
|
@@ -11563,6 +11623,8 @@ class LifecyclePatch(dict):
             suggest = "post_start"
         elif key == "preStop":
             suggest = "pre_stop"
+        elif key == "stopSignal":
+            suggest = "stop_signal"

         if suggest:
             pulumi.log.warn(f"Key '{key}' not found in LifecyclePatch. Access the value via the '{suggest}' property getter instead.")

@@ -11577,16 +11639,20 @@ class LifecyclePatch(dict):

     def __init__(__self__, *,
                  post_start: Optional['outputs.LifecycleHandlerPatch'] = None,
-                 pre_stop: Optional['outputs.LifecycleHandlerPatch'] = None):
+                 pre_stop: Optional['outputs.LifecycleHandlerPatch'] = None,
+                 stop_signal: Optional[builtins.str] = None):
         """
         Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.
         :param 'LifecycleHandlerPatchArgs' post_start: PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
         :param 'LifecycleHandlerPatchArgs' pre_stop: PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
+        :param builtins.str stop_signal: StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
         """
         if post_start is not None:
             pulumi.set(__self__, "post_start", post_start)
         if pre_stop is not None:
             pulumi.set(__self__, "pre_stop", pre_stop)
+        if stop_signal is not None:
+            pulumi.set(__self__, "stop_signal", stop_signal)

     @property
     @pulumi.getter(name="postStart")

@@ -11604,6 +11670,14 @@ class LifecyclePatch(dict):
         """
         return pulumi.get(self, "pre_stop")

+    @property
+    @pulumi.getter(name="stopSignal")
+    def stop_signal(self) -> Optional[builtins.str]:
+        """
+        StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
+        """
+        return pulumi.get(self, "stop_signal")
+

 @pulumi.output_type
 class LimitRange(dict):
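The `__key_warning` additions mirror how every generated output type maps camelCase Kubernetes keys to snake_case Python properties: dict-style access through an unrecognized camelCase key logs a suggestion rather than failing silently. A stripped-down illustration of the pattern (a hypothetical standalone class, not the generated one):

```python
import pulumi

class LifecycleLike(dict):
    """Sketch of the generated dict-backed output pattern."""

    @staticmethod
    def __key_warning(key: str):
        suggest = None
        if key == "stopSignal":
            suggest = "stop_signal"
        if suggest:
            pulumi.log.warn(f"Key '{key}' not found in LifecycleLike. Access the value via the '{suggest}' property getter instead.")

    def __getitem__(self, key: str):
        LifecycleLike.__key_warning(key)
        return super().__getitem__(key)

    def get(self, key: str, default=None):
        LifecycleLike.__key_warning(key)
        return super().get(key, default)
```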
@@ -14984,6 +15058,52 @@ class NodeStatusPatch(dict):
         return pulumi.get(self, "volumes_in_use")


+@pulumi.output_type
+class NodeSwapStatus(dict):
+    """
+    NodeSwapStatus represents swap memory information.
+    """
+    def __init__(__self__, *,
+                 capacity: Optional[builtins.int] = None):
+        """
+        NodeSwapStatus represents swap memory information.
+        :param builtins.int capacity: Total amount of swap memory in bytes.
+        """
+        if capacity is not None:
+            pulumi.set(__self__, "capacity", capacity)
+
+    @property
+    @pulumi.getter
+    def capacity(self) -> Optional[builtins.int]:
+        """
+        Total amount of swap memory in bytes.
+        """
+        return pulumi.get(self, "capacity")
+
+
+@pulumi.output_type
+class NodeSwapStatusPatch(dict):
+    """
+    NodeSwapStatus represents swap memory information.
+    """
+    def __init__(__self__, *,
+                 capacity: Optional[builtins.int] = None):
+        """
+        NodeSwapStatus represents swap memory information.
+        :param builtins.int capacity: Total amount of swap memory in bytes.
+        """
+        if capacity is not None:
+            pulumi.set(__self__, "capacity", capacity)
+
+    @property
+    @pulumi.getter
+    def capacity(self) -> Optional[builtins.int]:
+        """
+        Total amount of swap memory in bytes.
+        """
+        return pulumi.get(self, "capacity")
+
+
 @pulumi.output_type
 class NodeSystemInfo(dict):
     """
@@ -15032,7 +15152,8 @@ class NodeSystemInfo(dict):
                  machine_id: builtins.str,
                  operating_system: builtins.str,
                  os_image: builtins.str,
-                 system_uuid: builtins.str):
+                 system_uuid: builtins.str,
+                 swap: Optional['outputs.NodeSwapStatus'] = None):
         """
         NodeSystemInfo is a set of ids/uuids to uniquely identify the node.
         :param builtins.str architecture: The Architecture reported by the node

@@ -15045,6 +15166,7 @@ class NodeSystemInfo(dict):
         :param builtins.str operating_system: The Operating System reported by the node
         :param builtins.str os_image: OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).
         :param builtins.str system_uuid: SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid
+        :param 'NodeSwapStatusArgs' swap: Swap Info reported by the node.
         """
         pulumi.set(__self__, "architecture", architecture)
         pulumi.set(__self__, "boot_id", boot_id)

@@ -15056,6 +15178,8 @@ class NodeSystemInfo(dict):
         pulumi.set(__self__, "operating_system", operating_system)
         pulumi.set(__self__, "os_image", os_image)
         pulumi.set(__self__, "system_uuid", system_uuid)
+        if swap is not None:
+            pulumi.set(__self__, "swap", swap)

     @property
     @pulumi.getter

@@ -15137,6 +15261,14 @@ class NodeSystemInfo(dict):
         """
         return pulumi.get(self, "system_uuid")

+    @property
+    @pulumi.getter
+    def swap(self) -> Optional['outputs.NodeSwapStatus']:
+        """
+        Swap Info reported by the node.
+        """
+        return pulumi.get(self, "swap")
+

 @pulumi.output_type
 class NodeSystemInfoPatch(dict):

@@ -15186,6 +15318,7 @@ class NodeSystemInfoPatch(dict):
                  machine_id: Optional[builtins.str] = None,
                  operating_system: Optional[builtins.str] = None,
                  os_image: Optional[builtins.str] = None,
+                 swap: Optional['outputs.NodeSwapStatusPatch'] = None,
                  system_uuid: Optional[builtins.str] = None):
         """
         NodeSystemInfo is a set of ids/uuids to uniquely identify the node.

@@ -15198,6 +15331,7 @@ class NodeSystemInfoPatch(dict):
         :param builtins.str machine_id: MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html
         :param builtins.str operating_system: The Operating System reported by the node
         :param builtins.str os_image: OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).
+        :param 'NodeSwapStatusPatchArgs' swap: Swap Info reported by the node.
         :param builtins.str system_uuid: SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid
         """
         if architecture is not None:

@@ -15218,6 +15352,8 @@ class NodeSystemInfoPatch(dict):
             pulumi.set(__self__, "operating_system", operating_system)
         if os_image is not None:
             pulumi.set(__self__, "os_image", os_image)
+        if swap is not None:
+            pulumi.set(__self__, "swap", swap)
         if system_uuid is not None:
             pulumi.set(__self__, "system_uuid", system_uuid)

@@ -15293,6 +15429,14 @@ class NodeSystemInfoPatch(dict):
         """
         return pulumi.get(self, "os_image")

+    @property
+    @pulumi.getter
+    def swap(self) -> Optional['outputs.NodeSwapStatusPatch']:
+        """
+        Swap Info reported by the node.
+        """
+        return pulumi.get(self, "swap")
+
     @property
     @pulumi.getter(name="systemUUID")
     def system_uuid(self) -> Optional[builtins.str]:
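With `NodeSwapStatus` now exposed through `NodeSystemInfo.swap`, a program can surface a node's swap capacity from its reported status. A minimal sketch; the node name `my-node` is a placeholder, and the field is only populated when the kubelet actually reports swap:

```python
import pulumi
import pulumi_kubernetes as k8s

# Look up an existing node by name ("my-node" is illustrative).
node = k8s.core.v1.Node.get("observed-node", "my-node")

# node_info is the NodeSystemInfo output; guard for kubelets that
# report no swap information at all.
swap_capacity = node.status.apply(
    lambda status: status.node_info.swap.capacity
    if status and status.node_info and status.node_info.swap
    else None
)
pulumi.export("swapCapacityBytes", swap_capacity)
```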
@@ -18449,8 +18593,8 @@ class PodAffinityTerm(dict):
         Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
         :param builtins.str topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
         :param '_meta.v1.LabelSelectorArgs' label_selector: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
-        :param Sequence[builtins.str] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
-        :param Sequence[builtins.str] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+        :param Sequence[builtins.str] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        :param Sequence[builtins.str] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         :param '_meta.v1.LabelSelectorArgs' namespace_selector: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
         :param Sequence[builtins.str] namespaces: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
         """

@@ -18486,7 +18630,7 @@ class PodAffinityTerm(dict):
     @pulumi.getter(name="matchLabelKeys")
     def match_label_keys(self) -> Optional[Sequence[builtins.str]]:
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "match_label_keys")

@@ -18494,7 +18638,7 @@ class PodAffinityTerm(dict):
     @pulumi.getter(name="mismatchLabelKeys")
     def mismatch_label_keys(self) -> Optional[Sequence[builtins.str]]:
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "mismatch_label_keys")

@@ -18555,8 +18699,8 @@ class PodAffinityTermPatch(dict):
         """
         Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
         :param '_meta.v1.LabelSelectorPatchArgs' label_selector: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
-        :param Sequence[builtins.str] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
-        :param Sequence[builtins.str] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+        :param Sequence[builtins.str] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        :param Sequence[builtins.str] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         :param '_meta.v1.LabelSelectorPatchArgs' namespace_selector: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
         :param Sequence[builtins.str] namespaces: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
         :param builtins.str topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

@@ -18586,7 +18730,7 @@ class PodAffinityTermPatch(dict):
     @pulumi.getter(name="matchLabelKeys")
     def match_label_keys(self) -> Optional[Sequence[builtins.str]]:
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "match_label_keys")

@@ -18594,7 +18738,7 @@ class PodAffinityTermPatch(dict):
     @pulumi.getter(name="mismatchLabelKeys")
     def mismatch_label_keys(self) -> Optional[Sequence[builtins.str]]:
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "mismatch_label_keys")

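The matchLabelKeys/mismatchLabelKeys rewraps above do not change meaning, but the semantics are easy to miss: each listed key is read from the incoming pod's labels and merged into `labelSelector` as `key in (value)` (or `key notin (value)` for mismatchLabelKeys). A hedged sketch of an anti-affinity term that only counts pods from the same rollout, using the `pod-template-hash` label Deployments stamp on their pods:

```python
import pulumi_kubernetes as k8s

# Anti-affinity term: spread "app=web" replicas across hostnames, but
# merge the incoming pod's pod-template-hash into the selector so old
# and new ReplicaSets don't repel each other during a rolling update.
term = k8s.core.v1.PodAffinityTermArgs(
    topology_key="kubernetes.io/hostname",
    label_selector=k8s.meta.v1.LabelSelectorArgs(
        match_labels={"app": "web"},
    ),
    match_label_keys=["pod-template-hash"],
)
```

The same term would typically be placed under a pod anti-affinity's requiredDuringSchedulingIgnoredDuringExecution list.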
@@ -18743,6 +18887,8 @@ class PodCondition(dict):
             suggest = "last_probe_time"
         elif key == "lastTransitionTime":
             suggest = "last_transition_time"
+        elif key == "observedGeneration":
+            suggest = "observed_generation"

         if suggest:
             pulumi.log.warn(f"Key '{key}' not found in PodCondition. Access the value via the '{suggest}' property getter instead.")

@@ -18761,6 +18907,7 @@ class PodCondition(dict):
                  last_probe_time: Optional[builtins.str] = None,
                  last_transition_time: Optional[builtins.str] = None,
                  message: Optional[builtins.str] = None,
+                 observed_generation: Optional[builtins.int] = None,
                  reason: Optional[builtins.str] = None):
         """
         PodCondition contains details for the current condition of this pod.

@@ -18769,6 +18916,7 @@ class PodCondition(dict):
         :param builtins.str last_probe_time: Last time we probed the condition.
         :param builtins.str last_transition_time: Last time the condition transitioned from one status to another.
         :param builtins.str message: Human-readable message indicating details about last transition.
+        :param builtins.int observed_generation: If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
         :param builtins.str reason: Unique, one-word, CamelCase reason for the condition's last transition.
         """
         pulumi.set(__self__, "status", status)

@@ -18779,6 +18927,8 @@ class PodCondition(dict):
             pulumi.set(__self__, "last_transition_time", last_transition_time)
         if message is not None:
             pulumi.set(__self__, "message", message)
+        if observed_generation is not None:
+            pulumi.set(__self__, "observed_generation", observed_generation)
         if reason is not None:
             pulumi.set(__self__, "reason", reason)

@@ -18822,6 +18972,14 @@ class PodCondition(dict):
         """
         return pulumi.get(self, "message")

+    @property
+    @pulumi.getter(name="observedGeneration")
+    def observed_generation(self) -> Optional[builtins.int]:
+        """
+        If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
+        return pulumi.get(self, "observed_generation")
+
     @property
     @pulumi.getter
     def reason(self) -> Optional[builtins.str]:

@@ -18843,6 +19001,8 @@ class PodConditionPatch(dict):
             suggest = "last_probe_time"
         elif key == "lastTransitionTime":
             suggest = "last_transition_time"
+        elif key == "observedGeneration":
+            suggest = "observed_generation"

         if suggest:
             pulumi.log.warn(f"Key '{key}' not found in PodConditionPatch. Access the value via the '{suggest}' property getter instead.")

@@ -18859,6 +19019,7 @@ class PodConditionPatch(dict):
                  last_probe_time: Optional[builtins.str] = None,
                  last_transition_time: Optional[builtins.str] = None,
                  message: Optional[builtins.str] = None,
+                 observed_generation: Optional[builtins.int] = None,
                  reason: Optional[builtins.str] = None,
                  status: Optional[builtins.str] = None,
                  type: Optional[builtins.str] = None):

@@ -18867,6 +19028,7 @@ class PodConditionPatch(dict):
         :param builtins.str last_probe_time: Last time we probed the condition.
         :param builtins.str last_transition_time: Last time the condition transitioned from one status to another.
         :param builtins.str message: Human-readable message indicating details about last transition.
+        :param builtins.int observed_generation: If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
         :param builtins.str reason: Unique, one-word, CamelCase reason for the condition's last transition.
         :param builtins.str status: Status is the status of the condition. Can be True, False, Unknown. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
         :param builtins.str type: Type is the type of the condition. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions

@@ -18877,6 +19039,8 @@ class PodConditionPatch(dict):
             pulumi.set(__self__, "last_transition_time", last_transition_time)
         if message is not None:
             pulumi.set(__self__, "message", message)
+        if observed_generation is not None:
+            pulumi.set(__self__, "observed_generation", observed_generation)
         if reason is not None:
             pulumi.set(__self__, "reason", reason)
         if status is not None:

@@ -18908,6 +19072,14 @@ class PodConditionPatch(dict):
         """
         return pulumi.get(self, "message")

+    @property
+    @pulumi.getter(name="observedGeneration")
+    def observed_generation(self) -> Optional[builtins.int]:
+        """
+        If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
+        return pulumi.get(self, "observed_generation")
+
     @property
     @pulumi.getter
     def reason(self) -> Optional[builtins.str]:
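`observedGeneration` on `PodCondition` (and, further below, on `PodStatus`) is alpha and gated behind the PodObservedGenerationTracking feature gate; when populated it lets a consumer verify that a reported status was computed against the pod spec generation it is currently looking at. A sketch of that staleness check, assuming the gate is enabled server-side (`default/my-pod` is a placeholder):

```python
import pulumi
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod.get("watched-pod", "default/my-pod")

# True only when the reported status was computed for the current spec
# generation; with the gate disabled, observed_generation stays None.
status_is_current = pulumi.Output.all(pod.metadata, pod.status).apply(
    lambda args: args[1].observed_generation == args[0].generation
    if args[1] and args[1].observed_generation is not None
    else False
)
pulumi.export("statusIsCurrent", status_is_current)
```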
@@ -20215,7 +20387,7 @@ class PodSpec(dict):
         :param builtins.bool host_users: Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
         :param builtins.str hostname: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.
         :param Sequence['LocalObjectReferenceArgs'] image_pull_secrets: ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
-        :param Sequence['ContainerArgs'] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of
+        :param Sequence['ContainerArgs'] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
         :param builtins.str node_name: NodeName indicates in which node this pod is scheduled. If empty, this pod is a candidate for scheduling by the scheduler defined in schedulerName. Once this field is set, the kubelet for this node becomes responsible for the lifecycle of this pod. This field should not be used to express a desire for the pod to be scheduled on a specific node. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename
         :param Mapping[str, builtins.str] node_selector: NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
         :param 'PodOSArgs' os: Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.

@@ -20459,7 +20631,7 @@ class PodSpec(dict):
     @pulumi.getter(name="initContainers")
     def init_containers(self) -> Optional[Sequence['outputs.Container']]:
         """
-        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of
+        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
         """
         return pulumi.get(self, "init_containers")

@@ -20808,7 +20980,7 @@ class PodSpecPatch(dict):
         :param builtins.bool host_users: Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
         :param builtins.str hostname: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.
         :param Sequence['LocalObjectReferencePatchArgs'] image_pull_secrets: ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
-        :param Sequence['ContainerPatchArgs'] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of
+        :param Sequence['ContainerPatchArgs'] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
         :param builtins.str node_name: NodeName indicates in which node this pod is scheduled. If empty, this pod is a candidate for scheduling by the scheduler defined in schedulerName. Once this field is set, the kubelet for this node becomes responsible for the lifecycle of this pod. This field should not be used to express a desire for the pod to be scheduled on a specific node. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename
         :param Mapping[str, builtins.str] node_selector: NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
         :param 'PodOSPatchArgs' os: Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.

@@ -21053,7 +21225,7 @@ class PodSpecPatch(dict):
     @pulumi.getter(name="initContainers")
     def init_containers(self) -> Optional[Sequence['outputs.ContainerPatch']]:
         """
-        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of
+        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
         """
         return pulumi.get(self, "init_containers")

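These hunks complete the `init_containers` docstring that the previous build truncated mid-sentence at "using the max of": the effective request used for scheduling is the max of the largest init-container request and the sum of the app-container requests, per resource. A worked sketch of that arithmetic (plain illustration, not SDK code; the numbers are made up):

```python
# Effective pod CPU request, per the restored docstring:
# max(highest init-container request, sum of app-container requests).
init_cpu_requests = [500, 200]      # millicores for each init container
app_cpu_requests = [100, 150, 100]  # millicores for each app container

effective_cpu = max(max(init_cpu_requests), sum(app_cpu_requests))
print(effective_cpu)  # 500: the busiest init container dominates here
```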
@@ -21284,6 +21456,8 @@ class PodStatus(dict):
             suggest = "init_container_statuses"
         elif key == "nominatedNodeName":
             suggest = "nominated_node_name"
+        elif key == "observedGeneration":
+            suggest = "observed_generation"
         elif key == "podIP":
             suggest = "pod_ip"
         elif key == "podIPs":

@@ -21315,6 +21489,7 @@ class PodStatus(dict):
                  init_container_statuses: Optional[Sequence['outputs.ContainerStatus']] = None,
                  message: Optional[builtins.str] = None,
                  nominated_node_name: Optional[builtins.str] = None,
+                 observed_generation: Optional[builtins.int] = None,
                  phase: Optional[builtins.str] = None,
                  pod_ip: Optional[builtins.str] = None,
                  pod_ips: Optional[Sequence['outputs.PodIP']] = None,

@@ -21333,6 +21508,7 @@ class PodStatus(dict):
         :param Sequence['ContainerStatusArgs'] init_container_statuses: Statuses of init containers in this pod. The most recent successful non-restartable init container will have ready = true, the most recently started container will have startTime set. Each init container in the pod should have at most one status in this list, and all statuses should be for containers in the pod. However this is not enforced. If a status for a non-existent container is present in the list, or the list has duplicate names, the behavior of various Kubernetes components is not defined and those statuses might be ignored. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-and-container-status
         :param builtins.str message: A human readable message indicating details about why the pod is in this condition.
         :param builtins.str nominated_node_name: nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.
+        :param builtins.int observed_generation: If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
         :param builtins.str phase: The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:

         Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.

@@ -21342,7 +21518,7 @@ class PodStatus(dict):
         :param Sequence['PodIPArgs'] pod_ips: podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.
         :param builtins.str qos_class: The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes
         :param builtins.str reason: A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted'
-        :param builtins.str resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+        :param builtins.str resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
         :param Sequence['PodResourceClaimStatusArgs'] resource_claim_statuses: Status of resource claims.
         :param builtins.str start_time: RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod.
         """

@@ -21362,6 +21538,8 @@ class PodStatus(dict):
             pulumi.set(__self__, "message", message)
         if nominated_node_name is not None:
             pulumi.set(__self__, "nominated_node_name", nominated_node_name)
+        if observed_generation is not None:
+            pulumi.set(__self__, "observed_generation", observed_generation)
         if phase is not None:
             pulumi.set(__self__, "phase", phase)
         if pod_ip is not None:

@@ -21443,6 +21621,14 @@ class PodStatus(dict):
         """
         return pulumi.get(self, "nominated_node_name")

+    @property
+    @pulumi.getter(name="observedGeneration")
+    def observed_generation(self) -> Optional[builtins.int]:
+        """
+        If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
+        return pulumi.get(self, "observed_generation")
+
     @property
     @pulumi.getter
     def phase(self) -> Optional[builtins.str]:

@@ -21491,7 +21677,7 @@ class PodStatus(dict):
     @pulumi.getter
     def resize(self) -> Optional[builtins.str]:
         """
-        Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+        Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
         """
         return pulumi.get(self, "resize")

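`status.resize` is now documented as deprecated in favor of the PodResizePending and PodResizeInProgress pod conditions. A sketch of reading the replacement signal from the conditions list, using the condition type names from the new docstring (`default/my-pod` is a placeholder):

```python
import pulumi
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod.get("resized-pod", "default/my-pod")

# The deprecated `resize` string is superseded by two condition types;
# their presence and status convey the same information.
resize_state = pod.status.apply(
    lambda s: {c.type: c.status for c in (s.conditions or [])
               if c.type in ("PodResizePending", "PodResizeInProgress")}
    if s else {}
)
pulumi.export("resizeConditions", resize_state)
```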
@@ -21532,6 +21718,8 @@ class PodStatusPatch(dict):
|
|
|
21532
21718
|
suggest = "init_container_statuses"
|
|
21533
21719
|
elif key == "nominatedNodeName":
|
|
21534
21720
|
suggest = "nominated_node_name"
|
|
21721
|
+
elif key == "observedGeneration":
|
|
21722
|
+
suggest = "observed_generation"
|
|
21535
21723
|
elif key == "podIP":
|
|
21536
21724
|
suggest = "pod_ip"
|
|
21537
21725
|
elif key == "podIPs":
|
|
@@ -21563,6 +21751,7 @@ class PodStatusPatch(dict):
|
|
|
21563
21751
|
init_container_statuses: Optional[Sequence['outputs.ContainerStatusPatch']] = None,
|
|
21564
21752
|
message: Optional[builtins.str] = None,
|
|
21565
21753
|
nominated_node_name: Optional[builtins.str] = None,
|
|
21754
|
+
observed_generation: Optional[builtins.int] = None,
|
|
21566
21755
|
phase: Optional[builtins.str] = None,
|
|
21567
21756
|
pod_ip: Optional[builtins.str] = None,
|
|
21568
21757
|
pod_ips: Optional[Sequence['outputs.PodIPPatch']] = None,
|
|
@@ -21581,6 +21770,7 @@ class PodStatusPatch(dict):
|
|
|
21581
21770
|
:param Sequence['ContainerStatusPatchArgs'] init_container_statuses: Statuses of init containers in this pod. The most recent successful non-restartable init container will have ready = true, the most recently started container will have startTime set. Each init container in the pod should have at most one status in this list, and all statuses should be for containers in the pod. However this is not enforced. If a status for a non-existent container is present in the list, or the list has duplicate names, the behavior of various Kubernetes components is not defined and those statuses might be ignored. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-and-container-status
|
|
21582
21771
|
:param builtins.str message: A human readable message indicating details about why the pod is in this condition.
|
|
21583
21772
|
 :param builtins.str nominated_node_name: nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.
+:param builtins.int observed_generation: If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
 :param builtins.str phase: The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:

 Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.

@@ -21590,7 +21780,7 @@ class PodStatusPatch(dict):
 :param Sequence['PodIPPatchArgs'] pod_ips: podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.
 :param builtins.str qos_class: The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes
 :param builtins.str reason: A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted'
-:param builtins.str resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+:param builtins.str resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
 :param Sequence['PodResourceClaimStatusPatchArgs'] resource_claim_statuses: Status of resource claims.
 :param builtins.str start_time: RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod.
         """

@@ -21610,6 +21800,8 @@ class PodStatusPatch(dict):
             pulumi.set(__self__, "message", message)
         if nominated_node_name is not None:
             pulumi.set(__self__, "nominated_node_name", nominated_node_name)
+        if observed_generation is not None:
+            pulumi.set(__self__, "observed_generation", observed_generation)
         if phase is not None:
             pulumi.set(__self__, "phase", phase)
         if pod_ip is not None:

@@ -21691,6 +21883,14 @@ class PodStatusPatch(dict):
         """
         return pulumi.get(self, "nominated_node_name")

+    @property
+    @pulumi.getter(name="observedGeneration")
+    def observed_generation(self) -> Optional[builtins.int]:
+        """
+        If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
+        return pulumi.get(self, "observed_generation")
+
     @property
     @pulumi.getter
     def phase(self) -> Optional[builtins.str]:

@@ -21739,7 +21939,7 @@ class PodStatusPatch(dict):
     @pulumi.getter
     def resize(self) -> Optional[builtins.str]:
         """
-        Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+        Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
         """
         return pulumi.get(self, "resize")
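For orientation, the new field above surfaces in user programs through the pod's status output. A minimal sketch of reading it, assuming the matching property also lands on the non-patch PodStatus output type (as the parallel hunks in this file suggest); the `nginx` image and export name are placeholders:

```python
import pulumi
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod(
    "example",
    spec=k8s.core.v1.PodSpecArgs(
        containers=[k8s.core.v1.ContainerArgs(name="app", image="nginx")],
    ),
)

# Alpha field: stays None unless the cluster enables the
# PodObservedGenerationTracking gate, so guard the lookup.
pulumi.export(
    "observedGeneration",
    pod.status.apply(lambda s: s.observed_generation if s else None),
)
```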
@@ -27572,7 +27772,7 @@ class ServiceSpec(dict):
 :param builtins.str session_affinity: Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
 :param 'SessionAffinityConfigArgs' session_affinity_config: sessionAffinityConfig contains the configurations of session affinity.
 :param Sequence[builtins.str] topology_keys: topologyKeys is a preference-order list of topology keys which implementations of services should use to preferentially sort endpoints when accessing this Service, it can not be used at the same time as externalTrafficPolicy=Local. Topology keys must be valid label keys and at most 16 keys may be specified. Endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no backends for that client and connections should fail. The special value "*" may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list. If this is not specified or empty, no topology constraints will be applied.
-:param builtins.str traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are
+:param builtins.str traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
 :param Union[builtins.str, 'ServiceSpecType'] type: type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
         """
         if allocate_load_balancer_node_ports is not None:

@@ -27788,7 +27988,7 @@ class ServiceSpec(dict):
     @pulumi.getter(name="trafficDistribution")
     def traffic_distribution(self) -> Optional[builtins.str]:
         """
-        TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are
+        TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
         """
         return pulumi.get(self, "traffic_distribution")

@@ -27908,7 +28108,7 @@ class ServiceSpecPatch(dict):
 :param builtins.str session_affinity: Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
 :param 'SessionAffinityConfigPatchArgs' session_affinity_config: sessionAffinityConfig contains the configurations of session affinity.
 :param Sequence[builtins.str] topology_keys: topologyKeys is a preference-order list of topology keys which implementations of services should use to preferentially sort endpoints when accessing this Service, it can not be used at the same time as externalTrafficPolicy=Local. Topology keys must be valid label keys and at most 16 keys may be specified. Endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no backends for that client and connections should fail. The special value "*" may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list. If this is not specified or empty, no topology constraints will be applied.
-:param builtins.str traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are
+:param builtins.str traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
 :param Union[builtins.str, 'ServiceSpecType'] type: type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
         """
         if allocate_load_balancer_node_ports is not None:

@@ -28124,7 +28324,7 @@ class ServiceSpecPatch(dict):
     @pulumi.getter(name="trafficDistribution")
     def traffic_distribution(self) -> Optional[builtins.str]:
         """
-        TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are
+        TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
         """
         return pulumi.get(self, "traffic_distribution")
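The completed sentence documents the trafficDistribution hint that can be set on a Service spec. A hedged sketch under the provider's usual input-type naming; the selector, ports, and resource name are placeholders:

```python
import pulumi_kubernetes as k8s

svc = k8s.core.v1.Service(
    "web",
    spec=k8s.core.v1.ServiceSpecArgs(
        selector={"app": "web"},
        ports=[k8s.core.v1.ServicePortArgs(port=80, target_port=8080)],
        # A hint, not a guarantee: per the docstring above, implementations
        # should prefer endpoints in the same zone as the client.
        traffic_distribution="PreferClose",
    ),
)
```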
@@ -29423,10 +29623,10 @@ class TopologySpreadConstraint(dict):
 For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.
 :param builtins.str node_affinity_policy: NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

-If this value is nil, the behavior is equivalent to the Honor policy.
+If this value is nil, the behavior is equivalent to the Honor policy.
 :param builtins.str node_taints_policy: NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

-If this value is nil, the behavior is equivalent to the Ignore policy.
+If this value is nil, the behavior is equivalent to the Ignore policy.
         """
         pulumi.set(__self__, "max_skew", max_skew)
         pulumi.set(__self__, "topology_key", topology_key)

@@ -29503,7 +29703,7 @@ class TopologySpreadConstraint(dict):
         """
         NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

-        If this value is nil, the behavior is equivalent to the Honor policy.
+        If this value is nil, the behavior is equivalent to the Honor policy.
         """
         return pulumi.get(self, "node_affinity_policy")

@@ -29513,7 +29713,7 @@ class TopologySpreadConstraint(dict):
         """
         NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

-        If this value is nil, the behavior is equivalent to the Ignore policy.
+        If this value is nil, the behavior is equivalent to the Ignore policy.
         """
         return pulumi.get(self, "node_taints_policy")

@@ -29575,10 +29775,10 @@ class TopologySpreadConstraintPatch(dict):
 For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.
 :param builtins.str node_affinity_policy: NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

-If this value is nil, the behavior is equivalent to the Honor policy.
+If this value is nil, the behavior is equivalent to the Honor policy.
 :param builtins.str node_taints_policy: NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

-If this value is nil, the behavior is equivalent to the Ignore policy.
+If this value is nil, the behavior is equivalent to the Ignore policy.
 :param builtins.str topology_key: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
 :param builtins.str when_unsatisfiable: WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,
   but giving higher precedence to topologies that would help reduce the

@@ -29644,7 +29844,7 @@ class TopologySpreadConstraintPatch(dict):
         """
         NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

-        If this value is nil, the behavior is equivalent to the Honor policy.
+        If this value is nil, the behavior is equivalent to the Honor policy.
         """
         return pulumi.get(self, "node_affinity_policy")

@@ -29654,7 +29854,7 @@ class TopologySpreadConstraintPatch(dict):
         """
         NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

-        If this value is nil, the behavior is equivalent to the Ignore policy.
+        If this value is nil, the behavior is equivalent to the Ignore policy.
         """
         return pulumi.get(self, "node_taints_policy")
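These policies are consumed through TopologySpreadConstraintArgs on a pod spec. A sketch of the even-spread-across-zones case from the docstring, spelling out the two nil defaults described above; the labels, image, and resource name are placeholders:

```python
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod(
    "spread-example",
    spec=k8s.core.v1.PodSpecArgs(
        containers=[k8s.core.v1.ContainerArgs(name="app", image="nginx")],
        topology_spread_constraints=[
            k8s.core.v1.TopologySpreadConstraintArgs(
                max_skew=1,
                topology_key="topology.kubernetes.io/zone",
                when_unsatisfiable="DoNotSchedule",
                label_selector=k8s.meta.v1.LabelSelectorArgs(
                    match_labels={"app": "app"},
                ),
                node_affinity_policy="Honor",  # the documented nil default
                node_taints_policy="Ignore",   # the documented nil default
            )
        ],
    ),
)
```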
@@ -30073,7 +30273,7 @@ class Volume(dict):

 - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

-The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
 :param 'ISCSIVolumeSourceArgs' iscsi: iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
 :param 'NFSVolumeSourceArgs' nfs: nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
 :param 'PersistentVolumeClaimVolumeSourceArgs' persistent_volume_claim: persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

@@ -30314,7 +30514,7 @@ class Volume(dict):

 - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

-The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
         """
         return pulumi.get(self, "image")

@@ -31113,7 +31313,7 @@ class VolumePatch(dict):

 - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

-The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
 :param 'ISCSIVolumeSourcePatchArgs' iscsi: iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
 :param builtins.str name: name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
 :param 'NFSVolumeSourcePatchArgs' nfs: nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

@@ -31348,7 +31548,7 @@ class VolumePatch(dict):

 - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

-The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
         """
         return pulumi.get(self, "image")
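The image volume described in these docstrings mounts an OCI artifact read-only into a pod. A hedged sketch, assuming this provider exposes the Kubernetes ImageVolumeSource as an ImageVolumeSourceArgs input with reference and pull_policy fields; the registry path and resource names are placeholders:

```python
import pulumi_kubernetes as k8s

pod = k8s.core.v1.Pod(
    "oci-artifact-example",
    spec=k8s.core.v1.PodSpecArgs(
        containers=[
            k8s.core.v1.ContainerArgs(
                name="app",
                image="nginx",
                volume_mounts=[
                    # Mounted read-only (ro) and noexec per the docstring;
                    # subPath mounts are unsupported before Kubernetes 1.33.
                    k8s.core.v1.VolumeMountArgs(name="weights", mount_path="/data"),
                ],
            )
        ],
        volumes=[
            k8s.core.v1.VolumeArgs(
                name="weights",
                image=k8s.core.v1.ImageVolumeSourceArgs(
                    reference="registry.example.com/models/weights:v1",  # placeholder
                    pull_policy="IfNotPresent",
                ),
            )
        ],
    ),
)
```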