pulumi-kubernetes: pulumi_kubernetes-4.23.0a1746131759-py3-none-any.whl → pulumi_kubernetes-4.23.0a1746153578-py3-none-any.whl

This diff compares the contents of two package versions publicly released to a supported registry. It is provided for informational purposes only and reflects the changes between the versions as they appear in their respective public registries.

Potentially problematic release: this version of pulumi-kubernetes might be problematic.

Files changed (520)
  1. pulumi_kubernetes/__init__.py +36 -2
  2. pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfiguration.py +1 -3
  3. pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfigurationList.py +1 -3
  4. pulumi_kubernetes/admissionregistration/v1/MutatingWebhookConfigurationPatch.py +1 -3
  5. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicy.py +1 -3
  6. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBinding.py +1 -3
  7. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBindingList.py +1 -3
  8. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
  9. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyList.py +1 -3
  10. pulumi_kubernetes/admissionregistration/v1/ValidatingAdmissionPolicyPatch.py +1 -3
  11. pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfiguration.py +1 -3
  12. pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfigurationList.py +1 -3
  13. pulumi_kubernetes/admissionregistration/v1/ValidatingWebhookConfigurationPatch.py +1 -3
  14. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicy.py +1 -3
  15. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBinding.py +1 -3
  16. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBindingList.py +1 -3
  17. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyBindingPatch.py +1 -3
  18. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyList.py +1 -3
  19. pulumi_kubernetes/admissionregistration/v1alpha1/MutatingAdmissionPolicyPatch.py +1 -3
  20. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicy.py +1 -3
  21. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBinding.py +1 -3
  22. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBindingList.py +1 -3
  23. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
  24. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyList.py +1 -3
  25. pulumi_kubernetes/admissionregistration/v1alpha1/ValidatingAdmissionPolicyPatch.py +1 -3
  26. pulumi_kubernetes/admissionregistration/v1alpha1/_inputs.py +30 -30
  27. pulumi_kubernetes/admissionregistration/v1alpha1/outputs.py +20 -20
  28. pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfiguration.py +1 -3
  29. pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfigurationList.py +1 -3
  30. pulumi_kubernetes/admissionregistration/v1beta1/MutatingWebhookConfigurationPatch.py +1 -3
  31. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicy.py +1 -3
  32. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBinding.py +1 -3
  33. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBindingList.py +1 -3
  34. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyBindingPatch.py +1 -3
  35. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyList.py +1 -3
  36. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingAdmissionPolicyPatch.py +1 -3
  37. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfiguration.py +1 -3
  38. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfigurationList.py +1 -3
  39. pulumi_kubernetes/admissionregistration/v1beta1/ValidatingWebhookConfigurationPatch.py +1 -3
  40. pulumi_kubernetes/apiextensions/v1/CustomResourceDefinition.py +1 -3
  41. pulumi_kubernetes/apiextensions/v1/CustomResourceDefinitionList.py +1 -3
  42. pulumi_kubernetes/apiextensions/v1/CustomResourceDefinitionPatch.py +1 -3
  43. pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinition.py +1 -3
  44. pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinitionList.py +1 -3
  45. pulumi_kubernetes/apiextensions/v1beta1/CustomResourceDefinitionPatch.py +1 -3
  46. pulumi_kubernetes/apiregistration/v1/APIService.py +1 -3
  47. pulumi_kubernetes/apiregistration/v1/APIServiceList.py +1 -3
  48. pulumi_kubernetes/apiregistration/v1/APIServicePatch.py +1 -3
  49. pulumi_kubernetes/apiregistration/v1beta1/APIService.py +1 -3
  50. pulumi_kubernetes/apiregistration/v1beta1/APIServiceList.py +1 -3
  51. pulumi_kubernetes/apiregistration/v1beta1/APIServicePatch.py +1 -3
  52. pulumi_kubernetes/apps/v1/ControllerRevision.py +1 -3
  53. pulumi_kubernetes/apps/v1/ControllerRevisionList.py +1 -3
  54. pulumi_kubernetes/apps/v1/ControllerRevisionPatch.py +1 -3
  55. pulumi_kubernetes/apps/v1/DaemonSet.py +1 -3
  56. pulumi_kubernetes/apps/v1/DaemonSetList.py +1 -3
  57. pulumi_kubernetes/apps/v1/DaemonSetPatch.py +1 -3
  58. pulumi_kubernetes/apps/v1/Deployment.py +1 -3
  59. pulumi_kubernetes/apps/v1/DeploymentList.py +1 -3
  60. pulumi_kubernetes/apps/v1/DeploymentPatch.py +1 -3
  61. pulumi_kubernetes/apps/v1/ReplicaSet.py +1 -3
  62. pulumi_kubernetes/apps/v1/ReplicaSetList.py +5 -7
  63. pulumi_kubernetes/apps/v1/ReplicaSetPatch.py +1 -3
  64. pulumi_kubernetes/apps/v1/StatefulSet.py +1 -3
  65. pulumi_kubernetes/apps/v1/StatefulSetList.py +1 -3
  66. pulumi_kubernetes/apps/v1/StatefulSetPatch.py +1 -3
  67. pulumi_kubernetes/apps/v1/_inputs.py +109 -56
  68. pulumi_kubernetes/apps/v1/outputs.py +129 -56
  69. pulumi_kubernetes/apps/v1beta1/ControllerRevision.py +1 -3
  70. pulumi_kubernetes/apps/v1beta1/ControllerRevisionList.py +1 -3
  71. pulumi_kubernetes/apps/v1beta1/ControllerRevisionPatch.py +1 -3
  72. pulumi_kubernetes/apps/v1beta1/Deployment.py +1 -3
  73. pulumi_kubernetes/apps/v1beta1/DeploymentList.py +1 -3
  74. pulumi_kubernetes/apps/v1beta1/DeploymentPatch.py +1 -3
  75. pulumi_kubernetes/apps/v1beta1/StatefulSet.py +1 -3
  76. pulumi_kubernetes/apps/v1beta1/StatefulSetList.py +1 -3
  77. pulumi_kubernetes/apps/v1beta1/StatefulSetPatch.py +1 -3
  78. pulumi_kubernetes/apps/v1beta2/ControllerRevision.py +1 -3
  79. pulumi_kubernetes/apps/v1beta2/ControllerRevisionList.py +1 -3
  80. pulumi_kubernetes/apps/v1beta2/ControllerRevisionPatch.py +1 -3
  81. pulumi_kubernetes/apps/v1beta2/DaemonSet.py +1 -3
  82. pulumi_kubernetes/apps/v1beta2/DaemonSetList.py +1 -3
  83. pulumi_kubernetes/apps/v1beta2/DaemonSetPatch.py +1 -3
  84. pulumi_kubernetes/apps/v1beta2/Deployment.py +1 -3
  85. pulumi_kubernetes/apps/v1beta2/DeploymentList.py +1 -3
  86. pulumi_kubernetes/apps/v1beta2/DeploymentPatch.py +1 -3
  87. pulumi_kubernetes/apps/v1beta2/ReplicaSet.py +1 -3
  88. pulumi_kubernetes/apps/v1beta2/ReplicaSetList.py +1 -3
  89. pulumi_kubernetes/apps/v1beta2/ReplicaSetPatch.py +1 -3
  90. pulumi_kubernetes/apps/v1beta2/StatefulSet.py +1 -3
  91. pulumi_kubernetes/apps/v1beta2/StatefulSetList.py +1 -3
  92. pulumi_kubernetes/apps/v1beta2/StatefulSetPatch.py +1 -3
  93. pulumi_kubernetes/auditregistration/v1alpha1/AuditSink.py +1 -3
  94. pulumi_kubernetes/auditregistration/v1alpha1/AuditSinkList.py +1 -3
  95. pulumi_kubernetes/auditregistration/v1alpha1/AuditSinkPatch.py +1 -3
  96. pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscaler.py +1 -3
  97. pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscalerList.py +1 -3
  98. pulumi_kubernetes/autoscaling/v1/HorizontalPodAutoscalerPatch.py +1 -3
  99. pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscaler.py +1 -3
  100. pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscalerList.py +1 -3
  101. pulumi_kubernetes/autoscaling/v2/HorizontalPodAutoscalerPatch.py +1 -3
  102. pulumi_kubernetes/autoscaling/v2/_inputs.py +92 -12
  103. pulumi_kubernetes/autoscaling/v2/outputs.py +66 -10
  104. pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscaler.py +1 -3
  105. pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscalerList.py +1 -3
  106. pulumi_kubernetes/autoscaling/v2beta1/HorizontalPodAutoscalerPatch.py +1 -3
  107. pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscaler.py +1 -3
  108. pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscalerList.py +1 -3
  109. pulumi_kubernetes/autoscaling/v2beta2/HorizontalPodAutoscalerPatch.py +1 -3
  110. pulumi_kubernetes/batch/v1/CronJob.py +1 -3
  111. pulumi_kubernetes/batch/v1/CronJobList.py +1 -3
  112. pulumi_kubernetes/batch/v1/CronJobPatch.py +1 -3
  113. pulumi_kubernetes/batch/v1/Job.py +1 -3
  114. pulumi_kubernetes/batch/v1/JobList.py +1 -3
  115. pulumi_kubernetes/batch/v1/JobPatch.py +1 -3
  116. pulumi_kubernetes/batch/v1/_inputs.py +12 -42
  117. pulumi_kubernetes/batch/v1/outputs.py +8 -32
  118. pulumi_kubernetes/batch/v1beta1/CronJob.py +1 -3
  119. pulumi_kubernetes/batch/v1beta1/CronJobList.py +1 -3
  120. pulumi_kubernetes/batch/v1beta1/CronJobPatch.py +1 -3
  121. pulumi_kubernetes/batch/v2alpha1/CronJob.py +1 -3
  122. pulumi_kubernetes/batch/v2alpha1/CronJobList.py +1 -3
  123. pulumi_kubernetes/batch/v2alpha1/CronJobPatch.py +1 -3
  124. pulumi_kubernetes/certificates/v1/CertificateSigningRequest.py +1 -3
  125. pulumi_kubernetes/certificates/v1/CertificateSigningRequestList.py +1 -3
  126. pulumi_kubernetes/certificates/v1/CertificateSigningRequestPatch.py +1 -3
  127. pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundle.py +3 -3
  128. pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundleList.py +1 -3
  129. pulumi_kubernetes/certificates/v1alpha1/ClusterTrustBundlePatch.py +3 -3
  130. pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequest.py +1 -3
  131. pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequestList.py +1 -3
  132. pulumi_kubernetes/certificates/v1beta1/CertificateSigningRequestPatch.py +1 -3
  133. pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundle.py +227 -0
  134. pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundleList.py +217 -0
  135. pulumi_kubernetes/certificates/v1beta1/ClusterTrustBundlePatch.py +238 -0
  136. pulumi_kubernetes/certificates/v1beta1/__init__.py +3 -0
  137. pulumi_kubernetes/certificates/v1beta1/_inputs.py +292 -0
  138. pulumi_kubernetes/certificates/v1beta1/outputs.py +241 -0
  139. pulumi_kubernetes/coordination/v1/Lease.py +1 -3
  140. pulumi_kubernetes/coordination/v1/LeaseList.py +1 -3
  141. pulumi_kubernetes/coordination/v1/LeasePatch.py +1 -3
  142. pulumi_kubernetes/coordination/v1alpha1/LeaseCandidate.py +2 -4
  143. pulumi_kubernetes/coordination/v1alpha1/LeaseCandidateList.py +1 -3
  144. pulumi_kubernetes/coordination/v1alpha1/LeaseCandidatePatch.py +2 -4
  145. pulumi_kubernetes/coordination/v1alpha2/LeaseCandidate.py +2 -4
  146. pulumi_kubernetes/coordination/v1alpha2/LeaseCandidateList.py +1 -3
  147. pulumi_kubernetes/coordination/v1alpha2/LeaseCandidatePatch.py +2 -4
  148. pulumi_kubernetes/coordination/v1alpha2/_inputs.py +6 -6
  149. pulumi_kubernetes/coordination/v1alpha2/outputs.py +4 -4
  150. pulumi_kubernetes/coordination/v1beta1/Lease.py +1 -3
  151. pulumi_kubernetes/coordination/v1beta1/LeaseCandidate.py +218 -0
  152. pulumi_kubernetes/coordination/v1beta1/LeaseCandidateList.py +217 -0
  153. pulumi_kubernetes/coordination/v1beta1/LeaseCandidatePatch.py +230 -0
  154. pulumi_kubernetes/coordination/v1beta1/LeaseList.py +1 -3
  155. pulumi_kubernetes/coordination/v1beta1/LeasePatch.py +1 -3
  156. pulumi_kubernetes/coordination/v1beta1/__init__.py +3 -0
  157. pulumi_kubernetes/coordination/v1beta1/_inputs.py +371 -0
  158. pulumi_kubernetes/coordination/v1beta1/outputs.py +292 -0
  159. pulumi_kubernetes/core/v1/Binding.py +1 -3
  160. pulumi_kubernetes/core/v1/BindingPatch.py +1 -3
  161. pulumi_kubernetes/core/v1/ConfigMap.py +1 -3
  162. pulumi_kubernetes/core/v1/ConfigMapList.py +1 -3
  163. pulumi_kubernetes/core/v1/ConfigMapPatch.py +1 -3
  164. pulumi_kubernetes/core/v1/Endpoints.py +9 -3
  165. pulumi_kubernetes/core/v1/EndpointsList.py +3 -5
  166. pulumi_kubernetes/core/v1/EndpointsPatch.py +9 -3
  167. pulumi_kubernetes/core/v1/Event.py +1 -3
  168. pulumi_kubernetes/core/v1/EventList.py +1 -3
  169. pulumi_kubernetes/core/v1/EventPatch.py +1 -3
  170. pulumi_kubernetes/core/v1/LimitRange.py +1 -3
  171. pulumi_kubernetes/core/v1/LimitRangeList.py +1 -3
  172. pulumi_kubernetes/core/v1/LimitRangePatch.py +1 -3
  173. pulumi_kubernetes/core/v1/Namespace.py +1 -3
  174. pulumi_kubernetes/core/v1/NamespaceList.py +1 -3
  175. pulumi_kubernetes/core/v1/NamespacePatch.py +1 -3
  176. pulumi_kubernetes/core/v1/Node.py +1 -3
  177. pulumi_kubernetes/core/v1/NodeList.py +1 -3
  178. pulumi_kubernetes/core/v1/NodePatch.py +1 -3
  179. pulumi_kubernetes/core/v1/PersistentVolume.py +1 -3
  180. pulumi_kubernetes/core/v1/PersistentVolumeClaim.py +1 -3
  181. pulumi_kubernetes/core/v1/PersistentVolumeClaimList.py +1 -3
  182. pulumi_kubernetes/core/v1/PersistentVolumeClaimPatch.py +1 -3
  183. pulumi_kubernetes/core/v1/PersistentVolumeList.py +1 -3
  184. pulumi_kubernetes/core/v1/PersistentVolumePatch.py +1 -3
  185. pulumi_kubernetes/core/v1/Pod.py +1 -3
  186. pulumi_kubernetes/core/v1/PodList.py +1 -3
  187. pulumi_kubernetes/core/v1/PodPatch.py +1 -3
  188. pulumi_kubernetes/core/v1/PodTemplate.py +1 -3
  189. pulumi_kubernetes/core/v1/PodTemplateList.py +1 -3
  190. pulumi_kubernetes/core/v1/PodTemplatePatch.py +1 -3
  191. pulumi_kubernetes/core/v1/ReplicationController.py +1 -3
  192. pulumi_kubernetes/core/v1/ReplicationControllerList.py +1 -3
  193. pulumi_kubernetes/core/v1/ReplicationControllerPatch.py +1 -3
  194. pulumi_kubernetes/core/v1/ResourceQuota.py +1 -3
  195. pulumi_kubernetes/core/v1/ResourceQuotaList.py +1 -3
  196. pulumi_kubernetes/core/v1/ResourceQuotaPatch.py +1 -3
  197. pulumi_kubernetes/core/v1/Secret.py +1 -3
  198. pulumi_kubernetes/core/v1/SecretList.py +1 -3
  199. pulumi_kubernetes/core/v1/SecretPatch.py +1 -3
  200. pulumi_kubernetes/core/v1/Service.py +1 -3
  201. pulumi_kubernetes/core/v1/ServiceAccount.py +1 -3
  202. pulumi_kubernetes/core/v1/ServiceAccountList.py +1 -3
  203. pulumi_kubernetes/core/v1/ServiceAccountPatch.py +1 -3
  204. pulumi_kubernetes/core/v1/ServiceList.py +1 -3
  205. pulumi_kubernetes/core/v1/ServicePatch.py +1 -3
  206. pulumi_kubernetes/core/v1/_enums.py +2 -1
  207. pulumi_kubernetes/core/v1/_inputs.py +240 -66
  208. pulumi_kubernetes/core/v1/outputs.py +251 -51
  209. pulumi_kubernetes/discovery/v1/EndpointSlice.py +11 -13
  210. pulumi_kubernetes/discovery/v1/EndpointSliceList.py +1 -3
  211. pulumi_kubernetes/discovery/v1/EndpointSlicePatch.py +11 -13
  212. pulumi_kubernetes/discovery/v1/_inputs.py +159 -44
  213. pulumi_kubernetes/discovery/v1/outputs.py +107 -32
  214. pulumi_kubernetes/discovery/v1beta1/EndpointSlice.py +1 -3
  215. pulumi_kubernetes/discovery/v1beta1/EndpointSliceList.py +1 -3
  216. pulumi_kubernetes/discovery/v1beta1/EndpointSlicePatch.py +1 -3
  217. pulumi_kubernetes/events/v1/Event.py +1 -3
  218. pulumi_kubernetes/events/v1/EventList.py +1 -3
  219. pulumi_kubernetes/events/v1/EventPatch.py +1 -3
  220. pulumi_kubernetes/events/v1beta1/Event.py +1 -3
  221. pulumi_kubernetes/events/v1beta1/EventList.py +1 -3
  222. pulumi_kubernetes/events/v1beta1/EventPatch.py +1 -3
  223. pulumi_kubernetes/extensions/v1beta1/DaemonSet.py +1 -3
  224. pulumi_kubernetes/extensions/v1beta1/DaemonSetList.py +1 -3
  225. pulumi_kubernetes/extensions/v1beta1/DaemonSetPatch.py +1 -3
  226. pulumi_kubernetes/extensions/v1beta1/Deployment.py +1 -3
  227. pulumi_kubernetes/extensions/v1beta1/DeploymentList.py +1 -3
  228. pulumi_kubernetes/extensions/v1beta1/DeploymentPatch.py +1 -3
  229. pulumi_kubernetes/extensions/v1beta1/Ingress.py +1 -3
  230. pulumi_kubernetes/extensions/v1beta1/IngressList.py +1 -3
  231. pulumi_kubernetes/extensions/v1beta1/IngressPatch.py +1 -3
  232. pulumi_kubernetes/extensions/v1beta1/NetworkPolicy.py +1 -3
  233. pulumi_kubernetes/extensions/v1beta1/NetworkPolicyList.py +1 -3
  234. pulumi_kubernetes/extensions/v1beta1/NetworkPolicyPatch.py +1 -3
  235. pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicy.py +1 -3
  236. pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicyList.py +1 -3
  237. pulumi_kubernetes/extensions/v1beta1/PodSecurityPolicyPatch.py +1 -3
  238. pulumi_kubernetes/extensions/v1beta1/ReplicaSet.py +1 -3
  239. pulumi_kubernetes/extensions/v1beta1/ReplicaSetList.py +1 -3
  240. pulumi_kubernetes/extensions/v1beta1/ReplicaSetPatch.py +1 -3
  241. pulumi_kubernetes/flowcontrol/v1/FlowSchema.py +1 -3
  242. pulumi_kubernetes/flowcontrol/v1/FlowSchemaList.py +1 -3
  243. pulumi_kubernetes/flowcontrol/v1/FlowSchemaPatch.py +1 -3
  244. pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfiguration.py +1 -3
  245. pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfigurationList.py +1 -3
  246. pulumi_kubernetes/flowcontrol/v1/PriorityLevelConfigurationPatch.py +1 -3
  247. pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchema.py +1 -3
  248. pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchemaList.py +1 -3
  249. pulumi_kubernetes/flowcontrol/v1alpha1/FlowSchemaPatch.py +1 -3
  250. pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfiguration.py +1 -3
  251. pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfigurationList.py +1 -3
  252. pulumi_kubernetes/flowcontrol/v1alpha1/PriorityLevelConfigurationPatch.py +1 -3
  253. pulumi_kubernetes/flowcontrol/v1beta1/FlowSchema.py +1 -3
  254. pulumi_kubernetes/flowcontrol/v1beta1/FlowSchemaList.py +1 -3
  255. pulumi_kubernetes/flowcontrol/v1beta1/FlowSchemaPatch.py +1 -3
  256. pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfiguration.py +1 -3
  257. pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfigurationList.py +1 -3
  258. pulumi_kubernetes/flowcontrol/v1beta1/PriorityLevelConfigurationPatch.py +1 -3
  259. pulumi_kubernetes/flowcontrol/v1beta2/FlowSchema.py +1 -3
  260. pulumi_kubernetes/flowcontrol/v1beta2/FlowSchemaList.py +1 -3
  261. pulumi_kubernetes/flowcontrol/v1beta2/FlowSchemaPatch.py +1 -3
  262. pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfiguration.py +1 -3
  263. pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfigurationList.py +1 -3
  264. pulumi_kubernetes/flowcontrol/v1beta2/PriorityLevelConfigurationPatch.py +1 -3
  265. pulumi_kubernetes/flowcontrol/v1beta3/FlowSchema.py +1 -3
  266. pulumi_kubernetes/flowcontrol/v1beta3/FlowSchemaList.py +1 -3
  267. pulumi_kubernetes/flowcontrol/v1beta3/FlowSchemaPatch.py +1 -3
  268. pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfiguration.py +1 -3
  269. pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfigurationList.py +1 -3
  270. pulumi_kubernetes/flowcontrol/v1beta3/PriorityLevelConfigurationPatch.py +1 -3
  271. pulumi_kubernetes/helm/v3/Release.py +1 -3
  272. pulumi_kubernetes/helm/v4/Chart.py +1 -3
  273. pulumi_kubernetes/kustomize/v2/Directory.py +1 -3
  274. pulumi_kubernetes/meta/v1/Status.py +1 -3
  275. pulumi_kubernetes/meta/v1/StatusPatch.py +1 -3
  276. pulumi_kubernetes/networking/v1/IPAddress.py +218 -0
  277. pulumi_kubernetes/networking/v1/IPAddressList.py +217 -0
  278. pulumi_kubernetes/networking/v1/IPAddressPatch.py +230 -0
  279. pulumi_kubernetes/networking/v1/Ingress.py +1 -3
  280. pulumi_kubernetes/networking/v1/IngressClass.py +1 -3
  281. pulumi_kubernetes/networking/v1/IngressClassList.py +1 -3
  282. pulumi_kubernetes/networking/v1/IngressClassPatch.py +1 -3
  283. pulumi_kubernetes/networking/v1/IngressList.py +1 -3
  284. pulumi_kubernetes/networking/v1/IngressPatch.py +1 -3
  285. pulumi_kubernetes/networking/v1/NetworkPolicy.py +1 -3
  286. pulumi_kubernetes/networking/v1/NetworkPolicyList.py +1 -3
  287. pulumi_kubernetes/networking/v1/NetworkPolicyPatch.py +1 -3
  288. pulumi_kubernetes/networking/v1/ServiceCIDR.py +228 -0
  289. pulumi_kubernetes/networking/v1/ServiceCIDRList.py +217 -0
  290. pulumi_kubernetes/networking/v1/ServiceCIDRPatch.py +240 -0
  291. pulumi_kubernetes/networking/v1/__init__.py +6 -0
  292. pulumi_kubernetes/networking/v1/_inputs.py +599 -0
  293. pulumi_kubernetes/networking/v1/outputs.py +461 -0
  294. pulumi_kubernetes/networking/v1alpha1/ClusterCIDR.py +1 -3
  295. pulumi_kubernetes/networking/v1alpha1/ClusterCIDRList.py +1 -3
  296. pulumi_kubernetes/networking/v1alpha1/ClusterCIDRPatch.py +1 -3
  297. pulumi_kubernetes/networking/v1alpha1/IPAddress.py +2 -4
  298. pulumi_kubernetes/networking/v1alpha1/IPAddressList.py +1 -3
  299. pulumi_kubernetes/networking/v1alpha1/IPAddressPatch.py +2 -4
  300. pulumi_kubernetes/networking/v1alpha1/ServiceCIDR.py +2 -4
  301. pulumi_kubernetes/networking/v1alpha1/ServiceCIDRList.py +1 -3
  302. pulumi_kubernetes/networking/v1alpha1/ServiceCIDRPatch.py +2 -4
  303. pulumi_kubernetes/networking/v1beta1/IPAddress.py +2 -4
  304. pulumi_kubernetes/networking/v1beta1/IPAddressList.py +1 -3
  305. pulumi_kubernetes/networking/v1beta1/IPAddressPatch.py +2 -4
  306. pulumi_kubernetes/networking/v1beta1/Ingress.py +1 -3
  307. pulumi_kubernetes/networking/v1beta1/IngressClass.py +1 -3
  308. pulumi_kubernetes/networking/v1beta1/IngressClassList.py +1 -3
  309. pulumi_kubernetes/networking/v1beta1/IngressClassPatch.py +1 -3
  310. pulumi_kubernetes/networking/v1beta1/IngressList.py +1 -3
  311. pulumi_kubernetes/networking/v1beta1/IngressPatch.py +1 -3
  312. pulumi_kubernetes/networking/v1beta1/ServiceCIDR.py +2 -4
  313. pulumi_kubernetes/networking/v1beta1/ServiceCIDRList.py +1 -3
  314. pulumi_kubernetes/networking/v1beta1/ServiceCIDRPatch.py +2 -4
  315. pulumi_kubernetes/node/v1/RuntimeClass.py +1 -3
  316. pulumi_kubernetes/node/v1/RuntimeClassList.py +1 -3
  317. pulumi_kubernetes/node/v1/RuntimeClassPatch.py +1 -3
  318. pulumi_kubernetes/node/v1alpha1/RuntimeClass.py +1 -3
  319. pulumi_kubernetes/node/v1alpha1/RuntimeClassList.py +1 -3
  320. pulumi_kubernetes/node/v1alpha1/RuntimeClassPatch.py +1 -3
  321. pulumi_kubernetes/node/v1beta1/RuntimeClass.py +1 -3
  322. pulumi_kubernetes/node/v1beta1/RuntimeClassList.py +1 -3
  323. pulumi_kubernetes/node/v1beta1/RuntimeClassPatch.py +1 -3
  324. pulumi_kubernetes/policy/v1/PodDisruptionBudget.py +1 -3
  325. pulumi_kubernetes/policy/v1/PodDisruptionBudgetList.py +1 -3
  326. pulumi_kubernetes/policy/v1/PodDisruptionBudgetPatch.py +1 -3
  327. pulumi_kubernetes/policy/v1/_inputs.py +0 -12
  328. pulumi_kubernetes/policy/v1/outputs.py +0 -8
  329. pulumi_kubernetes/policy/v1beta1/PodDisruptionBudget.py +1 -3
  330. pulumi_kubernetes/policy/v1beta1/PodDisruptionBudgetList.py +1 -3
  331. pulumi_kubernetes/policy/v1beta1/PodDisruptionBudgetPatch.py +1 -3
  332. pulumi_kubernetes/policy/v1beta1/PodSecurityPolicy.py +1 -3
  333. pulumi_kubernetes/policy/v1beta1/PodSecurityPolicyList.py +1 -3
  334. pulumi_kubernetes/policy/v1beta1/PodSecurityPolicyPatch.py +1 -3
  335. pulumi_kubernetes/provider.py +1 -3
  336. pulumi_kubernetes/pulumi-plugin.json +1 -1
  337. pulumi_kubernetes/rbac/v1/ClusterRole.py +1 -3
  338. pulumi_kubernetes/rbac/v1/ClusterRoleBinding.py +1 -3
  339. pulumi_kubernetes/rbac/v1/ClusterRoleBindingList.py +1 -3
  340. pulumi_kubernetes/rbac/v1/ClusterRoleBindingPatch.py +1 -3
  341. pulumi_kubernetes/rbac/v1/ClusterRoleList.py +1 -3
  342. pulumi_kubernetes/rbac/v1/ClusterRolePatch.py +1 -3
  343. pulumi_kubernetes/rbac/v1/Role.py +1 -3
  344. pulumi_kubernetes/rbac/v1/RoleBinding.py +1 -3
  345. pulumi_kubernetes/rbac/v1/RoleBindingList.py +1 -3
  346. pulumi_kubernetes/rbac/v1/RoleBindingPatch.py +1 -3
  347. pulumi_kubernetes/rbac/v1/RoleList.py +1 -3
  348. pulumi_kubernetes/rbac/v1/RolePatch.py +1 -3
  349. pulumi_kubernetes/rbac/v1alpha1/ClusterRole.py +1 -3
  350. pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBinding.py +1 -3
  351. pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBindingList.py +1 -3
  352. pulumi_kubernetes/rbac/v1alpha1/ClusterRoleBindingPatch.py +1 -3
  353. pulumi_kubernetes/rbac/v1alpha1/ClusterRoleList.py +1 -3
  354. pulumi_kubernetes/rbac/v1alpha1/ClusterRolePatch.py +1 -3
  355. pulumi_kubernetes/rbac/v1alpha1/Role.py +1 -3
  356. pulumi_kubernetes/rbac/v1alpha1/RoleBinding.py +1 -3
  357. pulumi_kubernetes/rbac/v1alpha1/RoleBindingList.py +1 -3
  358. pulumi_kubernetes/rbac/v1alpha1/RoleBindingPatch.py +1 -3
  359. pulumi_kubernetes/rbac/v1alpha1/RoleList.py +1 -3
  360. pulumi_kubernetes/rbac/v1alpha1/RolePatch.py +1 -3
  361. pulumi_kubernetes/rbac/v1beta1/ClusterRole.py +1 -3
  362. pulumi_kubernetes/rbac/v1beta1/ClusterRoleBinding.py +1 -3
  363. pulumi_kubernetes/rbac/v1beta1/ClusterRoleBindingList.py +1 -3
  364. pulumi_kubernetes/rbac/v1beta1/ClusterRoleBindingPatch.py +1 -3
  365. pulumi_kubernetes/rbac/v1beta1/ClusterRoleList.py +1 -3
  366. pulumi_kubernetes/rbac/v1beta1/ClusterRolePatch.py +1 -3
  367. pulumi_kubernetes/rbac/v1beta1/Role.py +1 -3
  368. pulumi_kubernetes/rbac/v1beta1/RoleBinding.py +1 -3
  369. pulumi_kubernetes/rbac/v1beta1/RoleBindingList.py +1 -3
  370. pulumi_kubernetes/rbac/v1beta1/RoleBindingPatch.py +1 -3
  371. pulumi_kubernetes/rbac/v1beta1/RoleList.py +1 -3
  372. pulumi_kubernetes/rbac/v1beta1/RolePatch.py +1 -3
  373. pulumi_kubernetes/resource/__init__.py +3 -0
  374. pulumi_kubernetes/resource/v1alpha1/PodScheduling.py +1 -3
  375. pulumi_kubernetes/resource/v1alpha1/PodSchedulingList.py +1 -3
  376. pulumi_kubernetes/resource/v1alpha1/PodSchedulingPatch.py +1 -3
  377. pulumi_kubernetes/resource/v1alpha1/ResourceClaim.py +2 -4
  378. pulumi_kubernetes/resource/v1alpha1/ResourceClaimList.py +1 -3
  379. pulumi_kubernetes/resource/v1alpha1/ResourceClaimPatch.py +2 -4
  380. pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplate.py +2 -4
  381. pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplateList.py +1 -3
  382. pulumi_kubernetes/resource/v1alpha1/ResourceClaimTemplatePatch.py +2 -4
  383. pulumi_kubernetes/resource/v1alpha1/ResourceClass.py +1 -3
  384. pulumi_kubernetes/resource/v1alpha1/ResourceClassList.py +1 -3
  385. pulumi_kubernetes/resource/v1alpha1/ResourceClassPatch.py +1 -3
  386. pulumi_kubernetes/resource/v1alpha2/PodSchedulingContext.py +1 -3
  387. pulumi_kubernetes/resource/v1alpha2/PodSchedulingContextList.py +1 -3
  388. pulumi_kubernetes/resource/v1alpha2/PodSchedulingContextPatch.py +1 -3
  389. pulumi_kubernetes/resource/v1alpha2/ResourceClaim.py +2 -4
  390. pulumi_kubernetes/resource/v1alpha2/ResourceClaimList.py +1 -3
  391. pulumi_kubernetes/resource/v1alpha2/ResourceClaimParameters.py +1 -3
  392. pulumi_kubernetes/resource/v1alpha2/ResourceClaimParametersList.py +1 -3
  393. pulumi_kubernetes/resource/v1alpha2/ResourceClaimParametersPatch.py +1 -3
  394. pulumi_kubernetes/resource/v1alpha2/ResourceClaimPatch.py +2 -4
  395. pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplate.py +2 -4
  396. pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplateList.py +1 -3
  397. pulumi_kubernetes/resource/v1alpha2/ResourceClaimTemplatePatch.py +2 -4
  398. pulumi_kubernetes/resource/v1alpha2/ResourceClass.py +1 -3
  399. pulumi_kubernetes/resource/v1alpha2/ResourceClassList.py +1 -3
  400. pulumi_kubernetes/resource/v1alpha2/ResourceClassParameters.py +1 -3
  401. pulumi_kubernetes/resource/v1alpha2/ResourceClassParametersList.py +1 -3
  402. pulumi_kubernetes/resource/v1alpha2/ResourceClassParametersPatch.py +1 -3
  403. pulumi_kubernetes/resource/v1alpha2/ResourceClassPatch.py +1 -3
  404. pulumi_kubernetes/resource/v1alpha2/ResourceSlice.py +2 -4
  405. pulumi_kubernetes/resource/v1alpha2/ResourceSliceList.py +1 -3
  406. pulumi_kubernetes/resource/v1alpha2/ResourceSlicePatch.py +2 -4
  407. pulumi_kubernetes/resource/v1alpha3/DeviceClass.py +2 -4
  408. pulumi_kubernetes/resource/v1alpha3/DeviceClassList.py +1 -3
  409. pulumi_kubernetes/resource/v1alpha3/DeviceClassPatch.py +2 -4
  410. pulumi_kubernetes/resource/v1alpha3/DeviceTaintRule.py +225 -0
  411. pulumi_kubernetes/resource/v1alpha3/DeviceTaintRuleList.py +217 -0
  412. pulumi_kubernetes/resource/v1alpha3/DeviceTaintRulePatch.py +236 -0
  413. pulumi_kubernetes/resource/v1alpha3/PodSchedulingContext.py +1 -3
  414. pulumi_kubernetes/resource/v1alpha3/PodSchedulingContextList.py +1 -3
  415. pulumi_kubernetes/resource/v1alpha3/PodSchedulingContextPatch.py +1 -3
  416. pulumi_kubernetes/resource/v1alpha3/ResourceClaim.py +2 -4
  417. pulumi_kubernetes/resource/v1alpha3/ResourceClaimList.py +1 -3
  418. pulumi_kubernetes/resource/v1alpha3/ResourceClaimPatch.py +2 -4
  419. pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplate.py +2 -4
  420. pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplateList.py +1 -3
  421. pulumi_kubernetes/resource/v1alpha3/ResourceClaimTemplatePatch.py +2 -4
  422. pulumi_kubernetes/resource/v1alpha3/ResourceSlice.py +2 -4
  423. pulumi_kubernetes/resource/v1alpha3/ResourceSliceList.py +1 -3
  424. pulumi_kubernetes/resource/v1alpha3/ResourceSlicePatch.py +2 -4
  425. pulumi_kubernetes/resource/v1alpha3/__init__.py +3 -0
  426. pulumi_kubernetes/resource/v1alpha3/_inputs.py +2559 -213
  427. pulumi_kubernetes/resource/v1alpha3/outputs.py +2037 -256
  428. pulumi_kubernetes/resource/v1beta1/DeviceClass.py +2 -4
  429. pulumi_kubernetes/resource/v1beta1/DeviceClassList.py +1 -3
  430. pulumi_kubernetes/resource/v1beta1/DeviceClassPatch.py +2 -4
  431. pulumi_kubernetes/resource/v1beta1/ResourceClaim.py +2 -4
  432. pulumi_kubernetes/resource/v1beta1/ResourceClaimList.py +1 -3
  433. pulumi_kubernetes/resource/v1beta1/ResourceClaimPatch.py +2 -4
  434. pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplate.py +2 -4
  435. pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplateList.py +1 -3
  436. pulumi_kubernetes/resource/v1beta1/ResourceClaimTemplatePatch.py +2 -4
  437. pulumi_kubernetes/resource/v1beta1/ResourceSlice.py +2 -4
  438. pulumi_kubernetes/resource/v1beta1/ResourceSliceList.py +1 -3
  439. pulumi_kubernetes/resource/v1beta1/ResourceSlicePatch.py +2 -4
  440. pulumi_kubernetes/resource/v1beta1/_inputs.py +2044 -176
  441. pulumi_kubernetes/resource/v1beta1/outputs.py +1536 -134
  442. pulumi_kubernetes/resource/v1beta2/DeviceClass.py +239 -0
  443. pulumi_kubernetes/resource/v1beta2/DeviceClassList.py +217 -0
  444. pulumi_kubernetes/resource/v1beta2/DeviceClassPatch.py +250 -0
  445. pulumi_kubernetes/resource/v1beta2/ResourceClaim.py +234 -0
  446. pulumi_kubernetes/resource/v1beta2/ResourceClaimList.py +218 -0
  447. pulumi_kubernetes/resource/v1beta2/ResourceClaimPatch.py +245 -0
  448. pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplate.py +231 -0
  449. pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplateList.py +217 -0
  450. pulumi_kubernetes/resource/v1beta2/ResourceClaimTemplatePatch.py +242 -0
  451. pulumi_kubernetes/resource/v1beta2/ResourceSlice.py +248 -0
  452. pulumi_kubernetes/resource/v1beta2/ResourceSliceList.py +218 -0
  453. pulumi_kubernetes/resource/v1beta2/ResourceSlicePatch.py +259 -0
  454. pulumi_kubernetes/resource/v1beta2/__init__.py +22 -0
  455. pulumi_kubernetes/resource/v1beta2/_inputs.py +5681 -0
  456. pulumi_kubernetes/resource/v1beta2/outputs.py +4726 -0
  457. pulumi_kubernetes/scheduling/v1/PriorityClass.py +1 -3
  458. pulumi_kubernetes/scheduling/v1/PriorityClassList.py +1 -3
  459. pulumi_kubernetes/scheduling/v1/PriorityClassPatch.py +1 -3
  460. pulumi_kubernetes/scheduling/v1alpha1/PriorityClass.py +1 -3
  461. pulumi_kubernetes/scheduling/v1alpha1/PriorityClassList.py +1 -3
  462. pulumi_kubernetes/scheduling/v1alpha1/PriorityClassPatch.py +1 -3
  463. pulumi_kubernetes/scheduling/v1beta1/PriorityClass.py +1 -3
  464. pulumi_kubernetes/scheduling/v1beta1/PriorityClassList.py +1 -3
  465. pulumi_kubernetes/scheduling/v1beta1/PriorityClassPatch.py +1 -3
  466. pulumi_kubernetes/settings/v1alpha1/PodPreset.py +1 -3
  467. pulumi_kubernetes/settings/v1alpha1/PodPresetList.py +1 -3
  468. pulumi_kubernetes/settings/v1alpha1/PodPresetPatch.py +1 -3
  469. pulumi_kubernetes/storage/v1/CSIDriver.py +1 -3
  470. pulumi_kubernetes/storage/v1/CSIDriverList.py +1 -3
  471. pulumi_kubernetes/storage/v1/CSIDriverPatch.py +1 -3
  472. pulumi_kubernetes/storage/v1/CSINode.py +1 -3
  473. pulumi_kubernetes/storage/v1/CSINodeList.py +1 -3
  474. pulumi_kubernetes/storage/v1/CSINodePatch.py +1 -3
  475. pulumi_kubernetes/storage/v1/CSIStorageCapacity.py +1 -3
  476. pulumi_kubernetes/storage/v1/CSIStorageCapacityList.py +1 -3
  477. pulumi_kubernetes/storage/v1/CSIStorageCapacityPatch.py +1 -3
  478. pulumi_kubernetes/storage/v1/StorageClass.py +1 -3
  479. pulumi_kubernetes/storage/v1/StorageClassList.py +1 -3
  480. pulumi_kubernetes/storage/v1/StorageClassPatch.py +1 -3
  481. pulumi_kubernetes/storage/v1/VolumeAttachment.py +1 -3
  482. pulumi_kubernetes/storage/v1/VolumeAttachmentList.py +1 -3
  483. pulumi_kubernetes/storage/v1/VolumeAttachmentPatch.py +1 -3
  484. pulumi_kubernetes/storage/v1/_inputs.py +90 -0
  485. pulumi_kubernetes/storage/v1/outputs.py +110 -0
  486. pulumi_kubernetes/storage/v1alpha1/VolumeAttachment.py +1 -3
  487. pulumi_kubernetes/storage/v1alpha1/VolumeAttachmentList.py +1 -3
  488. pulumi_kubernetes/storage/v1alpha1/VolumeAttachmentPatch.py +1 -3
  489. pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClass.py +1 -3
  490. pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClassList.py +1 -3
  491. pulumi_kubernetes/storage/v1alpha1/VolumeAttributesClassPatch.py +1 -3
  492. pulumi_kubernetes/storage/v1beta1/CSIDriver.py +1 -3
  493. pulumi_kubernetes/storage/v1beta1/CSIDriverList.py +1 -3
  494. pulumi_kubernetes/storage/v1beta1/CSIDriverPatch.py +1 -3
  495. pulumi_kubernetes/storage/v1beta1/CSINode.py +1 -3
  496. pulumi_kubernetes/storage/v1beta1/CSINodeList.py +1 -3
  497. pulumi_kubernetes/storage/v1beta1/CSINodePatch.py +1 -3
  498. pulumi_kubernetes/storage/v1beta1/CSIStorageCapacity.py +1 -3
  499. pulumi_kubernetes/storage/v1beta1/CSIStorageCapacityList.py +1 -3
  500. pulumi_kubernetes/storage/v1beta1/CSIStorageCapacityPatch.py +1 -3
  501. pulumi_kubernetes/storage/v1beta1/StorageClass.py +1 -3
  502. pulumi_kubernetes/storage/v1beta1/StorageClassList.py +1 -3
  503. pulumi_kubernetes/storage/v1beta1/StorageClassPatch.py +1 -3
  504. pulumi_kubernetes/storage/v1beta1/VolumeAttachment.py +1 -3
  505. pulumi_kubernetes/storage/v1beta1/VolumeAttachmentList.py +1 -3
  506. pulumi_kubernetes/storage/v1beta1/VolumeAttachmentPatch.py +1 -3
  507. pulumi_kubernetes/storage/v1beta1/VolumeAttributesClass.py +1 -3
  508. pulumi_kubernetes/storage/v1beta1/VolumeAttributesClassList.py +1 -3
  509. pulumi_kubernetes/storage/v1beta1/VolumeAttributesClassPatch.py +1 -3
  510. pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigration.py +1 -3
  511. pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigrationList.py +1 -3
  512. pulumi_kubernetes/storagemigration/v1alpha1/StorageVersionMigrationPatch.py +1 -3
  513. pulumi_kubernetes/yaml/v2/ConfigFile.py +1 -3
  514. pulumi_kubernetes/yaml/v2/ConfigGroup.py +1 -3
  515. pulumi_kubernetes/yaml/yaml.py +108 -0
  516. {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/METADATA +2 -2
  517. pulumi_kubernetes-4.23.0a1746153578.dist-info/RECORD +709 -0
  518. pulumi_kubernetes-4.23.0a1746131759.dist-info/RECORD +0 -679
  519. {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/WHEEL +0 -0
  520. {pulumi_kubernetes-4.23.0a1746131759.dist-info → pulumi_kubernetes-4.23.0a1746153578.dist-info}/top_level.txt +0 -0
@@ -358,6 +358,8 @@ __all__ = [
  'NodeSpecArgsDict',
  'NodeStatusArgs',
  'NodeStatusArgsDict',
+ 'NodeSwapStatusArgs',
+ 'NodeSwapStatusArgsDict',
  'NodeSystemInfoArgs',
  'NodeSystemInfoArgsDict',
  'NodeArgs',
@@ -6258,6 +6260,10 @@ if not MYPY:
  """
  State holds details about the container's current condition.
  """
+ stop_signal: NotRequired[pulumi.Input[builtins.str]]
+ """
+ StopSignal reports the effective stop signal for this container
+ """
  user: NotRequired[pulumi.Input['ContainerUserArgsDict']]
  """
  User represents user identity information initially attached to the first process of the container
@@ -6284,6 +6290,7 @@ class ContainerStatusArgs:
  resources: Optional[pulumi.Input['ResourceRequirementsArgs']] = None,
  started: Optional[pulumi.Input[builtins.bool]] = None,
  state: Optional[pulumi.Input['ContainerStateArgs']] = None,
+ stop_signal: Optional[pulumi.Input[builtins.str]] = None,
  user: Optional[pulumi.Input['ContainerUserArgs']] = None,
  volume_mounts: Optional[pulumi.Input[Sequence[pulumi.Input['VolumeMountStatusArgs']]]] = None):
  """
@@ -6302,6 +6309,7 @@ class ContainerStatusArgs:
  :param pulumi.Input['ResourceRequirementsArgs'] resources: Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.
  :param pulumi.Input[builtins.bool] started: Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.
  :param pulumi.Input['ContainerStateArgs'] state: State holds details about the container's current condition.
+ :param pulumi.Input[builtins.str] stop_signal: StopSignal reports the effective stop signal for this container
  :param pulumi.Input['ContainerUserArgs'] user: User represents user identity information initially attached to the first process of the container
  :param pulumi.Input[Sequence[pulumi.Input['VolumeMountStatusArgs']]] volume_mounts: Status of volume mounts.
  """
@@ -6324,6 +6332,8 @@ class ContainerStatusArgs:
  pulumi.set(__self__, "started", started)
  if state is not None:
  pulumi.set(__self__, "state", state)
+ if stop_signal is not None:
+ pulumi.set(__self__, "stop_signal", stop_signal)
  if user is not None:
  pulumi.set(__self__, "user", user)
  if volume_mounts is not None:
@@ -6475,6 +6485,18 @@ class ContainerStatusArgs:
  def state(self, value: Optional[pulumi.Input['ContainerStateArgs']]):
  pulumi.set(self, "state", value)
 
+ @property
+ @pulumi.getter(name="stopSignal")
+ def stop_signal(self) -> Optional[pulumi.Input[builtins.str]]:
+ """
+ StopSignal reports the effective stop signal for this container
+ """
+ return pulumi.get(self, "stop_signal")
+
+ @stop_signal.setter
+ def stop_signal(self, value: Optional[pulumi.Input[builtins.str]]):
+ pulumi.set(self, "stop_signal", value)
+
  @property
  @pulumi.getter
  def user(self) -> Optional[pulumi.Input['ContainerUserArgs']]:
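The hunks above add a read-only `stopSignal` output field to `ContainerStatus`. A minimal sketch of consuming it, using the plain-dict (`ContainerStatusArgsDict`) shape from this diff so it runs without the SDK installed; the status values are hypothetical:

```python
# Sketch: reading the new ContainerStatus.stopSignal output field.
# stop_signal is optional (NotRequired), so statuses from older clusters
# simply omit it; fall back to a caller-supplied default in that case.
def effective_stop_signal(status: dict, default: str = "SIGTERM") -> str:
    """Return the reported stop signal, or the given default if absent."""
    return status.get("stop_signal") or default

status = {"name": "web", "image": "nginx", "stop_signal": "SIGQUIT"}
print(effective_stop_signal(status))           # SIGQUIT
print(effective_stop_signal({"name": "old"}))  # SIGTERM
```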
@@ -7556,7 +7578,7 @@ class EmptyDirVolumeSourceArgs:
  if not MYPY:
  class EndpointAddressPatchArgsDict(TypedDict):
  """
- EndpointAddress is a tuple that describes single IP address.
+ EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
  """
  hostname: NotRequired[pulumi.Input[builtins.str]]
  """
@@ -7585,7 +7607,7 @@ class EndpointAddressPatchArgs:
  node_name: Optional[pulumi.Input[builtins.str]] = None,
  target_ref: Optional[pulumi.Input['ObjectReferencePatchArgs']] = None):
  """
- EndpointAddress is a tuple that describes single IP address.
+ EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[builtins.str] hostname: The Hostname of this endpoint
  :param pulumi.Input[builtins.str] ip: The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16).
  :param pulumi.Input[builtins.str] node_name: Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node.
@@ -7652,7 +7674,7 @@ class EndpointAddressPatchArgs:
  if not MYPY:
  class EndpointAddressArgsDict(TypedDict):
  """
- EndpointAddress is a tuple that describes single IP address.
+ EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
  """
  ip: pulumi.Input[builtins.str]
  """
@@ -7681,7 +7703,7 @@ class EndpointAddressArgs:
  node_name: Optional[pulumi.Input[builtins.str]] = None,
  target_ref: Optional[pulumi.Input['ObjectReferenceArgs']] = None):
  """
- EndpointAddress is a tuple that describes single IP address.
+ EndpointAddress is a tuple that describes single IP address. Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[builtins.str] ip: The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16).
  :param pulumi.Input[builtins.str] hostname: The Hostname of this endpoint
  :param pulumi.Input[builtins.str] node_name: Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node.
@@ -7747,7 +7769,7 @@ class EndpointAddressArgs:
  if not MYPY:
  class EndpointPortPatchArgsDict(TypedDict):
  """
- EndpointPort is a tuple that describes a single port.
+ EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
  """
  app_protocol: NotRequired[pulumi.Input[builtins.str]]
  """
@@ -7785,7 +7807,7 @@ class EndpointPortPatchArgs:
  port: Optional[pulumi.Input[builtins.int]] = None,
  protocol: Optional[pulumi.Input[builtins.str]] = None):
  """
- EndpointPort is a tuple that describes a single port.
+ EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[builtins.str] app_protocol: The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:
 
  * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).
@@ -7870,7 +7892,7 @@ class EndpointPortPatchArgs:
  if not MYPY:
  class EndpointPortArgsDict(TypedDict):
  """
- EndpointPort is a tuple that describes a single port.
+ EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
  """
  port: pulumi.Input[builtins.int]
  """
@@ -7908,7 +7930,7 @@ class EndpointPortArgs:
  name: Optional[pulumi.Input[builtins.str]] = None,
  protocol: Optional[pulumi.Input[builtins.str]] = None):
  """
- EndpointPort is a tuple that describes a single port.
+ EndpointPort is a tuple that describes a single port. Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[builtins.int] port: The port number of the endpoint.
  :param pulumi.Input[builtins.str] app_protocol: The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:
 
@@ -8003,6 +8025,8 @@ if not MYPY:
 
  a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
  b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+ Deprecated: This API is deprecated in v1.33+.
  """
  addresses: NotRequired[pulumi.Input[Sequence[pulumi.Input['EndpointAddressPatchArgsDict']]]]
  """
@@ -8037,6 +8061,8 @@ class EndpointSubsetPatchArgs:
 
  a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
  b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+ Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointAddressPatchArgs']]] addresses: IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointAddressPatchArgs']]] not_ready_addresses: IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointPortPatchArgs']]] ports: Port numbers available on the related IP addresses.
@@ -8099,6 +8125,8 @@ if not MYPY:
 
  a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
  b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+ Deprecated: This API is deprecated in v1.33+.
  """
  addresses: NotRequired[pulumi.Input[Sequence[pulumi.Input['EndpointAddressArgsDict']]]]
  """
@@ -8133,6 +8161,8 @@ class EndpointSubsetArgs:
 
  a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
  b: [ 10.10.1.1:309, 10.10.2.2:309 ]
+
+ Deprecated: This API is deprecated in v1.33+.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointAddressArgs']]] addresses: IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointAddressArgs']]] not_ready_addresses: IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.
  :param pulumi.Input[Sequence[pulumi.Input['EndpointPortArgs']]] ports: Port numbers available on the related IP addresses.
@@ -8197,6 +8227,10 @@ if not MYPY:
  Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
  },
  ]
+
+ Endpoints is a legacy API and does not contain information about all Service features. Use discoveryv1.EndpointSlice for complete information about Service endpoints.
+
+ Deprecated: This API is deprecated in v1.33+. Use discoveryv1.EndpointSlice.
  """
  api_version: NotRequired[pulumi.Input[builtins.str]]
  """
@@ -8238,6 +8272,10 @@ class EndpointsArgs:
  Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
  },
  ]
+
+ Endpoints is a legacy API and does not contain information about all Service features. Use discoveryv1.EndpointSlice for complete information about Service endpoints.
+
+ Deprecated: This API is deprecated in v1.33+. Use discoveryv1.EndpointSlice.
  :param pulumi.Input[builtins.str] api_version: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
  :param pulumi.Input[builtins.str] kind: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
  :param pulumi.Input['_meta.v1.ObjectMetaArgs'] metadata: Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
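The docstrings above deprecate `Endpoints`/`EndpointSubset` in favor of `discovery.k8s.io/v1` EndpointSlice. A hedged sketch of what that migration looks like at the data level, using plain dicts mirroring these Args shapes (the sample addresses and names are made up):

```python
# Sketch: converting one legacy EndpointSubset into an EndpointSlice-shaped
# dict, as the new deprecation notice recommends. Field names follow the
# Kubernetes API; this is illustrative, not the SDK's own migration helper.
def subset_to_endpoint_slice(name: str, subset: dict) -> dict:
    """Build an IPv4 EndpointSlice-shaped dict from one EndpointSubset."""
    endpoints = [
        {"addresses": [addr["ip"]], "conditions": {"ready": True}}
        for addr in subset.get("addresses", [])
    ]
    # notReadyAddresses map to endpoints with ready=False conditions.
    endpoints += [
        {"addresses": [addr["ip"]], "conditions": {"ready": False}}
        for addr in subset.get("not_ready_addresses", [])
    ]
    return {
        "api_version": "discovery.k8s.io/v1",
        "kind": "EndpointSlice",
        "metadata": {"name": name},
        "address_type": "IPv4",
        "endpoints": endpoints,
        "ports": subset.get("ports", []),
    }

subset = {
    "addresses": [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}],
    "ports": [{"name": "a", "port": 8675}],
}
print(len(subset_to_endpoint_slice("mysvc-abc", subset)["endpoints"]))  # 2
```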
@@ -8304,7 +8342,7 @@ class EndpointsArgs:
  if not MYPY:
  class EnvFromSourcePatchArgsDict(TypedDict):
  """
- EnvFromSource represents the source of a set of ConfigMaps
+ EnvFromSource represents the source of a set of ConfigMaps or Secrets
  """
  config_map_ref: NotRequired[pulumi.Input['ConfigMapEnvSourcePatchArgsDict']]
  """
@@ -8312,7 +8350,7 @@ if not MYPY:
  """
  prefix: NotRequired[pulumi.Input[builtins.str]]
  """
- An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  """
  secret_ref: NotRequired[pulumi.Input['SecretEnvSourcePatchArgsDict']]
  """
@@ -8328,9 +8366,9 @@ class EnvFromSourcePatchArgs:
  prefix: Optional[pulumi.Input[builtins.str]] = None,
  secret_ref: Optional[pulumi.Input['SecretEnvSourcePatchArgs']] = None):
  """
- EnvFromSource represents the source of a set of ConfigMaps
+ EnvFromSource represents the source of a set of ConfigMaps or Secrets
  :param pulumi.Input['ConfigMapEnvSourcePatchArgs'] config_map_ref: The ConfigMap to select from
- :param pulumi.Input[builtins.str] prefix: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ :param pulumi.Input[builtins.str] prefix: Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  :param pulumi.Input['SecretEnvSourcePatchArgs'] secret_ref: The Secret to select from
  """
  if config_map_ref is not None:
@@ -8356,7 +8394,7 @@ class EnvFromSourcePatchArgs:
  @pulumi.getter
  def prefix(self) -> Optional[pulumi.Input[builtins.str]]:
  """
- An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  """
  return pulumi.get(self, "prefix")
 
@@ -8380,7 +8418,7 @@ class EnvFromSourcePatchArgs:
  if not MYPY:
  class EnvFromSourceArgsDict(TypedDict):
  """
- EnvFromSource represents the source of a set of ConfigMaps
+ EnvFromSource represents the source of a set of ConfigMaps or Secrets
  """
  config_map_ref: NotRequired[pulumi.Input['ConfigMapEnvSourceArgsDict']]
  """
@@ -8388,7 +8426,7 @@ if not MYPY:
  """
  prefix: NotRequired[pulumi.Input[builtins.str]]
  """
- An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  """
  secret_ref: NotRequired[pulumi.Input['SecretEnvSourceArgsDict']]
  """
@@ -8404,9 +8442,9 @@ class EnvFromSourceArgs:
  prefix: Optional[pulumi.Input[builtins.str]] = None,
  secret_ref: Optional[pulumi.Input['SecretEnvSourceArgs']] = None):
  """
- EnvFromSource represents the source of a set of ConfigMaps
+ EnvFromSource represents the source of a set of ConfigMaps or Secrets
  :param pulumi.Input['ConfigMapEnvSourceArgs'] config_map_ref: The ConfigMap to select from
- :param pulumi.Input[builtins.str] prefix: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ :param pulumi.Input[builtins.str] prefix: Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  :param pulumi.Input['SecretEnvSourceArgs'] secret_ref: The Secret to select from
  """
  if config_map_ref is not None:
@@ -8432,7 +8470,7 @@ class EnvFromSourceArgs:
  @pulumi.getter
  def prefix(self) -> Optional[pulumi.Input[builtins.str]]:
  """
- An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+ Optional text to prepend to the name of each environment variable. Must be a C_IDENTIFIER.
  """
  return pulumi.get(self, "prefix")
 
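The reworded `prefix` docstring above clarifies that the prefix applies to the resulting environment-variable names, regardless of whether the `envFrom` source is a ConfigMap or a Secret. A small sketch of that behavior with made-up data:

```python
# Sketch: what envFrom's `prefix` does, per the clarified docstring — it is
# prepended to each resulting environment-variable name. Illustrative only;
# the real expansion happens in the kubelet, not client code.
def expand_env_from(source_data: dict, prefix: str = "") -> dict:
    """Mimic how envFrom turns ConfigMap/Secret data into env vars."""
    return {f"{prefix}{key}": value for key, value in source_data.items()}

config_map_data = {"HOST": "db.internal", "PORT": "5432"}
env = expand_env_from(config_map_data, prefix="DB_")
print(sorted(env))  # ['DB_HOST', 'DB_PORT']
```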
@@ -14315,6 +14353,10 @@ if not MYPY:
  """
  PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
  """
+ stop_signal: NotRequired[pulumi.Input[builtins.str]]
+ """
+ StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
+ """
  elif False:
  LifecyclePatchArgsDict: TypeAlias = Mapping[str, Any]
 
@@ -14322,16 +14364,20 @@ elif False:
  class LifecyclePatchArgs:
  def __init__(__self__, *,
  post_start: Optional[pulumi.Input['LifecycleHandlerPatchArgs']] = None,
- pre_stop: Optional[pulumi.Input['LifecycleHandlerPatchArgs']] = None):
+ pre_stop: Optional[pulumi.Input['LifecycleHandlerPatchArgs']] = None,
+ stop_signal: Optional[pulumi.Input[builtins.str]] = None):
  """
  Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.
  :param pulumi.Input['LifecycleHandlerPatchArgs'] post_start: PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
  :param pulumi.Input['LifecycleHandlerPatchArgs'] pre_stop: PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
+ :param pulumi.Input[builtins.str] stop_signal: StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
  """
  if post_start is not None:
  pulumi.set(__self__, "post_start", post_start)
  if pre_stop is not None:
  pulumi.set(__self__, "pre_stop", pre_stop)
+ if stop_signal is not None:
+ pulumi.set(__self__, "stop_signal", stop_signal)
 
  @property
  @pulumi.getter(name="postStart")
@@ -14357,6 +14403,18 @@ class LifecyclePatchArgs:
  def pre_stop(self, value: Optional[pulumi.Input['LifecycleHandlerPatchArgs']]):
  pulumi.set(self, "pre_stop", value)
 
+ @property
+ @pulumi.getter(name="stopSignal")
+ def stop_signal(self) -> Optional[pulumi.Input[builtins.str]]:
+ """
+ StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
+ """
+ return pulumi.get(self, "stop_signal")
+
+ @stop_signal.setter
+ def stop_signal(self, value: Optional[pulumi.Input[builtins.str]]):
+ pulumi.set(self, "stop_signal", value)
+
 
  if not MYPY:
  class LifecycleArgsDict(TypedDict):
@@ -14371,6 +14429,10 @@ if not MYPY:
  """
  PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
  """
+ stop_signal: NotRequired[pulumi.Input[builtins.str]]
+ """
+ StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
+ """
  elif False:
  LifecycleArgsDict: TypeAlias = Mapping[str, Any]
 
@@ -14378,16 +14440,20 @@ elif False:
  class LifecycleArgs:
  def __init__(__self__, *,
  post_start: Optional[pulumi.Input['LifecycleHandlerArgs']] = None,
- pre_stop: Optional[pulumi.Input['LifecycleHandlerArgs']] = None):
+ pre_stop: Optional[pulumi.Input['LifecycleHandlerArgs']] = None,
+ stop_signal: Optional[pulumi.Input[builtins.str]] = None):
  """
  Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.
  :param pulumi.Input['LifecycleHandlerArgs'] post_start: PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
  :param pulumi.Input['LifecycleHandlerArgs'] pre_stop: PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
+ :param pulumi.Input[builtins.str] stop_signal: StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
  """
  if post_start is not None:
  pulumi.set(__self__, "post_start", post_start)
  if pre_stop is not None:
  pulumi.set(__self__, "pre_stop", pre_stop)
+ if stop_signal is not None:
+ pulumi.set(__self__, "stop_signal", stop_signal)
 
  @property
  @pulumi.getter(name="postStart")
@@ -14413,6 +14479,18 @@ class LifecycleArgs:
  def pre_stop(self, value: Optional[pulumi.Input['LifecycleHandlerArgs']]):
  pulumi.set(self, "pre_stop", value)
 
+ @property
+ @pulumi.getter(name="stopSignal")
+ def stop_signal(self) -> Optional[pulumi.Input[builtins.str]]:
+ """
+ StopSignal defines which signal will be sent to a container when it is being stopped. If not specified, the default is defined by the container runtime in use. StopSignal can only be set for Pods with a non-empty .spec.os.name
+ """
+ return pulumi.get(self, "stop_signal")
+
+ @stop_signal.setter
+ def stop_signal(self, value: Optional[pulumi.Input[builtins.str]]):
+ pulumi.set(self, "stop_signal", value)
+
 
  if not MYPY:
  class LimitRangeItemPatchArgsDict(TypedDict):
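The Lifecycle hunks above add an input-side `stopSignal` field. A minimal sketch of setting it, using the plain-dict (`LifecycleArgsDict`) form these TypedDicts accept so no cluster or SDK install is needed; the image and handler values are hypothetical:

```python
# Sketch: the new lifecycle.stopSignal input alongside a preStop handler,
# written as the plain-dict form of ContainerArgs/LifecycleArgs.
container = {
    "name": "web",
    "image": "nginx:1.27",
    "lifecycle": {
        "pre_stop": {"sleep": {"seconds": 5}},
        # Per the docstring, stopSignal is valid only when the pod sets a
        # non-empty .spec.os.name; otherwise the runtime default applies.
        "stop_signal": "SIGQUIT",
    },
}
print(container["lifecycle"]["stop_signal"])  # SIGQUIT
```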
@@ -17487,6 +17565,42 @@ class NodeStatusArgs:
  pulumi.set(self, "volumes_in_use", value)
 
 
+ if not MYPY:
+ class NodeSwapStatusArgsDict(TypedDict):
+ """
+ NodeSwapStatus represents swap memory information.
+ """
+ capacity: NotRequired[pulumi.Input[builtins.int]]
+ """
+ Total amount of swap memory in bytes.
+ """
+ elif False:
+ NodeSwapStatusArgsDict: TypeAlias = Mapping[str, Any]
+
+ @pulumi.input_type
+ class NodeSwapStatusArgs:
+ def __init__(__self__, *,
+ capacity: Optional[pulumi.Input[builtins.int]] = None):
+ """
+ NodeSwapStatus represents swap memory information.
+ :param pulumi.Input[builtins.int] capacity: Total amount of swap memory in bytes.
+ """
+ if capacity is not None:
+ pulumi.set(__self__, "capacity", capacity)
+
+ @property
+ @pulumi.getter
+ def capacity(self) -> Optional[pulumi.Input[builtins.int]]:
+ """
+ Total amount of swap memory in bytes.
+ """
+ return pulumi.get(self, "capacity")
+
+ @capacity.setter
+ def capacity(self, value: Optional[pulumi.Input[builtins.int]]):
+ pulumi.set(self, "capacity", value)
+
+
  if not MYPY:
  class NodeSystemInfoArgsDict(TypedDict):
  """
@@ -17532,6 +17646,10 @@ if not MYPY:
         """
         SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid
         """
+        swap: NotRequired[pulumi.Input['NodeSwapStatusArgsDict']]
+        """
+        Swap Info reported by the node.
+        """
 elif False:
     NodeSystemInfoArgsDict: TypeAlias = Mapping[str, Any]
 
@@ -17547,7 +17665,8 @@ class NodeSystemInfoArgs:
                  machine_id: pulumi.Input[builtins.str],
                  operating_system: pulumi.Input[builtins.str],
                  os_image: pulumi.Input[builtins.str],
-                 system_uuid: pulumi.Input[builtins.str]):
+                 system_uuid: pulumi.Input[builtins.str],
+                 swap: Optional[pulumi.Input['NodeSwapStatusArgs']] = None):
         """
         NodeSystemInfo is a set of ids/uuids to uniquely identify the node.
         :param pulumi.Input[builtins.str] architecture: The Architecture reported by the node
@@ -17560,6 +17679,7 @@ class NodeSystemInfoArgs:
         :param pulumi.Input[builtins.str] operating_system: The Operating System reported by the node
         :param pulumi.Input[builtins.str] os_image: OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).
         :param pulumi.Input[builtins.str] system_uuid: SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid
+        :param pulumi.Input['NodeSwapStatusArgs'] swap: Swap Info reported by the node.
         """
         pulumi.set(__self__, "architecture", architecture)
         pulumi.set(__self__, "boot_id", boot_id)
@@ -17571,6 +17691,8 @@ class NodeSystemInfoArgs:
         pulumi.set(__self__, "operating_system", operating_system)
         pulumi.set(__self__, "os_image", os_image)
         pulumi.set(__self__, "system_uuid", system_uuid)
+        if swap is not None:
+            pulumi.set(__self__, "swap", swap)
 
     @property
     @pulumi.getter
@@ -17692,6 +17814,18 @@ class NodeSystemInfoArgs:
     def system_uuid(self, value: pulumi.Input[builtins.str]):
         pulumi.set(self, "system_uuid", value)
 
+    @property
+    @pulumi.getter
+    def swap(self) -> Optional[pulumi.Input['NodeSwapStatusArgs']]:
+        """
+        Swap Info reported by the node.
+        """
+        return pulumi.get(self, "swap")
+
+    @swap.setter
+    def swap(self, value: Optional[pulumi.Input['NodeSwapStatusArgs']]):
+        pulumi.set(self, "swap", value)
+
 
 if not MYPY:
     class NodeArgsDict(TypedDict):
@@ -21678,11 +21812,11 @@ if not MYPY:
         """
         match_label_keys: NotRequired[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         mismatch_label_keys: NotRequired[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         namespace_selector: NotRequired[pulumi.Input['_meta.v1.LabelSelectorPatchArgsDict']]
         """
@@ -21711,8 +21845,8 @@ class PodAffinityTermPatchArgs:
         """
         Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
         :param pulumi.Input['_meta.v1.LabelSelectorPatchArgs'] label_selector: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
-        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
-        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         :param pulumi.Input['_meta.v1.LabelSelectorPatchArgs'] namespace_selector: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
         :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] namespaces: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
         :param pulumi.Input[builtins.str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
@@ -21746,7 +21880,7 @@ class PodAffinityTermPatchArgs:
     @pulumi.getter(name="matchLabelKeys")
     def match_label_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]:
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "match_label_keys")
 
@@ -21758,7 +21892,7 @@ class PodAffinityTermPatchArgs:
     @pulumi.getter(name="mismatchLabelKeys")
     def mismatch_label_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]:
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "mismatch_label_keys")
 
@@ -21818,11 +21952,11 @@ if not MYPY:
         """
         match_label_keys: NotRequired[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         mismatch_label_keys: NotRequired[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         namespace_selector: NotRequired[pulumi.Input['_meta.v1.LabelSelectorArgsDict']]
         """
@@ -21848,8 +21982,8 @@ class PodAffinityTermArgs:
         Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
         :param pulumi.Input[builtins.str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
         :param pulumi.Input['_meta.v1.LabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
-        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
-        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] match_label_keys: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
+        :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] mismatch_label_keys: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         :param pulumi.Input['_meta.v1.LabelSelectorArgs'] namespace_selector: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
         :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] namespaces: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
         """
@@ -21893,7 +22027,7 @@ class PodAffinityTermArgs:
     @pulumi.getter(name="matchLabelKeys")
     def match_label_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]:
         """
-        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "match_label_keys")
 
@@ -21905,7 +22039,7 @@ class PodAffinityTermArgs:
     @pulumi.getter(name="mismatchLabelKeys")
     def mismatch_label_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[builtins.str]]]]:
         """
-        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
+        MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
         """
         return pulumi.get(self, "mismatch_label_keys")
 
@@ -22131,6 +22265,10 @@ if not MYPY:
         """
         Human-readable message indicating details about last transition.
         """
+        observed_generation: NotRequired[pulumi.Input[builtins.int]]
+        """
+        If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
         reason: NotRequired[pulumi.Input[builtins.str]]
         """
         Unique, one-word, CamelCase reason for the condition's last transition.
@@ -22146,6 +22284,7 @@ class PodConditionArgs:
                  last_probe_time: Optional[pulumi.Input[builtins.str]] = None,
                  last_transition_time: Optional[pulumi.Input[builtins.str]] = None,
                  message: Optional[pulumi.Input[builtins.str]] = None,
+                 observed_generation: Optional[pulumi.Input[builtins.int]] = None,
                  reason: Optional[pulumi.Input[builtins.str]] = None):
         """
         PodCondition contains details for the current condition of this pod.
@@ -22154,6 +22293,7 @@ class PodConditionArgs:
         :param pulumi.Input[builtins.str] last_probe_time: Last time we probed the condition.
         :param pulumi.Input[builtins.str] last_transition_time: Last time the condition transitioned from one status to another.
         :param pulumi.Input[builtins.str] message: Human-readable message indicating details about last transition.
+        :param pulumi.Input[builtins.int] observed_generation: If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
         :param pulumi.Input[builtins.str] reason: Unique, one-word, CamelCase reason for the condition's last transition.
         """
         pulumi.set(__self__, "status", status)
@@ -22164,6 +22304,8 @@ class PodConditionArgs:
             pulumi.set(__self__, "last_transition_time", last_transition_time)
         if message is not None:
             pulumi.set(__self__, "message", message)
+        if observed_generation is not None:
+            pulumi.set(__self__, "observed_generation", observed_generation)
         if reason is not None:
             pulumi.set(__self__, "reason", reason)
 
@@ -22227,6 +22369,18 @@ class PodConditionArgs:
     def message(self, value: Optional[pulumi.Input[builtins.str]]):
         pulumi.set(self, "message", value)
 
+    @property
+    @pulumi.getter(name="observedGeneration")
+    def observed_generation(self) -> Optional[pulumi.Input[builtins.int]]:
+        """
+        If set, this represents the .metadata.generation that the pod condition was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+        """
+        return pulumi.get(self, "observed_generation")
+
+    @observed_generation.setter
+    def observed_generation(self, value: Optional[pulumi.Input[builtins.int]]):
+        pulumi.set(self, "observed_generation", value)
+
     @property
     @pulumi.getter
     def reason(self) -> Optional[pulumi.Input[builtins.str]]:
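The new `observed_generation` condition field (alpha, gated by the PodObservedGenerationTracking feature) can be sketched in the dict input form. As above, `*ArgsDict` TypedDicts are plain dicts at runtime, so this illustration needs no pulumi import; the other condition keys shown are the usual required ones:

```python
# Hypothetical sketch: a PodCondition dict carrying observed_generation,
# i.e. the .metadata.generation the condition was computed against.
condition = {
    "type": "Ready",
    "status": "True",
    "observed_generation": 3,
}
assert condition["observed_generation"] == 3
```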
@@ -23757,7 +23911,7 @@ if not MYPY:
         """
         init_containers: NotRequired[pulumi.Input[Sequence[pulumi.Input['ContainerPatchArgsDict']]]]
         """
-        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+        List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
         """
         node_name: NotRequired[pulumi.Input[builtins.str]]
         """
@@ -23932,7 +24086,7 @@ class PodSpecPatchArgs:
23932
24086
  :param pulumi.Input[builtins.bool] host_users: Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
23933
24087
  :param pulumi.Input[builtins.str] hostname: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.
23934
24088
  :param pulumi.Input[Sequence[pulumi.Input['LocalObjectReferencePatchArgs']]] image_pull_secrets: ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
- :param pulumi.Input[Sequence[pulumi.Input['ContainerPatchArgs']]] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+ :param pulumi.Input[Sequence[pulumi.Input['ContainerPatchArgs']]] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  :param pulumi.Input[builtins.str] node_name: NodeName indicates in which node this pod is scheduled. If empty, this pod is a candidate for scheduling by the scheduler defined in schedulerName. Once this field is set, the kubelet for this node becomes responsible for the lifecycle of this pod. This field should not be used to express a desire for the pod to be scheduled on a specific node. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename
  :param pulumi.Input[Mapping[str, pulumi.Input[builtins.str]]] node_selector: NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  :param pulumi.Input['PodOSPatchArgs'] os: Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.
@@ -24237,7 +24391,7 @@ class PodSpecPatchArgs:
  @pulumi.getter(name="initContainers")
  def init_containers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ContainerPatchArgs']]]]:
  """
- List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+ List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  """
  return pulumi.get(self, "init_containers")
 
@@ -24615,7 +24769,7 @@ if not MYPY:
  """
  init_containers: NotRequired[pulumi.Input[Sequence[pulumi.Input['ContainerArgsDict']]]]
  """
- List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+ List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  """
  node_name: NotRequired[pulumi.Input[builtins.str]]
  """
@@ -24790,7 +24944,7 @@ class PodSpecArgs:
  :param pulumi.Input[builtins.bool] host_users: Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
  :param pulumi.Input[builtins.str] hostname: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.
  :param pulumi.Input[Sequence[pulumi.Input['LocalObjectReferenceArgs']]] image_pull_secrets: ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
- :param pulumi.Input[Sequence[pulumi.Input['ContainerArgs']]] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+ :param pulumi.Input[Sequence[pulumi.Input['ContainerArgs']]] init_containers: List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  :param pulumi.Input[builtins.str] node_name: NodeName indicates in which node this pod is scheduled. If empty, this pod is a candidate for scheduling by the scheduler defined in schedulerName. Once this field is set, the kubelet for this node becomes responsible for the lifecycle of this pod. This field should not be used to express a desire for the pod to be scheduled on a specific node. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename
  :param pulumi.Input[Mapping[str, pulumi.Input[builtins.str]]] node_selector: NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  :param pulumi.Input['PodOSArgs'] os: Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.
@@ -25094,7 +25248,7 @@ class PodSpecArgs:
  @pulumi.getter(name="initContainers")
  def init_containers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ContainerArgs']]]]:
  """
- List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+ List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  """
  return pulumi.get(self, "init_containers")
 
@@ -25442,6 +25596,10 @@ if not MYPY:
  """
  nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.
  """
+ observed_generation: NotRequired[pulumi.Input[builtins.int]]
+ """
+ If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+ """
  phase: NotRequired[pulumi.Input[builtins.str]]
  """
  The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:
@@ -25468,7 +25626,7 @@ if not MYPY:
  """
  resize: NotRequired[pulumi.Input[builtins.str]]
  """
- Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+ Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
  """
  resource_claim_statuses: NotRequired[pulumi.Input[Sequence[pulumi.Input['PodResourceClaimStatusArgsDict']]]]
  """
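The deprecation note added in this hunk moves resize status into two pod conditions, PodResizePending and PodResizeInProgress. A minimal sketch of how a client might summarize those conditions, assuming a plain list-of-dicts condition shape (the helper itself is hypothetical, not SDK API):

```python
def summarize_resize(conditions):
    """Summarize the pod conditions that replace the deprecated
    status.resize string: PodResizeInProgress means the Kubelet is
    actuating a resize, PodResizePending means the spec was resized
    but resources have not yet been allocated."""
    active = {c["type"] for c in conditions if c.get("status") == "True"}
    if "PodResizeInProgress" in active:
        return "InProgress"
    if "PodResizePending" in active:
        return "Pending"
    return ""  # no resize pending, matching the empty legacy resize field

summarize_resize([{"type": "PodResizePending", "status": "True"}])  # -> "Pending"
```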
@@ -25492,6 +25650,7 @@ class PodStatusArgs:
  init_container_statuses: Optional[pulumi.Input[Sequence[pulumi.Input['ContainerStatusArgs']]]] = None,
  message: Optional[pulumi.Input[builtins.str]] = None,
  nominated_node_name: Optional[pulumi.Input[builtins.str]] = None,
+ observed_generation: Optional[pulumi.Input[builtins.int]] = None,
  phase: Optional[pulumi.Input[builtins.str]] = None,
  pod_ip: Optional[pulumi.Input[builtins.str]] = None,
  pod_ips: Optional[pulumi.Input[Sequence[pulumi.Input['PodIPArgs']]]] = None,
@@ -25510,6 +25669,7 @@ class PodStatusArgs:
  :param pulumi.Input[Sequence[pulumi.Input['ContainerStatusArgs']]] init_container_statuses: Statuses of init containers in this pod. The most recent successful non-restartable init container will have ready = true, the most recently started container will have startTime set. Each init container in the pod should have at most one status in this list, and all statuses should be for containers in the pod. However this is not enforced. If a status for a non-existent container is present in the list, or the list has duplicate names, the behavior of various Kubernetes components is not defined and those statuses might be ignored. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-and-container-status
  :param pulumi.Input[builtins.str] message: A human readable message indicating details about why the pod is in this condition.
  :param pulumi.Input[builtins.str] nominated_node_name: nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.
+ :param pulumi.Input[builtins.int] observed_generation: If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
  :param pulumi.Input[builtins.str] phase: The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:
 
  Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.
@@ -25519,7 +25679,7 @@ class PodStatusArgs:
  :param pulumi.Input[Sequence[pulumi.Input['PodIPArgs']]] pod_ips: podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.
  :param pulumi.Input[builtins.str] qos_class: The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes
  :param pulumi.Input[builtins.str] reason: A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted'
- :param pulumi.Input[builtins.str] resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+ :param pulumi.Input[builtins.str] resize: Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
  :param pulumi.Input[Sequence[pulumi.Input['PodResourceClaimStatusArgs']]] resource_claim_statuses: Status of resource claims.
  :param pulumi.Input[builtins.str] start_time: RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod.
  """
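The new observed_generation field records the .metadata.generation that the status was computed against, which lets clients detect a stale status. A hedged sketch of that staleness check, using plain integers rather than SDK types (the helper name is illustrative):

```python
def status_is_stale(metadata_generation, observed_generation):
    """True if the pod status was computed against an older spec:
    status.observedGeneration trails metadata.generation.
    Returns None when the field is unset (e.g. the alpha
    PodObservedGenerationTracking gate is disabled)."""
    if observed_generation is None:
        return None  # unknown; cannot compare
    return observed_generation < metadata_generation

status_is_stale(5, 4)  # -> True: status reflects generation 4, spec is at 5
```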
@@ -25539,6 +25699,8 @@ class PodStatusArgs:
  pulumi.set(__self__, "message", message)
  if nominated_node_name is not None:
  pulumi.set(__self__, "nominated_node_name", nominated_node_name)
+ if observed_generation is not None:
+ pulumi.set(__self__, "observed_generation", observed_generation)
  if phase is not None:
  pulumi.set(__self__, "phase", phase)
  if pod_ip is not None:
@@ -25652,6 +25814,18 @@ class PodStatusArgs:
  def nominated_node_name(self, value: Optional[pulumi.Input[builtins.str]]):
  pulumi.set(self, "nominated_node_name", value)

+ @property
+ @pulumi.getter(name="observedGeneration")
+ def observed_generation(self) -> Optional[pulumi.Input[builtins.int]]:
+ """
+ If set, this represents the .metadata.generation that the pod status was set based upon. This is an alpha field. Enable PodObservedGenerationTracking to be able to use this field.
+ """
+ return pulumi.get(self, "observed_generation")
+
+ @observed_generation.setter
+ def observed_generation(self, value: Optional[pulumi.Input[builtins.int]]):
+ pulumi.set(self, "observed_generation", value)
+
  @property
  @pulumi.getter
  def phase(self) -> Optional[pulumi.Input[builtins.str]]:
@@ -25720,7 +25894,7 @@ class PodStatusArgs:
  @pulumi.getter
  def resize(self) -> Optional[pulumi.Input[builtins.str]]:
  """
- Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed"
+ Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" Deprecated: Resize status is moved to two pod conditions PodResizePending and PodResizeInProgress. PodResizePending will track states where the spec has been resized, but the Kubelet has not yet allocated the resources. PodResizeInProgress will track in-progress resizes, and should be present whenever allocated resources != acknowledged resources.
  """
  return pulumi.get(self, "resize")
 
@@ -32946,7 +33120,7 @@ if not MYPY:
  """
  traffic_distribution: NotRequired[pulumi.Input[builtins.str]]
  """
- TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
+ TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
  """
  type: NotRequired[pulumi.Input[Union[builtins.str, 'ServiceSpecType']]]
  """
@@ -33006,7 +33180,7 @@ class ServiceSpecPatchArgs:
  :param pulumi.Input[builtins.str] session_affinity: Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
  :param pulumi.Input['SessionAffinityConfigPatchArgs'] session_affinity_config: sessionAffinityConfig contains the configurations of session affinity.
  :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] topology_keys: topologyKeys is a preference-order list of topology keys which implementations of services should use to preferentially sort endpoints when accessing this Service, it can not be used at the same time as externalTrafficPolicy=Local. Topology keys must be valid label keys and at most 16 keys may be specified. Endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no backends for that client and connections should fail. The special value "*" may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list. If this is not specified or empty, no topology constraints will be applied.
- :param pulumi.Input[builtins.str] traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
+ :param pulumi.Input[builtins.str] traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
  :param pulumi.Input[Union[builtins.str, 'ServiceSpecType']] type: type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  """
  if allocate_load_balancer_node_ports is not None:
@@ -33302,7 +33476,7 @@ class ServiceSpecPatchArgs:
  @pulumi.getter(name="trafficDistribution")
  def traffic_distribution(self) -> Optional[pulumi.Input[builtins.str]]:
  """
- TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
+ TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
  """
  return pulumi.get(self, "traffic_distribution")
 
@@ -33414,7 +33588,7 @@ if not MYPY:
  """
  traffic_distribution: NotRequired[pulumi.Input[builtins.str]]
  """
- TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
+ TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
  """
  type: NotRequired[pulumi.Input[Union[builtins.str, 'ServiceSpecType']]]
  """
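Per the updated docstring, "PreferClose" asks implementations to prioritize endpoints in the same zone, as a hint rather than a guarantee. An illustrative sketch of that preference over a plain list of endpoint dicts (not how any real dataplane implements it):

```python
def order_endpoints(endpoints, client_zone, traffic_distribution=None):
    """Order endpoints for a client. With trafficDistribution set to
    "PreferClose", same-zone endpoints sort first; otherwise the
    implementation's default order is kept. Endpoints are dicts with
    an optional "zone" key; the shape is illustrative only."""
    if traffic_distribution != "PreferClose":
        return list(endpoints)  # default routing strategy: leave order alone
    # sorted() is stable: same-zone endpoints (key False) come first,
    # other zones keep their relative order after them.
    return sorted(endpoints, key=lambda e: e.get("zone") != client_zone)
```

Because the field is only a hint, a consumer should still handle traffic arriving at out-of-zone endpoints.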
@@ -33474,7 +33648,7 @@ class ServiceSpecArgs:
  :param pulumi.Input[builtins.str] session_affinity: Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
  :param pulumi.Input['SessionAffinityConfigArgs'] session_affinity_config: sessionAffinityConfig contains the configurations of session affinity.
  :param pulumi.Input[Sequence[pulumi.Input[builtins.str]]] topology_keys: topologyKeys is a preference-order list of topology keys which implementations of services should use to preferentially sort endpoints when accessing this Service, it can not be used at the same time as externalTrafficPolicy=Local. Topology keys must be valid label keys and at most 16 keys may be specified. Endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no backends for that client and connections should fail. The special value "*" may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list. If this is not specified or empty, no topology constraints will be applied.
33477
- :param pulumi.Input[builtins.str] traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
33651
+ :param pulumi.Input[builtins.str] traffic_distribution: TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
33478
33652
  :param pulumi.Input[Union[builtins.str, 'ServiceSpecType']] type: type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
33479
33653
  """
33480
33654
  if allocate_load_balancer_node_ports is not None:
@@ -33770,7 +33944,7 @@ class ServiceSpecArgs:
  @pulumi.getter(name="trafficDistribution")
  def traffic_distribution(self) -> Optional[pulumi.Input[builtins.str]]:
  """
- TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling ServiceTrafficDistribution feature.
+ TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are in the same zone.
  """
  return pulumi.get(self, "traffic_distribution")

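The `trafficDistribution` docstrings above describe an optional hint field on the Service spec. As a minimal sketch, here is the shape of that field on a Kubernetes Service spec, using a plain manifest-style dict rather than the `pulumi_kubernetes` classes so it runs without a Pulumi engine; the `app: web` selector, port numbers, and helper name are hypothetical.

```python
def make_service_spec(traffic_distribution=None):
    """Build a ClusterIP Service spec; trafficDistribution is optional
    because implementations treat it only as a hint, not a guarantee."""
    spec = {
        "type": "ClusterIP",
        "selector": {"app": "web"},  # hypothetical app label
        "ports": [{"port": 80, "targetPort": 8080}],
    }
    if traffic_distribution is not None:
        # "PreferClose" asks implementations to prioritize endpoints in the
        # same zone; leaving it unset means the default routing strategy.
        spec["trafficDistribution"] = traffic_distribution
    return spec

default_spec = make_service_spec()               # no hint: default routing
zonal_spec = make_service_spec("PreferClose")    # prefer same-zone endpoints
```

The same structure maps directly onto `ServiceSpecArgs(traffic_distribution=...)` when using the generated SDK classes.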
@@ -35458,13 +35632,13 @@ if not MYPY:
  """
  NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  """
  node_taints_policy: NotRequired[pulumi.Input[builtins.str]]
  """
  NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  """
  topology_key: NotRequired[pulumi.Input[builtins.str]]
  """
@@ -35503,10 +35677,10 @@ class TopologySpreadConstraintPatchArgs:
  For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.
  :param pulumi.Input[builtins.str] node_affinity_policy: NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  :param pulumi.Input[builtins.str] node_taints_policy: NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  :param pulumi.Input[builtins.str] topology_key: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
  :param pulumi.Input[builtins.str] when_unsatisfiable: WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,
  but giving higher precedence to topologies that would help reduce the
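The 3-zone worked example in the docstring above (MaxSkew=2, MinDomains=5, pods spread 2/2/2) can be checked with a short sketch of the skew arithmetic; the function name is hypothetical and only mirrors the rule the docstring states.

```python
def skew_after_placement(pods_per_domain, target, min_domains):
    """Skew in `target` after placing one more pod there, per the
    docstring's rule: when fewer eligible domains exist than minDomains,
    the 'global minimum' of matching pods is treated as 0."""
    counts = dict(pods_per_domain)
    counts[target] += 1
    global_min = 0 if len(counts) < min_domains else min(counts.values())
    return counts[target] - global_min

zones = {"zone1": 2, "zone2": 2, "zone3": 2}
# With minDomains=5 > 3 domains, every placement yields skew 3 - 0 = 3,
# which exceeds maxSkew=2, so the new pod cannot be scheduled anywhere.
skews = {z: skew_after_placement(zones, z, min_domains=5) for z in zones}
```

Dropping `minDomains` to 3 makes the global minimum the real minimum (2), so the same placement yields skew 3 - 2 = 1 and the pod becomes schedulable.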
@@ -35588,7 +35762,7 @@ class TopologySpreadConstraintPatchArgs:
  """
  NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  """
  return pulumi.get(self, "node_affinity_policy")

@@ -35602,7 +35776,7 @@ class TopologySpreadConstraintPatchArgs:
  """
  NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  """
  return pulumi.get(self, "node_taints_policy")

@@ -35678,13 +35852,13 @@ if not MYPY:
  """
  NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  """
  node_taints_policy: NotRequired[pulumi.Input[builtins.str]]
  """
  NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  """
  elif False:
  TopologySpreadConstraintArgsDict: TypeAlias = Mapping[str, Any]
@@ -35717,10 +35891,10 @@ class TopologySpreadConstraintArgs:
  For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.
  :param pulumi.Input[builtins.str] node_affinity_policy: NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  :param pulumi.Input[builtins.str] node_taints_policy: NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  """
  pulumi.set(__self__, "max_skew", max_skew)
  pulumi.set(__self__, "topology_key", topology_key)
@@ -35821,7 +35995,7 @@ class TopologySpreadConstraintArgs:
  """
  NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

- If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Honor policy.
  """
  return pulumi.get(self, "node_affinity_policy")

@@ -35835,7 +36009,7 @@ class TopologySpreadConstraintArgs:
  """
  NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

- If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
+ If this value is nil, the behavior is equivalent to the Ignore policy.
  """
  return pulumi.get(self, "node_taints_policy")

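The hunks above change the nil-value documentation for `nodeAffinityPolicy` and `nodeTaintsPolicy`: unset now simply means Honor and Ignore respectively. A minimal manifest-shaped sketch of a spread constraint, with a hypothetical helper that applies those documented defaults:

```python
# A topologySpreadConstraint as it appears in a pod spec; the app label is
# a hypothetical example. nodeAffinityPolicy/nodeTaintsPolicy are omitted,
# which the docstrings say is equivalent to Honor and Ignore respectively.
constraint = {
    "maxSkew": 1,
    "topologyKey": "topology.kubernetes.io/zone",  # each zone is one domain
    "whenUnsatisfiable": "DoNotSchedule",          # hard constraint
    "labelSelector": {"matchLabels": {"app": "web"}},
}

def effective_policies(constraint):
    """Resolve the documented defaults when the policy fields are nil."""
    return (
        constraint.get("nodeAffinityPolicy", "Honor"),
        constraint.get("nodeTaintsPolicy", "Ignore"),
    )
```

With the generated SDK this corresponds to leaving `node_affinity_policy` and `node_taints_policy` as `None` on `TopologySpreadConstraintArgs`.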
@@ -36910,7 +37084,7 @@ if not MYPY:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  """
  iscsi: NotRequired[pulumi.Input['ISCSIVolumeSourcePatchArgsDict']]
  """
@@ -37037,7 +37211,7 @@ class VolumePatchArgs:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  :param pulumi.Input['ISCSIVolumeSourcePatchArgs'] iscsi: iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
  :param pulumi.Input[builtins.str] name: name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
  :param pulumi.Input['NFSVolumeSourcePatchArgs'] nfs: nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
@@ -37340,7 +37514,7 @@ class VolumePatchArgs:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  """
  return pulumi.get(self, "image")

@@ -37981,7 +38155,7 @@ if not MYPY:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  """
  iscsi: NotRequired[pulumi.Input['ISCSIVolumeSourceArgsDict']]
  """
@@ -38105,7 +38279,7 @@ class VolumeArgs:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  :param pulumi.Input['ISCSIVolumeSourceArgs'] iscsi: iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
  :param pulumi.Input['NFSVolumeSourceArgs'] nfs: nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
  :param pulumi.Input['PersistentVolumeClaimVolumeSourceArgs'] persistent_volume_claim: persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
@@ -38418,7 +38592,7 @@ class VolumeArgs:

  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.

- The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
+ The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33. The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
  """
  return pulumi.get(self, "image")
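The image-volume hunks above document an OCI object mounted as a pod volume with the three pull policies. A minimal manifest-shaped sketch of such a volume and its mount; the registry reference, volume name, and mount path are all hypothetical.

```python
# An image volume in pod-spec shape. Per the docstrings it is mounted
# read-only (ro) and noexec, and its layers are merged into a single
# directory at the container's mountPath.
volume = {
    "name": "models",
    "image": {
        "reference": "registry.example.com/models:v1",  # hypothetical OCI ref
        # IfNotPresent: pull only when the reference is absent on disk;
        # Always and Never are the other documented options.
        "pullPolicy": "IfNotPresent",
    },
}

mount = {
    "name": "models",
    "mountPath": "/models",  # spec.containers[*].volumeMounts.mountPath
    "readOnly": True,        # the volume type only supports read-only mounts
}
```

With the generated SDK this corresponds to `VolumeArgs(name=..., image=...)` plus a matching volume mount on the container.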