aws-cdk.aws-eks-v2-alpha 2.206.0a0__py3-none-any.whl → 2.231.0a0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -4,13 +4,13 @@ r'''
  <!--BEGIN STABILITY BANNER-->---
 
 
- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)
+ ![cdk-constructs: Developer Preview](https://img.shields.io/badge/cdk--constructs-developer--preview-informational.svg?style=for-the-badge)
 
- > The APIs of higher level constructs in this module are experimental and under active development.
- > They are subject to non-backward compatible changes or removal in any future version. These are
- > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
- > announced in the release notes. This means that while you may use them, you may need to update
- > your source code when upgrading to a newer version of this package.
+ > The APIs of higher level constructs in this module are in **developer preview** before they
+ > become stable. We will only make breaking changes to address unforeseen API issues. Therefore,
+ > these APIs are not subject to [Semantic Versioning](https://semver.org/), and breaking changes
+ > will be announced in release notes. This means that while you may use them, you may need to
+ > update your source code when upgrading to a newer version of this package.
 
  ---
  <!--END STABILITY BANNER-->
@@ -33,39 +33,88 @@ Here is the minimal example of defining an AWS EKS cluster
 
  ```python
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
  ## Architecture
 
- ```text
- +-----------------------------------------------+
- | EKS Cluster      | kubectl |                  |
- | -----------------|<--------+| Kubectl Handler |
- | AWS::EKS::Cluster           (Optional)        |
- | +--------------------+ +-----------------+    |
- | |                    | |                 |    |
- | | Managed Node Group | | Fargate Profile |    |
- | |                    | |                 |    |
- | +--------------------+ +-----------------+    |
- +-----------------------------------------------+
-    ^
-    | connect self managed capacity
-    +
- +--------------------+
- | Auto Scaling Group |
- +--------------------+
+ ```text
+                                                     +-----------------+
+                                        kubectl      |                 |
+                                       +------------>| Kubectl Handler |
+                                       |             |   (Optional)    |
+                                       |             +-----------------+
+ +-------------------------------------+-------------------------------------+
+ |                          EKS Cluster (Auto Mode)                           |
+ |                            AWS::EKS::Cluster                               |
+ |                                                                            |
+ | +------------------------------------------------------------------------+ |
+ | |              Auto Mode Compute (Managed by EKS) (Default)              | |
+ | |                                                                        | |
+ | |  - Automatically provisions EC2 instances                              | |
+ | |  - Auto scaling based on pod requirements                              | |
+ | |  - No manual node group configuration needed                           | |
+ | |                                                                        | |
+ | +------------------------------------------------------------------------+ |
+ |                                                                            |
+ +----------------------------------------------------------------------------+
  ```
 
  In a nutshell:
 
- * EKS Cluster - The cluster endpoint created by EKS.
- * Managed Node Group - EC2 worker nodes managed by EKS.
- * Fargate Profile - Fargate worker nodes managed by EKS.
- * Auto Scaling Group - EC2 worker nodes managed by the user.
- * Kubectl Handler (Optional) - Custom resource (i.e Lambda Function) for invoking kubectl commands on the
-   cluster - created by CDK
+ * **[Auto Mode](#eks-auto-mode)** (Default) – The fully managed capacity mode in EKS.
+   EKS automatically provisions and scales EC2 capacity based on pod requirements.
+   It manages internal *system* and *general-purpose* NodePools, handles networking and storage setup, and removes the need for user-managed node groups or Auto Scaling Groups.
+
+   ```python
+   cluster = eks.Cluster(self, "AutoModeCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+   ```
+ * **[Managed Node Groups](#managed-node-groups)** – The semi-managed capacity mode.
+   EKS provisions and manages EC2 nodes on your behalf, but you configure the instance types, scaling ranges, and update strategy.
+   AWS handles node health, draining, and rolling updates, while you retain control over scaling and cost optimization.
+
+   You can also define *Fargate Profiles* that determine which pods or namespaces run on Fargate infrastructure.
+
+   ```python
+   cluster = eks.Cluster(self, "ManagedNodeCluster",
+       version=eks.KubernetesVersion.V1_34,
+       default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+   )
+
+   # Add a Fargate Profile for specific workloads (e.g., the default namespace)
+   cluster.add_fargate_profile("FargateProfile",
+       selectors=[eks.Selector(namespace="default")]
+   )
+   ```
+ * **[Fargate Mode](#fargate-profiles)** – The Fargate capacity mode.
+   EKS runs your pods directly on AWS Fargate without provisioning EC2 nodes.
+
+   ```python
+   cluster = eks.FargateCluster(self, "FargateCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+   ```
+ * **[Self-Managed Nodes](#self-managed-capacity)** – The fully manual capacity mode.
+   You create and manage EC2 instances (via an Auto Scaling Group) and connect them to the cluster manually.
+   This provides maximum flexibility for custom AMIs or configurations, but also the highest operational overhead.
+
+   ```python
+   cluster = eks.Cluster(self, "SelfManagedCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+
+   # Add a self-managed Auto Scaling Group
+   cluster.add_auto_scaling_group_capacity("self-managed-asg",
+       instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+       min_capacity=1,
+       max_capacity=5
+   )
+   ```
+ * **[Kubectl Handler](#kubectl-support) (Optional)** – A Lambda-backed custom resource created by the AWS CDK to execute `kubectl` commands (like `apply` or `patch`) during deployment.
+   Regardless of the capacity mode, this handler may still be created to apply Kubernetes manifests as part of CDK provisioning.
 
  ## Provisioning cluster
 
@@ -73,7 +122,7 @@ Creating a new cluster is done using the `Cluster` constructs. The only required
 
  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -81,7 +130,7 @@ You can also use `FargateCluster` to provision a cluster that uses only fargate
 
  ```python
  eks.FargateCluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -90,22 +139,22 @@ be created by default. It will only be deployed when `kubectlProviderOptions`
  property is used.**
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
  eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
 
- ## EKS Auto Mode
+ ### EKS Auto Mode
 
  [Amazon EKS Auto Mode](https://aws.amazon.com/eks/auto-mode/) extends AWS management of Kubernetes clusters beyond the cluster itself, allowing AWS to set up and manage the infrastructure that enables the smooth operation of your workloads.
 
- ### Using Auto Mode
+ #### Using Auto Mode
 
  While `aws-eks` uses `DefaultCapacityType.NODEGROUP` by default, `aws-eks-v2` uses `DefaultCapacityType.AUTOMODE` as the default capacity type.
 
@@ -114,7 +163,7 @@ Auto Mode is enabled by default when creating a new cluster without specifying a
  ```python
  # Create EKS cluster with Auto Mode implicitly enabled
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -123,12 +172,12 @@ You can also explicitly enable Auto Mode using `defaultCapacityType`:
 
  ```python
  # Create EKS cluster with Auto Mode explicitly enabled
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE
  )
  ```
 
- ### Node Pools
+ #### Node Pools
 
  When Auto Mode is enabled, the cluster comes with two default node pools:
 
@@ -139,7 +188,7 @@ These node pools are managed automatically by EKS. You can configure which node
 
  ```python
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=["system", "general-purpose"]
@@ -149,13 +198,13 @@ cluster = eks.Cluster(self, "EksAutoCluster",
 
  For more information, see [Create a Node Pool for EKS Auto Mode](https://docs.aws.amazon.com/eks/latest/userguide/create-node-pool.html).
 
- ### Disabling Default Node Pools
+ #### Disabling Default Node Pools
 
  You can disable the default node pools entirely by setting an empty array for `nodePools`. This is useful when you want to use Auto Mode features but manage your compute resources separately:
 
  ```python
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=[]
@@ -172,7 +221,7 @@ If you prefer to manage your own node groups instead of using Auto Mode, you can
  ```python
  # Create EKS cluster with traditional managed node group
  cluster = eks.Cluster(self, "EksCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=3,  # Number of instances
      default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.LARGE)
@@ -183,7 +232,7 @@ You can also create a cluster with no initial capacity and add node groups later
 
  ```python
  cluster = eks.Cluster(self, "EksCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=0
  )
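
Attaching capacity afterwards would look roughly like this (a hedged sketch, not part of the diff above; it assumes the v2-alpha `add_nodegroup_capacity` method keeps the signature of the stable `aws-eks` module, and the node-group name is illustrative):

```python
# Later, attach a managed node group to the zero-capacity cluster.
# instance_types, min_size and disk_size mirror NodegroupOptions.
cluster.add_nodegroup_capacity("custom-node-group",
    instance_types=[ec2.InstanceType("m5.large")],
    min_size=2,
    disk_size=100
)
```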
@@ -204,7 +253,7 @@ You can combine Auto Mode with traditional node groups for specific workload req
 
  ```python
  cluster = eks.Cluster(self, "Cluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=["system", "general-purpose"]
@@ -243,7 +292,7 @@ By default, when using `DefaultCapacityType.NODEGROUP`, this library will alloca
 
  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP
  )
  ```
@@ -252,7 +301,7 @@ At cluster instantiation time, you can customize the number of instances and the
 
  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=5,
      default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL)
@@ -265,7 +314,7 @@ Additional customizations are available post instantiation. To apply them, set t
 
  ```python
  cluster = eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=0
  )
@@ -318,7 +367,7 @@ The following code defines an Amazon EKS cluster with a default Fargate Profile
 
  ```python
  cluster = eks.FargateCluster(self, "MyCluster",
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -329,6 +378,39 @@ pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
  Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
  on Amazon EKS (minimum version v1.1.4).
 
+ ### Self-managed capacity
+
+ Self-managed capacity gives you the most control over your worker nodes by allowing you to create and manage your own EC2 Auto Scaling Groups. This approach provides maximum flexibility for custom AMIs, instance configurations, and scaling policies, but requires more operational overhead.
+
+ You can add self-managed capacity to any cluster using the `addAutoScalingGroupCapacity` method:
+
+ ```python
+ cluster = eks.Cluster(self, "Cluster",
+     version=eks.KubernetesVersion.V1_34
+ )
+
+ cluster.add_auto_scaling_group_capacity("self-managed-nodes",
+     instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+     min_capacity=1,
+     max_capacity=10,
+     desired_capacity=3
+ )
+ ```
+
+ You can specify custom subnets for the Auto Scaling Group:
+
+ ```python
+ # vpc: ec2.Vpc
+ # cluster: eks.Cluster
+
+
+ cluster.add_auto_scaling_group_capacity("custom-subnet-nodes",
+     vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
+     instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+     min_capacity=2
+ )
+ ```
+
  ### Endpoint Access
 
  When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`)
@@ -337,7 +419,7 @@ You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/
 
  ```python
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      endpoint_access=eks.EndpointAccess.PRIVATE
  )
  ```
@@ -359,7 +441,7 @@ To deploy the controller on your EKS cluster, configure the `albController` prop
 
  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      alb_controller=eks.AlbControllerOptions(
          version=eks.AlbControllerVersion.V2_8_2
      )
@@ -401,7 +483,7 @@ You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properti
 
 
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      vpc=vpc,
      vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS)]
  )
@@ -445,13 +527,13 @@ To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cl
  `kubectlLayer` is the only required property in `kubectlProviderOptions`.
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
  eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
@@ -461,9 +543,6 @@ eks.Cluster(self, "hello-eks",
  If you want to use an existing kubectl provider function, for example with tight trusted entities on your IAM Roles - you can import the existing provider and then use the imported provider when importing the cluster:
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
-
-
  handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
  # get the serviceToken from the custom resource provider
  function_arn = lambda_.Function.from_function_name(self, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").function_arn
@@ -483,13 +562,13 @@ cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
  You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          environment={
              "http_proxy": "http://proxy.myproxy.com"
          }
@@ -510,13 +589,13 @@ Depending on which version of kubernetes you're targeting, you will need to use
  the `@aws-cdk/lambda-layer-kubectl-vXY` packages.
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
@@ -526,15 +605,15 @@ cluster = eks.Cluster(self, "hello-eks",
  By default, the kubectl provider is configured with 1024MiB of memory. You can use the `memory` option to specify the memory size for the AWS Lambda function:
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
  eks.Cluster(self, "MyCluster",
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          memory=Size.gibibytes(4)
      ),
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -567,7 +646,7 @@ When you create a cluster, you can specify a `mastersRole`. The `Cluster` constr
  # role: iam.Role
 
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      masters_role=role
  )
  ```
@@ -588,7 +667,7 @@ You can use the `secretsEncryptionKey` to configure which key the cluster will u
  secrets_key = kms.Key(self, "SecretsKey")
  cluster = eks.Cluster(self, "MyCluster",
      secrets_encryption_key=secrets_key,
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -598,7 +677,7 @@ You can also use a similar configuration for running a cluster built using the F
  secrets_key = kms.Key(self, "SecretsKey")
  cluster = eks.FargateCluster(self, "MyFargateCluster",
      secrets_encryption_key=secrets_key,
-     version=eks.KubernetesVersion.V1_32
+     version=eks.KubernetesVersion.V1_34
  )
  ```
 
@@ -641,7 +720,7 @@ eks.AccessPolicy.from_access_policy_name("AmazonEKSAdminPolicy",
  Use `grantAccess()` to grant the AccessPolicy to an IAM principal:
 
  ```python
- from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
  # vpc: ec2.Vpc
 
 
@@ -656,9 +735,9 @@ eks_admin_role = iam.Role(self, "EKSAdminRole",
  cluster = eks.Cluster(self, "Cluster",
      vpc=vpc,
      masters_role=cluster_admin_role,
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV32Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          memory=Size.gibibytes(4)
      )
  )
@@ -843,7 +922,7 @@ when a cluster is defined:
 
  ```python
  eks.Cluster(self, "MyCluster",
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      prune=False
  )
  ```
@@ -1162,7 +1241,7 @@ property. For example:
  ```python
  cluster = eks.Cluster(self, "Cluster",
      # ...
-     version=eks.KubernetesVersion.V1_32,
+     version=eks.KubernetesVersion.V1_34,
      cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER]
  )
@@ -1218,9 +1297,9 @@ import aws_cdk as _aws_cdk_ceddda9d
  import aws_cdk.aws_autoscaling as _aws_cdk_aws_autoscaling_ceddda9d
  import aws_cdk.aws_ec2 as _aws_cdk_aws_ec2_ceddda9d
  import aws_cdk.aws_iam as _aws_cdk_aws_iam_ceddda9d
- import aws_cdk.aws_kms as _aws_cdk_aws_kms_ceddda9d
  import aws_cdk.aws_lambda as _aws_cdk_aws_lambda_ceddda9d
  import aws_cdk.aws_s3_assets as _aws_cdk_aws_s3_assets_ceddda9d
+ import aws_cdk.interfaces.aws_kms as _aws_cdk_interfaces_aws_kms_ceddda9d
  import constructs as _constructs_77d1e7e8
 
 
@@ -2202,7 +2281,7 @@ class AlbControllerOptions:
  Example::
 
      eks.Cluster(self, "HelloEKS",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          alb_controller=eks.AlbControllerOptions(
              version=eks.AlbControllerVersion.V2_8_2
          )
@@ -2414,7 +2493,7 @@ class AlbControllerVersion(
  Example::
 
      eks.Cluster(self, "HelloEKS",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          alb_controller=eks.AlbControllerOptions(
              version=eks.AlbControllerVersion.V2_8_2
          )
@@ -2900,7 +2979,7 @@ class AutoScalingGroupCapacityOptions(
  :param key_name: (deprecated) Name of SSH keypair to grant access to instances. ``launchTemplate`` and ``mixedInstancesPolicy`` must not be specified when this property is specified. You can either specify ``keyPair`` or ``keyName``, not both. Default: - No SSH access will be possible.
  :param key_pair: The SSH keypair to grant access to the instance. Feature flag ``AUTOSCALING_GENERATE_LAUNCH_TEMPLATE`` must be enabled to use this property. ``launchTemplate`` and ``mixedInstancesPolicy`` must not be specified when this property is specified. You can either specify ``keyPair`` or ``keyName``, not both. Default: - No SSH access will be possible.
  :param max_capacity: Maximum number of instances in the fleet. Default: desiredCapacity
- :param max_instance_lifetime: The maximum amount of time that an instance can be in service. The maximum duration applies to all current and future instances in the group. As an instance approaches its maximum duration, it is terminated and replaced, and cannot be used again. You must specify a value of at least 604,800 seconds (7 days). To clear a previously set value, leave this property undefined. Default: none
+ :param max_instance_lifetime: The maximum amount of time that an instance can be in service. The maximum duration applies to all current and future instances in the group. As an instance approaches its maximum duration, it is terminated and replaced, and cannot be used again. You must specify a value of at least 86,400 seconds (one day). To clear a previously set value, leave this property undefined. Default: none
  :param min_capacity: Minimum number of instances in the fleet. Default: 1
  :param new_instances_protected_from_scale_in: Whether newly-launched instances are protected from termination by Amazon EC2 Auto Scaling when scaling in. By default, Auto Scaling can terminate an instance at any time after launch when scaling in an Auto Scaling Group, subject to the group's termination policy. However, you may wish to protect newly-launched instances from being scaled in if they are going to run critical applications that should not be prematurely terminated. This flag must be enabled if the Auto Scaling Group will be associated with an ECS Capacity Provider with managed termination protection. Default: false
  :param notifications: Configure autoscaling group to send notifications about fleet changes to an SNS topic(s). Default: - No fleet change notifications will be sent.
@@ -2921,12 +3000,15 @@ class AutoScalingGroupCapacityOptions(
 
  Example::
 
-     # vpc: ec2.Vpc
-     # cluster: eks.Cluster
+     cluster = eks.Cluster(self, "SelfManagedCluster",
+         version=eks.KubernetesVersion.V1_34
+     )
 
-     cluster.add_auto_scaling_group_capacity("nodes",
-         vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
-         instance_type=ec2.InstanceType("t2.medium")
+     # Add self-managed Auto Scaling Group
+     cluster.add_auto_scaling_group_capacity("self-managed-asg",
+         instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+         min_capacity=1,
+         max_capacity=5
      )
  '''
  if isinstance(vpc_subnets, dict):
@@ -3279,7 +3361,7 @@ class AutoScalingGroupCapacityOptions(
  to all current and future instances in the group. As an instance approaches its maximum duration,
  it is terminated and replaced, and cannot be used again.
 
- You must specify a value of at least 604,800 seconds (7 days). To clear a previously set value,
+ You must specify a value of at least 86,400 seconds (one day). To clear a previously set value,
  leave this property undefined.
 
  :default: none
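
In CDK code this property takes a `Duration`, so the relaxed minimum would be exercised roughly like this (an illustrative sketch, not part of the diff; it assumes `Duration` is imported from `aws_cdk` and a `cluster` is in scope):

```python
from aws_cdk import Duration

# A one-day lifetime, which the previous 7-day minimum would have rejected.
cluster.add_auto_scaling_group_capacity("short-lived-nodes",
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
    max_instance_lifetime=Duration.days(1)
)
```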
@@ -3864,9 +3946,6 @@ class ClusterAttributes:
 
  Example::
 
-     from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
-
-
      handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
      # get the serviceToken from the custom resource provider
      function_arn = lambda_.Function.from_function_name(self, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").function_arn
@@ -4127,7 +4206,7 @@ class ClusterCommonOptions:
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
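
The practical effect of the `IKey` to `IKeyRef` widening: anything that can stand in for a KMS key reference is now accepted. A concrete `kms.Key` should still satisfy it, as the unchanged README examples earlier in this diff suggest (hedged sketch, not part of the diff):

```python
from aws_cdk import aws_kms as kms

# A concrete key is usable where the new IKeyRef type is expected.
secrets_key = kms.Key(self, "SecretsKey")
cluster = eks.Cluster(self, "MyCluster",
    secrets_encryption_key=secrets_key,
    version=eks.KubernetesVersion.V1_34
)
```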
@@ -4165,12 +4244,12 @@ class ClusterCommonOptions:
  import aws_cdk as cdk
  from aws_cdk import aws_ec2 as ec2
  from aws_cdk import aws_iam as iam
- from aws_cdk import aws_kms as kms
  from aws_cdk import aws_lambda as lambda_
+ from aws_cdk.interfaces import aws_kms as interfaces_aws_kms
 
  # alb_controller_version: eks_v2_alpha.AlbControllerVersion
  # endpoint_access: eks_v2_alpha.EndpointAccess
- # key: kms.Key
+ # key_ref: interfaces_aws_kms.IKeyRef
  # kubernetes_version: eks_v2_alpha.KubernetesVersion
  # layer_version: lambda.LayerVersion
  # policy: Any
@@ -4213,7 +4292,7 @@ class ClusterCommonOptions:
  masters_role=role,
  prune=False,
  role=role,
- secrets_encryption_key=key,
+ secrets_encryption_key=key_ref,
  security_group=security_group,
  service_ipv4_cidr="serviceIpv4Cidr",
  tags={
@@ -4420,7 +4499,9 @@ class ClusterCommonOptions:
      return typing.cast(typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole], result)
 
  @builtins.property
- def secrets_encryption_key(self) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey]:
+ def secrets_encryption_key(
+     self,
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.
 
      :default:
@@ -4432,7 +4513,7 @@ class ClusterCommonOptions:
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)
 
  @builtins.property
  def security_group(
@@ -4524,7 +4605,7 @@ class ClusterLoggingTypes(enum.Enum):
 
      cluster = eks.Cluster(self, "Cluster",
          # ...
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER]
      )
@@ -4601,7 +4682,7 @@ class ClusterProps(ClusterCommonOptions):
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -4645,12 +4726,15 @@ class ClusterProps(ClusterCommonOptions):
 
  Example::
 
-     cluster = eks.Cluster(self, "EksAutoCluster",
-         version=eks.KubernetesVersion.V1_32,
-         default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
-         compute=eks.ComputeConfig(
-             node_pools=["system", "general-purpose"]
-         )
+     cluster = eks.Cluster(self, "ManagedNodeCluster",
+         version=eks.KubernetesVersion.V1_34,
+         default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+     )
+
+     # Add a Fargate Profile for specific workloads (e.g., default namespace)
+     cluster.add_fargate_profile("FargateProfile",
+         selectors=[eks.Selector(namespace="default")]
      )
  '''
  if isinstance(alb_controller, dict):
@@ -4863,7 +4947,9 @@ class ClusterProps(ClusterCommonOptions):
      return typing.cast(typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole], result)
 
  @builtins.property
- def secrets_encryption_key(self) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey]:
+ def secrets_encryption_key(
+     self,
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.
 
      :default:
@@ -4875,7 +4961,7 @@ class ClusterProps(ClusterCommonOptions):
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)
 
  @builtins.property
  def security_group(
@@ -5067,7 +5153,7 @@ class ComputeConfig:
  Example::
 
      cluster = eks.Cluster(self, "EksAutoCluster",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
          compute=eks.ComputeConfig(
              node_pools=["system", "general-purpose"]
@@ -5171,7 +5257,7 @@ class DefaultCapacityType(enum.Enum):
  Example::
 
      cluster = eks.Cluster(self, "HelloEKS",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
          default_capacity=0
      )
@@ -5371,7 +5457,7 @@ class EndpointAccess(
  Example::
 
      cluster = eks.Cluster(self, "hello-eks",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          endpoint_access=eks.EndpointAccess.PRIVATE
      )
  '''
@@ -5474,7 +5560,7 @@ class FargateClusterProps(ClusterCommonOptions):
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -5508,8 +5594,8 @@ class FargateClusterProps(ClusterCommonOptions):
 
  Example::
 
-     cluster = eks.FargateCluster(self, "MyCluster",
-         version=eks.KubernetesVersion.V1_32
+     cluster = eks.FargateCluster(self, "FargateCluster",
+         version=eks.KubernetesVersion.V1_34
      )
  '''
  if isinstance(alb_controller, dict):
@@ -5707,7 +5793,9 @@ class FargateClusterProps(ClusterCommonOptions):
      return typing.cast(typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole], result)
 
  @builtins.property
- def secrets_encryption_key(self) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey]:
+ def secrets_encryption_key(
+     self,
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.
 
      :default:
@@ -5719,7 +5807,7 @@ class FargateClusterProps(ClusterCommonOptions):
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)
 
  @builtins.property
  def security_group(
@@ -5968,10 +6056,15 @@ class FargateProfileOptions:
 
  Example::
 
-     # cluster: eks.Cluster
+     cluster = eks.Cluster(self, "ManagedNodeCluster",
+         version=eks.KubernetesVersion.V1_34,
+         default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+     )
 
-     cluster.add_fargate_profile("MyProfile",
-         selectors=[eks.Selector(namespace="default")]
+     # Add a Fargate Profile for specific workloads (e.g., default namespace)
+     cluster.add_fargate_profile("FargateProfile",
+         selectors=[eks.Selector(namespace="default")]
      )
  '''
  if isinstance(subnet_selection, dict):
@@ -7970,9 +8063,6 @@ class KubectlProvider(
 
  Example::
 
-     from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
-
-
      handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
      # get the serviceToken from the custom resource provider
      function_arn = lambda_.Function.from_function_name(self, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").function_arn
@@ -8120,9 +8210,6 @@ class KubectlProviderAttributes:
 
  Example::
 
-     from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
-
-
      handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
      # get the serviceToken from the custom resource provider
      function_arn = lambda_.Function.from_function_name(self, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").function_arn
@@ -8220,13 +8307,13 @@ class KubectlProviderOptions:
 
  Example::
 
-     from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+     from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
 
 
      cluster = eks.Cluster(self, "hello-eks",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          kubectl_provider_options=eks.KubectlProviderOptions(
-             kubectl_layer=KubectlV32Layer(self, "kubectl"),
+             kubectl_layer=KubectlV34Layer(self, "kubectl"),
              environment={
                  "http_proxy": "http://proxy.myproxy.com"
              }
@@ -9501,12 +9588,15 @@ class KubernetesVersion(
 
  Example::
 
-     cluster = eks.Cluster(self, "EksAutoCluster",
-         version=eks.KubernetesVersion.V1_32,
-         default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
-         compute=eks.ComputeConfig(
-             node_pools=["system", "general-purpose"]
-         )
+     cluster = eks.Cluster(self, "ManagedNodeCluster",
+         version=eks.KubernetesVersion.V1_34,
+         default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+     )
+
+     # Add a Fargate Profile for specific workloads (e.g., default namespace)
+     cluster.add_fargate_profile("FargateProfile",
+         selectors=[eks.Selector(namespace="default")]
      )
  '''
 
@@ -9628,6 +9718,32 @@ class KubernetesVersion(
      '''
      return typing.cast("KubernetesVersion", jsii.sget(cls, "V1_32"))
 
+ @jsii.python.classproperty
+ @jsii.member(jsii_name="V1_33")
+ def V1_33(cls) -> "KubernetesVersion":
+     '''(experimental) Kubernetes version 1.33.
+
+     When creating a ``Cluster`` with this version, you need to also specify the
+     ``kubectlLayer`` property with a ``KubectlV33Layer`` from
+     ``@aws-cdk/lambda-layer-kubectl-v33``.
+
+     :stability: experimental
+     '''
+     return typing.cast("KubernetesVersion", jsii.sget(cls, "V1_33"))
+
+ @jsii.python.classproperty
+ @jsii.member(jsii_name="V1_34")
+ def V1_34(cls) -> "KubernetesVersion":
+     '''(experimental) Kubernetes version 1.34.
+
+     When creating a ``Cluster`` with this version, you need to also specify the
+     ``kubectlLayer`` property with a ``KubectlV34Layer`` from
+     ``@aws-cdk/lambda-layer-kubectl-v34``.
+
+     :stability: experimental
+     '''
+     return typing.cast("KubernetesVersion", jsii.sget(cls, "V1_34"))
+
  @builtins.property
  @jsii.member(jsii_name="version")
  def version(self) -> builtins.str:
@@ -10092,6 +10208,11 @@ class NodegroupAmiType(enum.Enum):
  AL2023_X86_64_NVIDIA = "AL2023_X86_64_NVIDIA"
  '''(experimental) Amazon Linux 2023 with NVIDIA drivers (x86-64).
 
+ :stability: experimental
+ '''
+ AL2023_ARM_64_NVIDIA = "AL2023_ARM_64_NVIDIA"
+ '''(experimental) Amazon Linux 2023 with NVIDIA drivers (ARM-64).
+
  :stability: experimental
  '''
  AL2023_ARM_64_STANDARD = "AL2023_ARM_64_STANDARD"
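
Selecting the new AMI type on a managed node group would look something like this (an illustrative sketch, not part of the diff; the node-group name and the `g5g` instance choice are assumptions, so pick an ARM instance family that actually ships NVIDIA GPUs in your region):

```python
# ARM (Graviton) nodes with NVIDIA GPUs, e.g. g5g instances.
cluster.add_nodegroup_capacity("arm-gpu-nodes",
    ami_type=eks.NodegroupAmiType.AL2023_ARM_64_NVIDIA,
    instance_types=[ec2.InstanceType("g5g.xlarge")]
)
```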
@@ -10184,7 +10305,7 @@ class NodegroupOptions:
  Example::
 
      cluster = eks.Cluster(self, "HelloEKS",
-         version=eks.KubernetesVersion.V1_32,
+         version=eks.KubernetesVersion.V1_34,
          default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
          default_capacity=0
      )
@@ -11503,7 +11624,8 @@ class ServiceAccount(
      self,
      statement: _aws_cdk_aws_iam_ceddda9d.PolicyStatement,
  ) -> builtins.bool:
-     '''
+     '''(deprecated) Add to the policy of this principal.
+
      :param statement: -
 
      :deprecated: use ``addToPrincipalPolicy()``
@@ -11975,6 +12097,17 @@ class ServiceLoadBalancerAddressOptions:
  class TaintEffect(enum.Enum):
  '''(experimental) Effect types of kubernetes node taint.
 
+ Note: These values are specifically for AWS EKS NodeGroups and use the AWS API format.
+ When using AWS CLI or API, taint effects must be NO_SCHEDULE, PREFER_NO_SCHEDULE, or NO_EXECUTE.
+ When using Kubernetes directly or kubectl, taint effects must be NoSchedule, PreferNoSchedule, or NoExecute.
+
+ For Kubernetes manifests (like Karpenter NodePools), use string literals with PascalCase format:
+
+ - 'NoSchedule' instead of TaintEffect.NO_SCHEDULE
+ - 'PreferNoSchedule' instead of TaintEffect.PREFER_NO_SCHEDULE
+ - 'NoExecute' instead of TaintEffect.NO_EXECUTE
+
+ :see: https://docs.aws.amazon.com/eks/latest/userguide/node-taints-managed-node-groups.html
  :stability: experimental
  '''
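
Applied to a managed node group, the enum is used roughly like this (a hedged sketch, not part of the diff; it assumes the `taints` option of `add_nodegroup_capacity` takes `TaintSpec` entries as in the stable `aws-eks` module, and the key/value pair is illustrative):

```python
# Nodes tainted so that only pods tolerating dedicated=gpu land here.
cluster.add_nodegroup_capacity("tainted-nodes",
    taints=[eks.TaintSpec(
        effect=eks.TaintEffect.NO_SCHEDULE,
        key="dedicated",
        value="gpu"
    )]
)
```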
 
@@ -12484,12 +12617,15 @@ class Cluster(
 
  Example::
 
-     cluster = eks.Cluster(self, "EksAutoCluster",
-         version=eks.KubernetesVersion.V1_32,
-         default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
-         compute=eks.ComputeConfig(
-             node_pools=["system", "general-purpose"]
-         )
+     cluster = eks.Cluster(self, "ManagedNodeCluster",
+         version=eks.KubernetesVersion.V1_34,
+         default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+     )
+
+     # Add a Fargate Profile for specific workloads (e.g., default namespace)
+     cluster.add_fargate_profile("FargateProfile",
+         selectors=[eks.Selector(namespace="default")]
      )
  '''
 
@@ -12515,7 +12651,7 @@ class Cluster(
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -12713,7 +12849,7 @@ class Cluster(
  :param key_name: (deprecated) Name of SSH keypair to grant access to instances. ``launchTemplate`` and ``mixedInstancesPolicy`` must not be specified when this property is specified. You can either specify ``keyPair`` or ``keyName``, not both. Default: - No SSH access will be possible.
  :param key_pair: The SSH keypair to grant access to the instance. Feature flag ``AUTOSCALING_GENERATE_LAUNCH_TEMPLATE`` must be enabled to use this property. ``launchTemplate`` and ``mixedInstancesPolicy`` must not be specified when this property is specified. You can either specify ``keyPair`` or ``keyName``, not both. Default: - No SSH access will be possible.
  :param max_capacity: Maximum number of instances in the fleet. Default: desiredCapacity
- :param max_instance_lifetime: The maximum amount of time that an instance can be in service. The maximum duration applies to all current and future instances in the group. As an instance approaches its maximum duration, it is terminated and replaced, and cannot be used again. You must specify a value of at least 604,800 seconds (7 days). To clear a previously set value, leave this property undefined. Default: none
+ :param max_instance_lifetime: The maximum amount of time that an instance can be in service. The maximum duration applies to all current and future instances in the group. As an instance approaches its maximum duration, it is terminated and replaced, and cannot be used again. You must specify a value of at least 86,400 seconds (one day). To clear a previously set value, leave this property undefined. Default: none
  :param min_capacity: Minimum number of instances in the fleet. Default: 1
  :param new_instances_protected_from_scale_in: Whether newly-launched instances are protected from termination by Amazon EC2 Auto Scaling when scaling in. By default, Auto Scaling can terminate an instance at any time after launch when scaling in an Auto Scaling Group, subject to the group's termination policy. However, you may wish to protect newly-launched instances from being scaled in if they are going to run critical applications that should not be prematurely terminated. This flag must be enabled if the Auto Scaling Group will be associated with an ECS Capacity Provider with managed termination protection. Default: false
  :param notifications: Configure autoscaling group to send notifications about fleet changes to an SNS topic(s). Default: - No fleet change notifications will be sent.
@@ -13410,8 +13546,8 @@ class FargateCluster(
 
  Example::
 
-     cluster = eks.FargateCluster(self, "MyCluster",
-         version=eks.KubernetesVersion.V1_32
+     cluster = eks.FargateCluster(self, "FargateCluster",
+         version=eks.KubernetesVersion.V1_34
      )
  '''
 
@@ -13432,7 +13568,7 @@ class FargateCluster(
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13885,7 +14021,7 @@ def _typecheckingstub__522396bf3ea38086bd92ddd50181dc1757140cccc27f7d0415c200a26
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13908,7 +14044,7 @@ def _typecheckingstub__cdebba88d00ede95b7f48fc97c266609fdb0fc0ef3bb709493d319c84
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13966,7 +14102,7 @@ def _typecheckingstub__89419afd037884b6a69d80af0bf5c1fe35164b8d31e7e5746501350e5
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14585,7 +14721,7 @@ def _typecheckingstub__f953a3ebdf317cd4c17c2caf66c079973022b58e6c5cf124f9d5f0089
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14803,7 +14939,7 @@ def _typecheckingstub__673db9ae76799e064c85ea6382670fd3efa0ca3c8a72239cc0723fff0
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKey] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14820,3 +14956,6 @@ def _typecheckingstub__867c24d91e82b6c927deaaa713ad154d44030f8a2d7da291636600790
  ) -> None:
      """Type checking stubs"""
      pass
+
+ for cls in [IAccessEntry, IAccessPolicy, IAddon, ICluster, IKubectlProvider, INodegroup]:
+     typing.cast(typing.Any, cls).__protocol_attrs__ = typing.cast(typing.Any, cls).__protocol_attrs__ - set(['__jsii_proxy_class__', '__jsii_type__'])