aws-cdk.aws-eks-v2-alpha 2.218.0a0__py3-none-any.whl → 2.231.0a0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -4,13 +4,13 @@ r'''
  <!--BEGIN STABILITY BANNER-->---


- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)
+ ![cdk-constructs: Developer Preview](https://img.shields.io/badge/cdk--constructs-developer--preview-informational.svg?style=for-the-badge)

- > The APIs of higher level constructs in this module are experimental and under active development.
- > They are subject to non-backward compatible changes or removal in any future version. These are
- > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
- > announced in the release notes. This means that while you may use them, you may need to update
- > your source code when upgrading to a newer version of this package.
+ > The APIs of higher level constructs in this module are in **developer preview** before they
+ > become stable. We will only make breaking changes to address unforeseen API issues. Therefore,
+ > these APIs are not subject to [Semantic Versioning](https://semver.org/), and breaking changes
+ > will be announced in release notes. This means that while you may use them, you may need to
+ > update your source code when upgrading to a newer version of this package.

  ---
  <!--END STABILITY BANNER-->
@@ -33,39 +33,88 @@ Here is the minimal example of defining an AWS EKS cluster

  ```python
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

  ## Architecture

- ```text
- +-----------------------------------------------+               +-----------------+
- | EKS Cluster                                   |    kubectl    |                 |
- |-----------------------------------------------|<-------------+| Kubectl Handler |
- | AWS::EKS::Cluster                             |               |    (Optional)   |
- | +--------------------+    +-----------------+ |               +-----------------+
- | |                    |    |                 | |
- | | Managed Node Group |    | Fargate Profile | |
- | |                    |    |                 | |
- | +--------------------+    +-----------------+ |
- +-----------------------------------------------+
-     ^
-     | connect self managed capacity
-     +
- +--------------------+
- | Auto Scaling Group |
- +--------------------+
+ ```text
+                                                     +-----------------+
+                                            kubectl  |                 |
+                                       +------------>| Kubectl Handler |
+                                       |             |    (Optional)   |
+                                       |             +-----------------+
+ +-------------------------------------+-------------------------------------+
+ | EKS Cluster (Auto Mode)                                                   |
+ | AWS::EKS::Cluster                                                         |
+ |                                                                           |
+ | +-----------------------------------------------------------------------+ |
+ | | Auto Mode Compute (Managed by EKS) (Default)                          | |
+ | |                                                                       | |
+ | | - Automatically provisions EC2 instances                              | |
+ | | - Auto scaling based on pod requirements                              | |
+ | | - No manual node group configuration needed                           | |
+ | |                                                                       | |
+ | +-----------------------------------------------------------------------+ |
+ |                                                                           |
+ +---------------------------------------------------------------------------+
  ```

  In a nutshell:

- * EKS Cluster - The cluster endpoint created by EKS.
- * Managed Node Group - EC2 worker nodes managed by EKS.
- * Fargate Profile - Fargate worker nodes managed by EKS.
- * Auto Scaling Group - EC2 worker nodes managed by the user.
- * Kubectl Handler (Optional) - Custom resource (i.e Lambda Function) for invoking kubectl commands on the
-   cluster - created by CDK
+ * **[Auto Mode](#eks-auto-mode)** (Default) – The fully managed capacity mode in EKS.
+   EKS automatically provisions and scales EC2 capacity based on pod requirements.
+   It manages internal *system* and *general-purpose* NodePools, handles networking and storage setup, and removes the need for user-managed node groups or Auto Scaling Groups.
+
+   ```python
+   cluster = eks.Cluster(self, "AutoModeCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+   ```
+ * **[Managed Node Groups](#managed-node-groups)** – The semi-managed capacity mode.
+   EKS provisions and manages EC2 nodes on your behalf, but you configure the instance types, scaling ranges, and update strategy.
+   AWS handles node health, draining, and rolling updates while you retain control over scaling and cost optimization.
+
+   You can also define *Fargate Profiles* that determine which pods or namespaces run on Fargate infrastructure.
+
+   ```python
+   cluster = eks.Cluster(self, "ManagedNodeCluster",
+       version=eks.KubernetesVersion.V1_34,
+       default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+   )
+
+   # Add a Fargate Profile for specific workloads (e.g., default namespace)
+   cluster.add_fargate_profile("FargateProfile",
+       selectors=[eks.Selector(namespace="default")
+       ]
+   )
+   ```
+ * **[Fargate Mode](#fargate-profiles)** – The Fargate capacity mode.
+   EKS runs your pods directly on AWS Fargate without provisioning EC2 nodes.
+
+   ```python
+   cluster = eks.FargateCluster(self, "FargateCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+   ```
+ * **[Self-Managed Nodes](#self-managed-capacity)** – The fully manual capacity mode.
+   You create and manage EC2 instances (via an Auto Scaling Group) and connect them to the cluster manually.
+   This provides maximum flexibility for custom AMIs or configurations but also the highest operational overhead.
+
+   ```python
+   cluster = eks.Cluster(self, "SelfManagedCluster",
+       version=eks.KubernetesVersion.V1_34
+   )
+
+   # Add self-managed Auto Scaling Group
+   cluster.add_auto_scaling_group_capacity("self-managed-asg",
+       instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+       min_capacity=1,
+       max_capacity=5
+   )
+   ```
+ * **[Kubectl Handler](#kubectl-support) (Optional)** – A Lambda-backed custom resource created by the AWS CDK to execute `kubectl` commands (like `apply` or `patch`) during deployment.
+   Regardless of the capacity mode, this handler may still be created to apply Kubernetes manifests as part of CDK provisioning.

  ## Provisioning cluster

@@ -73,7 +122,7 @@ Creating a new cluster is done using the `Cluster` constructs. The only required

  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -81,7 +130,7 @@ You can also use `FargateCluster` to provision a cluster that uses only fargate

  ```python
  eks.FargateCluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -90,13 +139,13 @@ be created by default. It will only be deployed when `kubectlProviderOptions`
  property is used.**

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
@@ -114,7 +163,7 @@ Auto Mode is enabled by default when creating a new cluster without specifying a

  ```python
  # Create EKS cluster with Auto Mode implicitly enabled
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -123,7 +172,7 @@ You can also explicitly enable Auto Mode using `defaultCapacityType`:

  ```python
  # Create EKS cluster with Auto Mode explicitly enabled
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE
  )
  ```
@@ -139,7 +188,7 @@ These node pools are managed automatically by EKS. You can configure which node

  ```python
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=["system", "general-purpose"]
@@ -155,7 +204,7 @@ You can disable the default node pools entirely by setting an empty array for `n

  ```python
  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=[]
@@ -172,7 +221,7 @@ If you prefer to manage your own node groups instead of using Auto Mode, you can

  ```python
  # Create EKS cluster with traditional managed node group
  cluster = eks.Cluster(self, "EksCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=3,  # Number of instances
      default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.LARGE)
@@ -183,7 +232,7 @@ You can also create a cluster with no initial capacity and add node groups later

  ```python
  cluster = eks.Cluster(self, "EksCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=0
  )
@@ -204,7 +253,7 @@ You can combine Auto Mode with traditional node groups for specific workload req

  ```python
  cluster = eks.Cluster(self, "Cluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=["system", "general-purpose"]
@@ -243,7 +292,7 @@ By default, when using `DefaultCapacityType.NODEGROUP`, this library will alloca

  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP
  )
  ```
@@ -252,7 +301,7 @@ At cluster instantiation time, you can customize the number of instances and the

  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=5,
      default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL)
@@ -265,7 +314,7 @@ Additional customizations are available post instantiation. To apply them, set t

  ```python
  cluster = eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=0
  )
@@ -318,7 +367,7 @@ The following code defines an Amazon EKS cluster with a default Fargate Profile

  ```python
  cluster = eks.FargateCluster(self, "MyCluster",
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -329,6 +378,39 @@ pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
  Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
  on Amazon EKS (minimum version v1.1.4).

+ ### Self-managed capacity
+
+ Self-managed capacity gives you the most control over your worker nodes by allowing you to create and manage your own EC2 Auto Scaling Groups. This approach provides maximum flexibility for custom AMIs, instance configurations, and scaling policies, but requires more operational overhead.
+
+ You can add self-managed capacity to any cluster using the `addAutoScalingGroupCapacity` method:
+
+ ```python
+ cluster = eks.Cluster(self, "Cluster",
+     version=eks.KubernetesVersion.V1_34
+ )
+
+ cluster.add_auto_scaling_group_capacity("self-managed-nodes",
+     instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+     min_capacity=1,
+     max_capacity=10,
+     desired_capacity=3
+ )
+ ```
+
+ You can specify custom subnets for the Auto Scaling Group:
+
+ ```python
+ # vpc: ec2.Vpc
+ # cluster: eks.Cluster
+
+
+ cluster.add_auto_scaling_group_capacity("custom-subnet-nodes",
+     vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
+     instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+     min_capacity=2
+ )
+ ```
+

  ### Endpoint Access

  When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`)
@@ -337,7 +419,7 @@ You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/

  ```python
  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      endpoint_access=eks.EndpointAccess.PRIVATE
  )
  ```
@@ -359,7 +441,7 @@ To deploy the controller on your EKS cluster, configure the `albController` prop

  ```python
  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      alb_controller=eks.AlbControllerOptions(
          version=eks.AlbControllerVersion.V2_8_2
      )
@@ -401,7 +483,7 @@ You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properti


  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      vpc=vpc,
      vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS)]
  )
@@ -445,13 +527,13 @@ To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cl
  `kubectlLayer` is the only required property in `kubectlProviderOptions`.

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
@@ -480,13 +562,13 @@ cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
  You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          environment={
              "http_proxy": "http://proxy.myproxy.com"
          }
@@ -507,13 +589,13 @@ Depending on which version of kubernetes you're targeting, you will need to use
  the `@aws-cdk/lambda-layer-kubectl-vXY` packages.

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl")
+         kubectl_layer=KubectlV34Layer(self, "kubectl")
      )
  )
  ```
@@ -523,15 +605,15 @@ cluster = eks.Cluster(self, "hello-eks",
  By default, the kubectl provider is configured with 1024MiB of memory. You can use the `memory` option to specify the memory size for the AWS Lambda function:

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  eks.Cluster(self, "MyCluster",
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          memory=Size.gibibytes(4)
      ),
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -564,7 +646,7 @@ When you create a cluster, you can specify a `mastersRole`. The `Cluster` constr
  # role: iam.Role

  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      masters_role=role
  )
  ```
@@ -585,7 +667,7 @@ You can use the `secretsEncryptionKey` to configure which key the cluster will u
  secrets_key = kms.Key(self, "SecretsKey")
  cluster = eks.Cluster(self, "MyCluster",
      secrets_encryption_key=secrets_key,
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -595,7 +677,7 @@ You can also use a similar configuration for running a cluster built using the F
  secrets_key = kms.Key(self, "SecretsKey")
  cluster = eks.FargateCluster(self, "MyFargateCluster",
      secrets_encryption_key=secrets_key,
-     version=eks.KubernetesVersion.V1_33
+     version=eks.KubernetesVersion.V1_34
  )
  ```

@@ -638,7 +720,7 @@ eks.AccessPolicy.from_access_policy_name("AmazonEKSAdminPolicy",
  Use `grantAccess()` to grant the AccessPolicy to an IAM principal:

  ```python
- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
  # vpc: ec2.Vpc


@@ -653,9 +735,9 @@ eks_admin_role = iam.Role(self, "EKSAdminRole",
  cluster = eks.Cluster(self, "Cluster",
      vpc=vpc,
      masters_role=cluster_admin_role,
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          memory=Size.gibibytes(4)
      )
  )
@@ -840,7 +922,7 @@ when a cluster is defined:

  ```python
  eks.Cluster(self, "MyCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      prune=False
  )
  ```
@@ -1159,7 +1241,7 @@ property. For example:
  ```python
  cluster = eks.Cluster(self, "Cluster",
      # ...
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER
      ]
  )
@@ -1215,9 +1297,9 @@ import aws_cdk as _aws_cdk_ceddda9d
  import aws_cdk.aws_autoscaling as _aws_cdk_aws_autoscaling_ceddda9d
  import aws_cdk.aws_ec2 as _aws_cdk_aws_ec2_ceddda9d
  import aws_cdk.aws_iam as _aws_cdk_aws_iam_ceddda9d
- import aws_cdk.aws_kms as _aws_cdk_aws_kms_ceddda9d
  import aws_cdk.aws_lambda as _aws_cdk_aws_lambda_ceddda9d
  import aws_cdk.aws_s3_assets as _aws_cdk_aws_s3_assets_ceddda9d
+ import aws_cdk.interfaces.aws_kms as _aws_cdk_interfaces_aws_kms_ceddda9d
  import constructs as _constructs_77d1e7e8


@@ -2199,7 +2281,7 @@ class AlbControllerOptions:
  Example::

  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      alb_controller=eks.AlbControllerOptions(
          version=eks.AlbControllerVersion.V2_8_2
      )
@@ -2411,7 +2493,7 @@ class AlbControllerVersion(
  Example::

  eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      alb_controller=eks.AlbControllerOptions(
          version=eks.AlbControllerVersion.V2_8_2
      )
@@ -2918,12 +3000,15 @@ class AutoScalingGroupCapacityOptions(

  Example::

- # vpc: ec2.Vpc
- # cluster: eks.Cluster
+ cluster = eks.Cluster(self, "SelfManagedCluster",
+     version=eks.KubernetesVersion.V1_34
+ )

- cluster.add_auto_scaling_group_capacity("nodes",
-     vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
-     instance_type=ec2.InstanceType("t2.medium")
+ # Add self-managed Auto Scaling Group
+ cluster.add_auto_scaling_group_capacity("self-managed-asg",
+     instance_type=ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
+     min_capacity=1,
+     max_capacity=5
  )
  '''
  if isinstance(vpc_subnets, dict):
@@ -4121,7 +4206,7 @@ class ClusterCommonOptions:
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -4159,12 +4244,12 @@ class ClusterCommonOptions:
  import aws_cdk as cdk
  from aws_cdk import aws_ec2 as ec2
  from aws_cdk import aws_iam as iam
- from aws_cdk import aws_kms as kms
  from aws_cdk import aws_lambda as lambda_
+ from aws_cdk.interfaces import aws_kms as interfaces_aws_kms

  # alb_controller_version: eks_v2_alpha.AlbControllerVersion
  # endpoint_access: eks_v2_alpha.EndpointAccess
- # key_ref: kms.IKeyRef
+ # key_ref: interfaces_aws_kms.IKeyRef
  # kubernetes_version: eks_v2_alpha.KubernetesVersion
  # layer_version: lambda.LayerVersion
  # policy: Any
@@ -4416,7 +4501,7 @@ class ClusterCommonOptions:
  @builtins.property
  def secrets_encryption_key(
      self,
- ) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef]:
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.

      :default:
@@ -4428,7 +4513,7 @@ class ClusterCommonOptions:
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)

  @builtins.property
  def security_group(
@@ -4520,7 +4605,7 @@ class ClusterLoggingTypes(enum.Enum):

  cluster = eks.Cluster(self, "Cluster",
      # ...
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER
      ]
  )
@@ -4597,7 +4682,7 @@ class ClusterProps(ClusterCommonOptions):
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -4641,12 +4726,15 @@ class ClusterProps(ClusterCommonOptions):

  Example::

- cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33,
-     default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
-     compute=eks.ComputeConfig(
-         node_pools=["system", "general-purpose"]
-     )
+ cluster = eks.Cluster(self, "ManagedNodeCluster",
+     version=eks.KubernetesVersion.V1_34,
+     default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+ )
+
+ # Add a Fargate Profile for specific workloads (e.g., default namespace)
+ cluster.add_fargate_profile("FargateProfile",
+     selectors=[eks.Selector(namespace="default")
+     ]
  )
  '''
  if isinstance(alb_controller, dict):
@@ -4861,7 +4949,7 @@ class ClusterProps(ClusterCommonOptions):
  @builtins.property
  def secrets_encryption_key(
      self,
- ) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef]:
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.

      :default:
@@ -4873,7 +4961,7 @@ class ClusterProps(ClusterCommonOptions):
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)

  @builtins.property
  def security_group(
@@ -5065,7 +5153,7 @@ class ComputeConfig:
  Example::

  cluster = eks.Cluster(self, "EksAutoCluster",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
      compute=eks.ComputeConfig(
          node_pools=["system", "general-purpose"]
@@ -5169,7 +5257,7 @@ class DefaultCapacityType(enum.Enum):
  Example::

  cluster = eks.Cluster(self, "HelloEKS",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
      default_capacity=0
  )
@@ -5369,7 +5457,7 @@ class EndpointAccess(
  Example::

  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      endpoint_access=eks.EndpointAccess.PRIVATE
  )
  '''
@@ -5472,7 +5560,7 @@ class FargateClusterProps(ClusterCommonOptions):
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -5506,8 +5594,8 @@ class FargateClusterProps(ClusterCommonOptions):

  Example::

- cluster = eks.FargateCluster(self, "MyCluster",
-     version=eks.KubernetesVersion.V1_33
+ cluster = eks.FargateCluster(self, "FargateCluster",
+     version=eks.KubernetesVersion.V1_34
  )
  '''
  if isinstance(alb_controller, dict):
@@ -5707,7 +5795,7 @@ class FargateClusterProps(ClusterCommonOptions):
  @builtins.property
  def secrets_encryption_key(
      self,
- ) -> typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef]:
+ ) -> typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef]:
      '''(experimental) KMS secret for envelope encryption for Kubernetes secrets.

      :default:
@@ -5719,7 +5807,7 @@ class FargateClusterProps(ClusterCommonOptions):
      :stability: experimental
      '''
      result = self._values.get("secrets_encryption_key")
-     return typing.cast(typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef], result)
+     return typing.cast(typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef], result)

  @builtins.property
  def security_group(
@@ -5968,10 +6056,15 @@ class FargateProfileOptions:

  Example::

- # cluster: eks.Cluster
+ cluster = eks.Cluster(self, "ManagedNodeCluster",
+     version=eks.KubernetesVersion.V1_34,
+     default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+ )

- cluster.add_fargate_profile("MyProfile",
-     selectors=[eks.Selector(namespace="default")]
+ # Add a Fargate Profile for specific workloads (e.g., default namespace)
+ cluster.add_fargate_profile("FargateProfile",
+     selectors=[eks.Selector(namespace="default")
+     ]
  )
  '''
  if isinstance(subnet_selection, dict):
@@ -8214,13 +8307,13 @@ class KubectlProviderOptions:

  Example::

- from aws_cdk.lambda_layer_kubectl_v33 import KubectlV33Layer
+ from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer


  cluster = eks.Cluster(self, "hello-eks",
-     version=eks.KubernetesVersion.V1_33,
+     version=eks.KubernetesVersion.V1_34,
      kubectl_provider_options=eks.KubectlProviderOptions(
-         kubectl_layer=KubectlV33Layer(self, "kubectl"),
+         kubectl_layer=KubectlV34Layer(self, "kubectl"),
          environment={
              "http_proxy": "http://proxy.myproxy.com"
          }
@@ -9495,12 +9588,15 @@ class KubernetesVersion(
 
  Example::
 
- cluster = eks.Cluster(self, "EksAutoCluster",
- version=eks.KubernetesVersion.V1_33,
- default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
- compute=eks.ComputeConfig(
- node_pools=["system", "general-purpose"]
- )
+ cluster = eks.Cluster(self, "ManagedNodeCluster",
+ version=eks.KubernetesVersion.V1_34,
+ default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+ )
+
+ # Add a Fargate Profile for specific workloads (e.g., default namespace)
+ cluster.add_fargate_profile("FargateProfile",
+ selectors=[eks.Selector(namespace="default")
+ ]
  )
  '''
 
@@ -9635,6 +9731,19 @@ class KubernetesVersion(
  '''
  return typing.cast("KubernetesVersion", jsii.sget(cls, "V1_33"))
 
+ @jsii.python.classproperty
+ @jsii.member(jsii_name="V1_34")
+ def V1_34(cls) -> "KubernetesVersion":
+ '''(experimental) Kubernetes version 1.34.
+
+ When creating a ``Cluster`` with this version, you need to also specify the
+ ``kubectlLayer`` property with a ``KubectlV34Layer`` from
+ ``@aws-cdk/lambda-layer-kubectl-v34``.
+
+ :stability: experimental
+ '''
+ return typing.cast("KubernetesVersion", jsii.sget(cls, "V1_34"))
+
  @builtins.property
  @jsii.member(jsii_name="version")
  def version(self) -> builtins.str:
@@ -10196,7 +10305,7 @@ class NodegroupOptions:
  Example::
 
  cluster = eks.Cluster(self, "HelloEKS",
- version=eks.KubernetesVersion.V1_33,
+ version=eks.KubernetesVersion.V1_34,
  default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
  default_capacity=0
  )
@@ -11515,7 +11624,8 @@ class ServiceAccount(
  self,
  statement: _aws_cdk_aws_iam_ceddda9d.PolicyStatement,
  ) -> builtins.bool:
- '''
+ '''(deprecated) Add to the policy of this principal.
+
  :param statement: -
 
  :deprecated: use ``addToPrincipalPolicy()``
@@ -12507,12 +12617,15 @@ class Cluster(
 
  Example::
 
- cluster = eks.Cluster(self, "EksAutoCluster",
- version=eks.KubernetesVersion.V1_33,
- default_capacity_type=eks.DefaultCapacityType.AUTOMODE,
- compute=eks.ComputeConfig(
- node_pools=["system", "general-purpose"]
- )
+ cluster = eks.Cluster(self, "ManagedNodeCluster",
+ version=eks.KubernetesVersion.V1_34,
+ default_capacity_type=eks.DefaultCapacityType.NODEGROUP
+ )
+
+ # Add a Fargate Profile for specific workloads (e.g., default namespace)
+ cluster.add_fargate_profile("FargateProfile",
+ selectors=[eks.Selector(namespace="default")
+ ]
  )
  '''
 
@@ -12538,7 +12651,7 @@ class Cluster(
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13433,8 +13546,8 @@ class FargateCluster(
 
  Example::
 
- cluster = eks.FargateCluster(self, "MyCluster",
- version=eks.KubernetesVersion.V1_33
+ cluster = eks.FargateCluster(self, "FargateCluster",
+ version=eks.KubernetesVersion.V1_34
  )
  '''
 
@@ -13455,7 +13568,7 @@ class FargateCluster(
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13908,7 +14021,7 @@ def _typecheckingstub__522396bf3ea38086bd92ddd50181dc1757140cccc27f7d0415c200a26
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13931,7 +14044,7 @@ def _typecheckingstub__cdebba88d00ede95b7f48fc97c266609fdb0fc0ef3bb709493d319c84
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -13989,7 +14102,7 @@ def _typecheckingstub__89419afd037884b6a69d80af0bf5c1fe35164b8d31e7e5746501350e5
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14608,7 +14721,7 @@ def _typecheckingstub__f953a3ebdf317cd4c17c2caf66c079973022b58e6c5cf124f9d5f0089
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14826,7 +14939,7 @@ def _typecheckingstub__673db9ae76799e064c85ea6382670fd3efa0ca3c8a72239cc0723fff0
  masters_role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
  prune: typing.Optional[builtins.bool] = None,
  role: typing.Optional[_aws_cdk_aws_iam_ceddda9d.IRole] = None,
- secrets_encryption_key: typing.Optional[_aws_cdk_aws_kms_ceddda9d.IKeyRef] = None,
+ secrets_encryption_key: typing.Optional[_aws_cdk_interfaces_aws_kms_ceddda9d.IKeyRef] = None,
  security_group: typing.Optional[_aws_cdk_aws_ec2_ceddda9d.ISecurityGroup] = None,
  service_ipv4_cidr: typing.Optional[builtins.str] = None,
  tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
@@ -14843,3 +14956,6 @@ def _typecheckingstub__867c24d91e82b6c927deaaa713ad154d44030f8a2d7da291636600790
  ) -> None:
  """Type checking stubs"""
  pass
+
+ for cls in [IAccessEntry, IAccessPolicy, IAddon, ICluster, IKubectlProvider, INodegroup]:
+ typing.cast(typing.Any, cls).__protocol_attrs__ = typing.cast(typing.Any, cls).__protocol_attrs__ - set(['__jsii_proxy_class__', '__jsii_type__'])
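The final hunk trims jsii's bookkeeping attributes (`__jsii_proxy_class__`, `__jsii_type__`) out of each interface's `__protocol_attrs__` set, so that structural `isinstance` checks against these `@runtime_checkable` protocols only consider the real interface members. A minimal stdlib sketch of the kind of structural check this protects (the names below are illustrative stand-ins, not the jsii-generated classes):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class KeyRefLike(Protocol):
    """Illustrative stand-in for a jsii interface such as IKeyRef."""

    def key_arn(self) -> str: ...


class PlainKey:
    """Unrelated class that merely has the protocol's shape."""

    def key_arn(self) -> str:
        return "arn:aws:kms:example"


# Structural check: passes because PlainKey supplies every member the
# protocol declares. If the protocol class body also carried extra
# class-level attributes (as jsii proxies do), objects lacking them
# would fail this check -- hence the attribute-stripping in the hunk.
assert isinstance(PlainKey(), KeyRefLike)
```

On CPython 3.12+, `isinstance` consults the cached `__protocol_attrs__` set that the patched loop mutates; removing an entry from it removes that member from the structural check.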