aws-cdk.aws-eks-v2-alpha 2.178.0a0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -0,0 +1,1058 @@
1
+ Metadata-Version: 2.1
2
+ Name: aws-cdk.aws-eks-v2-alpha
3
+ Version: 2.178.0a0
4
+ Summary: The CDK Construct Library for AWS::EKS
5
+ Home-page: https://github.com/aws/aws-cdk
6
+ Author: Amazon Web Services
7
+ License: Apache-2.0
8
+ Project-URL: Source, https://github.com/aws/aws-cdk.git
9
+ Classifier: Intended Audience :: Developers
10
+ Classifier: Operating System :: OS Independent
11
+ Classifier: Programming Language :: JavaScript
12
+ Classifier: Programming Language :: Python :: 3 :: Only
13
+ Classifier: Programming Language :: Python :: 3.8
14
+ Classifier: Programming Language :: Python :: 3.9
15
+ Classifier: Programming Language :: Python :: 3.10
16
+ Classifier: Programming Language :: Python :: 3.11
17
+ Classifier: Typing :: Typed
18
+ Classifier: Development Status :: 4 - Beta
19
+ Classifier: License :: OSI Approved
20
+ Classifier: Framework :: AWS CDK
21
+ Classifier: Framework :: AWS CDK :: 2
22
+ Requires-Python: ~=3.8
23
+ Description-Content-Type: text/markdown
24
+ License-File: LICENSE
25
+ License-File: NOTICE
26
+ Requires-Dist: aws-cdk-lib<3.0.0,>=2.178.0
27
+ Requires-Dist: constructs<11.0.0,>=10.0.0
28
+ Requires-Dist: jsii<2.0.0,>=1.106.0
29
+ Requires-Dist: publication>=0.0.3
30
+ Requires-Dist: typeguard<4.3.0,>=2.13.3
31
+
32
+ # Amazon EKS V2 Construct Library
33
+
34
+ <!--BEGIN STABILITY BANNER-->---
35
+
36
+
37
+ ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)
38
+
39
+ > The APIs of higher level constructs in this module are experimental and under active development.
40
+ > They are subject to non-backward compatible changes or removal in any future version. These are
41
+ > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
42
+ > announced in the release notes. This means that while you may use them, you may need to update
43
+ > your source code when upgrading to a newer version of this package.
44
+
45
+ ---
46
+ <!--END STABILITY BANNER-->
47
+
48
+ The eks-v2-alpha module is a rewrite of the existing aws-eks module (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html). This new iteration leverages native L1 CFN resources, replacing the previous custom resource approach for creating EKS clusters and Fargate Profiles.
49
+
50
+ Compared to the original EKS module, it has the following major changes:
51
+
52
+ * Use native L1 AWS::EKS::Cluster resource to replace custom resource Custom::AWSCDK-EKS-Cluster
53
+ * Use native L1 AWS::EKS::FargateProfile resource to replace custom resource Custom::AWSCDK-EKS-FargateProfile
54
+ * Kubectl Handler will not be created by default. It will only be created if users specify it.
55
+ * Remove AwsAuth construct. Permissions to the cluster will be managed by Access Entry.
56
+ * Remove the limit of 1 cluster per stack
57
+ * Remove nested stacks
58
+ * API changes to make them more ergonomic.
59
+
60
+ ## Quick start
61
+
62
+ Here is a minimal example of defining an AWS EKS cluster:
63
+
64
+ ```python
65
+ cluster = eks.Cluster(self, "hello-eks",
66
+ version=eks.KubernetesVersion.V1_31
67
+ )
68
+ ```
69
+
70
+ ## Architecture
71
+
72
+ ```text
73
+ +-----------------------------------------------+
74
+ | EKS Cluster | kubectl | |
75
+ | -----------------|<--------+| Kubectl Handler |
76
+ | AWS::EKS::Cluster (Optional) |
77
+ | +--------------------+ +-----------------+ |
78
+ | | | | | |
79
+ | | Managed Node Group | | Fargate Profile | |
80
+ | | | | | |
81
+ | +--------------------+ +-----------------+ |
82
+ +-----------------------------------------------+
83
+ ^
84
+ | connect self managed capacity
85
+ +
86
+ +--------------------+
87
+ | Auto Scaling Group |
88
+ +--------------------+
89
+ ```
90
+
91
+ In a nutshell:
92
+
93
+ * EKS Cluster - The cluster endpoint created by EKS.
94
+ * Managed Node Group - EC2 worker nodes managed by EKS.
95
+ * Fargate Profile - Fargate worker nodes managed by EKS.
96
+ * Auto Scaling Group - EC2 worker nodes managed by the user.
97
+ * Kubectl Handler (Optional) - Custom resource (i.e. Lambda Function) for invoking kubectl commands on the
98
+ cluster - created by CDK
99
+
100
+ ## Provisioning cluster
101
+
102
+ Creating a new cluster is done using the `Cluster` construct. The only required property is the Kubernetes version.
103
+
104
+ ```python
105
+ eks.Cluster(self, "HelloEKS",
106
+ version=eks.KubernetesVersion.V1_31
107
+ )
108
+ ```
109
+
110
+ You can also use `FargateCluster` to provision a cluster that uses only Fargate workers.
111
+
112
+ ```python
113
+ eks.FargateCluster(self, "HelloEKS",
114
+ version=eks.KubernetesVersion.V1_31
115
+ )
116
+ ```
117
+
118
+ **Note: Unlike the previous EKS cluster, `Kubectl Handler` will not
119
+ be created by default. It will only be deployed when the `kubectlProviderOptions`
120
+ property is used.**
121
+
122
+ ```python
123
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
124
+
125
+
126
+ eks.Cluster(self, "hello-eks",
127
+ version=eks.KubernetesVersion.V1_31,
128
+ kubectl_provider_options=eks.KubectlProviderOptions(
129
+ kubectl_layer=KubectlV31Layer(self, "kubectl")
130
+ )
131
+ )
132
+ ```
133
+
134
+ ### Managed node groups
135
+
136
+ Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
137
+ With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
138
+
139
+ > For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).
140
+
141
+ **Managed Node Groups are the recommended way to allocate cluster capacity.**
142
+
143
+ By default, this library will allocate a managed node group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).
144
+
145
+ At cluster instantiation time, you can customize the number of instances and their type:
146
+
147
+ ```python
148
+ eks.Cluster(self, "HelloEKS",
149
+ version=eks.KubernetesVersion.V1_31,
150
+ default_capacity=5,
151
+ default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL)
152
+ )
153
+ ```
154
+
155
+ To access the node group that was created on your behalf, you can use `cluster.defaultNodegroup`.
156
+
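+ For example, a minimal sketch of reading that property:
+
+ ```python
+ # cluster: eks.Cluster
+
+ # the managed node group allocated on your behalf (not set when `default_capacity` is 0)
+ default_ng = cluster.default_nodegroup
+ ```
+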
157
+ Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the `cluster.addNodegroupCapacity` method:
158
+
159
+ ```python
160
+ cluster = eks.Cluster(self, "HelloEKS",
161
+ version=eks.KubernetesVersion.V1_31,
162
+ default_capacity=0
163
+ )
164
+
165
+ cluster.add_nodegroup_capacity("custom-node-group",
166
+ instance_types=[ec2.InstanceType("m5.large")],
167
+ min_size=4,
168
+ disk_size=100
169
+ )
170
+ ```
171
+
172
+ ### Fargate profiles
173
+
174
+ AWS Fargate is a technology that provides on-demand, right-sized compute
175
+ capacity for containers. With AWS Fargate, you no longer have to provision,
176
+ configure, or scale groups of virtual machines to run containers. This removes
177
+ the need to choose server types, decide when to scale your node groups, or
178
+ optimize cluster packing.
179
+
180
+ You can control which pods start on Fargate and how they run with Fargate
181
+ Profiles, which are defined as part of your Amazon EKS cluster.
182
+
183
+ See [Fargate Considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations) in the AWS EKS User Guide.
184
+
185
+ You can add Fargate Profiles to any EKS cluster defined in your CDK app
186
+ through the `addFargateProfile()` method. The following example adds a profile
187
+ that will match all pods from the "default" namespace:
188
+
189
+ ```python
190
+ # cluster: eks.Cluster
191
+
192
+ cluster.add_fargate_profile("MyProfile",
193
+ selectors=[eks.Selector(namespace="default")]
194
+ )
195
+ ```
196
+
197
+ You can also directly use the `FargateProfile` construct to create profiles under different scopes:
198
+
199
+ ```python
200
+ # cluster: eks.Cluster
201
+
202
+ eks.FargateProfile(self, "MyProfile",
203
+ cluster=cluster,
204
+ selectors=[eks.Selector(namespace="default")]
205
+ )
206
+ ```
207
+
208
+ To create an EKS cluster that **only** uses Fargate capacity, you can use `FargateCluster`.
209
+ The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to [run CoreDNS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns).
210
+
211
+ ```python
212
+ cluster = eks.FargateCluster(self, "MyCluster",
213
+ version=eks.KubernetesVersion.V1_31
214
+ )
215
+ ```
216
+
217
+ `FargateCluster` will create a default `FargateProfile` which can be accessed via the cluster's `defaultProfile` property. The created profile can also be customized by passing options as with `addFargateProfile`.
218
+
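+ For example, a short sketch of accessing that property:
+
+ ```python
+ # cluster: eks.FargateCluster
+
+ # the profile created automatically for the "default" and "kube-system" namespaces
+ default_profile = cluster.default_profile
+ ```
+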
219
+ **NOTE**: Classic Load Balancers and Network Load Balancers are not supported on
220
+ pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
221
+ Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
222
+ on Amazon EKS (minimum version v1.1.4).
223
+
224
+ ### Endpoint Access
225
+
226
+ When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`).
227
+
228
+ You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) by using the `endpointAccess` property:
229
+
230
+ ```python
231
+ cluster = eks.Cluster(self, "hello-eks",
232
+ version=eks.KubernetesVersion.V1_31,
233
+ endpoint_access=eks.EndpointAccess.PRIVATE
234
+ )
235
+ ```
236
+
237
+ The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and `kubectl` commands issued by this library stay within your VPC.
238
+
239
+ ### Alb Controller
240
+
241
+ Some Kubernetes resources are commonly implemented on AWS with the help of the [ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/).
242
+
243
+ From the docs:
244
+
245
+ > AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
246
+ >
247
+ > * It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
248
+ > * It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
249
+
250
+ To deploy the controller on your EKS cluster, configure the `albController` property:
251
+
252
+ ```python
253
+ eks.Cluster(self, "HelloEKS",
254
+ version=eks.KubernetesVersion.V1_31,
255
+ alb_controller=eks.AlbControllerOptions(
256
+ version=eks.AlbControllerVersion.V2_8_2
257
+ )
258
+ )
259
+ ```
260
+
261
+ The `albController` requires `defaultCapacity` or at least one nodegroup. If there's no `defaultCapacity` or available
262
+ nodegroup for the cluster, the `albController` deployment will fail.
263
+
264
+ Querying the controller pods should look something like this:
265
+
266
+ ```console
267
+ ❯ kubectl get pods -n kube-system
268
+ NAME READY STATUS RESTARTS AGE
269
+ aws-load-balancer-controller-76bd6c7586-d929p 1/1 Running 0 109m
270
+ aws-load-balancer-controller-76bd6c7586-fqxph 1/1 Running 0 109m
271
+ ...
272
+ ...
273
+ ```
274
+
275
+ Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller.
276
+ If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources.
277
+ Currently, the EKS construct library does not detect such dependencies, so they must be declared explicitly.
278
+
279
+ For example:
280
+
281
+ ```python
282
+ # cluster: eks.Cluster
283
+
284
+ manifest = cluster.add_manifest("manifest", {})
285
+ if cluster.alb_controller:
286
+ manifest.node.add_dependency(cluster.alb_controller)
287
+ ```
288
+
289
+ You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properties:
290
+
291
+ ```python
292
+ # vpc: ec2.Vpc
293
+
294
+
295
+ eks.Cluster(self, "HelloEKS",
296
+ version=eks.KubernetesVersion.V1_31,
297
+ vpc=vpc,
298
+ vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS)]
299
+ )
300
+ ```
301
+
302
+ If you do not specify a VPC, one will be created on your behalf, which you can then access via `cluster.vpc`. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).
303
+
304
+ Please note that the `vpcSubnets` property defines the subnets where EKS will place the *control plane* ENIs. To choose
305
+ the subnets where EKS will place the worker nodes, please refer to the **Provisioning cluster** section above.
306
+
307
+ If you allocate self-managed capacity, you can specify which subnets the Auto Scaling Group should use:
308
+
309
+ ```python
310
+ # vpc: ec2.Vpc
311
+ # cluster: eks.Cluster
312
+
313
+ cluster.add_auto_scaling_group_capacity("nodes",
314
+ vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
315
+ instance_type=ec2.InstanceType("t2.medium")
316
+ )
317
+ ```
318
+
319
+ There is an additional component you might want to provision within the VPC.
320
+
321
+ The `KubectlHandler` is a Lambda function responsible for issuing `kubectl` and `helm` commands against the cluster when you add resource manifests to the cluster.
322
+
323
+ The handler association to the VPC is derived from the `endpointAccess` configuration. The rule of thumb is: *If the cluster VPC can be associated, it will be*.
324
+
325
+ Breaking this down, it means that if the endpoint exposes private access (via `EndpointAccess.PRIVATE` or `EndpointAccess.PUBLIC_AND_PRIVATE`), and the VPC contains **private** subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.
326
+
327
+ If the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) **or** the VPC does not contain private subnets, the function will not be provisioned within the VPC.
328
+
329
+ If your use-case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the ClusterProps (as `kubectlLambdaRole`) of the EKS Cluster construct.
330
+
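+ A rough sketch of passing such a role, assuming the `kubectlLambdaRole` property mentioned above is available in your version of the construct:
+
+ ```python
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
+
+
+ # role the kubectl handler Lambda will assume; its trust policy must allow lambda.amazonaws.com
+ handler_role = iam.Role(self, "KubectlHandlerRole",
+     assumed_by=iam.ServicePrincipal("lambda.amazonaws.com")
+ )
+
+ eks.Cluster(self, "hello-eks",
+     version=eks.KubernetesVersion.V1_31,
+     kubectl_lambda_role=handler_role,  # assumption: custom handler role passed via ClusterProps
+     kubectl_provider_options=eks.KubectlProviderOptions(
+         kubectl_layer=KubectlV31Layer(self, "kubectl")
+     )
+ )
+ ```
+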
331
+ ### Kubectl Support
332
+
333
+ You can choose to have CDK create a `Kubectl Handler` - a Python Lambda Function to
334
+ apply k8s manifests using `kubectl apply`. This handler will not be created by default.
335
+
336
+ To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cluster.
337
+ `kubectlLayer` is the only required property in `kubectlProviderOptions`.
338
+
339
+ ```python
340
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
341
+
342
+
343
+ eks.Cluster(self, "hello-eks",
344
+ version=eks.KubernetesVersion.V1_31,
345
+ kubectl_provider_options=eks.KubectlProviderOptions(
346
+ kubectl_layer=KubectlV31Layer(self, "kubectl")
347
+ )
348
+ )
349
+ ```
350
+
351
+ The `Kubectl Handler` created along with the cluster will be granted admin permissions to the cluster.
352
+
353
+ If you want to use an existing kubectl provider function, for example to keep tight control over the trusted entities on your IAM roles, you can import the existing provider and then use it when importing the cluster:
354
+
355
+ ```python
356
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
357
+
358
+
359
+ handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
360
+ # get the serviceToken from the custom resource provider
361
+ function_arn = lambda_.Function.from_function_name(self, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").function_arn
362
+ kubectl_provider = eks.KubectlProvider.from_kubectl_provider_attributes(self, "KubectlProvider",
363
+ service_token=function_arn,
364
+ role=handler_role
365
+ )
366
+
367
+ cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
368
+ cluster_name="cluster",
369
+ kubectl_provider=kubectl_provider
370
+ )
371
+ ```
372
+
373
+ #### Environment
374
+
375
+ You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an HTTP proxy:
376
+
377
+ ```python
378
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
379
+
380
+
381
+ cluster = eks.Cluster(self, "hello-eks",
382
+ version=eks.KubernetesVersion.V1_31,
383
+ kubectl_provider_options=eks.KubectlProviderOptions(
384
+ kubectl_layer=KubectlV31Layer(self, "kubectl"),
385
+ environment={
386
+ "http_proxy": "http://proxy.myproxy.com"
387
+ }
388
+ )
389
+ )
390
+ ```
391
+
392
+ #### Runtime
393
+
394
+ The kubectl handler uses `kubectl`, `helm` and the `aws` CLI in order to
395
+ interact with the cluster. These are bundled into AWS Lambda layers included in
396
+ the `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` modules.
397
+
398
+ The version of kubectl used must be compatible with the Kubernetes version of the
399
+ cluster. kubectl is supported within one minor version (older or newer) of Kubernetes
400
+ (see [Kubernetes version skew policy](https://kubernetes.io/releases/version-skew-policy/#kubectl)).
401
+ Depending on which version of kubernetes you're targeting, you will need to use one of
402
+ the `@aws-cdk/lambda-layer-kubectl-vXY` packages.
403
+
404
+ ```python
405
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
406
+
407
+
408
+ cluster = eks.Cluster(self, "hello-eks",
409
+ version=eks.KubernetesVersion.V1_31,
410
+ kubectl_provider_options=eks.KubectlProviderOptions(
411
+ kubectl_layer=KubectlV31Layer(self, "kubectl")
412
+ )
413
+ )
414
+ ```
415
+
416
+ #### Memory
417
+
418
+ By default, the kubectl provider is configured with 1024MiB of memory. You can use the `memory` option to specify the memory size for the AWS Lambda function:
419
+
420
+ ```python
421
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
422
+
423
+
424
+ eks.Cluster(self, "MyCluster",
425
+ kubectl_provider_options=eks.KubectlProviderOptions(
426
+ kubectl_layer=KubectlV31Layer(self, "kubectl"),
427
+ memory=Size.gibibytes(4)
428
+ ),
429
+ version=eks.KubernetesVersion.V1_31
430
+ )
431
+ ```
432
+
433
+ ### ARM64 Support
434
+
435
+ Instance types with `ARM64` architecture are supported in both managed node groups and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest
436
+ Amazon Linux 2 AMI for ARM64 will be automatically selected.
437
+
438
+ ```python
439
+ # cluster: eks.Cluster
440
+
441
+ # add a managed ARM64 nodegroup
442
+ cluster.add_nodegroup_capacity("extra-ng-arm",
443
+ instance_types=[ec2.InstanceType("m6g.medium")],
444
+ min_size=2
445
+ )
446
+
447
+ # add a self-managed ARM64 nodegroup
448
+ cluster.add_auto_scaling_group_capacity("self-ng-arm",
449
+ instance_type=ec2.InstanceType("m6g.medium"),
450
+ min_capacity=2
451
+ )
452
+ ```
453
+
454
+ ### Masters Role
455
+
456
+ When you create a cluster, you can specify a `mastersRole`. The `Cluster` construct will associate this role with `AmazonEKSClusterAdminPolicy` through [Access Entry](https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html).
457
+
458
+ ```python
459
+ # role: iam.Role
460
+
461
+ eks.Cluster(self, "HelloEKS",
462
+ version=eks.KubernetesVersion.V1_31,
463
+ masters_role=role
464
+ )
465
+ ```
466
+
467
+ If you do not specify it, you won't have access to the cluster from outside of the CDK application.
468
+
469
+ ### Encryption
470
+
471
+ When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled.
472
+ The documentation on [creating a cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
473
+ can provide more details about the customer master key (CMK) that can be used for the encryption.
474
+
475
+ You can use the `secretsEncryptionKey` to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.
476
+
477
+ > This setting can only be specified when the cluster is created and cannot be updated.
478
+
479
+ ```python
480
+ secrets_key = kms.Key(self, "SecretsKey")
481
+ cluster = eks.Cluster(self, "MyCluster",
482
+ secrets_encryption_key=secrets_key,
483
+ version=eks.KubernetesVersion.V1_31
484
+ )
485
+ ```
486
+
487
+ You can also use a similar configuration for running a cluster built using the FargateCluster construct.
488
+
489
+ ```python
490
+ secrets_key = kms.Key(self, "SecretsKey")
491
+ cluster = eks.FargateCluster(self, "MyFargateCluster",
492
+ secrets_encryption_key=secrets_key,
493
+ version=eks.KubernetesVersion.V1_31
494
+ )
495
+ ```
496
+
497
+ The Amazon Resource Name (ARN) for that CMK can be retrieved.
498
+
499
+ ```python
500
+ # cluster: eks.Cluster
501
+
502
+ cluster_encryption_config_key_arn = cluster.cluster_encryption_config_key_arn
503
+ ```
504
+
505
+ ## Permissions and Security
506
+
507
+ In the new EKS module, the `ConfigMap` authentication mode is deprecated. Clusters created by the new module will use the `API` authentication mode, and Access Entry will be the only way to grant permissions to specific IAM users and roles.
508
+
509
+ ### Access Entry
510
+
511
+ An access entry is a cluster identity, directly linked to an AWS IAM principal user or role, that is used to authenticate to
512
+ an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.
513
+
514
+ Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. Amazon EKS supports
515
+ only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS.
516
+ Amazon EKS access policies include permission sets that support common use cases of administration, editing, or read-only access
517
+ to Kubernetes resources. See [Access Policy Permissions](https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html#access-policy-permissions) for more details.
518
+
519
+ Use `AccessPolicy` to include predefined AWS managed policies:
520
+
521
+ ```python
522
+ # AmazonEKSClusterAdminPolicy with `cluster` scope
523
+ eks.AccessPolicy.from_access_policy_name("AmazonEKSClusterAdminPolicy",
524
+ access_scope_type=eks.AccessScopeType.CLUSTER
525
+ )
526
+ # AmazonEKSAdminPolicy with `namespace` scope
527
+ eks.AccessPolicy.from_access_policy_name("AmazonEKSAdminPolicy",
528
+ access_scope_type=eks.AccessScopeType.NAMESPACE,
529
+ namespaces=["foo", "bar"]
530
+ )
531
+ ```
532
+
533
+ Use `grantAccess()` to grant the AccessPolicy to an IAM principal:
534
+
535
+ ```python
536
+ from aws_cdk.lambda_layer_kubectl_v31 import KubectlV31Layer
537
+ # vpc: ec2.Vpc
538
+
539
+
540
+ cluster_admin_role = iam.Role(self, "ClusterAdminRole",
541
+ assumed_by=iam.ArnPrincipal("arn_for_trusted_principal")
542
+ )
543
+
544
+ eks_admin_role = iam.Role(self, "EKSAdminRole",
545
+ assumed_by=iam.ArnPrincipal("arn_for_trusted_principal")
546
+ )
547
+
548
+ cluster = eks.Cluster(self, "Cluster",
549
+ vpc=vpc,
550
+ masters_role=cluster_admin_role,
551
+ version=eks.KubernetesVersion.V1_31,
552
+ kubectl_provider_options=eks.KubectlProviderOptions(
553
+ kubectl_layer=KubectlV31Layer(self, "kubectl"),
554
+ memory=Size.gibibytes(4)
555
+ )
556
+ )
557
+
558
+ # Cluster Admin role for this cluster
559
+ cluster.grant_access("clusterAdminAccess", cluster_admin_role.role_arn, [
560
+ eks.AccessPolicy.from_access_policy_name("AmazonEKSClusterAdminPolicy",
561
+ access_scope_type=eks.AccessScopeType.CLUSTER
562
+ )
563
+ ])
564
+
565
+ # EKS Admin role for specified namespaces of this cluster
566
+ cluster.grant_access("eksAdminRoleAccess", eks_admin_role.role_arn, [
567
+ eks.AccessPolicy.from_access_policy_name("AmazonEKSAdminPolicy",
568
+ access_scope_type=eks.AccessScopeType.NAMESPACE,
569
+ namespaces=["foo", "bar"]
570
+ )
571
+ ])
572
+ ```
573
+
574
+ By default, the cluster creator role will be granted cluster admin permissions. You can disable this by setting
575
+ `bootstrapClusterCreatorAdminPermissions` to false.
576
+
577
+ > **Note** - Switching `bootstrapClusterCreatorAdminPermissions` on an existing cluster would cause cluster replacement and should be avoided in production.
578
+
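+ For example, a short sketch of disabling it at cluster creation time:
+
+ ```python
+ eks.Cluster(self, "HelloEKS",
+     version=eks.KubernetesVersion.V1_31,
+     # the identity that creates the cluster will NOT receive cluster admin access automatically
+     bootstrap_cluster_creator_admin_permissions=False
+ )
+ ```
+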
579
+ ### Cluster Security Group
580
+
581
+ When you create an Amazon EKS cluster, a [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
582
+ is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely
583
+ between each other.
584
+
585
+ The ID for that security group can be retrieved after creating the cluster.
586
+
587
+ ```python
588
+ # cluster: eks.Cluster
589
+
590
+ cluster_security_group_id = cluster.cluster_security_group_id
591
+ ```
592
+
593
+ ## Applying Kubernetes Resources
594
+
595
+ To apply Kubernetes resources, a kubectl provider needs to be created for the cluster. You can use `kubectlProviderOptions` to create the kubectl provider.
596
+
597
+ The library supports several popular resource deployment mechanisms, among which are:
598
+
599
+ ### Kubernetes Manifests
600
+
601
+ The `KubernetesManifest` construct or `cluster.addManifest` method can be used
602
+ to apply Kubernetes resource manifests to this cluster.
603
+
604
+ > When using `cluster.addManifest`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
605
+ > attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
606
+ > To avoid this, directly use `new KubernetesManifest` to create the manifest in the scope of the other stack.
607
+
608
+ The following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)
609
+ service on the cluster:
610
+
611
+ ```python
612
+ # cluster: eks.Cluster
613
+
614
+ app_label = {"app": "hello-kubernetes"}
615
+
616
+ deployment = {
617
+ "api_version": "apps/v1",
618
+ "kind": "Deployment",
619
+ "metadata": {"name": "hello-kubernetes"},
620
+ "spec": {
621
+ "replicas": 3,
622
+ "selector": {"match_labels": app_label},
623
+ "template": {
624
+ "metadata": {"labels": app_label},
625
+ "spec": {
626
+ "containers": [{
627
+ "name": "hello-kubernetes",
628
+ "image": "paulbouwer/hello-kubernetes:1.5",
629
+ "ports": [{"container_port": 8080}]
630
+ }
631
+ ]
632
+ }
633
+ }
634
+ }
635
+ }
636
+
637
+ service = {
638
+ "api_version": "v1",
639
+ "kind": "Service",
640
+ "metadata": {"name": "hello-kubernetes"},
641
+ "spec": {
642
+ "type": "LoadBalancer",
643
+ "ports": [{"port": 80, "target_port": 8080}],
644
+ "selector": app_label
645
+ }
646
+ }
647
+
648
+ # option 1: use a construct
649
+ eks.KubernetesManifest(self, "hello-kub",
650
+ cluster=cluster,
651
+ manifest=[deployment, service]
652
+ )
653
+
654
+ # or, option2: use `addManifest`
655
+ cluster.add_manifest("hello-kub", service, deployment)
656
+ ```
657
+
658
+ #### ALB Controller Integration
659
+
660
+ The `KubernetesManifest` construct can detect ingress resources inside your manifest and automatically add the necessary annotations
661
+ so they are picked up by the ALB Controller.
662
+
663
+ > See [Alb Controller](#alb-controller)
664
+
665
+ To that end, it offers the following properties (demonstrated in the sketch below):
666
+
667
+ * `ingressAlb` - Signal that the ingress detection should be done.
668
+ * `ingressAlbScheme` - Which ALB scheme should be applied. Defaults to `internal`.
669
+
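+ A sketch of marking a manifest for ingress detection (the manifest body and the `AlbScheme` value below are illustrative):
+
+ ```python
+ # cluster: eks.Cluster
+ # ingress: dict
+
+ eks.KubernetesManifest(self, "IngressWithAlb",
+     cluster=cluster,
+     manifest=[ingress],
+     # detect Ingress resources in the manifest and annotate them for the ALB Controller
+     ingress_alb=True,
+     ingress_alb_scheme=eks.AlbScheme.INTERNET_FACING
+ )
+ ```
+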
670
+ #### Adding resources from a URL
671
+
672
+ The following example will deploy a resource manifest hosted on a remote server:
673
+
674
+ ```text
675
+ // This example is only available in TypeScript
676
+
677
+ import * as yaml from 'js-yaml';
678
+ import * as request from 'sync-request';
679
+
680
+ declare const cluster: eks.Cluster;
681
+ const manifestUrl = 'https://url/of/manifest.yaml';
682
+ const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());
683
+ cluster.addManifest('my-resource', manifest);
684
+ ```
685
+
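+ A roughly equivalent sketch in Python, assuming the `PyYAML` package is available and the URL is reachable at synth time:
+
+ ```python
+ import urllib.request
+
+ import yaml
+
+ # cluster: eks.Cluster
+
+ manifest_url = "https://url/of/manifest.yaml"
+ with urllib.request.urlopen(manifest_url) as response:
+     body = response.read().decode("utf-8")
+
+ # one dict per YAML document in the file
+ documents = list(yaml.safe_load_all(body))
+ cluster.add_manifest("my-resource", *documents)
+ ```
+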
686
+ #### Dependencies
687
+
688
+ There are cases where Kubernetes resources must be deployed in a specific order.
689
+ For example, you cannot define a resource in a Kubernetes namespace before the
690
+ namespace was created.
691
+
692
+ You can represent dependencies between `KubernetesManifest`s using
693
+ `resource.node.addDependency()`:
694
+
695
+ ```python
696
+ # cluster: eks.Cluster
697
+
698
+ namespace = cluster.add_manifest("my-namespace", {
699
+ "api_version": "v1",
700
+ "kind": "Namespace",
701
+ "metadata": {"name": "my-app"}
702
+ })
703
+
704
+ service = cluster.add_manifest("my-service", {
705
+ "metadata": {
706
+ "name": "myservice",
707
+ "namespace": "my-app"
708
+ },
709
+ "spec": {}
710
+ })
711
+
712
+ service.node.add_dependency(namespace)
713
+ ```
714
+
715
+ **NOTE:** when a `KubernetesManifest` includes multiple resources (either passed directly
716
+ or via `cluster.addManifest('foo', r1, r2, r3, ...)`), these resources will be applied as a single manifest via `kubectl`
717
+ and will be applied sequentially (the standard behavior of `kubectl`).
718
+
719
+ ---
720
+
721
+
722
+ Kubernetes manifests are implemented as CloudFormation resources in the
723
+ CDK. This means that if the manifest is deleted from your code (or the stack is
724
+ deleted), the next `cdk deploy` will issue a `kubectl delete` command and the
725
+ Kubernetes resources in that manifest will be deleted.
726
+
727
+ #### Resource Pruning
728
+
729
+ When a resource is deleted from a Kubernetes manifest, the EKS module will
730
+ automatically delete it from the cluster by injecting a *prune label* into all
731
+ manifest resources. This label is then passed to [`kubectl apply --prune`](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label).
732
+
733
+ Pruning is enabled by default but can be disabled through the `prune` option
734
+ when a cluster is defined:
735
+
736
+ ```python
737
+ eks.Cluster(self, "MyCluster",
738
+ version=eks.KubernetesVersion.V1_31,
739
+ prune=False
740
+ )
741
+ ```
742
+
743
+ #### Manifests Validation
744
+
745
+ The `kubectl` CLI supports applying a manifest by skipping the validation.
746
+ This can be accomplished by setting the `skipValidation` flag to `true` in the `KubernetesManifest` props.
747
+
748
+ ```python
749
+ # cluster: eks.Cluster
750
+
751
+ eks.KubernetesManifest(self, "HelloAppWithoutValidation",
752
+ cluster=cluster,
753
+ manifest=[{"foo": "bar"}],
754
+ skip_validation=True
755
+ )
756
+ ```
757
+
758
+ ### Helm Charts
759
+
760
+ The `HelmChart` construct or `cluster.addHelmChart` method can be used
761
+ to add Kubernetes resources to this cluster using Helm.
762
+
763
+ > When using `cluster.addHelmChart`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
764
+ > attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
765
+ > To avoid this, directly use `new HelmChart` to create the chart in the scope of the other stack.
766
+
767
+ The following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/) to your cluster using Helm.
768
+
769
+ ```python
770
+ # cluster: eks.Cluster
771
+
772
+ # option 1: use a construct
773
+ eks.HelmChart(self, "NginxIngress",
774
+ cluster=cluster,
775
+ chart="nginx-ingress",
776
+ repository="https://helm.nginx.com/stable",
777
+ namespace="kube-system"
778
+ )
779
+
780
+ # or, option2: use `addHelmChart`
781
+ cluster.add_helm_chart("NginxIngress",
782
+ chart="nginx-ingress",
783
+ repository="https://helm.nginx.com/stable",
784
+ namespace="kube-system"
785
+ )
786
+ ```
787
+
788
+ Helm charts will be installed and updated using `helm upgrade --install`, where a few parameters
789
+ are passed down (such as `repo`, `values`, `version`, `namespace`, `wait`, `timeout`, etc.).
790
+ This means that if the chart is added to CDK with the same release name, it will try to update
791
+ the chart in the cluster.
792
+
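+ For example, a sketch that pins some of these parameters explicitly (the chart version and timeout below are placeholders):
+
+ ```python
+ from aws_cdk import Duration
+ # cluster: eks.Cluster
+
+ cluster.add_helm_chart("NginxIngressPinned",
+     chart="nginx-ingress",
+     repository="https://helm.nginx.com/stable",
+     namespace="kube-system",
+     release="nginx-ingress",
+     version="1.1.2",  # placeholder chart version
+     wait=True,  # wait until resources are ready before considering the deployment successful
+     timeout=Duration.minutes(15)
+ )
+ ```
+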
793
+ Additionally, the `chartAsset` property can be an `aws-s3-assets.Asset`. This allows the use of local, private helm charts.
794
+
795
+ ```python
796
+ import aws_cdk.aws_s3_assets as s3_assets
797
+
798
+ # cluster: eks.Cluster
799
+
800
+ chart_asset = s3_assets.Asset(self, "ChartAsset",
801
+ path="/path/to/asset"
802
+ )
803
+
804
+ cluster.add_helm_chart("test-chart",
805
+ chart_asset=chart_asset
806
+ )
807
+ ```
808
+
809
+ Nested values passed to the `values` parameter should be provided as a nested dictionary:
810
+
811
+ ```python
812
+ # cluster: eks.Cluster
813
+
814
+
815
+ cluster.add_helm_chart("ExternalSecretsOperator",
816
+ chart="external-secrets",
817
+ release="external-secrets",
818
+ repository="https://charts.external-secrets.io",
819
+ namespace="external-secrets",
820
+ values={
821
+ "install_cRDs": True,
822
+ "webhook": {
823
+ "port": 9443
824
+ }
825
+ }
826
+ )
827
+ ```
828
+
829
+ A Helm chart can come with Custom Resource Definitions (CRDs) that, by default, will be installed by Helm as well. In special cases where you need to skip the installation of CRDs, the `skipCrds` property can be used.
830
+
831
+ ```python
832
+ # cluster: eks.Cluster
833
+
834
+ # option 1: use a construct
835
+ eks.HelmChart(self, "NginxIngress",
836
+ cluster=cluster,
837
+ chart="nginx-ingress",
838
+ repository="https://helm.nginx.com/stable",
839
+ namespace="kube-system",
840
+ skip_crds=True
841
+ )
842
+ ```
843
+
844
+ ### OCI Charts
845
+
846
+ OCI charts are also supported.
847
+ Replace the `${VARS}` in the example below with appropriate values.
848
+
849
+ ```python
850
+ # cluster: eks.Cluster
851
+
852
+ # option 1: use a construct
853
+ eks.HelmChart(self, "MyOCIChart",
854
+ cluster=cluster,
855
+ chart="some-chart",
856
+ repository="oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}",
857
+ namespace="oci",
858
+ version="0.0.1"
859
+ )
860
+ ```
861
+
862
+ Helm charts are implemented as CloudFormation resources in CDK.
863
+ This means that if the chart is deleted from your code (or the stack is
864
+ deleted), the next `cdk deploy` will issue a `helm uninstall` command and the
865
+ Helm chart will be deleted.
866
+
867
+ When there is no `release` defined, a unique ID will be allocated for the release based
868
+ on the construct path.
869
+
870
+ By default, all Helm charts will be installed concurrently. In some cases, this
871
+ could cause race conditions where two Helm charts attempt to deploy the same
872
+ resource, or when Helm charts depend on each other. You can use
873
+ `chart.node.addDependency()` in order to declare a dependency order between
874
+ charts:
875
+
876
+ ```python
877
+ # cluster: eks.Cluster
878
+
879
+ chart1 = cluster.add_helm_chart("MyChart",
880
+ chart="foo"
881
+ )
882
+ chart2 = cluster.add_helm_chart("MyChart2",
883
+ chart="bar"
884
+ )
885
+
886
+ chart2.node.add_dependency(chart1)
887
+ ```
888
+
889
+ #### Custom CDK8s Constructs
890
+
891
+ You can also compose a few stock `cdk8s+` constructs into your own custom construct. However, since mixing scopes between `aws-cdk` and `cdk8s` is currently not supported, the `Construct` class
892
+ you'll need to use is the one from the [`constructs`](https://github.com/aws/constructs) module, and not from `aws-cdk-lib` like you normally would.
893
+ This is why we used `cdk8s.App()` as the scope of the chart in the example below.
894
+
895
+ ```python
896
+ import constructs as constructs
897
+ import cdk8s as cdk8s
898
+ import cdk8s_plus_25 as kplus
899
+
900
+
901
+ app = cdk8s.App()
902
+ chart = cdk8s.Chart(app, "my-chart")
903
+
904
+ class LoadBalancedWebService(constructs.Construct):
905
+     def __init__(self, scope, id, props):
906
+         super().__init__(scope, id)
907
+
908
+         deployment = kplus.Deployment(chart, "Deployment",
909
+             replicas=props.replicas,
910
+             containers=[kplus.Container(image=props.image)]
911
+         )
912
+
913
+         deployment.expose_via_service(
914
+             ports=[kplus.ServicePort(port=props.port)
915
+             ],
916
+             service_type=kplus.ServiceType.LOAD_BALANCER
917
+         )
918
+ ```
919
+
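+ A hypothetical usage of the construct above, with a simple props holder (both names are illustrative and not part of any library):
+
+ ```python
+ # plain props holder for the sketch above; any object with these attributes works
+ class WebServiceProps:
+     def __init__(self, replicas, image, port):
+         self.replicas = replicas
+         self.image = image
+         self.port = port
+
+ # instantiate the custom construct inside the cdk8s chart defined above
+ LoadBalancedWebService(chart, "WebService",
+     WebServiceProps(replicas=2, image="paulbouwer/hello-kubernetes:1.5", port=8080)
+ )
+ ```
+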
920
+ #### Manually importing k8s specs and CRDs
921
+
922
+ If you find yourself unable to use `cdk8s+`, or just want to use the `k8s` native objects or CRDs directly, you can do so by manually importing them using the `cdk8s-cli`.
923
+
924
+ See [Importing kubernetes objects](https://cdk8s.io/docs/latest/cli/import/) for detailed instructions.
925
+
926
+ ## Patching Kubernetes Resources
927
+
928
+ The `KubernetesPatch` construct can be used to update existing kubernetes
929
+ resources. The following example can be used to patch the `hello-kubernetes`
930
+ deployment from the example above with 5 replicas.
931
+
932
+ ```python
933
+ # cluster: eks.Cluster
934
+
935
+ eks.KubernetesPatch(self, "hello-kub-deployment-label",
936
+ cluster=cluster,
937
+ resource_name="deployment/hello-kubernetes",
938
+ apply_patch={"spec": {"replicas": 5}},
939
+ restore_patch={"spec": {"replicas": 3}}
940
+ )
941
+ ```
942
+
943
+ ## Querying Kubernetes Resources
944
+
945
+ The `KubernetesObjectValue` construct can be used to query for information about kubernetes objects,
946
+ and use that as part of your CDK application.
947
+
948
+ For example, you can fetch the address of a [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) type service:
949
+
950
+ ```python
951
+ # cluster: eks.Cluster
952
+
953
+ # query the load balancer address
954
+ my_service_address = eks.KubernetesObjectValue(self, "LoadBalancerAttribute",
955
+ cluster=cluster,
956
+ object_type="service",
957
+ object_name="my-service",
958
+ json_path=".status.loadBalancer.ingress[0].hostname"
959
+ )
960
+
961
+ # pass the address to a lambda function
962
+ proxy_function = lambda_.Function(self, "ProxyFunction",
963
+ handler="index.handler",
964
+ code=lambda_.Code.from_inline("my-code"),
965
+ runtime=lambda_.Runtime.NODEJS_LATEST,
966
+ environment={
967
+ "my_service_address": my_service_address.value
968
+ }
969
+ )
970
+ ```
971
+
972
+ Specifically, since the above use-case is quite common, there is an easier way to access that information:
973
+
974
+ ```python
975
+ # cluster: eks.Cluster
976
+
977
+ load_balancer_address = cluster.get_service_load_balancer_address("my-service")
978
+ ```
979
+
980
+ ## Add-ons
981
+
982
+ An [add-on](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) is software that provides supporting operational capabilities to Kubernetes applications. The EKS module supports adding add-ons to your cluster using the `eks.Addon` class.
983
+
984
+ ```python
985
+ # cluster: eks.Cluster
986
+
987
+
988
+ eks.Addon(self, "Addon",
989
+ cluster=cluster,
990
+ addon_name="aws-guardduty-agent",
991
+ addon_version="v1.6.1",
992
+ # when false, the add-on is removed from the cluster when this resource is deleted; when true, it is preserved but Amazon EKS stops managing its settings
993
+ preserve_on_delete=False
994
+ )
995
+ ```
996
+
997
+ ## Using existing clusters
998
+
999
+ The EKS library allows defining Kubernetes resources such as [Kubernetes
1000
+ manifests](#kubernetes-manifests) and [Helm charts](#helm-charts) on clusters
1001
+ that are not defined as part of your CDK app.
1002
+
1003
+ First, you will need to import the kubectl provider and cluster created in another stack:
1004
+
1005
+ ```python
1006
+ handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
1007
+
1008
+ kubectl_provider = eks.KubectlProvider.from_kubectl_provider_attributes(self, "KubectlProvider",
1009
+ service_token="arn:aws:lambda:us-east-2:123456789012:function:my-function:1",
1010
+ role=handler_role
1011
+ )
1012
+
1013
+ cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
1014
+ cluster_name="cluster",
1015
+ kubectl_provider=kubectl_provider
1016
+ )
1017
+ ```
1018
+
1019
+ Then, you can use `addManifest` or `addHelmChart` to define resources inside
1020
+ your Kubernetes cluster.
1021
+
1022
+ ```python
1023
+ # cluster: eks.Cluster
1024
+
1025
+ cluster.add_manifest("Test", {
1026
+ "api_version": "v1",
1027
+ "kind": "ConfigMap",
1028
+ "metadata": {
1029
+ "name": "myconfigmap"
1030
+ },
1031
+ "data": {
1032
+ "Key": "value",
1033
+ "Another": "123454"
1034
+ }
1035
+ })
1036
+ ```
1037
+
1038
+ ## Logging
1039
+
1040
+ EKS supports cluster logging for 5 different types of events:
1041
+
1042
+ * API requests to the cluster.
1043
+ * Cluster access via the Kubernetes API.
1044
+ * Authentication requests into the cluster.
1045
+ * State of cluster controllers.
1046
+ * Scheduling decisions.
1047
+
1048
+ You can enable logging for each one separately using the `clusterLogging`
1049
+ property. For example:
1050
+
1051
+ ```python
1052
+ cluster = eks.Cluster(self, "Cluster",
1053
+ # ...
1054
+ version=eks.KubernetesVersion.V1_31,
1055
+ cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER
1056
+ ]
1057
+ )
1058
+ ```