aws-cdk-lib 2.192.0__py3-none-any.whl → 2.194.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of aws-cdk-lib might be problematic.

Files changed (39)
  1. aws_cdk/_jsii/__init__.py +1 -1
  2. aws_cdk/_jsii/{aws-cdk-lib@2.192.0.jsii.tgz → aws-cdk-lib@2.194.0.jsii.tgz} +0 -0
  3. aws_cdk/aws_apigateway/__init__.py +3 -1
  4. aws_cdk/aws_appsync/__init__.py +4367 -336
  5. aws_cdk/aws_aps/__init__.py +34 -22
  6. aws_cdk/aws_autoscaling/__init__.py +8 -0
  7. aws_cdk/aws_batch/__init__.py +2 -2
  8. aws_cdk/aws_bedrock/__init__.py +6 -4
  9. aws_cdk/aws_ce/__init__.py +34 -22
  10. aws_cdk/aws_cloudfront/__init__.py +3019 -971
  11. aws_cdk/aws_codebuild/__init__.py +19 -10
  12. aws_cdk/aws_codepipeline_actions/__init__.py +526 -0
  13. aws_cdk/aws_dlm/__init__.py +2 -2
  14. aws_cdk/aws_ec2/__init__.py +6 -3
  15. aws_cdk/aws_ecr/__init__.py +417 -0
  16. aws_cdk/aws_ecs/__init__.py +18 -10
  17. aws_cdk/aws_eks/__init__.py +170 -2
  18. aws_cdk/aws_entityresolution/__init__.py +7 -2
  19. aws_cdk/aws_events/__init__.py +41 -8
  20. aws_cdk/aws_lambda/__init__.py +1 -1
  21. aws_cdk/aws_mediapackagev2/__init__.py +50 -6
  22. aws_cdk/aws_memorydb/__init__.py +21 -11
  23. aws_cdk/aws_omics/__init__.py +5 -5
  24. aws_cdk/aws_opensearchservice/__init__.py +45 -25
  25. aws_cdk/aws_qbusiness/__init__.py +2 -2
  26. aws_cdk/aws_quicksight/__init__.py +1 -1
  27. aws_cdk/aws_rds/__init__.py +46 -2
  28. aws_cdk/aws_redshiftserverless/__init__.py +20 -0
  29. aws_cdk/aws_route53resolver/__init__.py +41 -0
  30. aws_cdk/aws_s3/__init__.py +2 -4
  31. aws_cdk/aws_sagemaker/__init__.py +2 -4
  32. aws_cdk/aws_vpclattice/__init__.py +6 -2
  33. aws_cdk/aws_wisdom/__init__.py +25 -6
  34. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/METADATA +1 -1
  35. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/RECORD +39 -39
  36. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/LICENSE +0 -0
  37. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/NOTICE +0 -0
  38. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/WHEEL +0 -0
  39. {aws_cdk_lib-2.192.0.dist-info → aws_cdk_lib-2.194.0.dist-info}/top_level.txt +0 -0
@@ -10053,7 +10053,7 @@ class CfnService(
  - For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
  :param log_driver: The log driver to use for the container. For tasks on AWS Fargate , the supported log drivers are ``awslogs`` , ``splunk`` , and ``awsfirelens`` . For tasks hosted on Amazon EC2 instances, the supported log drivers are ``awslogs`` , ``fluentd`` , ``gelf`` , ``json-file`` , ``journald`` , ``syslog`` , ``splunk`` , and ``awsfirelens`` . For more information about using the ``awslogs`` log driver, see `Send Amazon ECS logs to CloudWatch <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html>`_ in the *Amazon Elastic Container Service Developer Guide* . For more information about using the ``awsfirelens`` log driver, see `Send Amazon ECS logs to an AWS service or AWS Partner <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html>`_ . .. epigraph:: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's `available on GitHub <https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent>`_ and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- :param options: The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following: - **awslogs-create-group** - Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to ``false`` . .. epigraph:: Your IAM policy must include the ``logs:CreateLogGroup`` permission before you attempt to use ``awslogs-create-group`` . - **awslogs-region** - Required: Yes Specify the AWS Region that the ``awslogs`` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. - **awslogs-group** - Required: Yes Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to. - **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type. Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` . If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. - **awslogs-datetime-format** - Required: No This option defines a multiline start pattern in Python ``strftime`` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see `awslogs-datetime-format <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format>`_ . You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **awslogs-multiline-pattern** - Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. 
The matched line is the delimiter between log messages. For more information, see `awslogs-multiline-pattern <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern>`_ . This option is ignored if ``awslogs-datetime-format`` is also configured. You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **mode** - Required: No Valid values: ``non-blocking`` | ``blocking`` This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted. If you use the ``blocking`` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ . - **max-buffer-size** - Required: No Default value: ``1m`` When ``non-blocking`` mode is used, the ``max-buffer-size`` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url`` . When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream`` . When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream`` . When you export logs to Amazon OpenSearch Service, you can specify options like ``Name`` , ``Host`` (OpenSearch Service endpoint without protocol), ``Port`` , ``Index`` , ``Type`` , ``Aws_auth`` , ``Aws_region`` , ``Suppress_Type_Name`` , and ``tls`` . For more information, see `Under the hood: FireLens for Amazon ECS Tasks <https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/>`_ . 
When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region`` , ``total_file_size`` , ``upload_timeout`` , and ``use_put_object`` as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``
+ :param options: The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following: - **awslogs-create-group** - Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to ``false`` . .. epigraph:: Your IAM policy must include the ``logs:CreateLogGroup`` permission before you attempt to use ``awslogs-create-group`` . - **awslogs-region** - Required: Yes Specify the AWS Region that the ``awslogs`` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. - **awslogs-group** - Required: Yes Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to. - **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2. Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` . If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. - **awslogs-datetime-format** - Required: No This option defines a multiline start pattern in Python ``strftime`` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see `awslogs-datetime-format <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format>`_ . You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **awslogs-multiline-pattern** - Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. 
For more information, see `awslogs-multiline-pattern <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern>`_ . This option is ignored if ``awslogs-datetime-format`` is also configured. You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. - **mode** - Required: No Valid values: ``non-blocking`` | ``blocking`` This option defines the delivery mode of log messages from the container to the log driver specified using ``logDriver`` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted. If you use the ``blocking`` mode and the flow of logs is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ . You can set a default ``mode`` for all containers in a specific AWS Region by using the ``defaultLogDriverMode`` account setting. If you don't specify the ``mode`` option or configure the account setting, Amazon ECS will default to the ``blocking`` mode. For more information about the account setting, see `Default log driver mode <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode>`_ in the *Amazon Elastic Container Service Developer Guide* . - **max-buffer-size** - Required: No Default value: ``1m`` When ``non-blocking`` mode is used, the ``max-buffer-size`` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url`` . When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream`` . 
When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream`` . When you export logs to Amazon OpenSearch Service, you can specify options like ``Name`` , ``Host`` (OpenSearch Service endpoint without protocol), ``Port`` , ``Index`` , ``Type`` , ``Aws_auth`` , ``Aws_region`` , ``Suppress_Type_Name`` , and ``tls`` . For more information, see `Under the hood: FireLens for Amazon ECS Tasks <https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/>`_ . When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region`` , ``total_file_size`` , ``upload_timeout`` , and ``use_put_object`` as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``
  :param secret_options: The secrets to pass to the log configuration. For more information, see `Specifying sensitive data <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html>`_ in the *Amazon Elastic Container Service Developer Guide* .
  :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-logconfiguration.html
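For orientation, the ``options`` map described above is passed as plain strings on the L1 construct. A minimal, hypothetical sketch (the log group name, Region, and prefix are placeholders, not values taken from this release):

```python
from aws_cdk import aws_ecs as ecs

# Minimal sketch of an awslogs configuration for the L1 CfnService construct.
# All option values are strings; the group, Region, and prefix are placeholders.
log_configuration = ecs.CfnService.LogConfigurationProperty(
    log_driver="awslogs",
    options={
        "awslogs-group": "/ecs/my-service",
        "awslogs-create-group": "true",         # requires the logs:CreateLogGroup permission (see note above)
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",  # required for tasks on Fargate
    },
)
```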
@@ -10132,7 +10132,7 @@ class CfnService(
  Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to.
- - **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.
+ - **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.
  Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` .
@@ -10168,15 +10168,19 @@ class CfnService(
  Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
+ The following options apply to all supported log drivers.
+
  - **mode** - Required: No
  Valid values: ``non-blocking`` | ``blocking``
- This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
+ This option defines the delivery mode of log messages from the container to the log driver specified using ``logDriver`` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
+
+ If you use the ``blocking`` mode and the flow of logs is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
- If you use the ``blocking`` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
+ If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ .
- If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ .
+ You can set a default ``mode`` for all containers in a specific AWS Region by using the ``defaultLogDriverMode`` account setting. If you don't specify the ``mode`` option or configure the account setting, Amazon ECS will default to the ``blocking`` mode. For more information about the account setting, see `Default log driver mode <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode>`_ in the *Amazon Elastic Container Service Developer Guide* .
  - **max-buffer-size** - Required: No
 
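The same ``mode`` and ``max-buffer-size`` knobs are also surfaced by the higher-level L2 log driver. A sketch, assuming an aws-cdk-lib version that exposes ``max_buffer_size`` on ``AwsLogDriver`` (the prefix and buffer size are arbitrary):

```python
from aws_cdk import Size
from aws_cdk import aws_ecs as ecs

# Sketch: non-blocking delivery with a bounded in-memory buffer, so a stalled
# log endpoint cannot block writes to stdout/stderr (at the cost of possible log loss).
log_driver = ecs.AwsLogDriver(
    stream_prefix="my-service",
    mode=ecs.AwsLogDriverMode.NON_BLOCKING,
    max_buffer_size=Size.mebibytes(25),
)
```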
@@ -15963,7 +15967,7 @@ class CfnTaskDefinition(
  '''The ``LogConfiguration`` property specifies log configuration options to send to a custom log driver for the container.
  :param log_driver: The log driver to use for the container. For tasks on AWS Fargate , the supported log drivers are ``awslogs`` , ``splunk`` , and ``awsfirelens`` . For tasks hosted on Amazon EC2 instances, the supported log drivers are ``awslogs`` , ``fluentd`` , ``gelf`` , ``json-file`` , ``journald`` , ``syslog`` , ``splunk`` , and ``awsfirelens`` . For more information about using the ``awslogs`` log driver, see `Send Amazon ECS logs to CloudWatch <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html>`_ in the *Amazon Elastic Container Service Developer Guide* . For more information about using the ``awsfirelens`` log driver, see `Send Amazon ECS logs to an AWS service or AWS Partner <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html>`_ . .. epigraph:: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's `available on GitHub <https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent>`_ and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- :param options: The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following: - **awslogs-create-group** - Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to ``false`` . .. epigraph:: Your IAM policy must include the ``logs:CreateLogGroup`` permission before you attempt to use ``awslogs-create-group`` . - **awslogs-region** - Required: Yes Specify the AWS Region that the ``awslogs`` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. - **awslogs-group** - Required: Yes Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to. - **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type. Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` . If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. - **awslogs-datetime-format** - Required: No This option defines a multiline start pattern in Python ``strftime`` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see `awslogs-datetime-format <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format>`_ . You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **awslogs-multiline-pattern** - Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. 
The matched line is the delimiter between log messages. For more information, see `awslogs-multiline-pattern <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern>`_ . This option is ignored if ``awslogs-datetime-format`` is also configured. You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **mode** - Required: No Valid values: ``non-blocking`` | ``blocking`` This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted. If you use the ``blocking`` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ . - **max-buffer-size** - Required: No Default value: ``1m`` When ``non-blocking`` mode is used, the ``max-buffer-size`` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url`` . When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream`` . When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream`` . When you export logs to Amazon OpenSearch Service, you can specify options like ``Name`` , ``Host`` (OpenSearch Service endpoint without protocol), ``Port`` , ``Index`` , ``Type`` , ``Aws_auth`` , ``Aws_region`` , ``Suppress_Type_Name`` , and ``tls`` . For more information, see `Under the hood: FireLens for Amazon ECS Tasks <https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/>`_ . 
When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region`` , ``total_file_size`` , ``upload_timeout`` , and ``use_put_object`` as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``
+ :param options: The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following: - **awslogs-create-group** - Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to ``false`` . .. epigraph:: Your IAM policy must include the ``logs:CreateLogGroup`` permission before you attempt to use ``awslogs-create-group`` . - **awslogs-region** - Required: Yes Specify the AWS Region that the ``awslogs`` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. - **awslogs-group** - Required: Yes Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to. - **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2. Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` . If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. - **awslogs-datetime-format** - Required: No This option defines a multiline start pattern in Python ``strftime`` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see `awslogs-datetime-format <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format>`_ . You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. - **awslogs-multiline-pattern** - Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. 
For more information, see `awslogs-multiline-pattern <https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern>`_ . This option is ignored if ``awslogs-datetime-format`` is also configured. You cannot configure both the ``awslogs-datetime-format`` and ``awslogs-multiline-pattern`` options. .. epigraph:: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. - **mode** - Required: No Valid values: ``non-blocking`` | ``blocking`` This option defines the delivery mode of log messages from the container to the log driver specified using ``logDriver`` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted. If you use the ``blocking`` mode and the flow of logs is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ . You can set a default ``mode`` for all containers in a specific AWS Region by using the ``defaultLogDriverMode`` account setting. If you don't specify the ``mode`` option or configure the account setting, Amazon ECS will default to the ``blocking`` mode. For more information about the account setting, see `Default log driver mode <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode>`_ in the *Amazon Elastic Container Service Developer Guide* . - **max-buffer-size** - Required: No Default value: ``1m`` When ``non-blocking`` mode is used, the ``max-buffer-size`` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url`` . When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream`` . 
When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream`` . When you export logs to Amazon OpenSearch Service, you can specify options like ``Name`` , ``Host`` (OpenSearch Service endpoint without protocol), ``Port`` , ``Index`` , ``Type`` , ``Aws_auth`` , ``Aws_region`` , ``Suppress_Type_Name`` , and ``tls`` . For more information, see `Under the hood: FireLens for Amazon ECS Tasks <https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/>`_ . When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region`` , ``total_file_size`` , ``upload_timeout`` , and ``use_put_object`` as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``
  :param secret_options: The secrets to pass to the log configuration. For more information, see `Specifying sensitive data <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html>`_ in the *Amazon Elastic Container Service Developer Guide* .
  :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-logconfiguration.html
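As an illustration of ``secret_options`` alongside ``options``, here is a hypothetical ``splunk`` configuration at the task-definition level; the HEC URL and secret ARN are placeholders:

```python
from aws_cdk import aws_ecs as ecs

# Sketch: route container logs with the splunk driver and pass the token via
# secret_options rather than a plain-text option. The URL and ARN are placeholders.
log_configuration = ecs.CfnTaskDefinition.LogConfigurationProperty(
    log_driver="splunk",
    options={"splunk-url": "https://splunk.example.com:8088"},
    secret_options=[
        ecs.CfnTaskDefinition.SecretProperty(
            name="splunk-token",
            value_from="arn:aws:secretsmanager:us-east-1:111122223333:secret:splunk-token",
        )
    ],
)
```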
@@ -16045,7 +16049,7 @@ class CfnTaskDefinition(
  Make sure to specify a log group that the ``awslogs`` log driver sends its log streams to.
- - **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.
+ - **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.
  Use the ``awslogs-stream-prefix`` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format ``prefix-name/container-name/ecs-task-id`` .
@@ -16081,15 +16085,19 @@ class CfnTaskDefinition(
  Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
+ The following options apply to all supported log drivers.
+
  - **mode** - Required: No
  Valid values: ``non-blocking`` | ``blocking``
- This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
+ This option defines the delivery mode of log messages from the container to the log driver specified using ``logDriver`` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
+
+ If you use the ``blocking`` mode and the flow of logs is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
- If you use the ``blocking`` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the ``stdout`` and ``stderr`` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
+ If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ .
- If you use the ``non-blocking`` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the ``max-buffer-size`` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see `Preventing log loss with non-blocking mode in the ``awslogs`` container log driver <https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/>`_ .
+ You can set a default ``mode`` for all containers in a specific AWS Region by using the ``defaultLogDriverMode`` account setting. If you don't specify the ``mode`` option or configure the account setting, Amazon ECS will default to the ``blocking`` mode. For more information about the account setting, see `Default log driver mode <https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode>`_ in the *Amazon Elastic Container Service Developer Guide* .
  - **max-buffer-size** - Required: No
@@ -721,6 +721,24 @@ eks.Cluster(self, "HelloEKS",
  )
  ```
 
+ To provide additional Helm chart values supported by `albController` in CDK, use the `additionalHelmChartValues` property. For example, the following code snippet shows how to set the `enableWafV2` flag:
+
+ ```python
+ from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+
+
+ eks.Cluster(self, "HelloEKS",
+     version=eks.KubernetesVersion.V1_32,
+     alb_controller=eks.AlbControllerOptions(
+         version=eks.AlbControllerVersion.V2_8_2,
+         additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
+             enable_wafv2=False
+         )
+     ),
+     kubectl_layer=KubectlV32Layer(self, "kubectl")
+ )
+ ```
+
  The `albController` requires `defaultCapacity` or at least one nodegroup. If there's no `defaultCapacity` or available
  nodegroup for the cluster, the `albController` deployment would fail.
 
@@ -2917,6 +2935,10 @@ class AlbController(
  version=alb_controller_version,

  # the properties below are optional
+ additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
+ enable_waf=False,
+ enable_wafv2=False
+ ),
  policy=policy,
  repository="repository"
  )
@@ -2929,6 +2951,7 @@ class AlbController(
  *,
  cluster: "Cluster",
  version: "AlbControllerVersion",
+ additional_helm_chart_values: typing.Optional[typing.Union["AlbControllerHelmChartOptions", typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> None:
@@ -2937,6 +2960,7 @@ class AlbController(
  :param id: -
  :param cluster: [disable-awslint:ref-via-interface] Cluster to install the controller onto.
  :param version: Version of the controller.
+ :param additional_helm_chart_values: Additional helm chart values for ALB controller. Default: - no additional helm chart values
  :param policy: The IAM policy to apply to the service account. If you're using one of the built-in versions, this is not required since CDK ships with the appropriate policies for those versions. However, if you are using a custom version, this is required (and validated). Default: - Corresponds to the predefined version.
  :param repository: The repository to pull the controller image from. Note that the default repository works for most regions, but not all. If the repository is not applicable to your region, use a custom repository according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases. Default: '602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller'
  '''
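Alongside the cluster-level README example above, the new parameter can also be wired when the controller is instantiated directly. A sketch, assuming an existing ``cluster`` object and a surrounding stack providing ``self``:

```python
from aws_cdk import aws_eks as eks

# Sketch: install the controller on an existing cluster and pass the new
# additional_helm_chart_values option; `self` and `cluster` come from the surrounding stack.
eks.AlbController(self, "AlbController",
    cluster=cluster,
    version=eks.AlbControllerVersion.V2_8_2,
    additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
        enable_waf=False,
        enable_wafv2=False,
    ),
)
```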
@@ -2945,7 +2969,11 @@ class AlbController(
  check_type(argname="argument scope", value=scope, expected_type=type_hints["scope"])
  check_type(argname="argument id", value=id, expected_type=type_hints["id"])
  props = AlbControllerProps(
- cluster=cluster, version=version, policy=policy, repository=repository
+ cluster=cluster,
+ version=version,
+ additional_helm_chart_values=additional_helm_chart_values,
+ policy=policy,
+ repository=repository,
  )

  jsii.create(self.__class__, self, [scope, id, props])
@@ -2958,6 +2986,7 @@ class AlbController(
  *,
  cluster: "Cluster",
  version: "AlbControllerVersion",
+ additional_helm_chart_values: typing.Optional[typing.Union["AlbControllerHelmChartOptions", typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> "AlbController":
@@ -2968,6 +2997,7 @@ class AlbController(
  :param scope: -
  :param cluster: [disable-awslint:ref-via-interface] Cluster to install the controller onto.
  :param version: Version of the controller.
+ :param additional_helm_chart_values: Additional helm chart values for ALB controller. Default: - no additional helm chart values
  :param policy: The IAM policy to apply to the service account. If you're using one of the built-in versions, this is not required since CDK ships with the appropriate policies for those versions. However, if you are using a custom version, this is required (and validated). Default: - Corresponds to the predefined version.
  :param repository: The repository to pull the controller image from. Note that the default repository works for most regions, but not all. If the repository is not applicable to your region, use a custom repository according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases. Default: '602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller'
  '''
@@ -2975,17 +3005,97 @@ class AlbController(
  type_hints = typing.get_type_hints(_typecheckingstub__1b3813db11381f0166360b7dc6066bdeadc4a52043da6eba56f9a55a4bfd6157)
  check_type(argname="argument scope", value=scope, expected_type=type_hints["scope"])
  props = AlbControllerProps(
- cluster=cluster, version=version, policy=policy, repository=repository
+ cluster=cluster,
+ version=version,
+ additional_helm_chart_values=additional_helm_chart_values,
+ policy=policy,
+ repository=repository,
  )

  return typing.cast("AlbController", jsii.sinvoke(cls, "create", [scope, props]))


+ @jsii.data_type(
+ jsii_type="aws-cdk-lib.aws_eks.AlbControllerHelmChartOptions",
+ jsii_struct_bases=[],
+ name_mapping={"enable_waf": "enableWaf", "enable_wafv2": "enableWafv2"},
+ )
+ class AlbControllerHelmChartOptions:
+ def __init__(
+ self,
+ *,
+ enable_waf: typing.Optional[builtins.bool] = None,
+ enable_wafv2: typing.Optional[builtins.bool] = None,
+ ) -> None:
+ '''Helm chart options that can be set for AlbControllerChart To add any new supported values refer https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/helm/aws-load-balancer-controller/values.yaml.
+
+ :param enable_waf: Enable or disable AWS WAF on the ALB ingress controller. Default: - no value defined for this helm chart option, so it will not be set in the helm chart values
+ :param enable_wafv2: Enable or disable AWS WAFv2 on the ALB ingress controller. Default: - no value defined for this helm chart option, so it will not be set in the helm chart values
+
+ :exampleMetadata: infused
+
+ Example::
+
+ from aws_cdk.lambda_layer_kubectl_v32 import KubectlV32Layer
+
+
+ eks.Cluster(self, "HelloEKS",
+ version=eks.KubernetesVersion.V1_32,
+ alb_controller=eks.AlbControllerOptions(
+ version=eks.AlbControllerVersion.V2_8_2,
+ additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
+ enable_wafv2=False
+ )
+ ),
+ kubectl_layer=KubectlV32Layer(self, "kubectl")
+ )
+ '''
+ if __debug__:
+ type_hints = typing.get_type_hints(_typecheckingstub__281499b199c1a76de8c09b4fa8c74547b8a256e9ceb223d10f672ae9e7a452d1)
+ check_type(argname="argument enable_waf", value=enable_waf, expected_type=type_hints["enable_waf"])
+ check_type(argname="argument enable_wafv2", value=enable_wafv2, expected_type=type_hints["enable_wafv2"])
+ self._values: typing.Dict[builtins.str, typing.Any] = {}
+ if enable_waf is not None:
+ self._values["enable_waf"] = enable_waf
+ if enable_wafv2 is not None:
+ self._values["enable_wafv2"] = enable_wafv2
+
+ @builtins.property
+ def enable_waf(self) -> typing.Optional[builtins.bool]:
+ '''Enable or disable AWS WAF on the ALB ingress controller.
+
+ :default: - no value defined for this helm chart option, so it will not be set in the helm chart values
+ '''
+ result = self._values.get("enable_waf")
+ return typing.cast(typing.Optional[builtins.bool], result)
+
+ @builtins.property
+ def enable_wafv2(self) -> typing.Optional[builtins.bool]:
+ '''Enable or disable AWS WAFv2 on the ALB ingress controller.
+
+ :default: - no value defined for this helm chart option, so it will not be set in the helm chart values
+ '''
+ result = self._values.get("enable_wafv2")
+ return typing.cast(typing.Optional[builtins.bool], result)
+
+ def __eq__(self, rhs: typing.Any) -> builtins.bool:
+ return isinstance(rhs, self.__class__) and rhs._values == self._values
+
+ def __ne__(self, rhs: typing.Any) -> builtins.bool:
+ return not (rhs == self)
+
+ def __repr__(self) -> str:
+ return "AlbControllerHelmChartOptions(%s)" % ", ".join(
+ k + "=" + repr(v) for k, v in self._values.items()
+ )
+
+
  @jsii.data_type(
  jsii_type="aws-cdk-lib.aws_eks.AlbControllerOptions",
  jsii_struct_bases=[],
  name_mapping={
  "version": "version",
+ "additional_helm_chart_values": "additionalHelmChartValues",
  "policy": "policy",
  "repository": "repository",
  },
@@ -2995,12 +3105,14 @@ class AlbControllerOptions:
  self,
  *,
  version: "AlbControllerVersion",
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> None:
  '''Options for ``AlbController``.

  :param version: Version of the controller.
+ :param additional_helm_chart_values: Additional helm chart values for ALB controller. Default: - no additional helm chart values
  :param policy: The IAM policy to apply to the service account. If you're using one of the built-in versions, this is not required since CDK ships with the appropriate policies for those versions. However, if you are using a custom version, this is required (and validated). Default: - Corresponds to the predefined version.
  :param repository: The repository to pull the controller image from. Note that the default repository works for most regions, but not all. If the repository is not applicable to your region, use a custom repository according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases. Default: '602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller'

@@ -3019,14 +3131,19 @@ class AlbControllerOptions:
  kubectl_layer=KubectlV32Layer(self, "kubectl")
  )
  '''
+ if isinstance(additional_helm_chart_values, dict):
+ additional_helm_chart_values = AlbControllerHelmChartOptions(**additional_helm_chart_values)
  if __debug__:
  type_hints = typing.get_type_hints(_typecheckingstub__b22ec5f19b5d1b4d655cc304c12c33352da257e2109041355aa01fc993ec3ef9)
  check_type(argname="argument version", value=version, expected_type=type_hints["version"])
+ check_type(argname="argument additional_helm_chart_values", value=additional_helm_chart_values, expected_type=type_hints["additional_helm_chart_values"])
  check_type(argname="argument policy", value=policy, expected_type=type_hints["policy"])
  check_type(argname="argument repository", value=repository, expected_type=type_hints["repository"])
  self._values: typing.Dict[builtins.str, typing.Any] = {
  "version": version,
  }
+ if additional_helm_chart_values is not None:
+ self._values["additional_helm_chart_values"] = additional_helm_chart_values
  if policy is not None:
  self._values["policy"] = policy
  if repository is not None:
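For orientation, a hedged sketch of how the new ``additional_helm_chart_values`` option plugs into ``AlbControllerOptions``; the specific ``AlbControllerVersion`` member and the cluster wiring are assumptions, not part of this diff::

    from aws_cdk import aws_eks as eks

    alb_options = eks.AlbControllerOptions(
        version=eks.AlbControllerVersion.V2_6_2,  # placeholder built-in version
        additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
            enable_wafv2=True,
        ),
    )
    # Typically supplied when configuring the cluster's ALB controller, e.g.
    # eks.Cluster(self, "Cluster", ..., alb_controller=alb_options)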
@@ -3039,6 +3156,17 @@ class AlbControllerOptions:
  assert result is not None, "Required property 'version' is missing"
  return typing.cast("AlbControllerVersion", result)

+ @builtins.property
+ def additional_helm_chart_values(
+ self,
+ ) -> typing.Optional[AlbControllerHelmChartOptions]:
+ '''Additional helm chart values for ALB controller.
+
+ :default: - no additional helm chart values
+ '''
+ result = self._values.get("additional_helm_chart_values")
+ return typing.cast(typing.Optional[AlbControllerHelmChartOptions], result)
+
  @builtins.property
  def policy(self) -> typing.Any:
  '''The IAM policy to apply to the service account.
@@ -3083,6 +3211,7 @@ class AlbControllerOptions:
  jsii_struct_bases=[AlbControllerOptions],
  name_mapping={
  "version": "version",
+ "additional_helm_chart_values": "additionalHelmChartValues",
  "policy": "policy",
  "repository": "repository",
  "cluster": "cluster",
@@ -3093,6 +3222,7 @@ class AlbControllerProps(AlbControllerOptions):
  self,
  *,
  version: "AlbControllerVersion",
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  cluster: "Cluster",
@@ -3100,6 +3230,7 @@ class AlbControllerProps(AlbControllerOptions):
  '''Properties for ``AlbController``.

  :param version: Version of the controller.
+ :param additional_helm_chart_values: Additional helm chart values for ALB controller. Default: - no additional helm chart values
  :param policy: The IAM policy to apply to the service account. If you're using one of the built-in versions, this is not required since CDK ships with the appropriate policies for those versions. However, if you are using a custom version, this is required (and validated). Default: - Corresponds to the predefined version.
  :param repository: The repository to pull the controller image from. Note that the default repository works for most regions, but not all. If the repository is not applicable to your region, use a custom repository according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases. Default: '602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller'
  :param cluster: [disable-awslint:ref-via-interface] Cluster to install the controller onto.
@@ -3121,13 +3252,20 @@ class AlbControllerProps(AlbControllerOptions):
  version=alb_controller_version,

  # the properties below are optional
+ additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
+ enable_waf=False,
+ enable_wafv2=False
+ ),
  policy=policy,
  repository="repository"
  )
  '''
+ if isinstance(additional_helm_chart_values, dict):
+ additional_helm_chart_values = AlbControllerHelmChartOptions(**additional_helm_chart_values)
  if __debug__:
  type_hints = typing.get_type_hints(_typecheckingstub__9f52254abb63608be11e6e9e1ec6c94ebb428a9ab274e1bda653dd78d26cd509)
  check_type(argname="argument version", value=version, expected_type=type_hints["version"])
+ check_type(argname="argument additional_helm_chart_values", value=additional_helm_chart_values, expected_type=type_hints["additional_helm_chart_values"])
  check_type(argname="argument policy", value=policy, expected_type=type_hints["policy"])
  check_type(argname="argument repository", value=repository, expected_type=type_hints["repository"])
  check_type(argname="argument cluster", value=cluster, expected_type=type_hints["cluster"])
@@ -3135,6 +3273,8 @@ class AlbControllerProps(AlbControllerOptions):
  "version": version,
  "cluster": cluster,
  }
+ if additional_helm_chart_values is not None:
+ self._values["additional_helm_chart_values"] = additional_helm_chart_values
  if policy is not None:
  self._values["policy"] = policy
  if repository is not None:
@@ -3147,6 +3287,17 @@ class AlbControllerProps(AlbControllerOptions):
  assert result is not None, "Required property 'version' is missing"
  return typing.cast("AlbControllerVersion", result)

+ @builtins.property
+ def additional_helm_chart_values(
+ self,
+ ) -> typing.Optional[AlbControllerHelmChartOptions]:
+ '''Additional helm chart values for ALB controller.
+
+ :default: - no additional helm chart values
+ '''
+ result = self._values.get("additional_helm_chart_values")
+ return typing.cast(typing.Optional[AlbControllerHelmChartOptions], result)
+
  @builtins.property
  def policy(self) -> typing.Any:
  '''The IAM policy to apply to the service account.
@@ -19532,6 +19683,10 @@ class ClusterOptions(CommonClusterOptions):
  version=alb_controller_version,

  # the properties below are optional
+ additional_helm_chart_values=eks.AlbControllerHelmChartOptions(
+ enable_waf=False,
+ enable_wafv2=False
+ ),
  policy=policy,
  repository="repository"
  ),
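Since the generated ``__init__`` coerces a plain ``dict`` through ``AlbControllerHelmChartOptions(**...)`` (see the ``isinstance(..., dict)`` branch earlier in this diff), the dict form sketched below should be accepted as well; the version member is again a placeholder::

    from aws_cdk import aws_eks as eks

    # The dict keys mirror the struct's keyword arguments and are converted
    # to AlbControllerHelmChartOptions before type checking.
    opts_from_dict = eks.AlbControllerOptions(
        version=eks.AlbControllerVersion.V2_6_2,  # placeholder built-in version
        additional_helm_chart_values={"enable_waf": False, "enable_wafv2": False},
    )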
@@ -21452,6 +21607,7 @@ __all__ = [
  "AddonAttributes",
  "AddonProps",
  "AlbController",
+ "AlbControllerHelmChartOptions",
  "AlbControllerOptions",
  "AlbControllerProps",
  "AlbControllerVersion",
@@ -21621,6 +21777,7 @@ def _typecheckingstub__5e2ca421e3f17c3114d53057ba096ab3f90bd3b8ed6c2e0f75f61c88d
  *,
  cluster: Cluster,
  version: AlbControllerVersion,
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> None:
@@ -21632,15 +21789,25 @@ def _typecheckingstub__1b3813db11381f0166360b7dc6066bdeadc4a52043da6eba56f9a55a4
  *,
  cluster: Cluster,
  version: AlbControllerVersion,
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> None:
  """Type checking stubs"""
  pass

+ def _typecheckingstub__281499b199c1a76de8c09b4fa8c74547b8a256e9ceb223d10f672ae9e7a452d1(
+ *,
+ enable_waf: typing.Optional[builtins.bool] = None,
+ enable_wafv2: typing.Optional[builtins.bool] = None,
+ ) -> None:
+ """Type checking stubs"""
+ pass
+
  def _typecheckingstub__b22ec5f19b5d1b4d655cc304c12c33352da257e2109041355aa01fc993ec3ef9(
  *,
  version: AlbControllerVersion,
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  ) -> None:
@@ -21650,6 +21817,7 @@ def _typecheckingstub__b22ec5f19b5d1b4d655cc304c12c33352da257e2109041355aa01fc99
  def _typecheckingstub__9f52254abb63608be11e6e9e1ec6c94ebb428a9ab274e1bda653dd78d26cd509(
  *,
  version: AlbControllerVersion,
+ additional_helm_chart_values: typing.Optional[typing.Union[AlbControllerHelmChartOptions, typing.Dict[builtins.str, typing.Any]]] = None,
  policy: typing.Any = None,
  repository: typing.Optional[builtins.str] = None,
  cluster: Cluster,
@@ -4056,10 +4056,10 @@ class CfnSchemaMapping(
  ) -> None:
  '''A configuration object for defining input data fields in AWS Entity Resolution .

- The SchemaInputAttribute specifies how individual fields in your input data should be processed and matched.
+ The ``SchemaInputAttribute`` specifies how individual fields in your input data should be processed and matched.

  :param field_name: A string containing the field name.
- :param type: The type of the attribute, selected from a list of values. .. epigraph:: Normalization is only supported for ``NAME`` , ``ADDRESS`` , ``PHONE`` , and ``EMAIL_ADDRESS`` . If you want to normalize ``NAME_FIRST`` , ``NAME_MIDDLE`` , and ``NAME_LAST`` , you must group them by assigning them to the ``NAME`` ``groupName`` . If you want to normalize ``ADDRESS_STREET1`` , ``ADDRESS_STREET2`` , ``ADDRESS_STREET3`` , ``ADDRESS_CITY`` , ``ADDRESS_STATE`` , ``ADDRESS_COUNTRY`` , and ``ADDRESS_POSTALCODE`` , you must group them by assigning them to the ``ADDRESS`` ``groupName`` . If you want to normalize ``PHONE_NUMBER`` and ``PHONE_COUNTRYCODE`` , you must group them by assigning them to the ``PHONE`` ``groupName`` .
+ :param type: The type of the attribute, selected from a list of values. LiveRamp supports: ``NAME`` | ``NAME_FIRST`` | ``NAME_MIDDLE`` | ``NAME_LAST`` | ``ADDRESS`` | ``ADDRESS_STREET1`` | ``ADDRESS_STREET2`` | ``ADDRESS_STREET3`` | ``ADDRESS_CITY`` | ``ADDRESS_STATE`` | ``ADDRESS_COUNTRY`` | ``ADDRESS_POSTALCODE`` | ``PHONE`` | ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID`` | ``PROVIDER_ID`` TransUnion supports: ``NAME`` | ``NAME_FIRST`` | ``NAME_LAST`` | ``ADDRESS`` | ``ADDRESS_CITY`` | ``ADDRESS_STATE`` | ``ADDRESS_COUNTRY`` | ``ADDRESS_POSTALCODE`` | ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID`` | ``IPV4`` | ``IPV6`` | ``MAID`` Unified ID 2.0 supports: ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID`` .. epigraph:: Normalization is only supported for ``NAME`` , ``ADDRESS`` , ``PHONE`` , and ``EMAIL_ADDRESS`` . If you want to normalize ``NAME_FIRST`` , ``NAME_MIDDLE`` , and ``NAME_LAST`` , you must group them by assigning them to the ``NAME`` ``groupName`` . If you want to normalize ``ADDRESS_STREET1`` , ``ADDRESS_STREET2`` , ``ADDRESS_STREET3`` , ``ADDRESS_CITY`` , ``ADDRESS_STATE`` , ``ADDRESS_COUNTRY`` , and ``ADDRESS_POSTALCODE`` , you must group them by assigning them to the ``ADDRESS`` ``groupName`` . If you want to normalize ``PHONE_NUMBER`` and ``PHONE_COUNTRYCODE`` , you must group them by assigning them to the ``PHONE`` ``groupName`` .
  :param group_name: A string that instructs AWS Entity Resolution to combine several columns into a unified column with the identical attribute type. For example, when working with columns such as ``NAME_FIRST`` , ``NAME_MIDDLE`` , and ``NAME_LAST`` , assigning them a common ``groupName`` will prompt AWS Entity Resolution to concatenate them into a single value.
  :param hashed: Indicates if the column values are hashed in the schema input. If the value is set to ``TRUE`` , the column values are hashed. If the value is set to ``FALSE`` , the column values are cleartext.
  :param match_key: A key that allows grouping of multiple input attributes into a unified matching group. For example, consider a scenario where the source table contains various addresses, such as ``business_address`` and ``shipping_address`` . By assigning a ``matchKey`` called ``address`` to both attributes, AWS Entity Resolution will match records across these fields to create a consolidated matching group. If no ``matchKey`` is specified for a column, it won't be utilized for matching purposes but will still be included in the output table.
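To illustrate the grouping rule described in the ``type`` and ``group_name`` parameters above, a sketch using the generated ``CfnSchemaMapping.SchemaInputAttributeProperty`` struct (the field names and match key are illustrative, not from this diff)::

    from aws_cdk import aws_entityresolution as entityresolution

    # Split name columns share the NAME groupName so AWS Entity Resolution can
    # normalize and concatenate them into a single value, as the docstring notes.
    first = entityresolution.CfnSchemaMapping.SchemaInputAttributeProperty(
        field_name="first_name", type="NAME_FIRST", group_name="NAME", match_key="name",
    )
    last = entityresolution.CfnSchemaMapping.SchemaInputAttributeProperty(
        field_name="last_name", type="NAME_LAST", group_name="NAME", match_key="name",
    )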
@@ -4120,6 +4120,11 @@ class CfnSchemaMapping(
  def type(self) -> builtins.str:
  '''The type of the attribute, selected from a list of values.

+ LiveRamp supports: ``NAME`` | ``NAME_FIRST`` | ``NAME_MIDDLE`` | ``NAME_LAST`` | ``ADDRESS`` | ``ADDRESS_STREET1`` | ``ADDRESS_STREET2`` | ``ADDRESS_STREET3`` | ``ADDRESS_CITY`` | ``ADDRESS_STATE`` | ``ADDRESS_COUNTRY`` | ``ADDRESS_POSTALCODE`` | ``PHONE`` | ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID`` | ``PROVIDER_ID``
+
+ TransUnion supports: ``NAME`` | ``NAME_FIRST`` | ``NAME_LAST`` | ``ADDRESS`` | ``ADDRESS_CITY`` | ``ADDRESS_STATE`` | ``ADDRESS_COUNTRY`` | ``ADDRESS_POSTALCODE`` | ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID`` | ``IPV4`` | ``IPV6`` | ``MAID``
+
+ Unified ID 2.0 supports: ``PHONE_NUMBER`` | ``EMAIL_ADDRESS`` | ``UNIQUE_ID``
  .. epigraph::

  Normalization is only supported for ``NAME`` , ``ADDRESS`` , ``PHONE`` , and ``EMAIL_ADDRESS`` .