cdk-comprehend-s3olap 2.0.116 → 2.0.117

This diff represents the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
@@ -29,11 +29,11 @@ declare class ECS extends Service {
29
29
  */
30
30
  createCluster(callback?: (err: AWSError, data: ECS.Types.CreateClusterResponse) => void): Request<ECS.Types.CreateClusterResponse, AWSError>;
31
31
  /**
32
- * Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action. In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer. There are two service scheduler strategies available: REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. You can optionally specify a deployment configuration for your service. The deployment is initiated by changing properties. For example, the deployment might be initiated by the task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for minimumHealthyPercent is 100%. The default value for a daemon service for minimumHealthyPercent is 0%. If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment. Specifically, it represents it as a percentage of your desired number of tasks (rounded up to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. If they're in the RUNNING state, tasks for services that don't use a load balancer are considered healthy . If they're in the RUNNING state and reported as healthy by the load balancer, tasks for services that do use a load balancer are considered healthy . The default value for minimum healthy percent is 100%. 
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment. Specifically, it represents it as a percentage of the desired number of tasks (rounded down to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%. If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING state. This is while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used. This is the case even if they're currently visible when describing your service. When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide. When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide.
32
+ * Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer. There are two service scheduler strategies available: REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. You can optionally specify a deployment configuration for your service. The deployment is initiated by changing properties. For example, the deployment might be initiated by the task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for minimumHealthyPercent is 100%. The default value for a daemon service for minimumHealthyPercent is 0%. If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment. Specifically, it represents it as a percentage of your desired number of tasks (rounded up to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. 
For example, if you set your service to have desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. If they're in the RUNNING state, tasks for services that don't use a load balancer are considered healthy . If they're in the RUNNING state and reported as healthy by the load balancer, tasks for services that do use a load balancer are considered healthy . The default value for minimum healthy percent is 100%. If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment. Specifically, it represents it as a percentage of the desired number of tasks (rounded down to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%. If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING state. This is while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used. This is the case even if they're currently visible when describing your service. When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide. When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide.
33
33
  */
34
34
  createService(params: ECS.Types.CreateServiceRequest, callback?: (err: AWSError, data: ECS.Types.CreateServiceResponse) => void): Request<ECS.Types.CreateServiceResponse, AWSError>;
35
35
  /**
36
- * Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action. In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer. There are two service scheduler strategies available: REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. You can optionally specify a deployment configuration for your service. The deployment is initiated by changing properties. For example, the deployment might be initiated by the task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for minimumHealthyPercent is 100%. The default value for a daemon service for minimumHealthyPercent is 0%. If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment. Specifically, it represents it as a percentage of your desired number of tasks (rounded up to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. If they're in the RUNNING state, tasks for services that don't use a load balancer are considered healthy . If they're in the RUNNING state and reported as healthy by the load balancer, tasks for services that do use a load balancer are considered healthy . The default value for minimum healthy percent is 100%. 
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment. Specifically, it represents it as a percentage of the desired number of tasks (rounded down to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%. If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING state. This is while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used. This is the case even if they're currently visible when describing your service. When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide. When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide.
36
+ * Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer. There are two service scheduler strategies available: REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide. You can optionally specify a deployment configuration for your service. The deployment is initiated by changing properties. For example, the deployment might be initiated by the task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for minimumHealthyPercent is 100%. The default value for a daemon service for minimumHealthyPercent is 0%. If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment. Specifically, it represents it as a percentage of your desired number of tasks (rounded up to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. 
For example, if you set your service to have desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. If they're in the RUNNING state, tasks for services that don't use a load balancer are considered healthy . If they're in the RUNNING state and reported as healthy by the load balancer, tasks for services that do use a load balancer are considered healthy . The default value for minimum healthy percent is 100%. If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment. Specifically, it represents it as a percentage of the desired number of tasks (rounded down to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%. If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING state. This is while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used. This is the case even if they're currently visible when describing your service. When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide. When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide.
37
37
  */
38
38
  createService(callback?: (err: AWSError, data: ECS.Types.CreateServiceResponse) => void): Request<ECS.Types.CreateServiceResponse, AWSError>;
39
39
  /**
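For context on the createService hunk above, here is a minimal sketch of a call that exercises the deployment settings the comment describes, written against the bundled aws-sdk v2 ECS typings. The region, cluster, task definition, and counts are placeholder values, not anything shipped in this package.

import ECS from 'aws-sdk/clients/ecs';

const ecs = new ECS({ region: 'us-east-1' });

// Placeholder names; REPLICA is the default scheduling strategy described above.
const params: ECS.Types.CreateServiceRequest = {
  serviceName: 'web',
  cluster: 'demo-cluster',
  taskDefinition: 'web-task:1',
  desiredCount: 4,
  schedulingStrategy: 'REPLICA',
  deploymentConfiguration: {
    // With four desired tasks, 50% lets the scheduler stop two tasks to free
    // cluster capacity before starting two new ones.
    minimumHealthyPercent: 50,
    // 200% lets the scheduler start four new tasks before stopping the old ones.
    maximumPercent: 200,
  },
};

ecs.createService(params).promise()
  .then(res => console.log(res.service?.serviceArn))
  .catch(err => console.error(err));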
@@ -325,19 +325,19 @@ declare class ECS extends Service {
325
325
  */
326
326
  registerTaskDefinition(callback?: (err: AWSError, data: ECS.Types.RegisterTaskDefinitionResponse) => void): Request<ECS.Types.RegisterTaskDefinitionResponse, AWSError>;
327
327
  /**
328
- * Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide. Alternatively, you can use StartTask to use your own scheduler or place tasks manually on specific container instances. The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command. To manage eventual consistency, you can do the following: Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time. Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
328
+ * Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide. Alternatively, you can use StartTask to use your own scheduler or place tasks manually on specific container instances. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command. To manage eventual consistency, you can do the following: Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time. Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
329
329
  */
330
330
  runTask(params: ECS.Types.RunTaskRequest, callback?: (err: AWSError, data: ECS.Types.RunTaskResponse) => void): Request<ECS.Types.RunTaskResponse, AWSError>;
331
331
  /**
332
- * Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide. Alternatively, you can use StartTask to use your own scheduler or place tasks manually on specific container instances. The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command. To manage eventual consistency, you can do the following: Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time. Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
332
+ * Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide. Alternatively, you can use StartTask to use your own scheduler or place tasks manually on specific container instances. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command. To manage eventual consistency, you can do the following: Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time. Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
333
333
  */
334
334
  runTask(callback?: (err: AWSError, data: ECS.Types.RunTaskResponse) => void): Request<ECS.Types.RunTaskResponse, AWSError>;
335
335
  /**
336
- * Starts a new task from the specified task definition on the specified container instance or instances. Alternatively, you can use RunTask to place tasks for you. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
336
+ * Starts a new task from the specified task definition on the specified container instance or instances. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. Alternatively, you can use RunTask to place tasks for you. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
337
337
  */
338
338
  startTask(params: ECS.Types.StartTaskRequest, callback?: (err: AWSError, data: ECS.Types.StartTaskResponse) => void): Request<ECS.Types.StartTaskResponse, AWSError>;
339
339
  /**
340
- * Starts a new task from the specified task definition on the specified container instance or instances. Alternatively, you can use RunTask to place tasks for you. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
340
+ * Starts a new task from the specified task definition on the specified container instance or instances. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. Alternatively, you can use RunTask to place tasks for you. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
341
341
  */
342
342
  startTask(callback?: (err: AWSError, data: ECS.Types.StartTaskResponse) => void): Request<ECS.Types.StartTaskResponse, AWSError>;
343
343
  /**
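The runTask comments above recommend polling DescribeTasks with exponential backoff to cope with the API's eventual consistency. A rough sketch of that pattern with the aws-sdk v2 typings follows; the cluster and task definition names are placeholders, and a real implementation would also handle STOPPED tasks and API errors.

import ECS from 'aws-sdk/clients/ecs';

const ecs = new ECS({ region: 'us-east-1' });

// Poll DescribeTasks, starting with a couple of seconds of wait time and
// backing off exponentially up to roughly five minutes, as the comment suggests.
async function waitForRunning(cluster: string, taskArn: string): Promise<ECS.Types.Task | undefined> {
  let delayMs = 2_000;
  const maxDelayMs = 300_000;
  for (let attempt = 0; attempt < 20; attempt++) {
    const res = await ecs.describeTasks({ cluster, tasks: [taskArn] }).promise();
    const task = res.tasks?.[0];
    if (task?.lastStatus === 'RUNNING') {
      return task;
    }
    await new Promise(resolve => setTimeout(resolve, delayMs));
    delayMs = Math.min(delayMs * 2, maxDelayMs);
  }
  return undefined;
}

async function main() {
  const run = await ecs.runTask({
    cluster: 'demo-cluster',        // placeholder
    taskDefinition: 'web-task:1',   // placeholder
    count: 1,
  }).promise();
  const taskArn = run.tasks?.[0]?.taskArn;
  if (taskArn) {
    await waitForRunning('demo-cluster', taskArn);
  }
}

main().catch(console.error);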
@@ -2593,7 +2593,7 @@ declare namespace ECS {
2593
2593
  */
2594
2594
  status?: ManagedScalingStatus;
2595
2595
  /**
2596
- * The target capacity value for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. A value of 100 results in the Amazon EC2 instances in your Auto Scaling group being completely used.
2596
+ * The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
2597
2597
  */
2598
2598
  targetCapacity?: ManagedScalingTargetCapacity;
2599
2599
  /**
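The updated targetCapacity comment above expresses the value as a utilization percentage, so 90 keeps roughly 10% spare capacity. A sketch of a capacity provider using that setting, with a placeholder Auto Scaling group ARN:

import ECS from 'aws-sdk/clients/ecs';

const ecs = new ECS();

ecs.createCapacityProvider({
  name: 'demo-capacity-provider',
  autoScalingGroupProvider: {
    // Placeholder ARN, not a real resource.
    autoScalingGroupArn: 'arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:00000000-0000-0000-0000-000000000000:autoScalingGroupName/demo-asg',
    managedScaling: {
      status: 'ENABLED',
      // Target 90% utilization, i.e. keep about 10% spare capacity.
      targetCapacity: 90,
    },
  },
}).promise();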
@@ -3291,7 +3291,7 @@ declare namespace ECS {
3291
3291
  */
3292
3292
  portName: String;
3293
3293
  /**
3294
- * The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
3294
+ * The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
3295
3295
  */
3296
3296
  discoveryName?: String;
3297
3297
  /**
@@ -3306,7 +3306,7 @@ declare namespace ECS {
3306
3306
  export type ServiceConnectServiceList = ServiceConnectService[];
3307
3307
  export interface ServiceConnectServiceResource {
3308
3308
  /**
3309
- * The discovery name of this Service Connect resource. The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
3309
+ * The discovery name of this Service Connect resource. The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
3310
3310
  */
3311
3311
  discoveryName?: String;
3312
3312
  /**
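The two discoveryName hunks above state that when discoveryName is omitted, the port mapping name from the task definition becomes the Cloud Map service name. Below is a sketch of a Service Connect configuration that relies on that default; the namespace, names, and port are placeholders.

import ECS from 'aws-sdk/clients/ecs';

const ecs = new ECS();

const serviceConnectConfiguration: ECS.Types.ServiceConnectConfiguration = {
  enabled: true,
  namespace: 'demo-namespace',
  services: [
    {
      // Must match a port mapping name in the task definition. With
      // discoveryName omitted, "api" is also used as the Cloud Map service name.
      portName: 'api',
      clientAliases: [{ port: 8080 }],
    },
  ],
};

ecs.createService({
  serviceName: 'api',             // placeholder
  cluster: 'demo-cluster',        // placeholder
  taskDefinition: 'api-task:1',   // placeholder
  desiredCount: 2,
  serviceConnectConfiguration,
}).promise();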
@@ -3810,7 +3810,7 @@ declare namespace ECS {
3810
3810
  */
3811
3811
  runtimePlatform?: RuntimePlatform;
3812
3812
  /**
3813
- * The task launch types the task definition was validated against. To determine which task launch types the task definition is validated for, see the TaskDefinition$compatibilities parameter.
3813
+ * The task launch types the task definition was validated against. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
3814
3814
  */
3815
3815
  requiresCompatibilities?: CompatibilityList;
3816
3816
  /**
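For the requiresCompatibilities hunk above, a sketch of registering a task definition validated for the Fargate launch type; the family, image, and sizes are placeholder values.

import ECS from 'aws-sdk/clients/ecs';

const ecs = new ECS();

ecs.registerTaskDefinition({
  family: 'web-task',
  // Validate the task definition against the FARGATE launch type.
  requiresCompatibilities: ['FARGATE'],
  networkMode: 'awsvpc',
  cpu: '256',
  memory: '512',
  containerDefinitions: [
    { name: 'web', image: 'public.ecr.aws/docker/library/nginx:latest', essential: true },
  ],
}).promise();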
@@ -28,11 +28,11 @@ declare class IdentityStore extends Service {
28
28
  */
29
29
  createGroupMembership(callback?: (err: AWSError, data: IdentityStore.Types.CreateGroupMembershipResponse) => void): Request<IdentityStore.Types.CreateGroupMembershipResponse, AWSError>;
30
30
  /**
31
- * Creates a new user within the specified identity store.
31
+ * Creates a user within the specified identity store.
32
32
  */
33
33
  createUser(params: IdentityStore.Types.CreateUserRequest, callback?: (err: AWSError, data: IdentityStore.Types.CreateUserResponse) => void): Request<IdentityStore.Types.CreateUserResponse, AWSError>;
34
34
  /**
35
- * Creates a new user within the specified identity store.
35
+ * Creates a user within the specified identity store.
36
36
  */
37
37
  createUser(callback?: (err: AWSError, data: IdentityStore.Types.CreateUserResponse) => void): Request<IdentityStore.Types.CreateUserResponse, AWSError>;
38
38
  /**
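For the IdentityStore createUser hunk above, a minimal sketch of a CreateUser call; the identity store ID and user attributes are placeholders, and the reserved names noted later in this diff ("Administrator" and "AWSAdministrators") can't be used.

import IdentityStore from 'aws-sdk/clients/identitystore';

const identitystore = new IdentityStore({ region: 'us-east-1' });

identitystore.createUser({
  IdentityStoreId: 'd-1234567890',   // placeholder
  UserName: 'jdoe',                  // reserved names such as "Administrator" are rejected
  DisplayName: 'John Doe',
  Name: { GivenName: 'John', FamilyName: 'Doe' },
  Emails: [{ Value: 'jdoe@example.com', Type: 'work', Primary: true }],
}).promise()
  .then(res => console.log(res.UserId));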
@@ -254,7 +254,7 @@ declare namespace IdentityStore {
254
254
  */
255
255
  IdentityStoreId: IdentityStoreId;
256
256
  /**
257
- * A string containing the name of the group. This value is commonly displayed when the group is referenced.
257
+ * A string containing the name of the group. This value is commonly displayed when the group is referenced. "Administrator" and "AWSAdministrators" are reserved names and can't be used for users or groups.
258
258
  */
259
259
  DisplayName?: GroupDisplayName;
260
260
  /**
@@ -278,15 +278,15 @@ declare namespace IdentityStore {
278
278
  */
279
279
  IdentityStoreId: IdentityStoreId;
280
280
  /**
281
- * A unique string used to identify the user. The length limit is 128 characters. This value can consist of letters, accented characters, symbols, numbers, and punctuation. This value is specified at the time the user is created and stored as an attribute of the user object in the identity store.
281
+ * A unique string used to identify the user. The length limit is 128 characters. This value can consist of letters, accented characters, symbols, numbers, and punctuation. This value is specified at the time the user is created and stored as an attribute of the user object in the identity store. "Administrator" and "AWSAdministrators" are reserved names and can't be used for users or groups.
282
282
  */
283
283
  UserName?: UserName;
284
284
  /**
285
- * An object containing the user's name.
285
+ * An object containing the name of the user.
286
286
  */
287
287
  Name?: Name;
288
288
  /**
289
- * A string containing the user's name. This value is typically formatted for display when the user is referenced. For example, "John Doe."
289
+ * A string containing the name of the user. This value is typically formatted for display when the user is referenced. For example, "John Doe."
290
290
  */
291
291
  DisplayName?: SensitiveStringType;
292
292
  /**
@@ -294,7 +294,7 @@ declare namespace IdentityStore {
294
294
  */
295
295
  NickName?: SensitiveStringType;
296
296
  /**
297
- * A string containing a URL that may be associated with the user.
297
+ * A string containing a URL that might be associated with the user.
298
298
  */
299
299
  ProfileUrl?: SensitiveStringType;
300
300
  /**
@@ -310,11 +310,11 @@ declare namespace IdentityStore {
310
310
  */
311
311
  PhoneNumbers?: PhoneNumbers;
312
312
  /**
313
- * A string indicating the user's type. Possible values depend on each customer's specific needs, so they are left unspecified.
313
+ * A string indicating the type of user. Possible values are left unspecified. The value can vary based on your specific use case.
314
314
  */
315
315
  UserType?: SensitiveStringType;
316
316
  /**
317
- * A string containing the user's title. Possible values are left unspecified given that they depend on each customer's specific needs.
317
+ * A string containing the title of the user. Possible values are left unspecified. The value can vary based on your specific use case.
318
318
  */
319
319
  Title?: SensitiveStringType;
320
320
  /**
@@ -322,11 +322,11 @@ declare namespace IdentityStore {
322
322
  */
323
323
  PreferredLanguage?: SensitiveStringType;
324
324
  /**
325
- * A string containing the user's geographical region or location.
325
+ * A string containing the geographical region or location of the user.
326
326
  */
327
327
  Locale?: SensitiveStringType;
328
328
  /**
329
- * A string containing the user's time zone.
329
+ * A string containing the time zone of the user.
330
330
  */
331
331
  Timezone?: SensitiveStringType;
332
332
  }
@@ -461,7 +461,7 @@ declare namespace IdentityStore {
461
461
  */
462
462
  Name?: Name;
463
463
  /**
464
- * The user's name value for display.
464
+ * The display name of the user.
465
465
  */
466
466
  DisplayName?: SensitiveStringType;
467
467
  /**
@@ -473,11 +473,11 @@ declare namespace IdentityStore {
473
473
  */
474
474
  ProfileUrl?: SensitiveStringType;
475
475
  /**
476
- * The user's email value.
476
+ * The email address of the user.
477
477
  */
478
478
  Emails?: Emails;
479
479
  /**
480
- * The user's physical address.
480
+ * The physical address of the user.
481
481
  */
482
482
  Addresses?: Addresses;
483
483
  /**
@@ -485,11 +485,11 @@ declare namespace IdentityStore {
485
485
  */
486
486
  PhoneNumbers?: PhoneNumbers;
487
487
  /**
488
- * A string indicating the user's type.
488
+ * A string indicating the type of user.
489
489
  */
490
490
  UserType?: SensitiveStringType;
491
491
  /**
492
- * A string containing the user's title.
492
+ * A string containing the title of the user.
493
493
  */
494
494
  Title?: SensitiveStringType;
495
495
  /**
@@ -497,7 +497,7 @@ declare namespace IdentityStore {
497
497
  */
498
498
  PreferredLanguage?: SensitiveStringType;
499
499
  /**
500
- * A string containing the user's geographical region or location.
500
+ * A string containing the geographical region or location of the user.
501
501
  */
502
502
  Locale?: SensitiveStringType;
503
503
  /**
@@ -554,7 +554,7 @@ declare namespace IdentityStore {
554
554
  */
555
555
  IdentityStoreId: IdentityStoreId;
556
556
  /**
557
- * A unique identifier for a user or group that is not the primary identifier. This value can be an identifier from an external identity provider (IdP) that is associated with the user, the group, or a unique attribute. For example, a unique GroupDisplayName.
557
+ * A unique identifier for a user or group that is not the primary identifier. This value can be an identifier from an external identity provider (IdP) that is associated with the user, the group, or a unique attribute. For the unique attribute, the only valid path is displayName.
558
558
  */
559
559
  AlternateIdentifier: AlternateIdentifier;
560
560
  }
@@ -598,7 +598,7 @@ declare namespace IdentityStore {
598
598
  */
599
599
  IdentityStoreId: IdentityStoreId;
600
600
  /**
601
- * A unique identifier for a user or group that is not the primary identifier. This value can be an identifier from an external identity provider (IdP) that is associated with the user, the group, or a unique attribute. For example, a unique UserDisplayName.
601
+ * A unique identifier for a user or group that is not the primary identifier. This value can be an identifier from an external identity provider (IdP) that is associated with the user, the group, or a unique attribute. For the unique attribute, the only valid paths are userName and emails.value.
602
602
  */
603
603
  AlternateIdentifier: AlternateIdentifier;
604
604
  }
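The two AlternateIdentifier hunks above narrow the valid unique-attribute paths: displayName for GetGroupId, and userName or emails.value for GetUserId. A sketch of both lookups follows; the identity store ID and attribute values are placeholders.

import IdentityStore from 'aws-sdk/clients/identitystore';

const identitystore = new IdentityStore();

const IdentityStoreId = 'd-1234567890'; // placeholder

// GetUserId: valid unique-attribute paths are "userName" and "emails.value".
identitystore.getUserId({
  IdentityStoreId,
  AlternateIdentifier: {
    UniqueAttribute: { AttributePath: 'userName', AttributeValue: 'jdoe' },
  },
}).promise();

// GetGroupId: the only valid unique-attribute path is "displayName".
identitystore.getGroupId({
  IdentityStoreId,
  AlternateIdentifier: {
    UniqueAttribute: { AttributePath: 'displayName', AttributeValue: 'Developers' },
  },
}).promise();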
@@ -618,7 +618,7 @@ declare namespace IdentityStore {
618
618
  */
619
619
  GroupId: ResourceId;
620
620
  /**
621
- * The group’s display name value. The length limit is 1,024 characters. This value can consist of letters, accented characters, symbols, numbers, punctuation, tab, new line, carriage return, space, and nonbreaking space in this attribute. This value is specified at the time the group is created and stored as an attribute of the group object in the identity store.
621
+ * The display name value for the group. The length limit is 1,024 characters. This value can consist of letters, accented characters, symbols, numbers, punctuation, tab, new line, carriage return, space, and nonbreaking space in this attribute. This value is specified at the time the group is created and stored as an attribute of the group object in the identity store.
622
622
  */
623
623
  DisplayName?: GroupDisplayName;
624
624
  /**
@@ -912,11 +912,11 @@ declare namespace IdentityStore {
912
912
  */
913
913
  ExternalIds?: ExternalIds;
914
914
  /**
915
- * An object containing the user's name.
915
+ * An object containing the name of the user.
916
916
  */
917
917
  Name?: Name;
918
918
  /**
919
- * A string containing the user's name that's formatted for display when the user is referenced. For example, "John Doe."
919
+ * A string containing the name of the user that is formatted for display when the user is referenced. For example, "John Doe."
920
920
  */
921
921
  DisplayName?: SensitiveStringType;
922
922
  /**
@@ -924,7 +924,7 @@ declare namespace IdentityStore {
924
924
  */
925
925
  NickName?: SensitiveStringType;
926
926
  /**
927
- * A string containing a URL that may be associated with the user.
927
+ * A string containing a URL that might be associated with the user.
928
928
  */
929
929
  ProfileUrl?: SensitiveStringType;
930
930
  /**
@@ -940,11 +940,11 @@ declare namespace IdentityStore {
940
940
  */
941
941
  PhoneNumbers?: PhoneNumbers;
942
942
  /**
943
- * A string indicating the user's type. Possible values depend on each customer's specific needs, so they are left unspecified.
943
+ * A string indicating the type of user. Possible values are left unspecified. The value can vary based on your specific use case.
944
944
  */
945
945
  UserType?: SensitiveStringType;
946
946
  /**
947
- * A string containing the user's title. Possible values depend on each customer's specific needs, so they are left unspecified.
947
+ * A string containing the title of the user. Possible values are left unspecified. The value can vary based on your specific use case.
948
948
  */
949
949
  Title?: SensitiveStringType;
950
950
  /**
@@ -952,11 +952,11 @@ declare namespace IdentityStore {
952
952
  */
953
953
  PreferredLanguage?: SensitiveStringType;
954
954
  /**
955
- * A string containing the user's geographical region or location.
955
+ * A string containing the geographical region or location of the user.
956
956
  */
957
957
  Locale?: SensitiveStringType;
958
958
  /**
959
- * A string containing the user's time zone.
959
+ * A string containing the time zone of the user.
960
960
  */
961
961
  Timezone?: SensitiveStringType;
962
962
  /**
@@ -1118,7 +1118,7 @@ declare namespace NetworkFirewall {
1118
1118
  */
1119
1119
  DestinationPort: Port;
1120
1120
  }
1121
- export type IPAddressType = "DUALSTACK"|"IPV4"|string;
1121
+ export type IPAddressType = "DUALSTACK"|"IPV4"|"IPV6"|string;
1122
1122
  export interface IPSet {
1123
1123
  /**
1124
1124
  * The list of IP addresses and address ranges, in CIDR notation.
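The Network Firewall hunk above adds "IPV6" to IPAddressType, which is used by subnet mappings. An illustrative sketch of associating an IPv6 subnet; the firewall ARN and subnet ID are placeholders.

import NetworkFirewall from 'aws-sdk/clients/networkfirewall';

const nfw = new NetworkFirewall();

nfw.associateSubnets({
  FirewallArn: 'arn:aws:network-firewall:us-east-1:123456789012:firewall/demo-firewall',
  SubnetMappings: [
    // "IPV6" joins the existing "DUALSTACK" and "IPV4" values.
    { SubnetId: 'subnet-0123456789abcdef0', IPAddressType: 'IPV6' },
  ],
}).promise();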
@@ -2986,7 +2986,7 @@ declare namespace ServiceCatalog {
2986
2986
  export type Principals = Principal[];
2987
2987
  export type ProductArn = string;
2988
2988
  export type ProductSource = "ACCOUNT"|string;
2989
- export type ProductType = "CLOUD_FORMATION_TEMPLATE"|"MARKETPLACE"|"DEFAULT_CUSTOM"|"TERRAFORM_OPEN_SOURCE"|string;
2989
+ export type ProductType = "CLOUD_FORMATION_TEMPLATE"|"MARKETPLACE"|"TERRAFORM_OPEN_SOURCE"|string;
2990
2990
  export type ProductViewAggregationType = string;
2991
2991
  export interface ProductViewAggregationValue {
2992
2992
  /**
@@ -3545,7 +3545,7 @@ declare namespace ServiceCatalog {
3545
3545
  */
3546
3546
  ProvisioningArtifactMetadata?: ProvisioningArtifactInfo;
3547
3547
  }
3548
- export type ProvisioningArtifactType = "CLOUD_FORMATION_TEMPLATE"|"MARKETPLACE_AMI"|"MARKETPLACE_CAR"|"DEFAULT_CUSTOM"|"TERRAFORM_OPEN_SOURCE"|string;
3548
+ export type ProvisioningArtifactType = "CLOUD_FORMATION_TEMPLATE"|"MARKETPLACE_AMI"|"MARKETPLACE_CAR"|"TERRAFORM_OPEN_SOURCE"|string;
3549
3549
  export interface ProvisioningArtifactView {
3550
3550
  /**
3551
3551
  * Summary information about a product view.
@@ -68,11 +68,11 @@ declare class VPCLattice extends Service {
68
68
  */
69
69
  createServiceNetworkServiceAssociation(callback?: (err: AWSError, data: VPCLattice.Types.CreateServiceNetworkServiceAssociationResponse) => void): Request<VPCLattice.Types.CreateServiceNetworkServiceAssociationResponse, AWSError>;
70
70
  /**
71
- * Associates a VPC with a service network. When you associate a VPC with the service network, it enables all the resources within that VPC to be clients and communicate with other services in the service network. For more information, see Manage VPC associations in the Amazon VPC Lattice User Guide. You can't use this operation if there is a disassociation in progress. If the association fails, retry by deleting the association and recreating it. As a result of this operation, the association gets created in the service network account and the VPC owner account. Once a security group is added to the VPC association it cannot be removed. You can add or update the security groups being used for the VPC association once a security group is attached. To remove all security groups you must reassociate the VPC.
71
+ * Associates a VPC with a service network. When you associate a VPC with the service network, it enables all the resources within that VPC to be clients and communicate with other services in the service network. For more information, see Manage VPC associations in the Amazon VPC Lattice User Guide. You can't use this operation if there is a disassociation in progress. If the association fails, retry by deleting the association and recreating it. As a result of this operation, the association gets created in the service network account and the VPC owner account. If you add a security group to the service network and VPC association, the association must continue to always have at least one security group. You can add or edit security groups at any time. However, to remove all security groups, you must first delete the association and recreate it without security groups.
72
72
  */
73
73
  createServiceNetworkVpcAssociation(params: VPCLattice.Types.CreateServiceNetworkVpcAssociationRequest, callback?: (err: AWSError, data: VPCLattice.Types.CreateServiceNetworkVpcAssociationResponse) => void): Request<VPCLattice.Types.CreateServiceNetworkVpcAssociationResponse, AWSError>;
74
74
  /**
75
- * Associates a VPC with a service network. When you associate a VPC with the service network, it enables all the resources within that VPC to be clients and communicate with other services in the service network. For more information, see Manage VPC associations in the Amazon VPC Lattice User Guide. You can't use this operation if there is a disassociation in progress. If the association fails, retry by deleting the association and recreating it. As a result of this operation, the association gets created in the service network account and the VPC owner account. Once a security group is added to the VPC association it cannot be removed. You can add or update the security groups being used for the VPC association once a security group is attached. To remove all security groups you must reassociate the VPC.
75
+ * Associates a VPC with a service network. When you associate a VPC with the service network, it enables all the resources within that VPC to be clients and communicate with other services in the service network. For more information, see Manage VPC associations in the Amazon VPC Lattice User Guide. You can't use this operation if there is a disassociation in progress. If the association fails, retry by deleting the association and recreating it. As a result of this operation, the association gets created in the service network account and the VPC owner account. If you add a security group to the service network and VPC association, the association must continue to always have at least one security group. You can add or edit security groups at any time. However, to remove all security groups, you must first delete the association and recreate it without security groups.
76
76
  */
77
77
  createServiceNetworkVpcAssociation(callback?: (err: AWSError, data: VPCLattice.Types.CreateServiceNetworkVpcAssociationResponse) => void): Request<VPCLattice.Types.CreateServiceNetworkVpcAssociationResponse, AWSError>;
78
78
  /**
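For the VPC Lattice association hunks above, a sketch of creating a service network and VPC association with a security group; all identifiers are placeholders. As the updated comment notes, once security groups are attached the association must always keep at least one, and removing them all requires deleting and recreating the association.

import VPCLattice from 'aws-sdk/clients/vpclattice';

const lattice = new VPCLattice();

lattice.createServiceNetworkVpcAssociation({
  serviceNetworkIdentifier: 'sn-0123456789abcdef0',   // placeholder
  vpcIdentifier: 'vpc-0123456789abcdef0',             // placeholder
  securityGroupIds: ['sg-0123456789abcdef0'],         // placeholder
}).promise();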
@@ -92,11 +92,11 @@ declare class VPCLattice extends Service {
92
92
  */
93
93
  deleteAccessLogSubscription(callback?: (err: AWSError, data: VPCLattice.Types.DeleteAccessLogSubscriptionResponse) => void): Request<VPCLattice.Types.DeleteAccessLogSubscriptionResponse, AWSError>;
94
94
  /**
95
- * Deletes the specified auth policy. If an auth is set to Amazon Web Services_IAM and the auth policy is deleted, all requests will be denied by default. If you are trying to remove the auth policy completely, you must set the auth_type to NONE. If auth is enabled on the resource, but no auth policy is set, all requests will be denied.
95
+ * Deletes the specified auth policy. If an auth is set to AWS_IAM and the auth policy is deleted, all requests will be denied by default. If you are trying to remove the auth policy completely, you must set the auth_type to NONE. If auth is enabled on the resource, but no auth policy is set, all requests will be denied.
96
96
  */
97
97
  deleteAuthPolicy(params: VPCLattice.Types.DeleteAuthPolicyRequest, callback?: (err: AWSError, data: VPCLattice.Types.DeleteAuthPolicyResponse) => void): Request<VPCLattice.Types.DeleteAuthPolicyResponse, AWSError>;
98
98
  /**
99
- * Deletes the specified auth policy. If an auth is set to Amazon Web Services_IAM and the auth policy is deleted, all requests will be denied by default. If you are trying to remove the auth policy completely, you must set the auth_type to NONE. If auth is enabled on the resource, but no auth policy is set, all requests will be denied.
99
+ * Deletes the specified auth policy. If an auth is set to AWS_IAM and the auth policy is deleted, all requests will be denied by default. If you are trying to remove the auth policy completely, you must set the auth_type to NONE. If auth is enabled on the resource, but no auth policy is set, all requests will be denied.
100
100
  */
101
101
  deleteAuthPolicy(callback?: (err: AWSError, data: VPCLattice.Types.DeleteAuthPolicyResponse) => void): Request<VPCLattice.Types.DeleteAuthPolicyResponse, AWSError>;
102
102
  /**
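The deleteAuthPolicy hunk above notes that with auth still set to AWS_IAM, deleting the policy leaves all requests denied; to remove auth completely, the auth type must be set to NONE. A hypothetical helper illustrating one way to apply that guidance to a service; the identifier is a placeholder and the UpdateService step is an assumption about where the auth type is changed.

import VPCLattice from 'aws-sdk/clients/vpclattice';

const lattice = new VPCLattice();

async function removeAuthCompletely(serviceIdentifier: string): Promise<void> {
  // Turn auth off first so requests aren't denied once the policy is gone.
  await lattice.updateService({ serviceIdentifier, authType: 'NONE' }).promise();
  await lattice.deleteAuthPolicy({ resourceIdentifier: serviceIdentifier }).promise();
}

removeAuthCompletely('svc-0123456789abcdef0').catch(console.error); // placeholder id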
@@ -196,11 +196,11 @@ declare class VPCLattice extends Service {
196
196
  */
197
197
  getListener(callback?: (err: AWSError, data: VPCLattice.Types.GetListenerResponse) => void): Request<VPCLattice.Types.GetListenerResponse, AWSError>;
198
198
  /**
199
- * Retrieves information about the resource policy. The resource policy is an IAM policy created by AWS RAM on behalf of the resource owner when they share a resource.
199
+ * Retrieves information about the resource policy. The resource policy is an IAM policy created on behalf of the resource owner when they share a resource.
200
200
  */
201
201
  getResourcePolicy(params: VPCLattice.Types.GetResourcePolicyRequest, callback?: (err: AWSError, data: VPCLattice.Types.GetResourcePolicyResponse) => void): Request<VPCLattice.Types.GetResourcePolicyResponse, AWSError>;
202
202
  /**
203
- * Retrieves information about the resource policy. The resource policy is an IAM policy created by AWS RAM on behalf of the resource owner when they share a resource.
203
+ * Retrieves information about the resource policy. The resource policy is an IAM policy created on behalf of the resource owner when they share a resource.
204
204
  */
205
205
  getResourcePolicy(callback?: (err: AWSError, data: VPCLattice.Types.GetResourcePolicyResponse) => void): Request<VPCLattice.Types.GetResourcePolicyResponse, AWSError>;
206
206
  /**
@@ -332,11 +332,11 @@ declare class VPCLattice extends Service {
332
332
  */
333
333
  listTargets(callback?: (err: AWSError, data: VPCLattice.Types.ListTargetsResponse) => void): Request<VPCLattice.Types.ListTargetsResponse, AWSError>;
334
334
  /**
335
- * Creates or updates the auth policy.
335
+ * Creates or updates the auth policy. The policy string in JSON must not contain newlines or blank lines.
336
336
  */
337
337
  putAuthPolicy(params: VPCLattice.Types.PutAuthPolicyRequest, callback?: (err: AWSError, data: VPCLattice.Types.PutAuthPolicyResponse) => void): Request<VPCLattice.Types.PutAuthPolicyResponse, AWSError>;
338
338
  /**
339
- * Creates or updates the auth policy.
339
+ * Creates or updates the auth policy. The policy string in JSON must not contain newlines or blank lines.
340
340
  */
341
341
  putAuthPolicy(callback?: (err: AWSError, data: VPCLattice.Types.PutAuthPolicyResponse) => void): Request<VPCLattice.Types.PutAuthPolicyResponse, AWSError>;
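Since the policy string must be a single line, one natural way to satisfy the constraint is to build the document as an object and serialize it with JSON.stringify, which emits no newlines by default. A small sketch; the resource identifier, the policy statement, and the resourceIdentifier member name are assumptions:

```ts
import * as AWS from 'aws-sdk';

const vpcLattice = new AWS.VPCLattice({ region: 'us-east-1' });

// Example policy document; the statement is illustrative only.
const policyDocument = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Principal: '*',
      Action: 'vpc-lattice-svcs:Invoke',
      Resource: '*',
    },
  ],
};

vpcLattice.putAuthPolicy(
  {
    resourceIdentifier: 'sn-0123456789abcdef0', // placeholder / assumed member name
    policy: JSON.stringify(policyDocument),     // single line, no blank lines
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log('auth policy state:', data.state);
  }
);
```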
342
342
  /**
@@ -412,11 +412,11 @@ declare class VPCLattice extends Service {
412
412
  */
413
413
  updateServiceNetwork(callback?: (err: AWSError, data: VPCLattice.Types.UpdateServiceNetworkResponse) => void): Request<VPCLattice.Types.UpdateServiceNetworkResponse, AWSError>;
414
414
  /**
415
- * Updates the service network and VPC association. Once you add a security group, it cannot be removed.
415
+ * Updates the service network and VPC association. If you add a security group to the service network and VPC association, the association must always have at least one security group from then on. You can add or edit security groups at any time. However, to remove all security groups, you must first delete the association and recreate it without security groups.
416
416
  */
417
417
  updateServiceNetworkVpcAssociation(params: VPCLattice.Types.UpdateServiceNetworkVpcAssociationRequest, callback?: (err: AWSError, data: VPCLattice.Types.UpdateServiceNetworkVpcAssociationResponse) => void): Request<VPCLattice.Types.UpdateServiceNetworkVpcAssociationResponse, AWSError>;
418
418
  /**
419
- * Updates the service network and VPC association. Once you add a security group, it cannot be removed.
419
+ * Updates the service network and VPC association. If you add a security group to the service network and VPC association, the association must always have at least one security group from then on. You can add or edit security groups at any time. However, to remove all security groups, you must first delete the association and recreate it without security groups.
420
420
  */
421
421
  updateServiceNetworkVpcAssociation(callback?: (err: AWSError, data: VPCLattice.Types.UpdateServiceNetworkVpcAssociationResponse) => void): Request<VPCLattice.Types.UpdateServiceNetworkVpcAssociationResponse, AWSError>;
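A brief sketch of updating the security groups on an existing association, reflecting the constraint above: the list can be changed at any time but can never be emptied. The association and security group IDs are placeholders, and serviceNetworkVpcAssociationIdentifier is an assumed member name:

```ts
import * as AWS from 'aws-sdk';

const vpcLattice = new AWS.VPCLattice({ region: 'us-east-1' });

// Replace the security groups on a service network / VPC association.
// The list must contain at least one security group; to remove them all,
// delete the association and recreate it without security groups instead.
vpcLattice.updateServiceNetworkVpcAssociation(
  {
    serviceNetworkVpcAssociationIdentifier: 'snva-0123456789abcdef0', // placeholder / assumed
    securityGroupIds: ['sg-0123456789abcdef0'],                       // must not be empty
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log('updated association:', data);
  }
);
```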
422
422
  /**
@@ -1138,7 +1138,7 @@ declare namespace VPCLattice {
1138
1138
  */
1139
1139
  policy?: AuthPolicyString;
1140
1140
  /**
1141
- * The state of the auth policy. The auth policy is only active when the auth type is set to Amazon Web Services_IAM. If you provide a policy, then authentication and authorization decisions are made based on this policy and the client's IAM policy. If the auth type is NONE, then any auth policy you provide will remain inactive. For more information, see Create a service network in the Amazon VPC Lattice User Guide.
1141
+ * The state of the auth policy. The auth policy is only active when the auth type is set to AWS_IAM. If you provide a policy, then authentication and authorization decisions are made based on this policy and the client's IAM policy. If the auth type is NONE, then any auth policy you provide will remain inactive. For more information, see Create a service network in the Amazon VPC Lattice User Guide.
1142
1142
  */
1143
1143
  state?: AuthPolicyState;
1144
1144
  }
@@ -1196,13 +1196,13 @@ declare namespace VPCLattice {
1196
1196
  }
1197
1197
  export interface GetResourcePolicyRequest {
1198
1198
  /**
1199
- * An IAM policy.
1199
+ * The Amazon Resource Name (ARN) of the service network or service.
1200
1200
  */
1201
1201
  resourceArn: ResourceArn;
1202
1202
  }
1203
1203
  export interface GetResourcePolicyResponse {
1204
1204
  /**
1205
- * The Amazon Resource Name (ARN) of the service network or service.
1205
+ * An IAM policy.
1206
1206
  */
1207
1207
  policy?: PolicyString;
1208
1208
  }
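To make the corrected field descriptions concrete, a short sketch of the round trip: the request carries the ARN of the shared service network or service, and the response carries the IAM resource policy as a JSON string. The ARN is a placeholder:

```ts
import * as AWS from 'aws-sdk';

const vpcLattice = new AWS.VPCLattice({ region: 'us-east-1' });

vpcLattice
  .getResourcePolicy({
    // ARN of the service network or service (placeholder value).
    resourceArn: 'arn:aws:vpc-lattice:us-east-1:123456789012:servicenetwork/sn-0123456789abcdef0',
  })
  .promise()
  .then((data) => {
    if (data.policy) {
      console.log(JSON.parse(data.policy)); // the IAM policy document
    }
  });
```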
@@ -1945,7 +1945,7 @@ declare namespace VPCLattice {
1945
1945
  export type Port = number;
1946
1946
  export interface PutAuthPolicyRequest {
1947
1947
  /**
1948
- * The auth policy.
1948
+ * The auth policy. The policy string in JSON must not contain newlines or blank lines.
1949
1949
  */
1950
1950
  policy: AuthPolicyString;
1951
1951
  /**
@@ -1955,17 +1955,17 @@ declare namespace VPCLattice {
1955
1955
  }
1956
1956
  export interface PutAuthPolicyResponse {
1957
1957
  /**
1958
- * The auth policy.
1958
+ * The auth policy. The policy string in JSON must not contain newlines or blank lines.
1959
1959
  */
1960
1960
  policy?: AuthPolicyString;
1961
1961
  /**
1962
- * The state of the auth policy. The auth policy is only active when the auth type is set to Amazon Web Services_IAM. If you provide a policy, then authentication and authorization decisions are made based on this policy and the client's IAM policy. If the Auth type is NONE, then, any auth policy you provide will remain inactive. For more information, see Create a service network in the Amazon VPC Lattice User Guide.
1962
+ * The state of the auth policy. The auth policy is only active when the auth type is set to AWS_IAM. If you provide a policy, then authentication and authorization decisions are made based on this policy and the client's IAM policy. If the auth type is NONE, then any auth policy you provide will remain inactive. For more information, see Create a service network in the Amazon VPC Lattice User Guide.
1963
1963
  */
1964
1964
  state?: AuthPolicyState;
1965
1965
  }
1966
1966
  export interface PutResourcePolicyRequest {
1967
1967
  /**
1968
- * An IAM policy.
1968
+ * An IAM policy. The policy string in JSON must not contain newlines or blank lines.
1969
1969
  */
1970
1970
  policy: PolicyString;
1971
1971
  /**
@@ -2634,7 +2634,7 @@ declare namespace VPCLattice {
2634
2634
  }
2635
2635
  export interface UpdateServiceNetworkVpcAssociationRequest {
2636
2636
  /**
2637
- * The IDs of the security groups. Once you add a security group, it cannot be removed.
2637
+ * The IDs of the security groups.
2638
2638
  */
2639
2639
  securityGroupIds: UpdateServiceNetworkVpcAssociationRequestSecurityGroupIdsList;
2640
2640
  /**
@@ -83,7 +83,7 @@ return /******/ (function(modules) { // webpackBootstrap
83
83
  /**
84
84
  * @constant
85
85
  */
86
- VERSION: '2.1350.0',
86
+ VERSION: '2.1351.0',
87
87
 
88
88
  /**
89
89
  * @api private
@@ -395,7 +395,7 @@ return /******/ (function(modules) { // webpackBootstrap
395
395
  /**
396
396
  * @constant
397
397
  */
398
- VERSION: '2.1350.0',
398
+ VERSION: '2.1351.0',
399
399
 
400
400
  /**
401
401
  * @api private
@@ -61988,7 +61988,7 @@ return /******/ (function(modules) { // webpackBootstrap
61988
61988
  /* 1288 */
61989
61989
  /***/ (function(module, exports) {
61990
61990
 
61991
- module.exports = {"version":"2.0","metadata":{"apiVersion":"2020-07-14","endpointPrefix":"ivsrealtime","jsonVersion":"1.1","protocol":"rest-json","serviceAbbreviation":"ivsrealtime","serviceFullName":"Amazon Interactive Video Service RealTime","serviceId":"IVS RealTime","signatureVersion":"v4","signingName":"ivs","uid":"ivs-realtime-2020-07-14"},"operations":{"CreateParticipantToken":{"http":{"requestUri":"/CreateParticipantToken","responseCode":200},"input":{"type":"structure","required":["stageArn"],"members":{"attributes":{"shape":"S2"},"capabilities":{"shape":"S4"},"duration":{"type":"integer"},"stageArn":{},"userId":{}}},"output":{"type":"structure","members":{"participantToken":{"shape":"Sa"}}}},"CreateStage":{"http":{"requestUri":"/CreateStage","responseCode":200},"input":{"type":"structure","members":{"name":{},"participantTokenConfigurations":{"type":"list","member":{"type":"structure","members":{"attributes":{"shape":"S2"},"capabilities":{"shape":"S4"},"duration":{"type":"integer"},"userId":{}}}},"tags":{"shape":"Si"}}},"output":{"type":"structure","members":{"participantTokens":{"type":"list","member":{"shape":"Sa"}},"stage":{"shape":"Sn"}}}},"DeleteStage":{"http":{"requestUri":"/DeleteStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{}}},"output":{"type":"structure","members":{}}},"DisconnectParticipant":{"http":{"requestUri":"/DisconnectParticipant","responseCode":200},"input":{"type":"structure","required":["participantId","stageArn"],"members":{"participantId":{},"reason":{},"stageArn":{}}},"output":{"type":"structure","members":{}}},"GetStage":{"http":{"requestUri":"/GetStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{}}},"output":{"type":"structure","members":{"stage":{"shape":"Sn"}}}},"ListStages":{"http":{"requestUri":"/ListStages","responseCode":200},"input":{"type":"structure","members":{"maxResults":{"type":"integer"},"nextToken":{}}},"output":{"type":"structure","required":["stages"],"members":{"nextToken":{},"stages":{"type":"list","member":{"type":"structure","required":["arn"],"members":{"activeSessionId":{},"arn":{},"name":{},"tags":{"shape":"Si"}}}}}}},"ListTagsForResource":{"http":{"method":"GET","requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"}}},"output":{"type":"structure","required":["tags"],"members":{"tags":{"shape":"Si"}}}},"TagResource":{"http":{"requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn","tags"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"},"tags":{"shape":"Si"}}},"output":{"type":"structure","members":{}}},"UntagResource":{"http":{"method":"DELETE","requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn","tagKeys"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"},"tagKeys":{"location":"querystring","locationName":"tagKeys","type":"list","member":{}}}},"output":{"type":"structure","members":{}},"idempotent":true},"UpdateStage":{"http":{"requestUri":"/UpdateStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{},"name":{}}},"output":{"type":"structure","members":{"stage":{"shape":"Sn"}}}}},"shapes":{"S2":{"type":"map","key":{},"value":{}},"S4":{"type":"list","member":{}},"Sa":{"type":"structure","members":{"attributes":{"shape":"S2"},"c
apabilities":{"shape":"S4"},"duration":{"type":"integer"},"expirationTime":{"type":"timestamp"},"participantId":{},"token":{},"userId":{}}},"Si":{"type":"map","key":{},"value":{}},"Sn":{"type":"structure","required":["arn"],"members":{"activeSessionId":{},"arn":{},"name":{},"tags":{"shape":"Si"}}}}}
61991
+ module.exports = {"version":"2.0","metadata":{"apiVersion":"2020-07-14","endpointPrefix":"ivsrealtime","jsonVersion":"1.1","protocol":"rest-json","serviceAbbreviation":"ivsrealtime","serviceFullName":"Amazon Interactive Video Service RealTime","serviceId":"IVS RealTime","signatureVersion":"v4","signingName":"ivs","uid":"ivs-realtime-2020-07-14"},"operations":{"CreateParticipantToken":{"http":{"requestUri":"/CreateParticipantToken","responseCode":200},"input":{"type":"structure","required":["stageArn"],"members":{"attributes":{"shape":"S2"},"capabilities":{"shape":"S4"},"duration":{"type":"integer"},"stageArn":{},"userId":{}}},"output":{"type":"structure","members":{"participantToken":{"shape":"Sa"}}}},"CreateStage":{"http":{"requestUri":"/CreateStage","responseCode":200},"input":{"type":"structure","members":{"name":{},"participantTokenConfigurations":{"type":"list","member":{"type":"structure","members":{"attributes":{"shape":"S2"},"capabilities":{"shape":"S4"},"duration":{"type":"integer"},"userId":{}}}},"tags":{"shape":"Si"}}},"output":{"type":"structure","members":{"participantTokens":{"type":"list","member":{"shape":"Sa"}},"stage":{"shape":"Sn"}}}},"DeleteStage":{"http":{"requestUri":"/DeleteStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{}}},"output":{"type":"structure","members":{}}},"DisconnectParticipant":{"http":{"requestUri":"/DisconnectParticipant","responseCode":200},"input":{"type":"structure","required":["participantId","stageArn"],"members":{"participantId":{},"reason":{},"stageArn":{}}},"output":{"type":"structure","members":{}}},"GetStage":{"http":{"requestUri":"/GetStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{}}},"output":{"type":"structure","members":{"stage":{"shape":"Sn"}}}},"ListStages":{"http":{"requestUri":"/ListStages","responseCode":200},"input":{"type":"structure","members":{"maxResults":{"type":"integer"},"nextToken":{}}},"output":{"type":"structure","required":["stages"],"members":{"nextToken":{},"stages":{"type":"list","member":{"type":"structure","required":["arn"],"members":{"activeSessionId":{},"arn":{},"name":{},"tags":{"shape":"Si"}}}}}}},"ListTagsForResource":{"http":{"method":"GET","requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"}}},"output":{"type":"structure","required":["tags"],"members":{"tags":{"shape":"Si"}}}},"TagResource":{"http":{"requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn","tags"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"},"tags":{"shape":"Si"}}},"output":{"type":"structure","members":{}}},"UntagResource":{"http":{"method":"DELETE","requestUri":"/tags/{resourceArn}","responseCode":200},"input":{"type":"structure","required":["resourceArn","tagKeys"],"members":{"resourceArn":{"location":"uri","locationName":"resourceArn"},"tagKeys":{"location":"querystring","locationName":"tagKeys","type":"list","member":{}}}},"output":{"type":"structure","members":{}},"idempotent":true},"UpdateStage":{"http":{"requestUri":"/UpdateStage","responseCode":200},"input":{"type":"structure","required":["arn"],"members":{"arn":{},"name":{}}},"output":{"type":"structure","members":{"stage":{"shape":"Sn"}}}}},"shapes":{"S2":{"type":"map","key":{},"value":{}},"S4":{"type":"list","member":{}},"Sa":{"type":"structure","members":{"attributes":{"shape":"S2"},"c
apabilities":{"shape":"S4"},"duration":{"type":"integer"},"expirationTime":{"type":"timestamp","timestampFormat":"iso8601"},"participantId":{},"token":{"type":"string","sensitive":true},"userId":{}}},"Si":{"type":"map","key":{},"value":{}},"Sn":{"type":"structure","required":["arn"],"members":{"activeSessionId":{},"arn":{},"name":{},"tags":{"shape":"Si"}}}}}
61992
61992
 
61993
61993
  /***/ }),
61994
61994
  /* 1289 */
@@ -1,4 +1,4 @@
1
- // AWS SDK for JavaScript v2.1350.0
1
+ // AWS SDK for JavaScript v2.1351.0
2
2
  // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
3
  // License at https://sdk.amazonaws.com/js/BUNDLE_LICENSE.txt
4
4
  (function(){function r(e,n,t){function o(i,f){if(!n[i]){if(!e[i]){var c="function"==typeof require&&require;if(!f&&c)return c(i,!0);if(u)return u(i,!0);var a=new Error("Cannot find module '"+i+"'");throw a.code="MODULE_NOT_FOUND",a}var p=n[i]={exports:{}};e[i][0].call(p.exports,function(r){var n=e[i][1][r];return o(n||r)},p,p.exports,r,e,n,t)}return n[i].exports}for(var u="function"==typeof require&&require,i=0;i<t.length;i++)o(t[i]);return o}return r})()({1:[function(require,module,exports){
@@ -254874,7 +254874,7 @@ AWS.util.update(AWS, {
254874
254874
  /**
254875
254875
  * @constant
254876
254876
  */
254877
- VERSION: '2.1350.0',
254877
+ VERSION: '2.1351.0',
254878
254878
 
254879
254879
  /**
254880
254880
  * @api private
@@ -276991,7 +276991,7 @@ var LRUCache = /** @class */ (function () {
276991
276991
  }());
276992
276992
  exports.LRUCache = LRUCache;
276993
276993
  },{}],462:[function(require,module,exports){
276994
- // AWS SDK for JavaScript v2.1350.0
276994
+ // AWS SDK for JavaScript v2.1351.0
276995
276995
  // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
276996
276996
  // License at https://sdk.amazonaws.com/js/BUNDLE_LICENSE.txt
276997
276997
  require('./browser_loader');