aws-sdk 2.1408.0 → 2.1410.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +12 -1
- package/README.md +1 -1
- package/apis/amp-2020-08-01.min.json +149 -149
- package/apis/batch-2016-08-10.min.json +58 -45
- package/apis/ecs-2014-11-13.min.json +3 -0
- package/apis/ivs-2020-07-14.min.json +2 -2
- package/apis/sagemaker-2017-07-24.min.json +720 -697
- package/apis/transfer-2018-11-05.min.json +68 -67
- package/clients/amp.d.ts +132 -132
- package/clients/batch.d.ts +23 -11
- package/clients/ecs.d.ts +4 -0
- package/clients/mediaconvert.d.ts +7 -7
- package/clients/sagemaker.d.ts +36 -7
- package/clients/transfer.d.ts +11 -6
- package/dist/aws-sdk-core-react-native.js +1 -1
- package/dist/aws-sdk-react-native.js +7 -7
- package/dist/aws-sdk.js +155 -152
- package/dist/aws-sdk.min.js +55 -55
- package/lib/core.js +1 -1
- package/package.json +1 -1
package/clients/batch.d.ts
CHANGED
@@ -12,19 +12,19 @@ declare class Batch extends Service {
  constructor(options?: Batch.Types.ClientConfiguration)
  config: Config & Batch.Types.ClientConfiguration;
  /**
-  * Cancels a job in an Batch job queue. Jobs that are in the SUBMITTED or PENDING are canceled. A job inRUNNABLE remains in RUNNABLE until it reaches the head of the job queue. Then the job status is updated to FAILED. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
+  * Cancels a job in an Batch job queue. Jobs that are in the SUBMITTED or PENDING are canceled. A job inRUNNABLE remains in RUNNABLE until it reaches the head of the job queue. Then the job status is updated to FAILED. A PENDING job is canceled after all dependency jobs are completed. Therefore, it may take longer than expected to cancel a job in PENDING status. When you try to cancel an array parent job in PENDING, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
   */
  cancelJob(params: Batch.Types.CancelJobRequest, callback?: (err: AWSError, data: Batch.Types.CancelJobResponse) => void): Request<Batch.Types.CancelJobResponse, AWSError>;
  /**
-  * Cancels a job in an Batch job queue. Jobs that are in the SUBMITTED or PENDING are canceled. A job inRUNNABLE remains in RUNNABLE until it reaches the head of the job queue. Then the job status is updated to FAILED. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
+  * Cancels a job in an Batch job queue. Jobs that are in the SUBMITTED or PENDING are canceled. A job inRUNNABLE remains in RUNNABLE until it reaches the head of the job queue. Then the job status is updated to FAILED. A PENDING job is canceled after all dependency jobs are completed. Therefore, it may take longer than expected to cancel a job in PENDING status. When you try to cancel an array parent job in PENDING, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
   */
  cancelJob(callback?: (err: AWSError, data: Batch.Types.CancelJobResponse) => void): Request<Batch.Types.CancelJobResponse, AWSError>;
  /**
-  * Creates an Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources. In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. Either, you can choose to use EC2 On-Demand Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price. Multi-node parallel jobs aren't supported on Spot Instances. In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide. To create a compute environment that uses EKS resources, the caller must have permissions to call eks:DescribeCluster. Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it also doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps: Create a new compute environment with the new AMI. Add the compute environment to an existing job queue. Remove the earlier compute environment from your job queue. Delete the earlier compute environment. In April 2022, Batch added enhanced support for updating compute environments. For more information, see Updating compute environments. To use the enhanced updating of compute environments to update AMIs, follow these rules: Either don't set the service role (serviceRole) parameter or set it to the AWSBatchServiceRole service-linked role. Set the allocation strategy (allocationStrategy) parameter to BEST_FIT_PROGRESSIVE or SPOT_CAPACITY_OPTIMIZED. Set the update to latest image version (updateToLatestImageVersion) parameter to true. Don't specify an AMI ID in imageId, imageIdOverride (in ec2Configuration ), or in the launch template (launchTemplate). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the imageId or imageIdOverride parameters, or the launch template identified by the LaunchTemplate properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the imageId or imageIdOverride parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to $Default or $Latest, by setting either a new default version for the launch template (if $Default) or by adding a new version to the launch template (if $Latest). If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the version setting in the launch template (launchTemplate) is set to $Latest or $Default, the latest or default version of the launch template is evaluated up at the time of the infrastructure update, even if the launchTemplate wasn't updated.
+  * Creates an Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources. In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. Either, you can choose to use EC2 On-Demand Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price. Multi-node parallel jobs aren't supported on Spot Instances. In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide. To create a compute environment that uses EKS resources, the caller must have permissions to call eks:DescribeCluster. Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it also doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps: Create a new compute environment with the new AMI. Add the compute environment to an existing job queue. Remove the earlier compute environment from your job queue. Delete the earlier compute environment. In April 2022, Batch added enhanced support for updating compute environments. For more information, see Updating compute environments. To use the enhanced updating of compute environments to update AMIs, follow these rules: Either don't set the service role (serviceRole) parameter or set it to the AWSBatchServiceRole service-linked role. Set the allocation strategy (allocationStrategy) parameter to BEST_FIT_PROGRESSIVE or SPOT_CAPACITY_OPTIMIZED. Set the update to latest image version (updateToLatestImageVersion) parameter to true. The updateToLatestImageVersion parameter is used when you update a compute environment. This parameter is ignored when you create a compute environment. Don't specify an AMI ID in imageId, imageIdOverride (in ec2Configuration ), or in the launch template (launchTemplate). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the imageId or imageIdOverride parameters, or the launch template identified by the LaunchTemplate properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the imageId or imageIdOverride parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to $Default or $Latest, by setting either a new default version for the launch template (if $Default) or by adding a new version to the launch template (if $Latest). If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the version setting in the launch template (launchTemplate) is set to $Latest or $Default, the latest or default version of the launch template is evaluated up at the time of the infrastructure update, even if the launchTemplate wasn't updated.
   */
  createComputeEnvironment(params: Batch.Types.CreateComputeEnvironmentRequest, callback?: (err: AWSError, data: Batch.Types.CreateComputeEnvironmentResponse) => void): Request<Batch.Types.CreateComputeEnvironmentResponse, AWSError>;
  /**
-  * Creates an Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources. In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. Either, you can choose to use EC2 On-Demand Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price. Multi-node parallel jobs aren't supported on Spot Instances. In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide. To create a compute environment that uses EKS resources, the caller must have permissions to call eks:DescribeCluster. Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it also doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps: Create a new compute environment with the new AMI. Add the compute environment to an existing job queue. Remove the earlier compute environment from your job queue. Delete the earlier compute environment. In April 2022, Batch added enhanced support for updating compute environments. For more information, see Updating compute environments. To use the enhanced updating of compute environments to update AMIs, follow these rules: Either don't set the service role (serviceRole) parameter or set it to the AWSBatchServiceRole service-linked role. Set the allocation strategy (allocationStrategy) parameter to BEST_FIT_PROGRESSIVE or SPOT_CAPACITY_OPTIMIZED. Set the update to latest image version (updateToLatestImageVersion) parameter to true. Don't specify an AMI ID in imageId, imageIdOverride (in ec2Configuration ), or in the launch template (launchTemplate). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the imageId or imageIdOverride parameters, or the launch template identified by the LaunchTemplate properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the imageId or imageIdOverride parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to $Default or $Latest, by setting either a new default version for the launch template (if $Default) or by adding a new version to the launch template (if $Latest). If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the version setting in the launch template (launchTemplate) is set to $Latest or $Default, the latest or default version of the launch template is evaluated up at the time of the infrastructure update, even if the launchTemplate wasn't updated.
+  * Creates an Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources. In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. Either, you can choose to use EC2 On-Demand Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price. Multi-node parallel jobs aren't supported on Spot Instances. In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide. To create a compute environment that uses EKS resources, the caller must have permissions to call eks:DescribeCluster. Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it also doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps: Create a new compute environment with the new AMI. Add the compute environment to an existing job queue. Remove the earlier compute environment from your job queue. Delete the earlier compute environment. In April 2022, Batch added enhanced support for updating compute environments. For more information, see Updating compute environments. To use the enhanced updating of compute environments to update AMIs, follow these rules: Either don't set the service role (serviceRole) parameter or set it to the AWSBatchServiceRole service-linked role. Set the allocation strategy (allocationStrategy) parameter to BEST_FIT_PROGRESSIVE or SPOT_CAPACITY_OPTIMIZED. Set the update to latest image version (updateToLatestImageVersion) parameter to true. The updateToLatestImageVersion parameter is used when you update a compute environment. This parameter is ignored when you create a compute environment. Don't specify an AMI ID in imageId, imageIdOverride (in ec2Configuration ), or in the launch template (launchTemplate). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the imageId or imageIdOverride parameters, or the launch template identified by the LaunchTemplate properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the imageId or imageIdOverride parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to $Default or $Latest, by setting either a new default version for the launch template (if $Default) or by adding a new version to the launch template (if $Latest). If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the version setting in the launch template (launchTemplate) is set to $Latest or $Default, the latest or default version of the launch template is evaluated up at the time of the infrastructure update, even if the launchTemplate wasn't updated.
   */
  createComputeEnvironment(callback?: (err: AWSError, data: Batch.Types.CreateComputeEnvironmentResponse) => void): Request<Batch.Types.CreateComputeEnvironmentResponse, AWSError>;
  /**
@@ -386,15 +386,15 @@ declare namespace Batch {
   */
  allocationStrategy?: CRAllocationStrategy;
  /**
-  * The minimum number of
+  * The minimum number of vCPUs that a compute environment should maintain (even if the compute environment is DISABLED). This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
   */
  minvCpus?: Integer;
  /**
-  * The maximum number of
+  * The maximum number of vCPUs that a compute environment can support. With both BEST_FIT_PROGRESSIVE and SPOT_CAPACITY_OPTIMIZED allocation strategies using On-Demand or Spot Instances, and the BEST_FIT strategy using Spot Instances, Batch might need to exceed maxvCpus to meet your capacity requirements. In this event, Batch never exceeds maxvCpus by more than a single instance. For example, no more than a single instance from among those specified in your compute environment is allocated.
   */
  maxvCpus: Integer;
  /**
-  * The desired number of
+  * The desired number of vCPUS in the compute environment. Batch modifies this value between the minimum and maximum values based on job queue demand. This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
   */
  desiredvCpus?: Integer;
  /**
@@ -448,7 +448,7 @@ declare namespace Batch {
  }
  export interface ComputeResourceUpdate {
  /**
-  * The minimum number of
+  * The minimum number of vCPUs that an environment should maintain (even if the compute environment is DISABLED). This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
   */
  minvCpus?: Integer;
  /**
@@ -456,7 +456,7 @@ declare namespace Batch {
   */
  maxvCpus?: Integer;
  /**
-  * The desired number of
+  * The desired number of vCPUS in the compute environment. Batch modifies this value between the minimum and maximum values based on job queue demand. This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. Batch doesn't support changing the desired number of vCPUs of an existing compute environment. Don't specify this parameter for compute environments using Amazon EKS clusters. When you update the desiredvCpus setting, the value must be between the minvCpus and maxvCpus values. Additionally, the updated desiredvCpus value must be greater than or equal to the current desiredvCpus value. For more information, see Troubleshooting Batch in the Batch User Guide.
   */
  desiredvCpus?: Integer;
  /**
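The updated ComputeResourceUpdate docs above spell out how desiredvCpus and updateToLatestImageVersion behave when an existing environment is updated. A minimal sketch of such an update call with the v2 SDK; the compute environment name is a placeholder, not a value from this diff:

```ts
import { Batch } from 'aws-sdk';

const batch = new Batch({ region: 'us-east-1' });

batch.updateComputeEnvironment(
  {
    computeEnvironment: 'my-managed-ce', // hypothetical environment name
    computeResources: {
      minvCpus: 0,
      maxvCpus: 256,
      // Per the docs above: must stay within [minvCpus, maxvCpus] and must not
      // be lower than the environment's current desiredvCpus.
      desiredvCpus: 8,
      // Honored on update; the CreateComputeEnvironment docs note it is ignored on create.
      updateToLatestImageVersion: true,
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.computeEnvironmentArn);
  }
);
```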
@@ -625,6 +625,7 @@ declare namespace Batch {
   * The amount of ephemeral storage allocated for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
   */
  ephemeralStorage?: EphemeralStorage;
+ runtimePlatform?: RuntimePlatform;
  }
  export interface ContainerOverrides {
  /**
@@ -636,7 +637,7 @@ declare namespace Batch {
   */
  memory?: Integer;
  /**
-  * The command to send to the container that overrides the default command from the Docker image or the job definition.
+  * The command to send to the container that overrides the default command from the Docker image or the job definition. This parameter can't contain an empty string.
   */
  command?: StringList;
  /**
@@ -737,6 +738,7 @@ declare namespace Batch {
   * The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
   */
  ephemeralStorage?: EphemeralStorage;
+ runtimePlatform?: RuntimePlatform;
  }
  export interface ContainerSummary {
  /**
@@ -2107,6 +2109,16 @@ declare namespace Batch {
   */
  evaluateOnExit?: EvaluateOnExitList;
  }
+ export interface RuntimePlatform {
+ /**
+  * The operating system for the compute environment. Valid values are: LINUX (default), WINDOWS_SERVER_2019_CORE, WINDOWS_SERVER_2019_FULL, WINDOWS_SERVER_2022_CORE, and WINDOWS_SERVER_2022_FULL. The following parameters can’t be set for Windows containers: linuxParameters, privileged, user, ulimits, readonlyRootFilesystem, and efsVolumeConfiguration. The Batch Scheduler checks before registering a task definition with Fargate. If the job requires a Windows container and the first compute environment is LINUX, the compute environment is skipped and the next is checked until a Windows-based compute environment is found. Fargate Spot is not supported for Windows-based containers on Fargate. A job queue will be blocked if a Fargate Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both FARGATE and FARGATE_SPOT compute environments to the same job queue.
+  */
+ operatingSystemFamily?: String;
+ /**
+  * The vCPU architecture. The default value is X86_64. Valid values are X86_64 and ARM64. This parameter must be set to X86_64 for Windows containers.
+  */
+ cpuArchitecture?: String;
+ }
  export interface SchedulingPolicyDetail {
  /**
   * The name of the scheduling policy.
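The new RuntimePlatform shape above is referenced from the container properties used by RegisterJobDefinition, so a Windows-on-Fargate job definition could be registered roughly as in this sketch; the definition name, image, and role ARN are placeholders:

```ts
import { Batch } from 'aws-sdk';

const batch = new Batch({ region: 'us-east-1' });

batch.registerJobDefinition(
  {
    jobDefinitionName: 'windows-fargate-job', // hypothetical
    type: 'container',
    platformCapabilities: ['FARGATE'],
    containerProperties: {
      image: 'mcr.microsoft.com/windows/servercore:ltsc2022', // placeholder image
      executionRoleArn: 'arn:aws:iam::123456789012:role/ecsTaskExecutionRole', // placeholder
      resourceRequirements: [
        { type: 'VCPU', value: '1' },
        { type: 'MEMORY', value: '2048' },
      ],
      // New in this release: choose the OS family and CPU architecture.
      runtimePlatform: {
        operatingSystemFamily: 'WINDOWS_SERVER_2022_CORE',
        cpuArchitecture: 'X86_64', // required value for Windows containers
      },
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.jobDefinitionArn);
  }
);
```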
@@ -2167,7 +2179,7 @@ declare namespace Batch {
   */
  jobQueue: String;
  /**
-  * The share identifier for the job.
+  * The share identifier for the job. Don't specify this parameter if the job queue doesn't have a scheduling policy. If the job queue has a scheduling policy, then this parameter must be specified. This string is limited to 255 alphanumeric characters, and can be followed by an asterisk (*).
   */
  shareIdentifier?: String;
  /**
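The expanded shareIdentifier and ContainerOverrides.command notes above translate to a SubmitJob call like the following sketch; the queue, definition, and share identifier are hypothetical, and the queue is assumed to have a scheduling policy attached (which is what makes shareIdentifier required):

```ts
import { Batch } from 'aws-sdk';

const batch = new Batch({ region: 'us-east-1' });

batch.submitJob(
  {
    jobName: 'nightly-report',            // hypothetical
    jobQueue: 'fair-share-queue',         // hypothetical queue with a scheduling policy
    jobDefinition: 'windows-fargate-job', // hypothetical
    shareIdentifier: 'analytics*',        // up to 255 alphanumerics, optionally ending in *
    containerOverrides: {
      // Per the updated docs, this list must not contain an empty string.
      command: ['pwsh', '-Command', 'Get-Date'],
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.jobId);
  }
);
```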
package/clients/ecs.d.ts
CHANGED
@@ -971,6 +971,10 @@ declare namespace ECS {
   * The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
   */
  firelensConfiguration?: FirelensConfiguration;
+ /**
+  * A list of ARNs in SSM or Amazon S3 to a credential spec (credspec) file that configures a container for Active Directory authentication. This parameter is only used with domainless authentication. The format for each ARN is credentialspecdomainless:MyARN. Replace MyARN with the ARN in SSM or Amazon S3. The credspec must provide a ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even it the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
+  */
+ credentialSpecs?: StringList;
  }
  export type ContainerDefinitions = ContainerDefinition[];
  export type ContainerDependencies = ContainerDependency[];
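A sketch of how the new credentialSpecs field might be passed when registering a task definition; the family, image, and credential-spec ARN are placeholders:

```ts
import { ECS } from 'aws-sdk';

const ecs = new ECS({ region: 'us-east-1' });

ecs.registerTaskDefinition(
  {
    family: 'gmsa-web-app', // hypothetical
    requiresCompatibilities: ['EC2'],
    containerDefinitions: [
      {
        name: 'web',
        image: 'mcr.microsoft.com/windows/servercore/iis', // placeholder image
        memory: 1024,
        // New in this release: reference a domainless credential spec stored in S3 or SSM.
        credentialSpecs: [
          'credentialspecdomainless:arn:aws:s3:::my-credspec-bucket/gmsa-credspec.json', // placeholder ARN
        ],
      },
    ],
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.taskDefinition?.taskDefinitionArn);
  }
);
```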
package/clients/mediaconvert.d.ts
CHANGED
@@ -981,11 +981,11 @@ declare namespace MediaConvert {
  export type CaptionDestinationType = "BURN_IN"|"DVB_SUB"|"EMBEDDED"|"EMBEDDED_PLUS_SCTE20"|"IMSC"|"SCTE20_PLUS_EMBEDDED"|"SCC"|"SRT"|"SMI"|"TELETEXT"|"TTML"|"WEBVTT"|string;
  export interface CaptionSelector {
  /**
-  * The specific language to extract from source, using the ISO 639-2 or ISO 639-3 three-letter language code. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in
+  * The specific language to extract from source, using the ISO 639-2 or ISO 639-3 three-letter language code. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.
   */
  CustomLanguageCode?: __stringMin3Max3PatternAZaZ3;
  /**
-  * The specific language to extract from source. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in
+  * The specific language to extract from source. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.
   */
  LanguageCode?: LanguageCode;
  /**
@@ -1994,7 +1994,7 @@ Within your job settings, all of your DVB-Sub settings must be identical.
  }
  export interface DvbSubSourceSettings {
  /**
-  * When using DVB-Sub with Burn-
+  * When using DVB-Sub with Burn-in, use this PID for the source content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, regardless of selectors.
   */
  Pid?: __integerMin1Max2147483647;
  }
@@ -3051,7 +3051,7 @@ Within your job settings, all of your DVB-Sub settings must be identical.
   */
  SegmentLengthControl?: HlsSegmentLengthControl;
  /**
-  * Specify the number of segments to write to a subdirectory before starting a new one. You
+  * Specify the number of segments to write to a subdirectory before starting a new one. You must also set Directory structure to Subdirectory per stream for this setting to have an effect.
   */
  SegmentsPerSubdirectory?: __integerMin1Max2147483647;
  /**
@@ -4639,7 +4639,7 @@ When you specify Version 1, you must also set ID3 metadata (timedMetadata) to Pa
   */
  AfdSignaling?: MxfAfdSignaling;
  /**
-  * Specify the MXF profile, also called shim, for this output.
+  * Specify the MXF profile, also called shim, for this output. To automatically select a profile according to your output video codec and resolution, leave blank. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.
   */
  Profile?: MxfProfile;
  /**
@@ -5596,7 +5596,7 @@ When you specify Version 1, you must also set ID3 metadata (timedMetadata) to Pa
   */
  AvcIntraSettings?: AvcIntraSettings;
  /**
-  * Specifies the video codec. This must be equal to one of the enum values defined by the object VideoCodec. To passthrough the video stream of your input JPEG2000, VC-3, AVC-INTRA or Apple ProRes
+  * Specifies the video codec. This must be equal to one of the enum values defined by the object VideoCodec. To passthrough the video stream of your input JPEG2000, VC-3, AVC-INTRA or Apple ProRes video without any video encoding: Choose Passthrough. If you have multiple input videos, note that they must have identical encoding attributes. When you choose Passthrough, your output container must be MXF or QuickTime MOV.
   */
  Codec?: VideoCodec;
  /**
@@ -5722,7 +5722,7 @@ When you specify Version 1, you must also set ID3 metadata (timedMetadata) to Pa
   */
  DolbyVision?: DolbyVision;
  /**
-  * Enable HDR10+
+  * Enable HDR10+ analysis and metadata injection. Compatible with HEVC only.
   */
  Hdr10Plus?: Hdr10Plus;
  /**
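For the caption docs completed above, a caption selector for a DVB-Sub burn-in workflow might be shaped like this sketch; the PID, language code, and input URI are placeholders, and the type names are the ones shown in the diff:

```ts
import { MediaConvert } from 'aws-sdk';

// Selector shape follows the CaptionSelector/DvbSubSourceSettings fields documented above.
const captionSelector: MediaConvert.CaptionSelector = {
  CustomLanguageCode: 'fra', // ISO 639-2/639-3 code of the language to extract
  SourceSettings: {
    SourceType: 'DVB_SUB',
    DvbSubSourceSettings: {
      Pid: 1024, // needed for burn-in; unused for DVB-Sub passthrough
    },
  },
};

// The selector would then be attached to an input inside a CreateJob request.
const input: MediaConvert.Input = {
  FileInput: 's3://my-input-bucket/source.ts', // placeholder
  CaptionSelectors: { 'Captions Selector 1': captionSelector },
};
```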
package/clients/sagemaker.d.ts
CHANGED
@@ -3353,7 +3353,7 @@ declare namespace SageMaker {
   */
  TabularJobConfig?: TabularJobConfig;
  /**
-  * Settings used to configure an AutoML job V2 for a time-series forecasting problem type.
+  * Settings used to configure an AutoML job V2 for a time-series forecasting problem type. The TimeSeriesForecastingJobConfig problem type is only available in private beta. Contact Amazon Web Services Support or your account manager to learn more about access privileges.
   */
  TimeSeriesForecastingJobConfig?: TimeSeriesForecastingJobConfig;
  }
@@ -6860,11 +6860,15 @@ declare namespace SageMaker {
  /**
   * Update policy for a blue/green deployment. If this update policy is specified, SageMaker creates a new fleet during the deployment while maintaining the old fleet. SageMaker flips traffic to the new fleet according to the specified traffic routing configuration. Only one update policy should be used in the deployment configuration. If no update policy is specified, SageMaker uses a blue/green deployment strategy with all at once traffic shifting by default.
   */
- BlueGreenUpdatePolicy
+ BlueGreenUpdatePolicy?: BlueGreenUpdatePolicy;
  /**
   * Automatic rollback configuration for handling endpoint deployment failures and recovery.
   */
  AutoRollbackConfiguration?: AutoRollbackConfig;
+ /**
+  * Specifies a rolling deployment strategy for updating a SageMaker endpoint.
+  */
+ RollingUpdatePolicy?: RollingUpdatePolicy;
  }
  export interface DeploymentRecommendation {
  /**
@@ -10878,7 +10882,7 @@ declare namespace SageMaker {
  /**
   * The instance types to use for the load test.
   */
- InstanceType
+ InstanceType?: ProductionVariantInstanceType;
  /**
   * The inference specification name in the model package version.
   */
@@ -10887,6 +10891,7 @@ declare namespace SageMaker {
   * The parameter you want to benchmark against.
   */
  EnvironmentParameterRanges?: EnvironmentParameterRanges;
+ ServerlessConfig?: ProductionVariantServerlessConfig;
  }
  export type EndpointInputConfigurations = EndpointInputConfiguration[];
  export interface EndpointMetadata {
@@ -10921,11 +10926,12 @@ declare namespace SageMaker {
  /**
   * The instance type recommended by Amazon SageMaker Inference Recommender.
   */
- InstanceType
+ InstanceType?: ProductionVariantInstanceType;
  /**
   * The number of instances recommended to launch initially.
   */
- InitialInstanceCount
+ InitialInstanceCount?: InitialInstanceCount;
+ ServerlessConfig?: ProductionVariantServerlessConfig;
  }
  export interface EndpointPerformance {
  /**
@@ -10936,7 +10942,7 @@ declare namespace SageMaker {
  }
  export type EndpointPerformances = EndpointPerformance[];
  export type EndpointSortKey = "Name"|"CreationTime"|"Status"|string;
- export type EndpointStatus = "OutOfService"|"Creating"|"Updating"|"SystemUpdating"|"RollingBack"|"InService"|"Deleting"|"Failed"|string;
+ export type EndpointStatus = "OutOfService"|"Creating"|"Updating"|"SystemUpdating"|"RollingBack"|"InService"|"Deleting"|"Failed"|"UpdateRollbackFailed"|string;
  export interface EndpointSummary {
  /**
   * The name of the endpoint.
@@ -12556,6 +12562,7 @@ declare namespace SageMaker {
  SupportedResponseMIMETypes: ResponseMIMETypes;
  }
  export type InferenceSpecificationName = string;
+ export type InitialInstanceCount = number;
  export type InitialNumberOfUsers = number;
  export type InitialTaskCount = number;
  export interface InputConfig {
@@ -17099,6 +17106,7 @@ declare namespace SageMaker {
   */
  CrossAccountModelRegisterRoleArn?: RoleArn;
  }
+ export type ModelSetupTime = number;
  export type ModelSortKey = "Name"|"CreationTime"|string;
  export interface ModelStepMetadata {
  /**
@@ -18720,7 +18728,7 @@ declare namespace SageMaker {
   */
  MaxConcurrency: ServerlessMaxConcurrency;
  /**
-  * The amount of provisioned concurrency to allocate for the serverless endpoint. Should be less than or equal to MaxConcurrency.
+  * The amount of provisioned concurrency to allocate for the serverless endpoint. Should be less than or equal to MaxConcurrency. This field is not supported for serverless endpoint recommendations for Inference Recommender jobs. For more information about creating an Inference Recommender job, see CreateInferenceRecommendationsJobs.
   */
  ProvisionedConcurrency?: ServerlessProvisionedConcurrency;
  }
@@ -19247,6 +19255,10 @@ declare namespace SageMaker {
   * Specifies the name and shape of the expected data inputs for your trained model with a JSON dictionary form. This field is used for optimizing your model using SageMaker Neo. For more information, see DataInputConfig.
   */
  DataInputConfig?: RecommendationJobDataInputConfig;
+ /**
+  * The endpoint type to receive recommendations for. By default this is null, and the results of the inference recommendation job return a combined list of both real-time and serverless benchmarks. By specifying a value for this field, you can receive a longer list of benchmarks for the desired endpoint type.
+  */
+ SupportedEndpointType?: RecommendationJobSupportedEndpointType;
  }
  export type RecommendationJobDataInputConfig = string;
  export type RecommendationJobDescription = string;
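The new SupportedEndpointType field slots into the ContainerConfig of an Inference Recommender job request; a sketch with placeholder names and ARNs:

```ts
import { SageMaker } from 'aws-sdk';

const sagemaker = new SageMaker({ region: 'us-east-1' });

sagemaker.createInferenceRecommendationsJob(
  {
    JobName: 'serverless-recs-demo', // hypothetical
    JobType: 'Default',
    RoleArn: 'arn:aws:iam::123456789012:role/SageMakerExecutionRole', // placeholder
    InputConfig: {
      ModelPackageVersionArn:
        'arn:aws:sagemaker:us-east-1:123456789012:model-package/my-pkg/1', // placeholder
      ContainerConfig: {
        // New in this release: restrict the returned benchmarks to serverless endpoints.
        SupportedEndpointType: 'Serverless',
      },
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.JobArn);
  }
);
```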
@@ -19353,6 +19365,7 @@ declare namespace SageMaker {
  ModelLatencyThresholds?: ModelLatencyThresholds;
  }
  export type RecommendationJobSupportedContentTypes = String[];
+ export type RecommendationJobSupportedEndpointType = "RealTime"|"Serverless"|string;
  export type RecommendationJobSupportedInstanceTypes = String[];
  export type RecommendationJobType = "Default"|"Advanced"|string;
  export interface RecommendationJobVpcConfig {
@@ -19394,6 +19407,10 @@ declare namespace SageMaker {
   * The expected memory utilization at maximum invocations per minute for the instance. NaN indicates that the value is not available.
   */
  MemoryUtilization?: UtilizationMetric;
+ /**
+  * The time it takes to launch new compute resources for a serverless endpoint. The time can vary depending on the model size, how long it takes to download the model, and the start-up time of the container. NaN indicates that the value is not available.
+  */
+ ModelSetupTime?: ModelSetupTime;
  }
  export type RecommendationStatus = "IN_PROGRESS"|"COMPLETED"|"FAILED"|"NOT_APPLICABLE"|string;
  export type RecommendationStepType = "BENCHMARK"|string;
@@ -19613,6 +19630,18 @@ declare namespace SageMaker {
  MaximumRetryAttempts: MaximumRetryAttempts;
  }
  export type RoleArn = string;
+ export interface RollingUpdatePolicy {
+ MaximumBatchSize: CapacitySize;
+ /**
+  * The length of the baking period, during which SageMaker monitors alarms for each batch on the new fleet.
+  */
+ WaitIntervalInSeconds: WaitIntervalInSeconds;
+ /**
+  * The time limit for the total deployment. Exceeding this limit causes a timeout.
+  */
+ MaximumExecutionTimeoutInSeconds?: MaximumExecutionTimeoutInSeconds;
+ RollbackMaximumBatchSize?: CapacitySize;
+ }
  export type RootAccess = "Enabled"|"Disabled"|string;
  export type RuleConfigurationName = string;
  export type RuleEvaluationStatus = "InProgress"|"NoIssuesFound"|"IssuesFound"|"Error"|"Stopping"|"Stopped"|string;
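Putting the new RollingUpdatePolicy together with the existing CapacitySize shape, an endpoint update might look like this sketch; the endpoint, config, and alarm names are placeholders:

```ts
import { SageMaker } from 'aws-sdk';

const sagemaker = new SageMaker({ region: 'us-east-1' });

sagemaker.updateEndpoint(
  {
    EndpointName: 'my-endpoint',                 // hypothetical
    EndpointConfigName: 'my-endpoint-config-v2', // hypothetical
    DeploymentConfig: {
      RollingUpdatePolicy: {
        MaximumBatchSize: { Type: 'CAPACITY_PERCENT', Value: 20 },
        WaitIntervalInSeconds: 300,             // baking period per batch
        MaximumExecutionTimeoutInSeconds: 3600, // overall deployment time limit
        RollbackMaximumBatchSize: { Type: 'INSTANCE_COUNT', Value: 1 },
      },
      AutoRollbackConfiguration: {
        Alarms: [{ AlarmName: 'endpoint-5xx-alarm' }], // placeholder CloudWatch alarm
      },
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.EndpointArn);
  }
);
```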
package/clients/transfer.d.ts
CHANGED
@@ -532,7 +532,12 @@ declare namespace Transfer {
   * Used for outbound requests (from an Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values: SYNC: The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not). NONE: Specifies that no MDN response is required.
   */
  MdnResponse?: MdnResponse;
+ /**
+  * Provides Basic authentication support to the AS2 Connectors API. To use Basic authentication, you must provide the name or Amazon Resource Name (ARN) of a secret in Secrets Manager. The default value for this parameter is null, which indicates that Basic authentication is not enabled for the connector. If the connector should use Basic authentication, the secret needs to be in the following format: { "Username": "user-name", "Password": "user-password" } Replace user-name and user-password with the credentials for the actual user that is being authenticated. Note the following: You are storing these credentials in Secrets Manager, not passing them directly into this API. If you are using the API, SDKs, or CloudFormation to configure your connector, then you must create the secret before you can enable Basic authentication. However, if you are using the Amazon Web Services management console, you can have the system create the secret for you. If you have previously enabled Basic authentication for a connector, you can disable it by using the UpdateConnector API call. For example, if you are using the CLI, you can run the following command to remove Basic authentication: update-connector --connector-id my-connector-id --as2-config 'BasicAuthSecretId=""'
+  */
+ BasicAuthSecretId?: As2ConnectorSecretId;
  }
+ export type As2ConnectorSecretId = string;
  export type As2Id = string;
  export type As2Transport = "HTTP"|string;
  export type As2Transports = As2Transport[];
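A sketch of enabling Basic authentication on a new AS2 connector via the new BasicAuthSecretId field; every identifier below is a placeholder. As the docstring above notes, passing BasicAuthSecretId: '' on UpdateConnector would disable it again:

```ts
import { Transfer } from 'aws-sdk';

const transfer = new Transfer({ region: 'us-east-1' });

transfer.createConnector(
  {
    Url: 'https://partner.example.com/as2', // placeholder partner endpoint
    AccessRole: 'arn:aws:iam::123456789012:role/TransferAs2AccessRole', // placeholder
    As2Config: {
      LocalProfileId: 'p-1111111111111111a',   // placeholder
      PartnerProfileId: 'p-2222222222222222b', // placeholder
      MdnResponse: 'SYNC',
      // New in this release: point at a Secrets Manager secret holding
      // { "Username": ..., "Password": ... } as described above.
      BasicAuthSecretId:
        'arn:aws:secretsmanager:us-east-1:123456789012:secret:as2-basic-auth', // placeholder
    },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.ConnectorId);
  }
);
```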
@@ -630,7 +635,7 @@ declare namespace Transfer {
   */
  BaseDirectory: HomeDirectory;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole: Role;
  /**
@@ -658,7 +663,7 @@ declare namespace Transfer {
   */
  As2Config: As2ConnectorConfig;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole: Role;
  /**
@@ -1211,7 +1216,7 @@ declare namespace Transfer {
   */
  BaseDirectory?: HomeDirectory;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole?: Role;
  /**
@@ -1295,7 +1300,7 @@ declare namespace Transfer {
   */
  As2Config?: As2ConnectorConfig;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole?: Role;
  /**
@@ -2706,7 +2711,7 @@ declare namespace Transfer {
   */
  BaseDirectory?: HomeDirectory;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole?: Role;
  }
@@ -2754,7 +2759,7 @@ declare namespace Transfer {
   */
  As2Config?: As2ConnectorConfig;
  /**
-  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer.
+  * With AS2, you can send files by calling StartFileTransfer and specifying the file paths in the request parameter, SendFilePaths. We use the file’s parent directory (for example, for --send-file-paths /bucket/dir/file.txt, parent directory is /bucket/dir/) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the AccessRole needs to provide read and write access to the parent directory of the file location used in the StartFileTransfer request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with StartFileTransfer. If you are using Basic authentication for your AS2 connector, the access role requires the secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the kms:Decrypt permission for that key.
   */
  AccessRole?: Role;
  /**