aws-sdk 2.1611.0 → 2.1613.0

@@ -35,6 +35,14 @@ declare class Personalize extends Service {
   * You incur campaign costs while it is active. To avoid unnecessary costs, make sure to delete the campaign when you are finished. For information about campaign costs, see Amazon Personalize pricing. Creates a campaign that deploys a solution version. When a client calls the GetRecommendations and GetPersonalizedRanking APIs, a campaign is specified in the request. Minimum Provisioned TPS and Auto-Scaling A high minProvisionedTPS will increase your cost. We recommend starting with 1 for minProvisionedTPS (the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS as necessary. When you create an Amazon Personalize campaign, you can specify the minimum provisioned transactions per second (minProvisionedTPS) for the campaign. This is the baseline transaction throughput for the campaign provisioned by Amazon Personalize. It sets the minimum billing charge for the campaign while it is active. A transaction is a single GetRecommendations or GetPersonalizedRanking request. The default minProvisionedTPS is 1. If your TPS increases beyond the minProvisionedTPS, Amazon Personalize auto-scales the provisioned capacity up and down, but never below minProvisionedTPS. There's a short time delay while the capacity is increased that might cause loss of transactions. When your traffic reduces, capacity returns to the minProvisionedTPS. You are charged for the minimum provisioned TPS or, if your requests exceed the minProvisionedTPS, the actual TPS. The actual TPS is the total number of recommendation requests you make. We recommend starting with a low minProvisionedTPS, track your usage using Amazon CloudWatch metrics, and then increase the minProvisionedTPS as necessary. For more information about campaign costs, see Amazon Personalize pricing. Status A campaign can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED DELETE PENDING > DELETE IN_PROGRESS To get the campaign status, call DescribeCampaign. Wait until the status of the campaign is ACTIVE before asking the campaign for recommendations. Related APIs ListCampaigns DescribeCampaign UpdateCampaign DeleteCampaign
   */
  createCampaign(callback?: (err: AWSError, data: Personalize.Types.CreateCampaignResponse) => void): Request<Personalize.Types.CreateCampaignResponse, AWSError>;
+ /**
+ * Creates a batch job that deletes all references to specific users from an Amazon Personalize dataset group in batches. You specify the users to delete in a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon Personalize no longer trains on the users’ data and no longer considers the users when generating user segments. For more information about creating a data deletion job, see Deleting users. Your input file must be a CSV file with a single USER_ID column that lists the users’ IDs. For more information about preparing the CSV file, see Preparing your data deletion file and uploading it to Amazon S3. To give Amazon Personalize permission to access your input CSV file of userIds, you must specify an IAM service role that has permission to read from the data source. This role needs GetObject and ListBucket permissions for the bucket and its content. These permissions are the same as importing data. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources. After you create a job, it can take up to a day to delete all references to the users from datasets and models. Until the job completes, Amazon Personalize continues to use the data when training. And if you use a User Segmentation recipe, the users might appear in user segments. Status A data deletion job can have one of the following statuses: PENDING > IN_PROGRESS > COMPLETED -or- FAILED To get the status of the data deletion job, call the DescribeDataDeletionJob API operation and specify the Amazon Resource Name (ARN) of the job. If the status is FAILED, the response includes a failureReason key, which describes why the job failed. Related APIs ListDataDeletionJobs DescribeDataDeletionJob
+ */
+ createDataDeletionJob(params: Personalize.Types.CreateDataDeletionJobRequest, callback?: (err: AWSError, data: Personalize.Types.CreateDataDeletionJobResponse) => void): Request<Personalize.Types.CreateDataDeletionJobResponse, AWSError>;
+ /**
+ * Creates a batch job that deletes all references to specific users from an Amazon Personalize dataset group in batches. You specify the users to delete in a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon Personalize no longer trains on the users’ data and no longer considers the users when generating user segments. For more information about creating a data deletion job, see Deleting users. Your input file must be a CSV file with a single USER_ID column that lists the users’ IDs. For more information about preparing the CSV file, see Preparing your data deletion file and uploading it to Amazon S3. To give Amazon Personalize permission to access your input CSV file of userIds, you must specify an IAM service role that has permission to read from the data source. This role needs GetObject and ListBucket permissions for the bucket and its content. These permissions are the same as importing data. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources. After you create a job, it can take up to a day to delete all references to the users from datasets and models. Until the job completes, Amazon Personalize continues to use the data when training. And if you use a User Segmentation recipe, the users might appear in user segments. Status A data deletion job can have one of the following statuses: PENDING > IN_PROGRESS > COMPLETED -or- FAILED To get the status of the data deletion job, call the DescribeDataDeletionJob API operation and specify the Amazon Resource Name (ARN) of the job. If the status is FAILED, the response includes a failureReason key, which describes why the job failed. Related APIs ListDataDeletionJobs DescribeDataDeletionJob
+ */
+ createDataDeletionJob(callback?: (err: AWSError, data: Personalize.Types.CreateDataDeletionJobResponse) => void): Request<Personalize.Types.CreateDataDeletionJobResponse, AWSError>;
  /**
  * Creates an empty dataset and adds it to the specified dataset group. Use CreateDatasetImportJob to import your training data to a dataset. There are 5 types of datasets: Item interactions Items Users Action interactions Actions Each dataset type has an associated schema with required field types. Only the Item interactions dataset is required in order to train a model (also referred to as creating a solution). A dataset can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED DELETE PENDING > DELETE IN_PROGRESS To get the status of the dataset, call DescribeDataset. Related APIs CreateDatasetGroup ListDatasets DescribeDataset DeleteDataset
  */
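
The request shape for these new overloads is defined later in this diff: jobName, datasetGroupArn, dataSource, and roleArn are required. A minimal sketch of starting a job with the v2 client; the region, ARNs, and bucket path are placeholder assumptions:

    import * as AWS from 'aws-sdk';

    const personalize = new AWS.Personalize({ region: 'us-west-2' }); // placeholder region

    // Placeholder ARNs and S3 path; substitute your own resources.
    personalize.createDataDeletionJob({
      jobName: 'delete-opted-out-users',
      datasetGroupArn: 'arn:aws:personalize:us-west-2:123456789012:dataset-group/my-group',
      dataSource: { dataLocation: 's3://my-bucket/deletions/userIds.csv' },
      roleArn: 'arn:aws:iam::123456789012:role/PersonalizeS3Access',
    }).promise()
      .then(({ dataDeletionJobArn }) => console.log('started:', dataDeletionJobArn))
      .catch((err) => console.error(err));
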
@@ -227,6 +235,14 @@ declare class Personalize extends Service {
   * Describes the given campaign, including its status. A campaign can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED DELETE PENDING > DELETE IN_PROGRESS When the status is CREATE FAILED, the response includes the failureReason key, which describes why. For more information on campaigns, see CreateCampaign.
   */
  describeCampaign(callback?: (err: AWSError, data: Personalize.Types.DescribeCampaignResponse) => void): Request<Personalize.Types.DescribeCampaignResponse, AWSError>;
+ /**
+ * Describes the data deletion job created by CreateDataDeletionJob, including the job status.
+ */
+ describeDataDeletionJob(params: Personalize.Types.DescribeDataDeletionJobRequest, callback?: (err: AWSError, data: Personalize.Types.DescribeDataDeletionJobResponse) => void): Request<Personalize.Types.DescribeDataDeletionJobResponse, AWSError>;
+ /**
+ * Describes the data deletion job created by CreateDataDeletionJob, including the job status.
+ */
+ describeDataDeletionJob(callback?: (err: AWSError, data: Personalize.Types.DescribeDataDeletionJobResponse) => void): Request<Personalize.Types.DescribeDataDeletionJobResponse, AWSError>;
  /**
  * Describes the given dataset. For more information on datasets, see CreateDataset.
  */
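
A data deletion job runs asynchronously and can take up to a day, so callers typically poll this operation. A minimal polling sketch; the interval and error handling are assumptions:

    import * as AWS from 'aws-sdk';

    const personalize = new AWS.Personalize();

    // Resolves when the job reaches COMPLETED; rejects on FAILED.
    async function waitForDeletionJob(dataDeletionJobArn: string): Promise<void> {
      for (;;) {
        const { dataDeletionJob } = await personalize
          .describeDataDeletionJob({ dataDeletionJobArn })
          .promise();
        if (dataDeletionJob?.status === 'COMPLETED') return;
        if (dataDeletionJob?.status === 'FAILED') {
          throw new Error(dataDeletionJob.failureReason ?? 'data deletion job failed');
        }
        await new Promise((r) => setTimeout(r, 60_000)); // 1-minute poll, arbitrary choice
      }
    }
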
@@ -363,6 +379,14 @@ declare class Personalize extends Service {
   * Returns a list of campaigns that use the given solution. When a solution is not specified, all the campaigns associated with the account are listed. The response provides the properties for each campaign, including the Amazon Resource Name (ARN). For more information on campaigns, see CreateCampaign.
   */
  listCampaigns(callback?: (err: AWSError, data: Personalize.Types.ListCampaignsResponse) => void): Request<Personalize.Types.ListCampaignsResponse, AWSError>;
+ /**
+ * Returns a list of data deletion jobs for a dataset group ordered by creation time, with the most recent first. When a dataset group is not specified, all the data deletion jobs associated with the account are listed. The response provides the properties for each job, including the Amazon Resource Name (ARN). For more information on data deletion jobs, see Deleting users.
+ */
+ listDataDeletionJobs(params: Personalize.Types.ListDataDeletionJobsRequest, callback?: (err: AWSError, data: Personalize.Types.ListDataDeletionJobsResponse) => void): Request<Personalize.Types.ListDataDeletionJobsResponse, AWSError>;
+ /**
+ * Returns a list of data deletion jobs for a dataset group ordered by creation time, with the most recent first. When a dataset group is not specified, all the data deletion jobs associated with the account are listed. The response provides the properties for each job, including the Amazon Resource Name (ARN). For more information on data deletion jobs, see Deleting users.
+ */
+ listDataDeletionJobs(callback?: (err: AWSError, data: Personalize.Types.ListDataDeletionJobsResponse) => void): Request<Personalize.Types.ListDataDeletionJobsResponse, AWSError>;
  /**
  * Returns a list of dataset export jobs that use the given dataset. When a dataset is not specified, all the dataset export jobs associated with the account are listed. The response provides the properties for each dataset export job, including the Amazon Resource Name (ARN). For more information on dataset export jobs, see CreateDatasetExportJob. For more information on datasets, see CreateDataset.
  */
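
ListDataDeletionJobs pages like the other list operations here. A sketch that drains every page through nextToken; the maxResults value is arbitrary and the dataset group ARN is a placeholder:

    import * as AWS from 'aws-sdk';

    const personalize = new AWS.Personalize();

    // Collects all deletion-job summaries for one dataset group.
    async function listAllDeletionJobs(datasetGroupArn: string) {
      const jobs: AWS.Personalize.DataDeletionJobSummary[] = [];
      let nextToken: string | undefined;
      do {
        const page = await personalize
          .listDataDeletionJobs({ datasetGroupArn, maxResults: 50, nextToken })
          .promise();
        jobs.push(...(page.dataDeletionJobs ?? []));
        nextToken = page.nextToken;
      } while (nextToken);
      return jobs;
    }
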
@@ -1090,6 +1114,34 @@ declare namespace Personalize {
   */
  campaignArn?: Arn;
  }
+ export interface CreateDataDeletionJobRequest {
+ /**
+ * The name for the data deletion job.
+ */
+ jobName: Name;
+ /**
+ * The Amazon Resource Name (ARN) of the dataset group that has the datasets you want to delete records from.
+ */
+ datasetGroupArn: Arn;
+ /**
+ * The Amazon S3 bucket that contains the list of userIds of the users to delete.
+ */
+ dataSource: DataSource;
+ /**
+ * The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.
+ */
+ roleArn: RoleArn;
+ /**
+ * A list of tags to apply to the data deletion job.
+ */
+ tags?: Tags;
+ }
+ export interface CreateDataDeletionJobResponse {
+ /**
+ * The Amazon Resource Name (ARN) of the data deletion job.
+ */
+ dataDeletionJobArn?: Arn;
+ }
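
As a typed literal, a request using the optional tags field might look like the following; the values are placeholders and the pair shape assumes Personalize's standard tagKey/tagValue Tag type:

    import * as AWS from 'aws-sdk';

    const req: AWS.Personalize.CreateDataDeletionJobRequest = {
      jobName: 'gdpr-batch',
      datasetGroupArn: 'arn:aws:personalize:us-east-1:123456789012:dataset-group/my-group',
      dataSource: { dataLocation: 's3://my-bucket/deletions/' },
      roleArn: 'arn:aws:iam::123456789012:role/PersonalizeS3Access',
      tags: [{ tagKey: 'team', tagValue: 'recommendations' }], // illustrative tag
    };
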
  export interface CreateDatasetExportJobRequest {
  /**
  * The name for the dataset export job.
@@ -1406,9 +1458,79 @@ declare namespace Personalize {
   */
  solutionVersionArn?: Arn;
  }
+ export interface DataDeletionJob {
+ /**
+ * The name of the data deletion job.
+ */
+ jobName?: Name;
+ /**
+ * The Amazon Resource Name (ARN) of the data deletion job.
+ */
+ dataDeletionJobArn?: Arn;
+ /**
+ * The Amazon Resource Name (ARN) of the dataset group the job deletes records from.
+ */
+ datasetGroupArn?: Arn;
+ dataSource?: DataSource;
+ /**
+ * The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.
+ */
+ roleArn?: RoleArn;
+ /**
+ * The status of the data deletion job. A data deletion job can have one of the following statuses: PENDING > IN_PROGRESS > COMPLETED -or- FAILED
+ */
+ status?: Status;
+ /**
+ * The number of records deleted by a COMPLETED job.
+ */
+ numDeleted?: Integer;
+ /**
+ * The creation date and time (in Unix time) of the data deletion job.
+ */
+ creationDateTime?: _Date;
+ /**
+ * The date and time (in Unix time) the data deletion job was last updated.
+ */
+ lastUpdatedDateTime?: _Date;
+ /**
+ * If a data deletion job fails, provides the reason why.
+ */
+ failureReason?: FailureReason;
+ }
+ export interface DataDeletionJobSummary {
+ /**
+ * The Amazon Resource Name (ARN) of the data deletion job.
+ */
+ dataDeletionJobArn?: Arn;
+ /**
+ * The Amazon Resource Name (ARN) of the dataset group the job deleted records from.
+ */
+ datasetGroupArn?: Arn;
+ /**
+ * The name of the data deletion job.
+ */
+ jobName?: Name;
+ /**
+ * The status of the data deletion job. A data deletion job can have one of the following statuses: PENDING > IN_PROGRESS > COMPLETED -or- FAILED
+ */
+ status?: Status;
+ /**
+ * The creation date and time (in Unix time) of the data deletion job.
+ */
+ creationDateTime?: _Date;
+ /**
+ * The date and time (in Unix time) the data deletion job was last updated.
+ */
+ lastUpdatedDateTime?: _Date;
+ /**
+ * If a data deletion job fails, provides the reason why.
+ */
+ failureReason?: FailureReason;
+ }
+ export type DataDeletionJobs = DataDeletionJobSummary[];
  export interface DataSource {
  /**
- * The path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For example: s3://bucket-name/folder-name/
+ * For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: s3://bucket-name/folder-name/fileName.csv If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/
   */
  dataLocation?: S3Location;
  }
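
The updated dataLocation documentation distinguishes a single CSV file from a folder path with a trailing slash, which a data deletion job reads in full. Both forms as literals; the bucket and key names are placeholders:

    import * as AWS from 'aws-sdk';

    // One specific file:
    const singleFile: AWS.Personalize.DataSource = {
      dataLocation: 's3://my-bucket/deletions/userIds.csv',
    };
    // Every file in the folder (and, for deletion jobs, any subfolder):
    const wholeFolder: AWS.Personalize.DataSource = {
      dataLocation: 's3://my-bucket/deletions/',
    };
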
@@ -1940,6 +2062,18 @@ declare namespace Personalize {
   */
  campaign?: Campaign;
  }
+ export interface DescribeDataDeletionJobRequest {
+ /**
+ * The Amazon Resource Name (ARN) of the data deletion job.
+ */
+ dataDeletionJobArn: Arn;
+ }
+ export interface DescribeDataDeletionJobResponse {
+ /**
+ * Information about the data deletion job, including the status. The status is one of the following values: PENDING IN_PROGRESS COMPLETED FAILED
+ */
+ dataDeletionJob?: DataDeletionJob;
+ }
  export interface DescribeDatasetExportJobRequest {
  /**
  * The Amazon Resource Name (ARN) of the dataset export job to describe.
@@ -2333,6 +2467,7 @@ declare namespace Personalize {
  export type HyperParameters = {[key: string]: ParameterValue};
  export type ImportMode = "FULL"|"INCREMENTAL"|string;
  export type IngestionMode = "BULK"|"PUT"|"ALL"|string;
+ export type Integer = number;
  export interface IntegerHyperParameterRange {
  /**
  * The name of the hyperparameter.
@@ -2424,6 +2559,30 @@ declare namespace Personalize {
   */
  nextToken?: NextToken;
  }
+ export interface ListDataDeletionJobsRequest {
+ /**
+ * The Amazon Resource Name (ARN) of the dataset group to list data deletion jobs for.
+ */
+ datasetGroupArn?: Arn;
+ /**
+ * A token returned from the previous call to ListDataDeletionJobs for getting the next set of jobs (if they exist).
+ */
+ nextToken?: NextToken;
+ /**
+ * The maximum number of data deletion jobs to return.
+ */
+ maxResults?: MaxResults;
+ }
+ export interface ListDataDeletionJobsResponse {
+ /**
+ * The list of data deletion jobs.
+ */
+ dataDeletionJobs?: DataDeletionJobs;
+ /**
+ * A token for getting the next set of data deletion jobs (if they exist).
+ */
+ nextToken?: NextToken;
+ }
  export interface ListDatasetExportJobsRequest {
  /**
  * The Amazon Resource Name (ARN) of the dataset to list the dataset export jobs for.
@@ -477,7 +477,7 @@ declare namespace RedshiftServerless {
  export type Boolean = boolean;
  export interface ConfigParameter {
  /**
- * The key of the parameter. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
+ * The key of the parameter. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, use_fips_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
   */
  parameterKey?: ParameterKey;
  /**
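
With use_fips_ssl now listed, a workgroup's SSL posture can be set through configParameters. A sketch using updateWorkgroup; the workgroup name is a placeholder:

    import * as AWS from 'aws-sdk';

    const serverless = new AWS.RedshiftServerless();

    serverless.updateWorkgroup({
      workgroupName: 'analytics-wg', // placeholder
      configParameters: [
        { parameterKey: 'require_ssl', parameterValue: 'true' },
        { parameterKey: 'use_fips_ssl', parameterValue: 'true' },
      ],
    }).promise().then((res) => console.log(res.workgroup?.status));
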
@@ -767,7 +767,7 @@ declare namespace RedshiftServerless {
   */
  baseCapacity?: Integer;
  /**
- * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
+ * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, use_fips_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
   */
  configParameters?: ConfigParameterList;
  /**
@@ -1313,7 +1313,7 @@ declare namespace RedshiftServerless {
   */
  nextToken?: PaginationToken;
  /**
- * All of the returned scheduled action objects.
+ * All of the returned scheduled action association objects.
   */
  scheduledActions?: ScheduledActionsList;
  }
@@ -1784,6 +1784,16 @@ declare namespace RedshiftServerless {
   */
  cron?: String;
  }
+ export interface ScheduledActionAssociation {
+ /**
+ * The name of the associated Amazon Redshift Serverless namespace.
+ */
+ namespaceName?: NamespaceName;
+ /**
+ * The name of the associated scheduled action.
+ */
+ scheduledActionName?: ScheduledActionName;
+ }
  export type ScheduledActionName = string;
  export interface ScheduledActionResponse {
  /**
@@ -1828,7 +1838,7 @@ declare namespace RedshiftServerless {
  state?: State;
  targetAction?: TargetAction;
  }
- export type ScheduledActionsList = ScheduledActionName[];
+ export type ScheduledActionsList = ScheduledActionAssociation[];
  export type SecurityGroupId = string;
  export type SecurityGroupIdList = SecurityGroupId[];
  export interface Snapshot {
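
Note that this changes the element type of listScheduledActions results from plain names to association objects, a compile-time breaking change for existing TypeScript callers. A sketch of reading the new shape:

    import * as AWS from 'aws-sdk';

    const serverless = new AWS.RedshiftServerless();

    // Each entry now carries both the namespace and the action name.
    serverless.listScheduledActions({}).promise().then((res) => {
      for (const assoc of res.scheduledActions ?? []) {
        console.log(`${assoc.namespaceName} -> ${assoc.scheduledActionName}`);
      }
    });
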
@@ -2252,7 +2262,7 @@ declare namespace RedshiftServerless {
   */
  baseCapacity?: Integer;
  /**
- * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
+ * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, use_fips_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
   */
  configParameters?: ConfigParameterList;
  /**
@@ -2359,7 +2369,7 @@ declare namespace RedshiftServerless {
   */
  baseCapacity?: Integer;
  /**
- * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
+ * An array of parameters to set for advanced control over a database. The options are auto_mv, datestyle, enable_case_sensitive_identifier, enable_user_activity_logging, query_group, search_path, require_ssl, use_fips_ssl, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
   */
  configParameters?: ConfigParameterList;
  /**
@@ -2407,7 +2417,7 @@ declare namespace RedshiftServerless {
   */
  port?: Integer;
  /**
- * A value that specifies whether the workgroup can be accessible from a public network
+ * A value that specifies whether the workgroup can be accessible from a public network.
   */
  publiclyAccessible?: Boolean;
  /**
@@ -22592,7 +22592,7 @@ declare namespace SageMaker {
  SplitType?: SplitType;
  }
  export type TransformInstanceCount = number;
- export type TransformInstanceType = "ml.m4.xlarge"|"ml.m4.2xlarge"|"ml.m4.4xlarge"|"ml.m4.10xlarge"|"ml.m4.16xlarge"|"ml.c4.xlarge"|"ml.c4.2xlarge"|"ml.c4.4xlarge"|"ml.c4.8xlarge"|"ml.p2.xlarge"|"ml.p2.8xlarge"|"ml.p2.16xlarge"|"ml.p3.2xlarge"|"ml.p3.8xlarge"|"ml.p3.16xlarge"|"ml.c5.xlarge"|"ml.c5.2xlarge"|"ml.c5.4xlarge"|"ml.c5.9xlarge"|"ml.c5.18xlarge"|"ml.m5.large"|"ml.m5.xlarge"|"ml.m5.2xlarge"|"ml.m5.4xlarge"|"ml.m5.12xlarge"|"ml.m5.24xlarge"|"ml.g4dn.xlarge"|"ml.g4dn.2xlarge"|"ml.g4dn.4xlarge"|"ml.g4dn.8xlarge"|"ml.g4dn.12xlarge"|"ml.g4dn.16xlarge"|string;
+ export type TransformInstanceType = "ml.m4.xlarge"|"ml.m4.2xlarge"|"ml.m4.4xlarge"|"ml.m4.10xlarge"|"ml.m4.16xlarge"|"ml.c4.xlarge"|"ml.c4.2xlarge"|"ml.c4.4xlarge"|"ml.c4.8xlarge"|"ml.p2.xlarge"|"ml.p2.8xlarge"|"ml.p2.16xlarge"|"ml.p3.2xlarge"|"ml.p3.8xlarge"|"ml.p3.16xlarge"|"ml.c5.xlarge"|"ml.c5.2xlarge"|"ml.c5.4xlarge"|"ml.c5.9xlarge"|"ml.c5.18xlarge"|"ml.m5.large"|"ml.m5.xlarge"|"ml.m5.2xlarge"|"ml.m5.4xlarge"|"ml.m5.12xlarge"|"ml.m5.24xlarge"|"ml.m6i.large"|"ml.m6i.xlarge"|"ml.m6i.2xlarge"|"ml.m6i.4xlarge"|"ml.m6i.8xlarge"|"ml.m6i.12xlarge"|"ml.m6i.16xlarge"|"ml.m6i.24xlarge"|"ml.m6i.32xlarge"|"ml.c6i.large"|"ml.c6i.xlarge"|"ml.c6i.2xlarge"|"ml.c6i.4xlarge"|"ml.c6i.8xlarge"|"ml.c6i.12xlarge"|"ml.c6i.16xlarge"|"ml.c6i.24xlarge"|"ml.c6i.32xlarge"|"ml.r6i.large"|"ml.r6i.xlarge"|"ml.r6i.2xlarge"|"ml.r6i.4xlarge"|"ml.r6i.8xlarge"|"ml.r6i.12xlarge"|"ml.r6i.16xlarge"|"ml.r6i.24xlarge"|"ml.r6i.32xlarge"|"ml.m7i.large"|"ml.m7i.xlarge"|"ml.m7i.2xlarge"|"ml.m7i.4xlarge"|"ml.m7i.8xlarge"|"ml.m7i.12xlarge"|"ml.m7i.16xlarge"|"ml.m7i.24xlarge"|"ml.m7i.48xlarge"|"ml.c7i.large"|"ml.c7i.xlarge"|"ml.c7i.2xlarge"|"ml.c7i.4xlarge"|"ml.c7i.8xlarge"|"ml.c7i.12xlarge"|"ml.c7i.16xlarge"|"ml.c7i.24xlarge"|"ml.c7i.48xlarge"|"ml.r7i.large"|"ml.r7i.xlarge"|"ml.r7i.2xlarge"|"ml.r7i.4xlarge"|"ml.r7i.8xlarge"|"ml.r7i.12xlarge"|"ml.r7i.16xlarge"|"ml.r7i.24xlarge"|"ml.r7i.48xlarge"|"ml.g4dn.xlarge"|"ml.g4dn.2xlarge"|"ml.g4dn.4xlarge"|"ml.g4dn.8xlarge"|"ml.g4dn.12xlarge"|"ml.g4dn.16xlarge"|"ml.g5.xlarge"|"ml.g5.2xlarge"|"ml.g5.4xlarge"|"ml.g5.8xlarge"|"ml.g5.12xlarge"|"ml.g5.16xlarge"|"ml.g5.24xlarge"|"ml.g5.48xlarge"|string;
  export type TransformInstanceTypes = TransformInstanceType[];
  export interface TransformJob {
  /**
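
These additions extend batch transform to the m6i/c6i/r6i, m7i/c7i/r7i, and g5 instance families. A sketch of a TransformResources value using one of the new types:

    import * as AWS from 'aws-sdk';

    // Part of a CreateTransformJob request; the instance choice is illustrative.
    const resources: AWS.SageMaker.TransformResources = {
      InstanceType: 'ml.g5.xlarge', // newly accepted GPU option
      InstanceCount: 1,
    };
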
@@ -888,6 +888,10 @@ declare namespace SESV2 {
  * The ReplacementEmailContent associated with a BulkEmailEntry.
  */
  ReplacementEmailContent?: ReplacementEmailContent;
+ /**
+ * The list of message headers associated with the BulkEmailEntry data type. Headers Not Present in BulkEmailEntry: If a header is specified in Template but not in BulkEmailEntry, the header from Template will be added to the outgoing email. Headers Present in BulkEmailEntry: If a header is specified in BulkEmailEntry, it takes precedence over any header of the same name specified in Template: If the header is also defined within Template, the value from BulkEmailEntry will replace the header's value in the email. If the header is not defined within Template, it will simply be added to the email as specified in BulkEmailEntry.
+ */
+ ReplacementHeaders?: MessageHeaderList;
  }
  export type BulkEmailEntryList = BulkEmailEntry[];
  export interface BulkEmailEntryResult {
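
ReplacementHeaders lets a single BulkEmailEntry override headers from the template per recipient. A sketch of one entry; the address and header values are placeholders:

    import * as AWS from 'aws-sdk';

    const entry: AWS.SESV2.BulkEmailEntry = {
      Destination: { ToAddresses: ['recipient@example.com'] },
      ReplacementHeaders: [
        // Takes precedence over any List-Unsubscribe header defined in the template.
        { Name: 'List-Unsubscribe', Value: '<mailto:unsubscribe@example.com>' },
      ],
    };
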
@@ -83,7 +83,7 @@ return /******/ (function(modules) { // webpackBootstrap
  /**
   * @constant
   */
- VERSION: '2.1611.0',
+ VERSION: '2.1613.0',

  /**
   * @api private