cdk-lambda-subminute 2.0.389 → 2.0.391
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.jsii +3 -3
- package/lib/cdk-lambda-subminute.js +3 -3
- package/node_modules/aws-sdk/README.md +1 -1
- package/node_modules/aws-sdk/apis/batch-2016-08-10.min.json +387 -115
- package/node_modules/aws-sdk/apis/bedrock-agent-runtime-2023-07-26.min.json +28 -21
- package/node_modules/aws-sdk/apis/ce-2017-10-25.min.json +140 -105
- package/node_modules/aws-sdk/apis/docdb-elastic-2022-11-28.min.json +209 -62
- package/node_modules/aws-sdk/apis/ec2-2016-11-15.min.json +61 -61
- package/node_modules/aws-sdk/apis/iot-2015-05-28.min.json +12 -12
- package/node_modules/aws-sdk/apis/migrationhuborchestrator-2021-08-28.min.json +190 -86
- package/node_modules/aws-sdk/apis/models.lex.v2-2020-08-07.min.json +268 -186
- package/node_modules/aws-sdk/apis/quicksight-2018-04-01.min.json +518 -516
- package/node_modules/aws-sdk/apis/sagemaker-2017-07-24.min.json +1115 -1099
- package/node_modules/aws-sdk/apis/securitylake-2018-05-10.min.json +3 -7
- package/node_modules/aws-sdk/apis/wafv2-2019-07-29.min.json +126 -123
- package/node_modules/aws-sdk/clients/batch.d.ts +395 -36
- package/node_modules/aws-sdk/clients/bedrockagentruntime.d.ts +7 -1
- package/node_modules/aws-sdk/clients/costexplorer.d.ts +40 -0
- package/node_modules/aws-sdk/clients/docdbelastic.d.ts +253 -112
- package/node_modules/aws-sdk/clients/ec2.d.ts +2 -1
- package/node_modules/aws-sdk/clients/eks.d.ts +1 -1
- package/node_modules/aws-sdk/clients/iot.d.ts +11 -10
- package/node_modules/aws-sdk/clients/lexmodelsv2.d.ts +107 -0
- package/node_modules/aws-sdk/clients/migrationhuborchestrator.d.ts +141 -8
- package/node_modules/aws-sdk/clients/quicksight.d.ts +9 -0
- package/node_modules/aws-sdk/clients/sagemaker.d.ts +39 -10
- package/node_modules/aws-sdk/clients/securitylake.d.ts +9 -5
- package/node_modules/aws-sdk/clients/wafv2.d.ts +5 -0
- package/node_modules/aws-sdk/dist/aws-sdk-core-react-native.js +1 -1
- package/node_modules/aws-sdk/dist/aws-sdk-react-native.js +13 -13
- package/node_modules/aws-sdk/dist/aws-sdk.js +216 -181
- package/node_modules/aws-sdk/dist/aws-sdk.min.js +102 -102
- package/node_modules/aws-sdk/lib/core.js +1 -1
- package/node_modules/aws-sdk/package.json +1 -1
- package/package.json +3 -3
@@ -293,11 +293,11 @@ declare class SageMaker extends Service {
    */
  createLabelingJob(callback?: (err: AWSError, data: SageMaker.Types.CreateLabelingJobResponse) => void): Request<SageMaker.Types.CreateLabelingJobResponse, AWSError>;
  /**
-   * Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions. Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job. To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.
+   * Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions. Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job. To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment. To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location. In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code access any other Amazon Web Services resources, you grant necessary permissions via this role.
    */
  createModel(params: SageMaker.Types.CreateModelInput, callback?: (err: AWSError, data: SageMaker.Types.CreateModelOutput) => void): Request<SageMaker.Types.CreateModelOutput, AWSError>;
  /**
-   * Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions. Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job. To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.
+   * Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions. Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job. To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment. To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location. In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code access any other Amazon Web Services resources, you grant necessary permissions via this role.
    */
  createModel(callback?: (err: AWSError, data: SageMaker.Types.CreateModelOutput) => void): Request<SageMaker.Types.CreateModelOutput, AWSError>;
  /**
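The updated createModel doc text describes two workflows built on the same model: hosting (CreateModel → CreateEndpointConfig → CreateEndpoint) and batch transform (CreateModel → CreateTransformJob). A minimal sketch of how those request pieces fit together, using simplified local interfaces that only mirror a few of the SageMaker.Types fields shown in this diff (all names and ARNs below are hypothetical, not from the package):

```typescript
// Simplified local mirrors of a few SageMaker.Types shapes (assumption:
// field names follow the aws-sdk v2 .d.ts in this diff; the real types
// carry many more optional fields).
interface ContainerDefinition {
  Image: string;               // ECR path of the Docker image with inference code
  ModelDataUrl?: string;       // S3 path to model artifacts from prior training
  Environment?: Record<string, string>; // custom environment map for inference
}
interface CreateModelInput {
  ModelName: string;
  PrimaryContainer: ContainerDefinition;
  ExecutionRoleArn: string;    // IAM role SageMaker assumes for artifacts/image access
}

// Hypothetical example request:
const createModelParams: CreateModelInput = {
  ModelName: "my-model",
  PrimaryContainer: {
    Image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    ModelDataUrl: "s3://my-bucket/model.tar.gz",
    Environment: { LOG_LEVEL: "info" },
  },
  ExecutionRoleArn: "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
};

// The two call sequences the doc comment describes, as ordered steps:
const hostingFlow = ["CreateModel", "CreateEndpointConfig", "CreateEndpoint"];
const batchFlow = ["CreateModel", "CreateTransformJob"];
```

The same CreateModel request serves as step one of either flow; only the follow-up API calls differ.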
@@ -2381,11 +2381,11 @@ declare class SageMaker extends Service {
    */
  updateExperiment(callback?: (err: AWSError, data: SageMaker.Types.UpdateExperimentResponse) => void): Request<SageMaker.Types.UpdateExperimentResponse, AWSError>;
  /**
-   * Updates the feature group by either adding features or updating the online store configuration. Use one of the following request parameters at a time while using the UpdateFeatureGroup API. You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group. You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration.
+   * Updates the feature group by either adding features or updating the online store configuration. Use one of the following request parameters at a time while using the UpdateFeatureGroup API. You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group. You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration. To remove the default TtlDuration from an existing feature group, use the UpdateFeatureGroup API and set the TtlDuration Unit and Value to null.
    */
  updateFeatureGroup(params: SageMaker.Types.UpdateFeatureGroupRequest, callback?: (err: AWSError, data: SageMaker.Types.UpdateFeatureGroupResponse) => void): Request<SageMaker.Types.UpdateFeatureGroupResponse, AWSError>;
  /**
-   * Updates the feature group by either adding features or updating the online store configuration. Use one of the following request parameters at a time while using the UpdateFeatureGroup API. You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group. You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration.
+   * Updates the feature group by either adding features or updating the online store configuration. Use one of the following request parameters at a time while using the UpdateFeatureGroup API. You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group. You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration. To remove the default TtlDuration from an existing feature group, use the UpdateFeatureGroup API and set the TtlDuration Unit and Value to null.
    */
  updateFeatureGroup(callback?: (err: AWSError, data: SageMaker.Types.UpdateFeatureGroupResponse) => void): Request<SageMaker.Types.UpdateFeatureGroupResponse, AWSError>;
  /**
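The added sentence documents how to clear a feature group's default TtlDuration: set both Unit and Value to null in the UpdateFeatureGroup request. A hedged sketch of what such a request body looks like, with simplified local types standing in for the SDK's (the feature group name is hypothetical):

```typescript
// Local sketch of the relevant UpdateFeatureGroup request fields
// (assumption: simplified mirror of SageMaker.Types shapes; nullable
// members here model the documented "set Unit and Value to null" call).
type TtlDurationUnit = "Seconds" | "Minutes" | "Hours" | "Days" | "Weeks" | null;
interface TtlDuration { Unit: TtlDurationUnit; Value: number | null; }
interface OnlineStoreConfigUpdate { TtlDuration?: TtlDuration; }
interface UpdateFeatureGroupRequest {
  FeatureGroupName: string;
  OnlineStoreConfig?: OnlineStoreConfigUpdate;
}

// Per the updated doc comment: removing the default TtlDuration means
// sending both Unit and Value as null.
const removeTtl: UpdateFeatureGroupRequest = {
  FeatureGroupName: "my-feature-group", // hypothetical name
  OnlineStoreConfig: { TtlDuration: { Unit: null, Value: null } },
};
```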
@@ -3432,7 +3432,7 @@ declare namespace SageMaker {
  export type AutoMLJobName = string;
  export interface AutoMLJobObjective {
    /**
-     * The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset. The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type. For tabular problem types: List of available metrics: Regression:
+     * The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset. The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type. For tabular problem types: List of available metrics: Regression: MAE, MSE, R2, RMSE Binary classification: Accuracy, AUC, BalancedAccuracy, F1, Precision, Recall Multiclass classification: Accuracy, BalancedAccuracy, F1macro, PrecisionMacro, RecallMacro For a description of each metric, see Autopilot metrics for classification and regression. Default objective metrics: Regression: MSE. Binary classification: F1. Multiclass classification: Accuracy. For image or text classification problem types: List of available metrics: Accuracy For a description of each metric, see Autopilot metrics for text and image classification. Default objective metrics: Accuracy For time-series forecasting problem types: List of available metrics: RMSE, wQL, Average wQL, MASE, MAPE, WAPE For a description of each metric, see Autopilot metrics for time-series forecasting. Default objective metrics: AverageWeightedQuantileLoss For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.
      */
    MetricName: AutoMLMetricEnum;
  }
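The expanded AutoMLJobObjective doc text pairs each problem type with a default objective metric used when MetricName is omitted. Those defaults, transcribed from the text above into a lookup table (the problem-type keys are illustrative names of mine, not SDK enum values):

```typescript
// Default objective metrics per problem type, as listed in the new doc
// text. Keys are descriptive labels for this sketch, not SDK identifiers.
const defaultObjectiveMetric: Record<string, string> = {
  Regression: "MSE",
  BinaryClassification: "F1",
  MulticlassClassification: "Accuracy",
  ImageClassification: "Accuracy",
  TextClassification: "Accuracy",
  TimeSeriesForecasting: "AverageWeightedQuantileLoss",
  // LLM fine-tuning needs no AutoMLJobObjective; Autopilot optimizes
  // cross-entropy loss directly.
};
```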
@@ -5898,7 +5898,7 @@ declare namespace SageMaker {
      */
    ModelPackageDescription?: EntityDescription;
    /**
-     * Specifies details about inference jobs that can
+     * Specifies details about inference jobs that you can run with models based on this model package, including the following information: The Amazon ECR paths of containers that contain the inference code and model artifacts. The instance types that the model package supports for transform jobs and real-time endpoints used for inference. The input and output content formats that the model package supports for inference.
      */
    InferenceSpecification?: InferenceSpecification;
    /**
@@ -5958,6 +5958,10 @@ declare namespace SageMaker {
     * Indicates if you want to skip model validation.
      */
    SkipModelValidation?: SkipModelValidation;
+    /**
+     * The URI of the source for the model package. If you want to clone a model package, set it to the model package Amazon Resource Name (ARN). If you want to register a model, set it to the model ARN.
+     */
+    SourceUri?: ModelPackageSourceUri;
  }
  export interface CreateModelPackageOutput {
    /**
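The new SourceUri field on CreateModelPackageInput carries either a model-package ARN (to clone an existing package) or a model ARN (to register a model). A sketch of both request shapes, using a simplified local interface and hypothetical ARNs:

```typescript
// Simplified mirror of the relevant CreateModelPackageInput fields
// (assumption: the real SDK type has many more members).
interface CreateModelPackageInput {
  ModelPackageGroupName?: string;
  SourceUri?: string; // ModelPackageSourceUri
}

// Clone an existing versioned model package (hypothetical ARNs):
const cloneParams: CreateModelPackageInput = {
  ModelPackageGroupName: "my-group",
  SourceUri: "arn:aws:sagemaker:us-east-1:123456789012:model-package/my-group/1",
};

// Register an existing model as a new package version:
const registerParams: CreateModelPackageInput = {
  ModelPackageGroupName: "my-group",
  SourceUri: "arn:aws:sagemaker:us-east-1:123456789012:model/my-model",
};
```

The two cases differ only in the ARN's resource type: `model-package/...` versus `model/...`.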
@@ -7706,7 +7710,7 @@ declare namespace SageMaker {
      */
    LastUserActivityTimestamp?: Timestamp;
    /**
-     * The creation time.
+     * The creation time of the application. After an application has been shut down for 24 hours, SageMaker deletes all metadata for the application. To be considered an update and retain application metadata, applications must be restarted within 24 hours after the previous application has been shut down. After this time window, creation of an application is considered a new application rather than an update of the previous application.
      */
    CreationTime?: Timestamp;
    /**
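The expanded CreationTime doc text states a 24-hour retention rule: a restart within 24 hours of shutdown counts as an update (metadata kept), anything later counts as a new application. A small helper capturing that rule under my own naming (this function is illustrative, not part of the SDK):

```typescript
// Metadata retention window from the doc text: 24 hours after shutdown.
const RETENTION_MS = 24 * 60 * 60 * 1000;

// Classify a restart relative to the previous shutdown, per the rule
// described in the CreationTime documentation (hypothetical helper).
function restartKind(shutdownAt: Date, restartAt: Date): "update" | "new-application" {
  return restartAt.getTime() - shutdownAt.getTime() < RETENTION_MS
    ? "update"          // within 24h: metadata retained, treated as update
    : "new-application"; // past 24h: metadata deleted, treated as new app
}
```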
@@ -9797,7 +9801,7 @@ declare namespace SageMaker {
      */
    CreationTime: CreationTime;
    /**
-     * Details about inference jobs that can
+     * Details about inference jobs that you can run with models based on this model package.
      */
    InferenceSpecification?: InferenceSpecification;
    /**
@@ -9867,6 +9871,10 @@ declare namespace SageMaker {
     * Indicates if you want to skip model validation.
      */
    SkipModelValidation?: SkipModelValidation;
+    /**
+     * The URI of the source for the model package.
+     */
+    SourceUri?: ModelPackageSourceUri;
  }
  export interface DescribeModelQualityJobDefinitionRequest {
    /**
@@ -18028,7 +18036,7 @@ declare namespace SageMaker {
      */
    ModelDataQuality?: ModelDataQuality;
    /**
-     * Metrics that measure
+     * Metrics that measure bias in a model.
      */
    Bias?: Bias;
    /**
@@ -18131,6 +18139,10 @@ declare namespace SageMaker {
     * An array of additional Inference Specification objects.
      */
    AdditionalInferenceSpecifications?: AdditionalInferenceSpecifications;
+    /**
+     * The URI of the source for the model package.
+     */
+    SourceUri?: ModelPackageSourceUri;
    /**
     * A list of the tags associated with the model package. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference Guide.
     */
@@ -18167,6 +18179,10 @@ declare namespace SageMaker {
     * The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The model artifacts must be in an S3 bucket that is in the same region as the model package.
      */
    ModelDataUrl?: Url;
+    /**
+     * Specifies the location of ML model data to deploy during endpoint creation.
+     */
+    ModelDataSource?: ModelDataSource;
    /**
     * The Amazon Web Services Marketplace product ID of the model package.
     */
@@ -18252,6 +18268,7 @@ declare namespace SageMaker {
  }
  export type ModelPackageGroupSummaryList = ModelPackageGroupSummary[];
  export type ModelPackageSortBy = "Name"|"CreationTime"|string;
+  export type ModelPackageSourceUri = string;
  export type ModelPackageStatus = "Pending"|"InProgress"|"Completed"|"Failed"|"Deleting"|string;
  export interface ModelPackageStatusDetails {
    /**
@@ -21483,6 +21500,10 @@ declare namespace SageMaker {
     * The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The model artifacts must be in an S3 bucket that is in the same Amazon Web Services region as the algorithm.
      */
    ModelDataUrl?: Url;
+    /**
+     * Specifies the location of ML model data to deploy during endpoint creation.
+     */
+    ModelDataSource?: ModelDataSource;
    /**
     * The name of an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your SageMaker account or an algorithm in Amazon Web Services Marketplace that you are subscribed to.
     */
@@ -23653,6 +23674,14 @@ declare namespace SageMaker {
     * An array of additional Inference Specification objects to be added to the existing array additional Inference Specification. Total number of additional Inference Specifications can not exceed 15. Each additional Inference Specification specifies artifacts based on this model package that can be used on inference endpoints. Generally used with SageMaker Neo to store the compiled artifacts.
      */
    AdditionalInferenceSpecificationsToAdd?: AdditionalInferenceSpecifications;
+    /**
+     * Specifies details about inference jobs that you can run with models based on this model package, including the following information: The Amazon ECR paths of containers that contain the inference code and model artifacts. The instance types that the model package supports for transform jobs and real-time endpoints used for inference. The input and output content formats that the model package supports for inference.
+     */
+    InferenceSpecification?: InferenceSpecification;
+    /**
+     * The URI of the source for the model package.
+     */
+    SourceUri?: ModelPackageSourceUri;
  }
  export interface UpdateModelPackageOutput {
    /**
@@ -24206,7 +24235,7 @@ declare namespace SageMaker {
  export type Vertices = Vertex[];
  export interface VisibilityConditions {
    /**
-     * The key that specifies the tag that you're using to filter the search results. It must be in the following format: Tags.<key>
+     * The key that specifies the tag that you're using to filter the search results. It must be in the following format: Tags.<key>.
      */
    Key?: VisibilityConditionsKey;
    /**
@@ -295,7 +295,7 @@ declare namespace SecurityLake {
    sourceVersion?: AwsLogSourceVersion;
  }
  export type AwsLogSourceConfigurationList = AwsLogSourceConfiguration[];
-  export type AwsLogSourceName = "ROUTE53"|"VPC_FLOW"|"SH_FINDINGS"|"CLOUD_TRAIL_MGMT"|"LAMBDA_EXECUTION"|"S3_DATA"|string;
+  export type AwsLogSourceName = "ROUTE53"|"VPC_FLOW"|"SH_FINDINGS"|"CLOUD_TRAIL_MGMT"|"LAMBDA_EXECUTION"|"S3_DATA"|"EKS_AUDIT"|"WAF"|string;
  export interface AwsLogSourceResource {
    /**
     * The name for a Amazon Web Services source. This must be a Regionally unique value.
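The AwsLogSourceName union gains two members in this release. Diffing the old and new literal lists programmatically makes the addition explicit (a trivial sketch; the arrays below are transcribed from the two union types above):

```typescript
// Literal members of AwsLogSourceName before and after this release,
// transcribed from the .d.ts diff above (excluding the catch-all string).
const before: string[] = [
  "ROUTE53", "VPC_FLOW", "SH_FINDINGS",
  "CLOUD_TRAIL_MGMT", "LAMBDA_EXECUTION", "S3_DATA",
];
const after: string[] = [...before, "EKS_AUDIT", "WAF"];

// The sources newly supported as native Security Lake log sources:
const added = after.filter((s) => before.indexOf(s) === -1);
```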
@@ -325,7 +325,7 @@ declare namespace SecurityLake {
    /**
     * The configuration for the third-party custom source.
     */
-    configuration
+    configuration: CustomLogSourceConfiguration;
    /**
     * The Open Cybersecurity Schema Framework (OCSF) event classes which describes the type of data that the custom source will send to Security Lake. The supported event classes are: ACCESS_ACTIVITY FILE_ACTIVITY KERNEL_ACTIVITY KERNEL_EXTENSION MEMORY_ACTIVITY MODULE_ACTIVITY PROCESS_ACTIVITY REGISTRY_KEY_ACTIVITY REGISTRY_VALUE_ACTIVITY RESOURCE_ACTIVITY SCHEDULED_JOB_ACTIVITY SECURITY_FINDING ACCOUNT_CHANGE AUTHENTICATION AUTHORIZATION ENTITY_MANAGEMENT_AUDIT DHCP_ACTIVITY NETWORK_ACTIVITY DNS_ACTIVITY FTP_ACTIVITY HTTP_ACTIVITY RDP_ACTIVITY SMB_ACTIVITY SSH_ACTIVITY CONFIG_STATE INVENTORY_INFO EMAIL_ACTIVITY API_ACTIVITY CLOUD_API
     */
@@ -366,7 +366,7 @@ declare namespace SecurityLake {
    /**
     * Enable Security Lake with the specified configuration settings, to begin collecting security data for new accounts in your organization.
     */
-    autoEnableNewAccount
+    autoEnableNewAccount?: DataLakeAutoEnableNewAccountConfigurationList;
  }
  export interface CreateDataLakeOrganizationConfigurationResponse {
  }
@@ -716,7 +716,7 @@ declare namespace SecurityLake {
    /**
     * Turns off automatic enablement of Security Lake for member accounts that are added to an organization.
     */
-    autoEnableNewAccount
+    autoEnableNewAccount?: DataLakeAutoEnableNewAccountConfigurationList;
  }
  export interface DeleteDataLakeOrganizationConfigurationResponse {
  }
@@ -928,7 +928,7 @@ declare namespace SecurityLake {
  }
  export interface ListTagsForResourceRequest {
    /**
-     * The Amazon Resource Name (ARN) of the Amazon Security Lake resource to retrieve the tags
+     * The Amazon Resource Name (ARN) of the Amazon Security Lake resource for which you want to retrieve the tags.
     */
    resourceArn: AmazonResourceName;
  }
@@ -1126,6 +1126,10 @@ declare namespace SecurityLake {
     * Specify the Region or Regions that will contribute data to the rollup region.
     */
    configurations: DataLakeConfigurationList;
+    /**
+     * The Amazon Resource Name (ARN) used to create and update the Glue table. This table contains partitions generated by the ingestion and normalization of Amazon Web Services log sources and custom sources.
+     */
+    metaStoreManagerRoleArn?: RoleArn;
  }
  export interface UpdateDataLakeResponse {
    /**
@@ -1164,6 +1164,7 @@ declare namespace WAFV2 {
  export type EntityDescription = string;
  export type EntityId = string;
  export type EntityName = string;
+  export type EvaluationWindowSec = number;
  export interface ExcludedRule {
    /**
     * The name of the rule whose action you want to override to Count.
@@ -2496,6 +2497,10 @@ declare namespace WAFV2 {
     * The limit on requests per 5-minute period for a single aggregation instance for the rate-based rule. If the rate-based statement includes a ScopeDownStatement, this limit is applied only to the requests that match the statement. Examples: If you aggregate on just the IP address, this is the limit on requests from any single IP address. If you aggregate on the HTTP method and the query argument name "city", then this is the limit on requests for any single method, city pair.
     */
    Limit: RateLimit;
+    /**
+     * The amount of time, in seconds, that WAF should include in its request counts, looking back from the current time. For example, for a setting of 120, when WAF checks the rate, it counts the requests for the 2 minutes immediately preceding the current time. Valid settings are 60, 120, 300, and 600. This setting doesn't determine how often WAF checks the rate, but how far back it looks each time it checks. WAF checks the rate about every 10 seconds. Default: 300 (5 minutes)
+     */
+    EvaluationWindowSec?: EvaluationWindowSec;
    /**
     * Setting that indicates how to aggregate the request counts. Web requests that are missing any of the components specified in the aggregation keys are omitted from the rate-based rule evaluation and handling. CONSTANT - Count and limit the requests that match the rate-based rule's scope-down statement. With this option, the counted requests aren't further aggregated. The scope-down statement is the only specification used. When the count of all requests that satisfy the scope-down statement goes over the limit, WAF applies the rule action to all requests that satisfy the scope-down statement. With this option, you must configure the ScopeDownStatement property. CUSTOM_KEYS - Aggregate the request counts using one or more web request components as the aggregate keys. With this option, you must specify the aggregate keys in the CustomKeys property. To aggregate on only the IP address or only the forwarded IP address, don't use custom keys. Instead, set the aggregate key type to IP or FORWARDED_IP. IP - Aggregate the request counts on the IP address from the web request origin. To aggregate on a combination of the IP address with other aggregate keys, use CUSTOM_KEYS.
     */
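The new EvaluationWindowSec field accepts only 60, 120, 300, or 600 seconds and defaults to 300 when omitted, per the doc text above. A hedged sketch of building a rate-based statement and resolving its effective window, using a simplified local interface and a validation helper of my own (not SDK code):

```typescript
// Valid look-back windows and default, per the EvaluationWindowSec doc text.
const VALID_WINDOWS: number[] = [60, 120, 300, 600];

// Simplified mirror of a few RateBasedStatement fields (assumption:
// the real WAFV2 type carries more members, e.g. ScopeDownStatement).
interface RateBasedStatementSketch {
  Limit: number;
  EvaluationWindowSec?: number;
  AggregateKeyType: "IP" | "FORWARDED_IP" | "CONSTANT" | "CUSTOM_KEYS";
}

// Hypothetical helper: resolve the effective window, applying the
// documented default of 300 and rejecting unsupported values.
function effectiveWindow(s: RateBasedStatementSketch): number {
  const w = s.EvaluationWindowSec ?? 300; // default: 300 (5 minutes)
  if (VALID_WINDOWS.indexOf(w) === -1) {
    throw new Error(`invalid EvaluationWindowSec: ${w}`);
  }
  return w;
}
```

Note the documented distinction: the window sets how far back WAF looks on each check, not how often it checks (about every 10 seconds either way).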