aws-sdk-rekognition 1.71.0 → 1.73.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 905750bbefaa5fa1c3911030a8c24e9e373f87cf5a4f38f0538f84df5c63e3f5
- data.tar.gz: 3ab5a14ff6b83ce0ba0e653531d5bfa784db068aae320298d5417e6ecc6bf8c6
+ metadata.gz: 92251b0f044672d1160f7436ab44d156c043c2a438af9b8b150887d9a013fa7d
+ data.tar.gz: f105900aa5b99126639d60d0dd2b08959f5133f2dd1180037143f63a4b9ca63b
  SHA512:
- metadata.gz: 5596e290509cace40198994218271e9f9de737d4c0f1fc1d8741f4c07d2faf3e094630e383919617706b129a79463e96cf1da121147fd134c8f8c8cccfcf71f0
- data.tar.gz: feb2605c0dc3ca1c0070a9e55f01fe147266b7a3b932fc7344c95e944d3f6846eb13539533c9f2128021bab665a6fa5c3ad0b56f8c42a36c89daa7cc13e90200
+ metadata.gz: d2800231e49700ed3309794aee9f209641b25ef95a30d09edf89edd203c364298b7315ae6a1f7736f1264fb4589ad8d96dce5ada003fc2c899def6edcd3d6673
+ data.tar.gz: 7f8267247b5d8cdbc34730ce8d87f1f5c327e791fffe64c7fad5ec0cb43c38bfdb5a4f151ac14c9eb219f76137cdb0f74ebf5726eaf6265f06cddc2d7c993274
data/CHANGELOG.md CHANGED
@@ -1,6 +1,16 @@
  Unreleased Changes
  ------------------
 
+ 1.73.0 (2022-12-12)
+ ------------------
+
+ * Feature - Adds support for "aliases" and "categories", inclusion and exclusion filters for labels and label categories, and aggregating labels by video segment timestamps for Stored Video Label Detection APIs.
+
+ 1.72.0 (2022-11-11)
+ ------------------
+
+ * Feature - Adds support for the ImageProperties feature to detect dominant colors and image brightness, sharpness, and contrast; inclusion and exclusion filters for labels and label categories; and new "aliases" and "categories" fields in the API response.
+
  1.71.0 (2022-10-25)
  ------------------
 
data/VERSION CHANGED
@@ -1 +1 @@
- 1.71.0
+ 1.73.0
@@ -2177,23 +2177,85 @@ module Aws::Rekognition
  # For an example, see Analyzing images stored in an Amazon S3 bucket in
  # the Amazon Rekognition Developer Guide.
  #
- # <note markdown="1"> `DetectLabels` does not support the detection of activities. However,
- # activity detection is supported for label detection in videos. For
- # more information, see StartLabelDetection in the Amazon Rekognition
- # Developer Guide.
- #
- # </note>
- #
  # You pass the input image as base64-encoded image bytes or as a
  # reference to an image in an Amazon S3 bucket. If you use the AWS CLI
  # to call Amazon Rekognition operations, passing image bytes is not
  # supported. The image must be either a PNG or JPEG formatted file.
  #
+ # **Optional Parameters**
+ #
+ # You can specify one or both of the `GENERAL_LABELS` and
+ # `IMAGE_PROPERTIES` feature types when calling the DetectLabels API.
+ # Including `GENERAL_LABELS` will ensure the response includes the
+ # labels detected in the input image, while including `IMAGE_PROPERTIES`
+ # will ensure the response includes information about the image quality
+ # and color.
+ #
+ # When using `GENERAL_LABELS` and/or `IMAGE_PROPERTIES` you can provide
+ # filtering criteria to the `Settings` parameter. You can filter with
+ # sets of individual labels or with label categories. You can specify
+ # inclusive filters, exclusive filters, or a combination of inclusive
+ # and exclusive filters. For more information on filtering, see
+ # [Detecting Labels in an Image][1].
+ #
+ # You can specify `MinConfidence` to control the confidence threshold
+ # for the labels returned. The default is 55%. You can also add the
+ # `MaxLabels` parameter to limit the number of labels returned. The
+ # default and upper limit is 1000 labels.
+ #
+ # **Response Elements**
+ #
  # For each object, scene, and concept the API returns one or more
- # labels. Each label provides the object name, and the level of
- # confidence that the image contains the object. For example, suppose
- # the input image has a lighthouse, the sea, and a rock. The response
- # includes all three labels, one for each object.
+ # labels. The API returns the following types of information regarding
+ # labels:
+ #
+ # * Name - The name of the detected label.
+ #
+ # * Confidence - The level of confidence in the label assigned to a
+ #   detected object.
+ #
+ # * Parents - The ancestor labels for a detected label. DetectLabels
+ #   returns a hierarchical taxonomy of detected labels. For example, a
+ #   detected car might be assigned the label car. The label car has two
+ #   parent labels: Vehicle (its parent) and Transportation (its
+ #   grandparent). The response includes all ancestors for a label,
+ #   where every ancestor is a unique label. In the previous example,
+ #   Car, Vehicle, and Transportation are returned as unique labels in
+ #   the response.
+ #
+ # * Aliases - Possible aliases for the label.
+ #
+ # * Categories - The label categories that the detected label belongs
+ #   to.
+ #
+ # * BoundingBox - Bounding boxes are described for all instances of
+ #   detected common object labels, returned in an array of Instance
+ #   objects. An Instance object contains a BoundingBox object,
+ #   describing the location of the label on the input image. It also
+ #   includes the confidence for the accuracy of the detected bounding
+ #   box.
+ #
+ # The API returns the following information regarding the image, as part
+ # of the ImageProperties structure:
+ #
+ # * Quality - Information about the Sharpness, Brightness, and Contrast
+ #   of the input image, scored between 0 and 100. Image quality is
+ #   returned for the entire image, as well as the background and the
+ #   foreground.
+ #
+ # * Dominant Color - An array of the dominant colors in the image.
+ #
+ # * Foreground - Information about the sharpness, brightness, and
+ #   dominant colors of the input image's foreground.
+ #
+ # * Background - Information about the sharpness, brightness, and
+ #   dominant colors of the input image's background.
+ #
+ # The list of returned labels will include at least one label for every
+ # detected object, along with information about that label. In the
+ # following example, suppose the input image has a lighthouse, the sea,
+ # and a rock. The response includes all three labels, one for each
+ # object, as well as the confidence in the label:
  #
  # `\{Name: lighthouse, Confidence: 98.4629\}`
  #
@@ -2201,11 +2263,9 @@ module Aws::Rekognition
  #
  # ` \{Name: sea,Confidence: 75.061\}`
  #
- # In the preceding example, the operation returns one label for each of
- # the three objects. The operation can also return multiple labels for
- # the same object in the image. For example, if the input image shows a
- # flower (for example, a tulip), the operation might return the
- # following three labels.
+ # The list of labels can include multiple labels for the same object.
+ # For example, if the input image shows a flower (for example, a tulip),
+ # the operation might return the following three labels.
  #
  # `\{Name: flower,Confidence: 99.0562\}`
  #
@@ -2216,36 +2276,21 @@ module Aws::Rekognition
  # In this example, the detection algorithm more precisely identifies the
  # flower as a tulip.
  #
- # In response, the API returns an array of labels. In addition, the
- # response also includes the orientation correction. Optionally, you can
- # specify `MinConfidence` to control the confidence threshold for the
- # labels returned. The default is 55%. You can also add the `MaxLabels`
- # parameter to limit the number of labels returned.
- #
  # <note markdown="1"> If the object detected is a person, the operation doesn't provide the
  # same facial details that the DetectFaces operation provides.
  #
  # </note>
  #
- # `DetectLabels` returns bounding boxes for instances of common object
- # labels in an array of Instance objects. An `Instance` object contains
- # a BoundingBox object, for the location of the label on the image. It
- # also includes the confidence by which the bounding box was detected.
- #
- # `DetectLabels` also returns a hierarchical taxonomy of detected
- # labels. For example, a detected car might be assigned the label *car*.
- # The label *car* has two parent labels: *Vehicle* (its parent) and
- # *Transportation* (its grandparent). The response returns the entire
- # list of ancestors for a label. Each ancestor is a unique label in the
- # response. In the previous example, *Car*, *Vehicle*, and
- # *Transportation* are returned as unique labels in the response.
- #
  # This is a stateless API operation. That is, the operation does not
  # persist any data.
  #
  # This operation requires permissions to perform the
  # `rekognition:DetectLabels` action.
  #
+ #
+ #
+ # [1]: https://docs.aws.amazon.com/rekognition/latest/dg/labels-detect-labels-image.html
+ #
  # @option params [required, Types::Image] :image
  #   The input image as base64-encoded bytes or an S3 object. If you use
  #   the AWS CLI to call Amazon Rekognition operations, passing image bytes
@@ -2270,11 +2315,26 @@ module Aws::Rekognition
  #   If `MinConfidence` is not specified, the operation returns labels
  #   with a confidence value greater than or equal to 55 percent.
  #
+ # @option params [Array<String>] :features
+ #   A list of the types of analysis to perform. Specifying GENERAL\_LABELS
+ #   uses the label detection feature, while specifying IMAGE\_PROPERTIES
+ #   returns information regarding image color and quality. If no option is
+ #   specified, GENERAL\_LABELS is used by default.
+ #
+ # @option params [Types::DetectLabelsSettings] :settings
+ #   A list of the filters to be applied to returned detected labels and
+ #   image properties. Specified filters can be inclusive, exclusive, or a
+ #   combination of both. Filters can be used for individual labels or
+ #   label categories. The exact label names or label categories must be
+ #   supplied. For a full list of labels and label categories, see LINK
+ #   HERE.
+ #
  # @return [Types::DetectLabelsResponse] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
  #
  # * {Types::DetectLabelsResponse#labels #labels} => Array&lt;Types::Label&gt;
  # * {Types::DetectLabelsResponse#orientation_correction #orientation_correction} => String
  # * {Types::DetectLabelsResponse#label_model_version #label_model_version} => String
+ # * {Types::DetectLabelsResponse#image_properties #image_properties} => Types::DetectLabelsImageProperties
  #
  #
  # @example Example: To detect labels
@@ -2319,6 +2379,18 @@ module Aws::Rekognition
  #     },
  #     max_labels: 1,
  #     min_confidence: 1.0,
+ #     features: ["GENERAL_LABELS"], # accepts GENERAL_LABELS, IMAGE_PROPERTIES
+ #     settings: {
+ #       general_labels: {
+ #         label_inclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_exclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_category_inclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_category_exclusion_filters: ["GeneralLabelsFilterValue"],
+ #       },
+ #       image_properties: {
+ #         max_dominant_colors: 1,
+ #       },
+ #     },
  #   })
  #
  # @example Response structure
@@ -2332,10 +2404,55 @@ module Aws::Rekognition
  #   resp.labels[0].instances[0].bounding_box.left #=> Float
  #   resp.labels[0].instances[0].bounding_box.top #=> Float
  #   resp.labels[0].instances[0].confidence #=> Float
+ #   resp.labels[0].instances[0].dominant_colors #=> Array
+ #   resp.labels[0].instances[0].dominant_colors[0].red #=> Integer
+ #   resp.labels[0].instances[0].dominant_colors[0].blue #=> Integer
+ #   resp.labels[0].instances[0].dominant_colors[0].green #=> Integer
+ #   resp.labels[0].instances[0].dominant_colors[0].hex_code #=> String
+ #   resp.labels[0].instances[0].dominant_colors[0].css_color #=> String
+ #   resp.labels[0].instances[0].dominant_colors[0].simplified_color #=> String
+ #   resp.labels[0].instances[0].dominant_colors[0].pixel_percent #=> Float
  #   resp.labels[0].parents #=> Array
  #   resp.labels[0].parents[0].name #=> String
+ #   resp.labels[0].aliases #=> Array
+ #   resp.labels[0].aliases[0].name #=> String
+ #   resp.labels[0].categories #=> Array
+ #   resp.labels[0].categories[0].name #=> String
  #   resp.orientation_correction #=> String, one of "ROTATE_0", "ROTATE_90", "ROTATE_180", "ROTATE_270"
  #   resp.label_model_version #=> String
+ #   resp.image_properties.quality.brightness #=> Float
+ #   resp.image_properties.quality.sharpness #=> Float
+ #   resp.image_properties.quality.contrast #=> Float
+ #   resp.image_properties.dominant_colors #=> Array
+ #   resp.image_properties.dominant_colors[0].red #=> Integer
+ #   resp.image_properties.dominant_colors[0].blue #=> Integer
+ #   resp.image_properties.dominant_colors[0].green #=> Integer
+ #   resp.image_properties.dominant_colors[0].hex_code #=> String
+ #   resp.image_properties.dominant_colors[0].css_color #=> String
+ #   resp.image_properties.dominant_colors[0].simplified_color #=> String
+ #   resp.image_properties.dominant_colors[0].pixel_percent #=> Float
+ #   resp.image_properties.foreground.quality.brightness #=> Float
+ #   resp.image_properties.foreground.quality.sharpness #=> Float
+ #   resp.image_properties.foreground.quality.contrast #=> Float
+ #   resp.image_properties.foreground.dominant_colors #=> Array
+ #   resp.image_properties.foreground.dominant_colors[0].red #=> Integer
+ #   resp.image_properties.foreground.dominant_colors[0].blue #=> Integer
+ #   resp.image_properties.foreground.dominant_colors[0].green #=> Integer
+ #   resp.image_properties.foreground.dominant_colors[0].hex_code #=> String
+ #   resp.image_properties.foreground.dominant_colors[0].css_color #=> String
+ #   resp.image_properties.foreground.dominant_colors[0].simplified_color #=> String
+ #   resp.image_properties.foreground.dominant_colors[0].pixel_percent #=> Float
+ #   resp.image_properties.background.quality.brightness #=> Float
+ #   resp.image_properties.background.quality.sharpness #=> Float
+ #   resp.image_properties.background.quality.contrast #=> Float
+ #   resp.image_properties.background.dominant_colors #=> Array
+ #   resp.image_properties.background.dominant_colors[0].red #=> Integer
+ #   resp.image_properties.background.dominant_colors[0].blue #=> Integer
+ #   resp.image_properties.background.dominant_colors[0].green #=> Integer
+ #   resp.image_properties.background.dominant_colors[0].hex_code #=> String
+ #   resp.image_properties.background.dominant_colors[0].css_color #=> String
+ #   resp.image_properties.background.dominant_colors[0].simplified_color #=> String
+ #   resp.image_properties.background.dominant_colors[0].pixel_percent #=> Float
  #
  # @overload detect_labels(params = {})
  #   @param [Hash] params ({})
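
The new `features`/`settings` request shape shown in the hunks above can be sketched as a plain params hash. This is a minimal sketch, not SDK code: the bucket, key, and filter values are hypothetical, and the actual client call (which needs AWS credentials) is left commented out:

```ruby
# Hypothetical DetectLabels request using the Features and Settings
# options introduced in 1.72.0. Bucket, key, and filter values are
# placeholders, not real resources.
params = {
  image: { s3_object: { bucket: "my-bucket", name: "photo.jpg" } },
  max_labels: 10,
  min_confidence: 55.0, # matches the documented default threshold
  features: %w[GENERAL_LABELS IMAGE_PROPERTIES],
  settings: {
    general_labels: {
      # Restrict results to this label; exact label names must be supplied.
      label_inclusion_filters: ["Car"]
    },
    image_properties: { max_dominant_colors: 5 }
  }
}

# With credentials configured, the request would be sent as:
#   client = Aws::Rekognition::Client.new
#   resp = client.detect_labels(params)
#   resp.labels.each { |l| puts "#{l.name}: #{l.confidence}" }
#   resp.image_properties.quality.brightness  # only with IMAGE_PROPERTIES
```

Specifying both feature types in one call returns labels and image quality/color information together, avoiding a second round trip.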
@@ -3286,25 +3403,69 @@ module Aws::Rekognition
  # StartLabelDetection which returns a job identifier (`JobId`). When the
  # label detection operation finishes, Amazon Rekognition publishes a
  # completion status to the Amazon Simple Notification Service topic
- # registered in the initial call to `StartlabelDetection`. To get the
- # results of the label detection operation, first check that the status
- # value published to the Amazon SNS topic is `SUCCEEDED`. If so, call
- # GetLabelDetection and pass the job identifier (`JobId`) from the
- # initial call to `StartLabelDetection`.
+ # registered in the initial call to `StartLabelDetection`.
+ #
+ # To get the results of the label detection operation, first check that
+ # the status value published to the Amazon SNS topic is `SUCCEEDED`. If
+ # so, call GetLabelDetection and pass the job identifier (`JobId`) from
+ # the initial call to `StartLabelDetection`.
  #
  # `GetLabelDetection` returns an array of detected labels (`Labels`)
  # sorted by the time the labels were detected. You can also sort by the
- # label name by specifying `NAME` for the `SortBy` input parameter.
+ # label name by specifying `NAME` for the `SortBy` input parameter. If
+ # `NAME` is not specified, the default sort is by timestamp.
  #
- # The labels returned include the label name, the percentage confidence
- # in the accuracy of the detected label, and the time the label was
- # detected in the video.
+ # You can select how results are aggregated by using the `AggregateBy`
+ # input parameter. The default aggregation method is `TIMESTAMPS`. You
+ # can also aggregate by `SEGMENTS`, which aggregates all instances of
+ # labels detected in a given segment.
  #
- # The returned labels also include bounding box information for common
- # objects, a hierarchical taxonomy of detected labels, and the version
- # of the label model used for detection.
+ # The returned Labels array may include the following attributes:
  #
- # Use MaxResults parameter to limit the number of labels returned. If
+ # * Name - The name of the detected label.
+ #
+ # * Confidence - The level of confidence in the label assigned to a
+ #   detected object.
+ #
+ # * Parents - The ancestor labels for a detected label.
+ #   GetLabelDetection returns a hierarchical taxonomy of detected
+ #   labels. For example, a detected car might be assigned the label car.
+ #   The label car has two parent labels: Vehicle (its parent) and
+ #   Transportation (its grandparent). The response includes all
+ #   ancestors for a label, where every ancestor is a unique label. In
+ #   the previous example, Car, Vehicle, and Transportation are returned
+ #   as unique labels in the response.
+ #
+ # * Aliases - Possible aliases for the label.
+ #
+ # * Categories - The label categories that the detected label belongs
+ #   to.
+ #
+ # * BoundingBox - Bounding boxes are described for all instances of
+ #   detected common object labels, returned in an array of Instance
+ #   objects. An Instance object contains a BoundingBox object,
+ #   describing the location of the label on the input image. It also
+ #   includes the confidence for the accuracy of the detected bounding
+ #   box.
+ #
+ # * Timestamp - Time, in milliseconds from the start of the video, that
+ #   the label was detected. For aggregation by `SEGMENTS`, the
+ #   `StartTimestampMillis`, `EndTimestampMillis`, and `DurationMillis`
+ #   structures are what define a segment. Although the "Timestamp"
+ #   structure is still returned with each label, its value is set to be
+ #   the same as `StartTimestampMillis`.
+ #
+ # Timestamp and bounding box information are returned for detected
+ # Instances only if aggregation is done by `TIMESTAMPS`. If aggregating
+ # by `SEGMENTS`, information about detected instances isn't returned.
+ #
+ # The version of the label model used for the detection is also
+ # returned.
+ #
+ # **Note: `DominantColors` isn't returned for `Instances`, although it
+ # is shown as part of the response in the sample seen below.**
+ #
+ # Use the `MaxResults` parameter to limit the number of labels returned. If
  # there are more results than specified in `MaxResults`, the value of
  # `NextToken` in the operation response contains a pagination token for
  # getting the next set of results. To get the next page of results, call
@@ -3336,6 +3497,10 @@ module Aws::Rekognition
  #   group, the array elements are sorted by detection confidence. The
  #   default sort is by `TIMESTAMP`.
  #
+ # @option params [String] :aggregate_by
+ #   Defines how to aggregate the returned results. Results can be
+ #   aggregated by timestamps or segments.
+ #
  # @return [Types::GetLabelDetectionResponse] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
  #
  # * {Types::GetLabelDetectionResponse#job_status #job_status} => String
@@ -3354,6 +3519,7 @@ module Aws::Rekognition
  #     max_results: 1,
  #     next_token: "PaginationToken",
  #     sort_by: "NAME", # accepts NAME, TIMESTAMP
+ #     aggregate_by: "TIMESTAMPS", # accepts TIMESTAMPS, SEGMENTS
  #   })
  #
  # @example Response structure
@@ -3378,8 +3544,23 @@ module Aws::Rekognition
  #   resp.labels[0].label.instances[0].bounding_box.left #=> Float
  #   resp.labels[0].label.instances[0].bounding_box.top #=> Float
  #   resp.labels[0].label.instances[0].confidence #=> Float
+ #   resp.labels[0].label.instances[0].dominant_colors #=> Array
+ #   resp.labels[0].label.instances[0].dominant_colors[0].red #=> Integer
+ #   resp.labels[0].label.instances[0].dominant_colors[0].blue #=> Integer
+ #   resp.labels[0].label.instances[0].dominant_colors[0].green #=> Integer
+ #   resp.labels[0].label.instances[0].dominant_colors[0].hex_code #=> String
+ #   resp.labels[0].label.instances[0].dominant_colors[0].css_color #=> String
+ #   resp.labels[0].label.instances[0].dominant_colors[0].simplified_color #=> String
+ #   resp.labels[0].label.instances[0].dominant_colors[0].pixel_percent #=> Float
  #   resp.labels[0].label.parents #=> Array
  #   resp.labels[0].label.parents[0].name #=> String
+ #   resp.labels[0].label.aliases #=> Array
+ #   resp.labels[0].label.aliases[0].name #=> String
+ #   resp.labels[0].label.categories #=> Array
+ #   resp.labels[0].label.categories[0].name #=> String
+ #   resp.labels[0].start_timestamp_millis #=> Integer
+ #   resp.labels[0].end_timestamp_millis #=> Integer
+ #   resp.labels[0].duration_millis #=> Integer
  #   resp.label_model_version #=> String
  #
  # @overload get_label_detection(params = {})
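
The `NextToken` pagination and new `aggregate_by` option described above can be sketched with a stand-in fetcher. This is a sketch only: the `fetch` lambda below returns canned pages in place of a real `client.get_label_detection` call, and the job ID is hypothetical:

```ruby
# Canned pages standing in for real GetLabelDetection responses;
# a nil next_token signals the last page.
pages = [
  { labels: [{ name: "Car" }], next_token: "page2" },
  { labels: [{ name: "Dog" }], next_token: nil }
]
# Stub for client.get_label_detection; a real call would hit the API.
fetch = lambda do |job_id:, next_token:, aggregate_by:|
  pages.shift
end

labels = []
token = nil
loop do
  resp = fetch.call(job_id: "hypothetical-job-id",
                    next_token: token,
                    aggregate_by: "SEGMENTS") # or "TIMESTAMPS" (the default)
  labels.concat(resp[:labels])
  token = resp[:next_token]
  break if token.nil? # no more pages
end
labels.map { |l| l[:name] } # => ["Car", "Dog"]
```

With `aggregate_by: "SEGMENTS"`, each returned label would carry `start_timestamp_millis`, `end_timestamp_millis`, and `duration_millis`, and per-instance bounding boxes would not be returned.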
@@ -5615,6 +5796,22 @@ module Aws::Rekognition
  # so, call GetLabelDetection and pass the job identifier (`JobId`) from
  # the initial call to `StartLabelDetection`.
  #
+ # *Optional Parameters*
+ #
+ # `StartLabelDetection` has the `GENERAL_LABELS` Feature applied by
+ # default. This feature allows you to provide filtering criteria to the
+ # `Settings` parameter. You can filter with sets of individual labels or
+ # with label categories. You can specify inclusive filters, exclusive
+ # filters, or a combination of inclusive and exclusive filters. For more
+ # information on filtering, see [Detecting labels in a video][1].
+ #
+ # You can specify `MinConfidence` to control the confidence threshold
+ # for the labels returned. The default is 50.
+ #
+ #
+ #
+ # [1]: https://docs.aws.amazon.com/rekognition/latest/dg/labels-detecting-labels-video.html
+ #
  # @option params [required, Types::Video] :video
  #   The video in which you want to detect labels. The video must be stored
  #   in an Amazon S3 bucket.
@@ -5634,7 +5831,8 @@ module Aws::Rekognition
  #   lower than this specified value.
  #
  #   If you don't specify `MinConfidence`, the operation returns labels
- #   with confidence values greater than or equal to 50 percent.
+ #   and bounding boxes (if detected) with confidence values greater than
+ #   or equal to 50 percent.
  #
  # @option params [Types::NotificationChannel] :notification_channel
  #   The Amazon SNS topic ARN you want Amazon Rekognition Video to publish
@@ -5648,6 +5846,15 @@ module Aws::Rekognition
  #   Service topic. For example, you can use `JobTag` to group related jobs
  #   and identify them in the completion notification.
  #
+ # @option params [Array<String>] :features
+ #   The features to return after video analysis. You can specify that
+ #   GENERAL\_LABELS are returned.
+ #
+ # @option params [Types::LabelDetectionSettings] :settings
+ #   The settings for a StartLabelDetection request. Contains the specified
+ #   parameters for the label detection request of an asynchronous label
+ #   analysis operation. Settings can include filters for GENERAL\_LABELS.
+ #
  # @return [Types::StartLabelDetectionResponse] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
  #
  # * {Types::StartLabelDetectionResponse#job_id #job_id} => String
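
The `features` and `settings` options above can likewise be collected into a params hash. A minimal sketch: every resource name (bucket, key, SNS topic, role) and the category filter value below are placeholders, and the client call needs real credentials:

```ruby
# Hypothetical StartLabelDetection request using the GENERAL_LABELS
# filters added in 1.73.0. All resource names are placeholders.
params = {
  video: { s3_object: { bucket: "my-bucket", name: "clip.mp4" } },
  min_confidence: 50.0, # documented default for video label detection
  notification_channel: {
    sns_topic_arn: "arn:aws:sns:us-east-1:111122223333:ExampleTopic",
    role_arn: "arn:aws:iam::111122223333:role/ExampleRekognitionRole"
  },
  job_tag: "label-detection-demo",
  features: ["GENERAL_LABELS"],
  settings: {
    general_labels: {
      # Exact category names must be supplied; this one is illustrative.
      label_category_inclusion_filters: ["ExampleCategory"]
    }
  }
}

# With credentials configured:
#   job_id = Aws::Rekognition::Client.new.start_label_detection(params).job_id
#   # ...then poll SNS for SUCCEEDED and call get_label_detection(job_id: job_id)
```

The returned `JobId` is what later gets passed to `GetLabelDetection` once the SNS topic reports `SUCCEEDED`.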
@@ -5669,6 +5876,15 @@ module Aws::Rekognition
  #       role_arn: "RoleArn", # required
  #     },
  #     job_tag: "JobTag",
+ #     features: ["GENERAL_LABELS"], # accepts GENERAL_LABELS
+ #     settings: {
+ #       general_labels: {
+ #         label_inclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_exclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_category_inclusion_filters: ["GeneralLabelsFilterValue"],
+ #         label_category_exclusion_filters: ["GeneralLabelsFilterValue"],
+ #       },
+ #     },
  #   })
  #
  # @example Response structure
@@ -5933,7 +6149,9 @@ module Aws::Rekognition
  # @option params [Types::StreamProcessingStartSelector] :start_selector
  #   Specifies the starting point in the Kinesis stream to start
  #   processing. You can use the producer timestamp or the fragment number.
- #   For more information, see [Fragment][1].
+ #   If you use the producer timestamp, you must put the time in
+ #   milliseconds. For more information about fragment numbers, see
+ #   [Fragment][1].
  #
  #   This is a required parameter for label detection stream processors and
  #   should not be used to start a face search stream processor.
@@ -6338,7 +6556,7 @@ module Aws::Rekognition
        params: params,
        config: config)
      context[:gem_name] = 'aws-sdk-rekognition'
-     context[:gem_version] = '1.71.0'
+     context[:gem_version] = '1.73.0'
      Seahorse::Client::Request.new(handlers, context)
    end