aws-sdk-kinesis 1.1.0 → 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/lib/aws-sdk-kinesis.rb +1 -1
- data/lib/aws-sdk-kinesis/client.rb +280 -165
- data/lib/aws-sdk-kinesis/client_api.rb +35 -3
- data/lib/aws-sdk-kinesis/types.rb +184 -65
- metadata +2 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9681b116c4e7af7fd4da150cc1610664c731b47f
+  data.tar.gz: 123cf0eb3babcef676f54b404085a3abd22b02a7
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 934ea7436d3ce43095b0c6f50ae6fd018f78e644dca588e985908651f5e3192856c490877d360961a4fdf9628f8ef8d172b5e4745e75c0d501b67a1842486b9f
+  data.tar.gz: e2cc4da0a2c4578f7c5f4ab5217f823bee422db18c693b3bd80ebc90cb715b2360ad3339acc4a010b00f51b8217bfc804689abb66ba987f8a920cf2f64f4aecc
data/lib/aws-sdk-kinesis/client.rb
CHANGED
@@ -155,13 +155,16 @@ module Aws::Kinesis
 
 # @!group API Operations
 
-# Adds or updates tags for the specified Kinesis stream. Each
-# have up to 10 tags.
+# Adds or updates tags for the specified Kinesis data stream. Each
+# stream can have up to 10 tags.
 #
 # If tags have already been assigned to the stream, `AddTagsToStream`
 # overwrites any existing tags that correspond to the specified tag
 # keys.
 #
+# AddTagsToStream has a limit of five transactions per second per
+# account.
+#
 # @option params [required, String] :stream_name
 # The name of the stream.
 #
@@ -188,30 +191,30 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Creates a Kinesis stream. A stream captures and transports data
+# Creates a Kinesis data stream. A stream captures and transports data
 # records that are continuously emitted from different data sources or
 # *producers*. Scale-out within a stream is explicitly supported by
 # means of shards, which are uniquely identified groups of data records
 # in a stream.
 #
 # You specify and control the number of shards that a stream is composed
-# of. Each shard can support reads up to
-# to a maximum data read total of 2 MB per second. Each shard can
+# of. Each shard can support reads up to five transactions per second,
+# up to a maximum data read total of 2 MB per second. Each shard can
 # support writes up to 1,000 records per second, up to a maximum data
-# write total of 1 MB per second.
+# write total of 1 MB per second. If the amount of data input increases
 # or decreases, you can add or remove shards.
 #
 # The stream name identifies the stream. The name is scoped to the AWS
-# account used by the application. It is also scoped by
-# two streams in two different accounts can have the same name, and
-# streams in the same account, but in two different
-# the same name.
+# account used by the application. It is also scoped by AWS Region. That
+# is, two streams in two different accounts can have the same name, and
+# two streams in the same account, but in two different Regions, can
+# have the same name.
 #
 # `CreateStream` is an asynchronous operation. Upon receiving a
-# `CreateStream` request, Kinesis Streams immediately returns and
-# the stream status to `CREATING`. After the stream is created,
-# Streams sets the stream status to `ACTIVE`. You should
-# and write operations only on an `ACTIVE` stream.
+# `CreateStream` request, Kinesis Data Streams immediately returns and
+# sets the stream status to `CREATING`. After the stream is created,
+# Kinesis Data Streams sets the stream status to `ACTIVE`. You should
+# perform read and write operations only on an `ACTIVE` stream.
 #
 # You receive a `LimitExceededException` when making a `CreateStream`
 # request when you try to do one of the following:
@@ -221,14 +224,14 @@ module Aws::Kinesis
 #
 # * Create more shards than are authorized for your account.
 #
-# For the default shard limit for an AWS account, see [
-# Limits][1] in the *Amazon Kinesis Streams Developer
-# increase this limit, [contact AWS Support][2].
+# For the default shard limit for an AWS account, see [Amazon Kinesis
+# Data Streams Limits][1] in the *Amazon Kinesis Data Streams Developer
+# Guide*. To increase this limit, [contact AWS Support][2].
 #
 # You can use `DescribeStream` to check the stream status, which is
 # returned in `StreamStatus`.
 #
-# CreateStream has a limit of
+# CreateStream has a limit of five transactions per second per account.
 #
 #
 #
@@ -238,9 +241,9 @@ module Aws::Kinesis
 # @option params [required, String] :stream_name
 # A name to identify the stream. The stream name is scoped to the AWS
 # account used by the application that creates the stream. It is also
-# scoped by
-# can have the same name. Two streams in the same AWS account
-# different
+# scoped by AWS Region. That is, two streams in two different AWS
+# accounts can have the same name. Two streams in the same AWS account
+# but in two different Regions can also have the same name.
 #
 # @option params [required, Integer] :shard_count
 # The number of shards that the stream will use. The throughput of the
@@ -267,8 +270,8 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Decreases the Kinesis stream's retention period, which is the
-# of time data records are accessible after they are added to the
+# Decreases the Kinesis data stream's retention period, which is the
+# length of time data records are accessible after they are added to the
 # stream. The minimum value of a stream's retention period is 24 hours.
 #
 # This operation may result in lost data. For example, if the stream's
@@ -300,18 +303,18 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Deletes a Kinesis stream and all its shards and data. You must
-# down any applications that are operating on the stream before you
+# Deletes a Kinesis data stream and all its shards and data. You must
+# shut down any applications that are operating on the stream before you
 # delete the stream. If an application attempts to operate on a deleted
 # stream, it receives the exception `ResourceNotFoundException`.
 #
 # If the stream is in the `ACTIVE` state, you can delete it. After a
 # `DeleteStream` request, the specified stream is in the `DELETING`
-# state until Kinesis Streams completes the deletion.
+# state until Kinesis Data Streams completes the deletion.
 #
-# **Note:** Kinesis Streams might continue to accept data read and
-# operations, such as PutRecord, PutRecords, and GetRecords, on a
-# in the `DELETING` state until the stream deletion is complete.
+# **Note:** Kinesis Data Streams might continue to accept data read and
+# write operations, such as PutRecord, PutRecords, and GetRecords, on a
+# stream in the `DELETING` state until the stream deletion is complete.
 #
 # When you delete a stream, any shards in that stream are also deleted,
 # and any tags are dissociated from the stream.
@@ -319,7 +322,7 @@ module Aws::Kinesis
 # You can use the DescribeStream operation to check the state of the
 # stream, which is returned in `StreamStatus`.
 #
-# DeleteStream has a limit of
+# DeleteStream has a limit of five transactions per second per account.
 #
 # @option params [required, String] :stream_name
 # The name of the stream to delete.
@@ -346,7 +349,7 @@ module Aws::Kinesis
 # If you update your account limits, the old limits might be returned
 # for a few minutes.
 #
-# This operation has a limit of
+# This operation has a limit of one transaction per second per account.
 #
 # @return [Types::DescribeLimitsOutput] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
 #
@@ -367,7 +370,7 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Describes the specified Kinesis stream.
+# Describes the specified Kinesis data stream.
 #
 # The information returned includes the stream name, Amazon Resource
 # Name (ARN), creation time, enhanced metric configuration, and shard
@@ -380,7 +383,7 @@ module Aws::Kinesis
 #
 # You can limit the number of shards returned by each call. For more
 # information, see [Retrieving Shards from a Stream][1] in the *Amazon
-# Kinesis Streams Developer Guide*.
+# Kinesis Data Streams Developer Guide*.
 #
 # There are no guarantees about the chronological order shards returned.
 # To process shards in chronological order, use the ID of the parent
@@ -446,7 +449,7 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Provides a summarized description of the specified Kinesis stream
+# Provides a summarized description of the specified Kinesis data stream
 # without the shard list.
 #
 # The information returned includes the stream name, Amazon Resource
@@ -492,7 +495,7 @@ module Aws::Kinesis
 # Disables enhanced monitoring.
 #
 # @option params [required, String] :stream_name
-# The name of the Kinesis stream for which to disable enhanced
+# The name of the Kinesis data stream for which to disable enhanced
 # monitoring.
 #
 # @option params [required, Array<String>] :shard_level_metrics
|
@@ -517,8 +520,8 @@ module Aws::Kinesis
|
|
517
520
|
#
|
518
521
|
# * `ALL`
|
519
522
|
#
|
520
|
-
# For more information, see [Monitoring the Amazon Kinesis Streams
|
521
|
-
# Service with Amazon CloudWatch][1] in the *Amazon Kinesis Streams
|
523
|
+
# For more information, see [Monitoring the Amazon Kinesis Data Streams
|
524
|
+
# Service with Amazon CloudWatch][1] in the *Amazon Kinesis Data Streams
|
522
525
|
# Developer Guide*.
|
523
526
|
#
|
524
527
|
#
|
@@ -555,7 +558,8 @@ module Aws::Kinesis
|
|
555
558
|
req.send_request(options)
|
556
559
|
end
|
557
560
|
|
558
|
-
# Enables enhanced Kinesis stream monitoring for shard-level
|
561
|
+
# Enables enhanced Kinesis data stream monitoring for shard-level
|
562
|
+
# metrics.
|
559
563
|
#
|
560
564
|
# @option params [required, String] :stream_name
|
561
565
|
# The name of the stream for which to enable enhanced monitoring.
|
@@ -582,8 +586,8 @@ module Aws::Kinesis
 #
 # * `ALL`
 #
-# For more information, see [Monitoring the Amazon Kinesis Streams
-# Service with Amazon CloudWatch][1] in the *Amazon Kinesis Streams
+# For more information, see [Monitoring the Amazon Kinesis Data Streams
+# Service with Amazon CloudWatch][1] in the *Amazon Kinesis Data Streams
 # Developer Guide*.
 #
 #
@@ -620,7 +624,7 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Gets data records from a Kinesis stream's shard.
+# Gets data records from a Kinesis data stream's shard.
 #
 # Specify a shard iterator using the `ShardIterator` parameter. The
 # shard iterator specifies the position in the shard from which you want
@@ -630,17 +634,17 @@ module Aws::Kinesis
 # to a portion of the shard that contains records.
 #
 # You can scale by provisioning multiple shards per stream while
-# considering service limits (for more information, see [
-# Limits][1] in the *Amazon Kinesis Streams Developer
-# application should have one thread per shard, each
-# continuously from its stream. To read from a stream
-# GetRecords in a loop. Use GetShardIterator to get
-# to specify in the first GetRecords call. GetRecords
-# shard iterator in `NextShardIterator`. Specify the shard
-# returned in `NextShardIterator` in subsequent calls to
-# the shard has been closed, the shard iterator can't
-# and GetRecords returns `null` in `NextShardIterator`.
-# terminate the loop when the shard is closed, or when the shard
+# considering service limits (for more information, see [Amazon Kinesis
+# Data Streams Limits][1] in the *Amazon Kinesis Data Streams Developer
+# Guide*). Your application should have one thread per shard, each
+# reading continuously from its stream. To read from a stream
+# continually, call GetRecords in a loop. Use GetShardIterator to get
+# the shard iterator to specify in the first GetRecords call. GetRecords
+# returns a new shard iterator in `NextShardIterator`. Specify the shard
+# iterator returned in `NextShardIterator` in subsequent calls to
+# GetRecords. If the shard has been closed, the shard iterator can't
+# return more data and GetRecords returns `null` in `NextShardIterator`.
+# You can terminate the loop when the shard is closed, or when the shard
 # iterator reaches the record with the sequence number or other
 # attribute that marks it as the last record to process.
 #
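The GetShardIterator/GetRecords loop described above can be sketched in plain Ruby. `FakeShard` below is an invented in-memory stand-in for the service, not part of aws-sdk-kinesis; with the real SDK you would call `client.get_shard_iterator` and `client.get_records` in the same shape.

```ruby
# FakeShard models a closed shard as an array of record batches; the
# iterator is just an index, and nil plays the role of a null
# NextShardIterator (shard closed, no more data).
FakeShard = Struct.new(:batches) do
  def get_shard_iterator
    0
  end

  # Returns [records, next_shard_iterator]; nil iterator means closed.
  def get_records(iterator)
    next_iterator = iterator + 1 < batches.size ? iterator + 1 : nil
    [batches[iterator], next_iterator]
  end
end

# Read continuously until the shard is closed, exactly as the docs
# describe: keep calling GetRecords with the iterator returned by the
# previous call, and stop when that iterator is nil.
def read_shard(shard)
  records = []
  iterator = shard.get_shard_iterator
  until iterator.nil?
    batch, iterator = shard.get_records(iterator)
    records.concat(batch)
    # Against the real API, sleep about one second here to stay within
    # the per-shard read limits mentioned above.
  end
  records
end
```

The same loop structure applies unchanged to the real client; only the two call sites differ.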
@@ -653,10 +657,10 @@ module Aws::Kinesis
 # The size of the data returned by GetRecords varies depending on the
 # utilization of the shard. The maximum size of data that GetRecords can
 # return is 10 MB. If a call returns this amount of data, subsequent
-# calls made within the next
+# calls made within the next five seconds throw
 # `ProvisionedThroughputExceededException`. If there is insufficient
-# provisioned throughput on the
-# next
+# provisioned throughput on the stream, subsequent calls made within the
+# next one second throw `ProvisionedThroughputExceededException`.
 # GetRecords won't return any data when it throws an exception. For
 # this reason, we recommend that you wait one second between calls to
 # GetRecords; however, it's possible that the application will get
@@ -665,7 +669,7 @@ module Aws::Kinesis
 # To detect whether the application is falling behind in processing, you
 # can use the `MillisBehindLatest` response attribute. You can also
 # monitor the stream using CloudWatch metrics and other mechanisms (see
-# [Monitoring][2] in the *Amazon Kinesis Streams Developer Guide*).
+# [Monitoring][2] in the *Amazon Kinesis Data Streams Developer Guide*).
 #
 # Each Amazon Kinesis record includes a value,
 # `ApproximateArrivalTimestamp`, that is set when a stream successfully
@@ -760,24 +764,25 @@ module Aws::Kinesis
 # If a GetShardIterator request is made too often, you receive a
 # `ProvisionedThroughputExceededException`. For more information about
 # throughput limits, see GetRecords, and [Streams Limits][1] in the
-# *Amazon Kinesis Streams Developer Guide*.
+# *Amazon Kinesis Data Streams Developer Guide*.
 #
 # If the shard is closed, GetShardIterator returns a valid iterator for
 # the last sequence number of the shard. A shard can be closed as a
 # result of using SplitShard or MergeShards.
 #
-# GetShardIterator has a limit of
-# per open shard.
+# GetShardIterator has a limit of five transactions per second per
+# account per open shard.
 #
 #
 #
 # [1]: http://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html
 #
 # @option params [required, String] :stream_name
-# The name of the Amazon Kinesis stream.
+# The name of the Amazon Kinesis data stream.
 #
 # @option params [required, String] :shard_id
-# The shard ID of the Kinesis Streams shard to get the iterator
+# The shard ID of the Kinesis Data Streams shard to get the iterator
+# for.
 #
 # @option params [required, String] :shard_iterator_type
 # Determines how the shard iterator is used to start reading data
@@ -844,7 +849,7 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Increases the
+# Increases the Kinesis data stream's retention period, which is the
 # length of time data records are accessible after they are added to the
 # stream. The maximum value of a stream's retention period is 168 hours
 # (7 days).
@@ -883,13 +888,121 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Lists
+# Lists the shards in a stream and provides information about each
+# shard.
+#
+# This API is a new operation that is used by the Amazon Kinesis Client
+# Library (KCL). If you have a fine-grained IAM policy that only allows
+# specific operations, you must update your policy to allow calls to
+# this API. For more information, see [Controlling Access to Amazon
+# Kinesis Data Streams Resources Using IAM][1].
+#
+#
+#
+# [1]: https://docs.aws.amazon.com/streams/latest/dev/controlling-access.html
+#
+# @option params [String] :stream_name
+# The name of the data stream whose shards you want to list.
+#
+# You cannot specify this parameter if you specify the `NextToken`
+# parameter.
+#
+# @option params [String] :next_token
+# When the number of shards in the data stream is greater than the
+# default value for the `MaxResults` parameter, or if you explicitly
+# specify a value for `MaxResults` that is less than the number of
+# shards in the data stream, the response includes a pagination token
+# named `NextToken`. You can specify this `NextToken` value in a
+# subsequent call to `ListShards` to list the next set of shards.
+#
+# Don't specify `StreamName` or `StreamCreationTimestamp` if you
+# specify `NextToken` because the latter unambiguously identifies the
+# stream.
+#
+# You can optionally specify a value for the `MaxResults` parameter when
+# you specify `NextToken`. If you specify a `MaxResults` value that is
+# less than the number of shards that the operation returns if you
+# don't specify `MaxResults`, the response will contain a new
+# `NextToken` value. You can use the new `NextToken` value in a
+# subsequent call to the `ListShards` operation.
+#
+# Tokens expire after 300 seconds. When you obtain a value for
+# `NextToken` in the response to a call to `ListShards`, you have 300
+# seconds to use that value. If you specify an expired token in a call
+# to `ListShards`, you get `ExpiredNextTokenException`.
+#
+# @option params [String] :exclusive_start_shard_id
+# The ID of the shard to start the list with.
+#
+# If you don't specify this parameter, the default behavior is for
+# `ListShards` to list the shards starting with the first one in the
+# stream.
+#
+# You cannot specify this parameter if you specify `NextToken`.
+#
+# @option params [Integer] :max_results
+# The maximum number of shards to return in a single call to
+# `ListShards`. The minimum value you can specify for this parameter is
+# 1, and the maximum is 1,000, which is also the default.
+#
+# When the number of shards to be listed is greater than the value of
+# `MaxResults`, the response contains a `NextToken` value that you can
+# use in a subsequent call to `ListShards` to list the next set of
+# shards.
+#
+# @option params [Time,DateTime,Date,Integer,String] :stream_creation_timestamp
+# Specify this input parameter to distinguish data streams that have the
+# same name. For example, if you create a data stream and then delete
+# it, and you later create another data stream with the same name, you
+# can use this input parameter to specify which of the two streams you
+# want to list the shards for.
+#
+# You cannot specify this parameter if you specify the `NextToken`
+# parameter.
+#
+# @return [Types::ListShardsOutput] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
+#
+# * {Types::ListShardsOutput#shards #shards} => Array<Types::Shard>
+# * {Types::ListShardsOutput#next_token #next_token} => String
+#
+# @example Request syntax with placeholder values
+#
+# resp = client.list_shards({
+# stream_name: "StreamName",
+# next_token: "NextToken",
+# exclusive_start_shard_id: "ShardId",
+# max_results: 1,
+# stream_creation_timestamp: Time.now,
+# })
+#
+# @example Response structure
+#
+# resp.shards #=> Array
+# resp.shards[0].shard_id #=> String
+# resp.shards[0].parent_shard_id #=> String
+# resp.shards[0].adjacent_parent_shard_id #=> String
+# resp.shards[0].hash_key_range.starting_hash_key #=> String
+# resp.shards[0].hash_key_range.ending_hash_key #=> String
+# resp.shards[0].sequence_number_range.starting_sequence_number #=> String
+# resp.shards[0].sequence_number_range.ending_sequence_number #=> String
+# resp.next_token #=> String
+#
+# @see http://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/ListShards AWS API Documentation
+#
+# @overload list_shards(params = {})
+# @param [Hash] params ({})
+def list_shards(params = {}, options = {})
+req = build_request(:list_shards, params)
+req.send_request(options)
+end
+
+# Lists your Kinesis data streams.
 #
 # The number of streams may be too large to return from a single call to
 # `ListStreams`. You can limit the number of returned streams using the
 # `Limit` parameter. If you do not specify a value for the `Limit`
-# parameter, Kinesis Streams uses the default limit, which is
-# 10.
+# parameter, Kinesis Data Streams uses the default limit, which is
+# currently 10.
 #
 # You can detect if there are more streams available to list by using
 # the `HasMoreStreams` flag from the returned output. If there are more
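The `NextToken` pagination contract that the new `list_shards` operation documents (first call with `stream_name`, follow-up calls with only `next_token`, stop when the token is nil) can be sketched as a loop. `FakeKinesis` and `Page` below are invented stand-ins so the loop can run without AWS; with the real `Aws::Kinesis::Client` the same loop applies unchanged.

```ruby
# A response page: the shards returned plus the pagination token.
Page = Struct.new(:shards, :next_token)

# Stand-in client: pages are keyed by the token that addresses them
# (nil addresses the first page).
FakeKinesis = Struct.new(:pages) do
  def list_shards(params = {})
    pages.fetch(params[:next_token])
  end
end

# Collect every shard by following NextToken until it is nil. Per the
# docs above, once next_token is passed, stream_name must not be:
# the token already identifies the stream unambiguously.
def all_shards(client, stream_name)
  shards = []
  resp = client.list_shards(stream_name: stream_name)
  loop do
    shards.concat(resp.shards)
    break if resp.next_token.nil?
    resp = client.list_shards(next_token: resp.next_token)
  end
  shards
end

CLIENT = FakeKinesis.new({
  nil     => Page.new(%w[shardId-000000000000 shardId-000000000001], 'tok-1'),
  'tok-1' => Page.new(%w[shardId-000000000002], nil)
})
```

In production, remember the 300-second token lifetime noted above: an expired token raises `ExpiredNextTokenException`, so the loop should not pause longer than that between pages.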
@@ -900,7 +1013,7 @@ module Aws::Kinesis
 # request is then added to the list. You can continue this process until
 # all the stream names have been collected in the list.
 #
-# ListStreams has a limit of
+# ListStreams has a limit of five transactions per second per account.
 #
 # @option params [Integer] :limit
 # The maximum number of streams to list.
@@ -935,7 +1048,8 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Lists the tags for the specified Kinesis stream.
+# Lists the tags for the specified Kinesis data stream. This operation
+# has a limit of five transactions per second per account.
 #
 # @option params [required, String] :stream_name
 # The name of the stream.
@@ -980,8 +1094,8 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Merges two adjacent shards in a Kinesis stream and combines them
-# a single shard to reduce the stream's capacity to ingest and
+# Merges two adjacent shards in a Kinesis data stream and combines them
+# into a single shard to reduce the stream's capacity to ingest and
 # transport data. Two shards are considered adjacent if the union of the
 # hash key ranges for the two shards form a contiguous set with no gaps.
 # For example, if you have two shards, one with a hash key range of
@@ -995,7 +1109,7 @@ module Aws::Kinesis
 # capacity of a stream because of excess capacity that is not being
 # used. You must specify the shard to be merged and the adjacent shard
 # for a stream. For more information about merging shards, see [Merge
-# Two Shards][1] in the *Amazon Kinesis Streams Developer Guide*.
+# Two Shards][1] in the *Amazon Kinesis Data Streams Developer Guide*.
 #
 # If the stream is in the `ACTIVE` state, you can call `MergeShards`. If
 # a stream is in the `CREATING`, `UPDATING`, or `DELETING` state,
@@ -1007,20 +1121,20 @@ module Aws::Kinesis
 # returned in `StreamStatus`.
 #
 # `MergeShards` is an asynchronous operation. Upon receiving a
-# `MergeShards` request, Amazon Kinesis immediately returns
-# and sets the `StreamStatus` to `UPDATING`. After the
-# completed,
-# and write operations continue to work while the
-# `UPDATING` state.
+# `MergeShards` request, Amazon Kinesis Data Streams immediately returns
+# a response and sets the `StreamStatus` to `UPDATING`. After the
+# operation is completed, Kinesis Data Streams sets the `StreamStatus`
+# to `ACTIVE`. Read and write operations continue to work while the
+# stream is in the `UPDATING` state.
 #
 # You use DescribeStream to determine the shard IDs that are specified
 # in the `MergeShards` request.
 #
 # If you try to operate on too many streams in parallel using
-# CreateStream, DeleteStream, `MergeShards
-#
+# CreateStream, DeleteStream, `MergeShards`, or SplitShard, you receive
+# a `LimitExceededException`.
 #
-# `MergeShards` has a limit of
+# `MergeShards` has a limit of five transactions per second per account.
 #
 #
 #
@@ -1055,7 +1169,7 @@ module Aws::Kinesis
 req.send_request(options)
 end
 
-# Writes a single data record into an Amazon Kinesis stream. Call
+# Writes a single data record into an Amazon Kinesis data stream. Call
 # `PutRecord` to send data into the stream for real-time ingestion and
 # subsequent processing, one record at a time. Each shard can support
 # writes up to 1,000 records per second, up to a maximum data write
|
|
1068
1182
|
# log file, geographic/location data, website clickstream data, and so
|
1069
1183
|
# on.
|
1070
1184
|
#
|
1071
|
-
# The partition key is used by Kinesis Streams to distribute data
|
1072
|
-
# shards. Kinesis Streams segregates the data records that
|
1073
|
-
# stream into multiple shards, using the partition key
|
1074
|
-
# each data record to determine the shard to which a
|
1075
|
-
# belongs.
|
1185
|
+
# The partition key is used by Kinesis Data Streams to distribute data
|
1186
|
+
# across shards. Kinesis Data Streams segregates the data records that
|
1187
|
+
# belong to a stream into multiple shards, using the partition key
|
1188
|
+
# associated with each data record to determine the shard to which a
|
1189
|
+
# given data record belongs.
|
1076
1190
|
#
|
1077
1191
|
# Partition keys are Unicode strings, with a maximum length limit of 256
|
1078
1192
|
# characters for each key. An MD5 hash function is used to map partition
|
@@ -1081,7 +1195,7 @@ module Aws::Kinesis
|
|
1081
1195
|
# hashing the partition key to determine the shard by explicitly
|
1082
1196
|
# specifying a hash value using the `ExplicitHashKey` parameter. For
|
1083
1197
|
# more information, see [Adding Data to a Stream][1] in the *Amazon
|
1084
|
-
# Kinesis Streams Developer Guide*.
|
1198
|
+
# Kinesis Data Streams Developer Guide*.
|
1085
1199
|
#
|
1086
1200
|
# `PutRecord` returns the shard ID of where the data record was placed
|
1087
1201
|
# and the sequence number that was assigned to the data record.
|
@@ -1090,8 +1204,8 @@ module Aws::Kinesis
 # a stream, not across all shards within a stream. To guarantee strictly
 # increasing ordering, write serially to a shard and use the
 # `SequenceNumberForOrdering` parameter. For more information, see
-# [Adding Data to a Stream][1] in the *Amazon Kinesis Streams
-# Guide*.
+# [Adding Data to a Stream][1] in the *Amazon Kinesis Data Streams
+# Developer Guide*.
 #
 # If a `PutRecord` request cannot be processed because of insufficient
 # provisioned throughput on the shard involved in the request,
@@ -1118,13 +1232,13 @@ module Aws::Kinesis
 # @option params [required, String] :partition_key
 # Determines which shard in the stream the data record is assigned to.
 # Partition keys are Unicode strings with a maximum length limit of 256
-# characters for each key. Amazon Kinesis uses the partition key as
-# input to a hash function that maps the partition key and associated
-# data to a specific shard. Specifically, an MD5 hash function is used
-# to map partition keys to 128-bit integer values and to map associated
-# data records to shards. As a result of this hashing mechanism, all
-# data records with the same partition key map to the same shard within
-# the stream.
+# characters for each key. Amazon Kinesis Data Streams uses the
+# partition key as input to a hash function that maps the partition key
+# and associated data to a specific shard. Specifically, an MD5 hash
+# function is used to map partition keys to 128-bit integer values and
+# to map associated data records to shards. As a result of this hashing
+# mechanism, all data records with the same partition key map to the
+# same shard within the stream.
 #
 # @option params [String] :explicit_hash_key
 # The hash value used to explicitly determine the shard the data record
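The MD5 mapping this hunk describes can be sketched in plain Ruby (stdlib only). The shard hash-key ranges below are hypothetical stand-ins for the values `DescribeStream` would return; this illustrates the documented mechanism, it is not SDK code:

```ruby
require 'digest'

# Map a partition key to a 128-bit integer, as the docs describe:
# the MD5 hash of the key, read as an unsigned 128-bit value.
def hash_key_for(partition_key)
  Digest::MD5.hexdigest(partition_key).to_i(16)
end

# Hypothetical two-shard stream; each shard owns a hash-key range.
SHARDS = [
  { shard_id: 'shardId-000000000000', range: 0..(2**127 - 1) },
  { shard_id: 'shardId-000000000001', range: (2**127)..(2**128 - 1) }
].freeze

# Pick the shard whose range covers the key's hash value.
def shard_for(partition_key)
  n = hash_key_for(partition_key)
  SHARDS.find { |s| s[:range].cover?(n) }[:shard_id]
end

# The same partition key always maps to the same shard.
shard_for('user-42') == shard_for('user-42') # => true
```

Because the mapping is deterministic, records sharing a partition key preserve their relative order within one shard.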
@@ -1169,9 +1283,9 @@ module Aws::Kinesis
   req.send_request(options)
 end

-# Writes multiple data records into a Kinesis stream in a single call
-# (also referred to as a `PutRecords` request). Use this operation to
-# send data into the stream for data ingestion and processing.
+# Writes multiple data records into a Kinesis data stream in a single
+# call (also referred to as a `PutRecords` request). Use this operation
+# to send data into the stream for data ingestion and processing.
 #
 # Each `PutRecords` request can support up to 500 records. Each record
 # in the request can be as large as 1 MB, up to a limit of 5 MB for the
@@ -1189,21 +1303,21 @@ module Aws::Kinesis
 # log file, geographic/location data, website clickstream data, and so
 # on.
 #
-# The partition key is used by Kinesis Streams as input to a hash
+# The partition key is used by Kinesis Data Streams as input to a hash
 # function that maps the partition key and associated data to a specific
 # shard. An MD5 hash function is used to map partition keys to 128-bit
 # integer values and to map associated data records to shards. As a
 # result of this hashing mechanism, all data records with the same
 # partition key map to the same shard within the stream. For more
 # information, see [Adding Data to a Stream][1] in the *Amazon Kinesis
-# Streams Developer Guide*.
+# Data Streams Developer Guide*.
 #
 # Each record in the `Records` array may include an optional parameter,
 # `ExplicitHashKey`, which overrides the partition key to shard mapping.
 # This parameter allows a data producer to determine explicitly the
 # shard where the record is stored. For more information, see [Adding
-# Multiple Records with PutRecords][2] in the *Amazon Kinesis Streams
-# Developer Guide*.
+# Multiple Records with PutRecords][2] in the *Amazon Kinesis Data
+# Streams Developer Guide*.
 #
 # The `PutRecords` response includes an array of response `Records`.
 # Each record in the response array directly correlates with a record in
@@ -1212,9 +1326,9 @@ module Aws::Kinesis
 # includes the same number of records as the request array.
 #
 # The response `Records` array includes both successfully and
-# unsuccessfully processed records. Kinesis Streams attempts to process
-# all records in each `PutRecords` request. A single record failure does
-# not stop the processing of subsequent records.
+# unsuccessfully processed records. Kinesis Data Streams attempts to
+# process all records in each `PutRecords` request. A single record
+# failure does not stop the processing of subsequent records.
 #
 # A successfully processed record includes `ShardId` and
 # `SequenceNumber` values. The `ShardId` parameter identifies the shard
@@ -1231,7 +1345,7 @@ module Aws::Kinesis
 # account ID, stream name, and shard ID of the record that was
 # throttled. For more information about partially successful responses,
 # see [Adding Multiple Records with PutRecords][3] in the *Amazon
-# Kinesis Streams Developer Guide*.
+# Kinesis Data Streams Developer Guide*.
 #
 # By default, data records are accessible for 24 hours from the time
 # that they are added to a stream. You can use
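The retry pattern implied by the response description above can be sketched without the SDK: the response array correlates by index with the request array, so resend only the entries whose response record carries an error code. The hash shapes and values below merely mimic the SDK's; this is a stand-alone sketch, not SDK code:

```ruby
# Given a PutRecords request array and the parallel response-records
# array, return the request entries that failed and should be retried.
def failed_entries(request_records, response_records)
  request_records.zip(response_records)
                 .select { |_req, resp| resp[:error_code] } # nil on success
                 .map(&:first)
end

request_records = [
  { data: 'a', partition_key: 'k1' },
  { data: 'b', partition_key: 'k2' },
  { data: 'c', partition_key: 'k3' }
]

# Hypothetical response: the second record was throttled.
response_records = [
  { sequence_number: '100', shard_id: 'shardId-000000000000' },
  { error_code: 'ProvisionedThroughputExceededException',
    error_message: 'Rate exceeded' },
  { sequence_number: '101', shard_id: 'shardId-000000000001' }
]

failed_entries(request_records, response_records)
# => [{ data: 'b', partition_key: 'k2' }]
```

In practice you would loop, resending only the failed subset (ideally with backoff) until it is empty or a retry budget is exhausted.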
@@ -1288,12 +1402,15 @@ module Aws::Kinesis
   req.send_request(options)
 end

-# Removes tags from the specified Kinesis stream. Removed tags are
+# Removes tags from the specified Kinesis data stream. Removed tags are
 # deleted and cannot be recovered after this operation successfully
 # completes.
 #
 # If you specify a tag that does not exist, it is ignored.
 #
+# RemoveTagsFromStream has a limit of five transactions per second per
+# account.
+#
 # @option params [required, String] :stream_name
 # The name of the stream.
 #
@@ -1318,35 +1435,35 @@ module Aws::Kinesis
   req.send_request(options)
 end

-# Splits a shard into two new shards in the Kinesis stream, to increase
-# the stream's capacity to ingest and transport data. `SplitShard` is
-# called when there is a need to increase the overall capacity of a
-# stream because of an expected increase in the volume of data records
-# being ingested.
+# Splits a shard into two new shards in the Kinesis data stream, to
+# increase the stream's capacity to ingest and transport data.
+# `SplitShard` is called when there is a need to increase the overall
+# capacity of a stream because of an expected increase in the volume of
+# data records being ingested.
 #
 # You can also use `SplitShard` when a shard appears to be approaching
 # its maximum utilization; for example, the producers sending data into
 # the specific shard are suddenly sending more than previously
 # anticipated. You can also call `SplitShard` to increase stream
-# capacity, so that more Kinesis Streams applications can simultaneously
-# read data from the stream for real-time processing.
+# capacity, so that more Kinesis Data Streams applications can
+# simultaneously read data from the stream for real-time processing.
 #
 # You must specify the shard to be split and the new hash key, which is
 # the position in the shard where the shard gets split in two. In many
 # cases, the new hash key might be the average of the beginning and
 # ending hash key, but it can be any hash key value in the range being
 # mapped into the shard. For more information, see [Split a Shard][1] in
-# the *Amazon Kinesis Streams Developer Guide*.
+# the *Amazon Kinesis Data Streams Developer Guide*.
 #
 # You can use DescribeStream to determine the shard ID and hash key
 # values for the `ShardToSplit` and `NewStartingHashKey` parameters that
 # are specified in the `SplitShard` request.
 #
 # `SplitShard` is an asynchronous operation. Upon receiving a
-# `SplitShard` request, Kinesis Streams immediately returns a response
-# and sets the stream status to `UPDATING`. After the operation is
-# completed, Kinesis Streams sets the stream status to `ACTIVE`. Read
-# and write operations continue to work while the stream is in the
+# `SplitShard` request, Kinesis Data Streams immediately returns a
+# response and sets the stream status to `UPDATING`. After the operation
+# is completed, Kinesis Data Streams sets the stream status to `ACTIVE`.
+# Read and write operations continue to work while the stream is in the
 # `UPDATING` state.
 #
 # You can use `DescribeStream` to check the status of the stream, which
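The "average of the beginning and ending hash key" suggestion above is simple to compute, since hash keys are decimal strings encoding 128-bit integers. A minimal sketch, with a hypothetical full-range shard as input:

```ruby
# Midpoint of a shard's hash-key range, suitable as NewStartingHashKey.
# The API passes hash keys as decimal strings, so we convert both ways.
def split_point(starting_hash_key, ending_hash_key)
  ((starting_hash_key.to_i + ending_hash_key.to_i) / 2).to_s
end

# A hypothetical shard covering the entire 128-bit hash-key space:
split_point('0', (2**128 - 1).to_s)
# => "170141183460469231731687303715884105727"  (2**127 - 1)
```

Any value inside the shard's range is legal; the midpoint simply yields two child shards of equal hash-key width.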
@@ -1360,14 +1477,14 @@ module Aws::Kinesis
 # authorized for your account, you receive a `LimitExceededException`.
 #
 # For the default shard limit for an AWS account, see [Streams
-# Limits][2] in the *Amazon Kinesis Streams Developer Guide*. To
+# Limits][2] in the *Amazon Kinesis Data Streams Developer Guide*. To
 # increase this limit, [contact AWS Support][3].
 #
 # If you try to operate on too many streams simultaneously using
 # CreateStream, DeleteStream, MergeShards, and/or SplitShard, you
 # receive a `LimitExceededException`.
 #
-# `SplitShard` has a limit of 5 transactions per second per account.
+# `SplitShard` has a limit of five transactions per second per account.
 #
 #
 #
@@ -1414,11 +1531,11 @@ module Aws::Kinesis
 # specified stream.
 #
 # Starting encryption is an asynchronous operation. Upon receiving the
-# request, Kinesis Streams returns immediately and sets the status of
-# the stream to `UPDATING`. After the update is complete, Kinesis
-# Streams sets the status of the stream back to `ACTIVE`. Updating or
-# applying encryption normally takes a few seconds to complete, but it
-# can take minutes. You can continue to read and write data to your
+# request, Kinesis Data Streams returns immediately and sets the status
+# of the stream to `UPDATING`. After the update is complete, Kinesis
+# Data Streams sets the status of the stream back to `ACTIVE`. Updating
+# or applying encryption normally takes a few seconds to complete, but
+# it can take minutes. You can continue to read and write data to your
 # stream while its status is `UPDATING`. Once the status of the stream
 # is `ACTIVE`, encryption begins for records written to the stream.
 #
@@ -1438,11 +1555,11 @@ module Aws::Kinesis
 # The encryption type to use. The only valid value is `KMS`.
 #
 # @option params [required, String] :key_id
-# The GUID for the customer-managed KMS key to use for encryption. This
-# value can be a globally unique identifier, a fully specified ARN to
-# either an alias or a key, or an alias name prefixed by "alias/". You
-# can also use a master key owned by Kinesis Streams by specifying the
-# alias `aws/kinesis`.
+# The GUID for the customer-managed AWS KMS key to use for encryption.
+# This value can be a globally unique identifier, a fully specified
+# Amazon Resource Name (ARN) to either an alias or a key, or an alias
+# name prefixed by "alias/".You can also use a master key owned by
+# Kinesis Data Streams by specifying the alias `aws/kinesis`.
 #
 # * Key ARN example:
 # `arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012`
@@ -1455,7 +1572,7 @@ module Aws::Kinesis
 #
 # * Alias name example: `alias/MyAliasName`
 #
-# * Master key owned by Kinesis Streams: `alias/aws/kinesis`
+# * Master key owned by Kinesis Data Streams: `alias/aws/kinesis`
 #
 # @return [Struct] Returns an empty {Seahorse::Client::Response response}.
 #
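The accepted `key_id` formats listed above can be told apart with a rough classifier. The regular expressions below are illustrative approximations for local sanity checks, not the service's actual validation rules:

```ruby
# Rough classification of the key_id formats the encryption APIs accept.
def key_id_kind(key_id)
  case key_id
  when %r{\Aarn:aws:kms:[^:]+:\d{12}:key/}   then :key_arn
  when %r{\Aarn:aws:kms:[^:]+:\d{12}:alias/} then :alias_arn
  when %r{\Aalias/aws/kinesis\z}             then :service_master_key
  when %r{\Aalias/}                          then :alias_name
  else                                            :guid
  end
end

key_id_kind('arn:aws:kms:us-east-1:123456789012:key/' \
            '12345678-1234-1234-1234-123456789012')  # => :key_arn
key_id_kind('alias/MyAliasName')                     # => :alias_name
key_id_kind('alias/aws/kinesis')                     # => :service_master_key
```

Note the `alias/aws/kinesis` check must come before the generic `alias/` check, since the service-owned master key is itself an alias name.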
@@ -1479,13 +1596,13 @@ module Aws::Kinesis
 # Disables server-side encryption for a specified stream.
 #
 # Stopping encryption is an asynchronous operation. Upon receiving the
-# request, Kinesis Streams returns immediately and sets the status of
-# the stream to `UPDATING`. After the update is complete, Kinesis
-# Streams sets the status of the stream back to `ACTIVE`. Stopping
+# request, Kinesis Data Streams returns immediately and sets the status
+# of the stream to `UPDATING`. After the update is complete, Kinesis
+# Data Streams sets the status of the stream back to `ACTIVE`. Stopping
 # encryption normally takes a few seconds to complete, but it can take
 # minutes. You can continue to read and write data to your stream while
 # its status is `UPDATING`. Once the status of the stream is `ACTIVE`,
-# records written to the stream are no longer encrypted by Kinesis
+# records written to the stream are no longer encrypted by Kinesis Data
 # Streams.
 #
 # API Limits: You can successfully disable server-side encryption 25
@@ -1504,11 +1621,11 @@ module Aws::Kinesis
 # The encryption type. The only valid value is `KMS`.
 #
 # @option params [required, String] :key_id
-# The GUID for the customer-managed KMS key to use for encryption. This
-# value can be a globally unique identifier, a fully specified ARN to
-# either an alias or a key, or an alias name prefixed by "alias/". You
-# can also use a master key owned by Kinesis Streams by specifying the
-# alias `aws/kinesis`.
+# The GUID for the customer-managed AWS KMS key to use for encryption.
+# This value can be a globally unique identifier, a fully specified
+# Amazon Resource Name (ARN) to either an alias or a key, or an alias
+# name prefixed by "alias/".You can also use a master key owned by
+# Kinesis Data Streams by specifying the alias `aws/kinesis`.
 #
 # * Key ARN example:
 # `arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012`
@@ -1521,7 +1638,7 @@ module Aws::Kinesis
 #
 # * Alias name example: `alias/MyAliasName`
 #
-# * Master key owned by Kinesis Streams: `alias/aws/kinesis`
+# * Master key owned by Kinesis Data Streams: `alias/aws/kinesis`
 #
 # @return [Struct] Returns an empty {Seahorse::Client::Response response}.
 #
@@ -1546,45 +1663,43 @@ module Aws::Kinesis
 # number of shards.
 #
 # Updating the shard count is an asynchronous operation. Upon receiving
-# the request, Kinesis Streams returns immediately and sets the status
-# of the stream to `UPDATING`. After the update is complete, Kinesis
-# Streams sets the status of the stream back to `ACTIVE`. Depending on
-# the size of the stream, the scaling action could take a few minutes to
-# complete. You can continue to read and write data to your stream while
-# its status is `UPDATING`.
+# the request, Kinesis Data Streams returns immediately and sets the
+# status of the stream to `UPDATING`. After the update is complete,
+# Kinesis Data Streams sets the status of the stream back to `ACTIVE`.
+# Depending on the size of the stream, the scaling action could take a
+# few minutes to complete. You can continue to read and write data to
+# your stream while its status is `UPDATING`.
 #
-# To update the shard count, Kinesis Streams performs splits or merges
-# on individual shards. This can cause short-lived shards to be created,
-# in addition to the final shards. We recommend that you double or halve
-# the shard count, as this results in the fewest number of splits or
-# merges.
+# To update the shard count, Kinesis Data Streams performs splits or
+# merges on individual shards. This can cause short-lived shards to be
+# created, in addition to the final shards. We recommend that you double
+# or halve the shard count, as this results in the fewest number of
+# splits or merges.
 #
-# This operation has the following limits, which are per region per
-# account unless otherwise noted. You cannot:
+# This operation has the following limits. You cannot do the following:
 #
-# * Scale more than twice per rolling 24 hour period
+# * Scale more than twice per rolling 24-hour period per stream
 #
-# * Scale up to double your current shard count
+# * Scale up to more than double your current shard count for a stream
 #
-# * Scale down below half your current shard count
+# * Scale down below half your current shard count for a stream
 #
-# * Scale up to more 500 shards in a stream
+# * Scale up to more than 500 shards in a stream
 #
 # * Scale a stream with more than 500 shards down unless the result is
 # less than 500 shards
 #
-# * Scale up more the shard limit for your account
-#
-# *
+# * Scale up to more than the shard limit for your account
 #
 # For the default limits for an AWS account, see [Streams Limits][1] in
-# the *Amazon Kinesis Streams Developer Guide*. To request an increase
-# in the shard limit for this API, use the [limits form][2].
+# the *Amazon Kinesis Data Streams Developer Guide*. To request an
+# increase in the call rate limit, the shard limit for this API, or your
+# overall shard limit, use the [limits form][2].
 #
 #
 #
 # [1]: http://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html
-# [2]:
+# [2]: https://console.aws.amazon.com/support/v1#/case/create?issueType=service-limit-increase&limitType=service-code-kinesis
 #
 # @option params [required, String] :stream_name
 # The name of the stream.
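The scaling limits listed above lend themselves to a quick client-side check before calling the API. A sketch under stated assumptions: `account_shard_limit` is a hypothetical input you would obtain from your account limits, and the real service enforces these rules server-side regardless:

```ruby
# Client-side sanity check mirroring the UpdateShardCount limits above.
# Returns the violated rules (empty array when the target looks OK).
def shard_count_violations(current:, target:, account_shard_limit:)
  violations = []
  violations << :more_than_double   if target > current * 2       # > 2x up
  violations << :below_half         if target * 2 < current       # < 0.5x down
  violations << :over_500_shards    if target > 500               # stream cap
  violations << :over_account_limit if target > account_shard_limit
  violations
end

shard_count_violations(current: 10, target: 20, account_shard_limit: 200)
# => []
shard_count_violations(current: 10, target: 4, account_shard_limit: 200)
# => [:below_half]
```

This sketch ignores the rolling 24-hour frequency limit and the over-500 scale-down exception, which depend on state the client does not hold; it only catches the arithmetic violations.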
@@ -1637,7 +1752,7 @@ module Aws::Kinesis
   params: params,
   config: config)
 context[:gem_name] = 'aws-sdk-kinesis'
-context[:gem_version] = '1.1.0'
+context[:gem_version] = '1.2.0'
 Seahorse::Client::Request.new(handlers, context)
 end