aws-sdk-firehose 1.1.0 → 1.2.0
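The notable changes in this release are the Kinesis Data Firehose rename, the Splunk destination, and the new tagging operations (ListTagsForDeliveryStream, TagDeliveryStream, UntagDeliveryStream). To pull in the new version, a minimal Gemfile entry might look like this (the pessimistic version constraint is a suggestion, not part of the release):

    # Gemfile
    gem 'aws-sdk-firehose', '~> 1.2'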

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 9ab712fb3edb51fbbdd42a659ebbea505d210b80
- data.tar.gz: 4fcea870930960f690384d7f6e94d1f9fd51101d
+ metadata.gz: f4af5896c9f34d011f0c5a4ab5737d5ba18205f8
+ data.tar.gz: a5e8625fcbf53b9d69d2832b70537cd4c221eeeb
  SHA512:
- metadata.gz: 08d069b36ba59bc957f59c5057d8b0b89ce70bbd5d75aa9c6eec3dd9b8e3e95853375a027e0d50e3006e77b05e927737b01fc97f28de1362808fadf087a2339b
- data.tar.gz: 3fc01c7f714fe33a11d1650e929953b9a09c7f72a7c864d084136b094b5499a11da55409d028edb33a4e3fa6674423595a2ee5a710af995b91725aa8d647f39e
+ metadata.gz: 83f24cc8b997436c6e9ea9821c6d3b2c202cd2b2330f6fab8b59b31960ec5b618d113eb417ddfd781a98d8fa53b1cf421384a60f891a43992255e2fc68f389cf
+ data.tar.gz: 4aa07cee5e527b231bd5dcbf230bc6eda2e10bdffbac71cdfb2ae284c173bd91ecaf35944c504c1070829dd27217d73bdebd96ed5b22e88377e9dfe6518c46aa
@@ -42,6 +42,6 @@ require_relative 'aws-sdk-firehose/customizations'
  # @service
  module Aws::Firehose

- GEM_VERSION = '1.1.0'
+ GEM_VERSION = '1.2.0'

  end
@@ -157,7 +157,7 @@ module Aws::Firehose

  # Creates a delivery stream.
  #
- # By default, you can create up to 20 delivery streams per region.
+ # By default, you can create up to 50 delivery streams per AWS Region.
  #
  # This is an asynchronous operation that immediately returns. The
  # initial status of the delivery stream is `CREATING`. After the
@@ -166,29 +166,30 @@ module Aws::Firehose
  # `ACTIVE` state cause an exception. To check the state of a delivery
  # stream, use DescribeDeliveryStream.
  #
- # A Kinesis Firehose delivery stream can be configured to receive
+ # A Kinesis Data Firehose delivery stream can be configured to receive
  # records directly from providers using PutRecord or PutRecordBatch, or
- # it can be configured to use an existing Kinesis stream as its source.
- # To specify a Kinesis stream as input, set the `DeliveryStreamType`
- # parameter to `KinesisStreamAsSource`, and provide the Kinesis stream
- # ARN and role ARN in the `KinesisStreamSourceConfiguration` parameter.
+ # it can be configured to use an existing Kinesis data stream as its
+ # source. To specify a Kinesis data stream as input, set the
+ # `DeliveryStreamType` parameter to `KinesisStreamAsSource`, and provide
+ # the Kinesis data stream Amazon Resource Name (ARN) and role ARN in the
+ # `KinesisStreamSourceConfiguration` parameter.
  #
  # A delivery stream is configured with a single destination: Amazon S3,
- # Amazon ES, or Amazon Redshift. You must specify only one of the
+ # Amazon ES, Amazon Redshift, or Splunk. Specify only one of the
  # following destination configuration parameters:
- # **ExtendedS3DestinationConfiguration**,
- # **S3DestinationConfiguration**,
- # **ElasticsearchDestinationConfiguration**, or
- # **RedshiftDestinationConfiguration**.
- #
- # When you specify **S3DestinationConfiguration**, you can also provide
- # the following optional values: **BufferingHints**,
- # **EncryptionConfiguration**, and **CompressionFormat**. By default, if
- # no **BufferingHints** value is provided, Kinesis Firehose buffers data
+ # `ExtendedS3DestinationConfiguration`, `S3DestinationConfiguration`,
+ # `ElasticsearchDestinationConfiguration`,
+ # `RedshiftDestinationConfiguration`, or
+ # `SplunkDestinationConfiguration`.
+ #
+ # When you specify `S3DestinationConfiguration`, you can also provide
+ # the following optional values: `BufferingHints`,
+ # `EncryptionConfiguration`, and `CompressionFormat`. By default, if no
+ # `BufferingHints` value is provided, Kinesis Data Firehose buffers data
  # up to 5 MB or for 5 minutes, whichever condition is satisfied first.
- # Note that **BufferingHints** is a hint, so there are some cases where
- # the service cannot adhere to these conditions strictly; for example,
- # record boundaries are such that the size is a little over or under the
+ # `BufferingHints` is a hint, so there are some cases where the service
+ # cannot adhere to these conditions strictly. For example, record
+ # boundaries are such that the size is a little over or under the
  # configured buffering size. By default, no encryption is performed. We
  # strongly recommend that you enable encryption to ensure secure data
  # storage in Amazon S3.
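Read together, the updated CreateDeliveryStream docs describe a call shaped like the sketch below, which sets a Kinesis data stream as the source and an extended S3 destination with explicit BufferingHints. All ARNs, names, and the region are illustrative placeholders, not values from this diff:

    require 'aws-sdk-firehose'

    client = Aws::Firehose::Client.new(region: 'us-east-1')

    # Exactly one destination configuration parameter is supplied.
    # BufferingHints overrides the default of 5 MB / 5 minutes, and is
    # still only a hint, per the docs above.
    client.create_delivery_stream({
      delivery_stream_name: 'example-stream', # must be unique per AWS account per Region
      delivery_stream_type: 'KinesisStreamAsSource',
      kinesis_stream_source_configuration: {
        kinesis_stream_arn: 'arn:aws:kinesis:us-east-1:123456789012:stream/source-stream',
        role_arn: 'arn:aws:iam::123456789012:role/firehose-source-role',
      },
      extended_s3_destination_configuration: {
        role_arn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
        bucket_arn: 'arn:aws:s3:::example-bucket',
        buffering_hints: { size_in_m_bs: 5, interval_in_seconds: 300 },
      },
    })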
@@ -196,26 +197,27 @@ module Aws::Firehose
  # A few notes about Amazon Redshift as a destination:
  #
  # * An Amazon Redshift destination requires an S3 bucket as intermediate
- # location, as Kinesis Firehose first delivers data to S3 and then
- # uses `COPY` syntax to load data into an Amazon Redshift table. This
- # is specified in the
- # **RedshiftDestinationConfiguration.S3Configuration** parameter.
+ # location. This is because Kinesis Data Firehose first delivers data
+ # to Amazon S3 and then uses `COPY` syntax to load data into an Amazon
+ # Redshift table. This is specified in the
+ # `RedshiftDestinationConfiguration.S3Configuration` parameter.
  #
  # * The compression formats `SNAPPY` or `ZIP` cannot be specified in
- # **RedshiftDestinationConfiguration.S3Configuration** because the
+ # `RedshiftDestinationConfiguration.S3Configuration` because the
  # Amazon Redshift `COPY` operation that reads from the S3 bucket
  # doesn't support these compression formats.
  #
- # * We strongly recommend that you use the user name and password you
- # provide exclusively with Kinesis Firehose, and that the permissions
- # for the account are restricted for Amazon Redshift `INSERT`
- # permissions.
+ # * We strongly recommend that you use the user name and password that
+ # you provide exclusively with Kinesis Data Firehose. In addition, the
+ # permissions for the account should be restricted for Amazon Redshift
+ # `INSERT` permissions.
  #
- # Kinesis Firehose assumes the IAM role that is configured as part of
- # the destination. The role should allow the Kinesis Firehose principal
- # to assume the role, and the role should have permissions that allow
- # the service to deliver the data. For more information, see [Amazon S3
- # Bucket Access][1] in the *Amazon Kinesis Firehose Developer Guide*.
+ # Kinesis Data Firehose assumes the IAM role that is configured as part
+ # of the destination. The role should allow the Kinesis Data Firehose
+ # principal to assume the role, and the role should have permissions
+ # that allow the service to deliver the data. For more information, see
+ # [Grant Kinesis Firehose Access to an Amazon S3 Destination][1] in the
+ # *Amazon Kinesis Data Firehose Developer Guide*.
  #
  #
  #
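A hedged sketch of how those Redshift notes map onto the request shape: the intermediate S3 location is the nested s3_configuration, and its compression_format must avoid SNAPPY and ZIP. Everything except the parameter names is a placeholder:

    client.create_delivery_stream({
      delivery_stream_name: 'redshift-stream',
      redshift_destination_configuration: {
        role_arn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
        cluster_jdbc_url: 'jdbc:redshift://example.abc123.us-east-1.redshift.amazonaws.com:5439/dev',
        copy_command: { data_table_name: 'firehose_events' },
        username: 'firehose_user', # use credentials reserved for Kinesis Data Firehose,
        password: 'not-a-real-pw', # restricted to INSERT permissions
        s3_configuration: {        # intermediate location that COPY reads from
          role_arn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
          bucket_arn: 'arn:aws:s3:::example-staging-bucket',
          compression_format: 'GZIP', # SNAPPY and ZIP cannot be specified here
        },
      },
    })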
@@ -223,8 +225,8 @@ module Aws::Firehose
  #
  # @option params [required, String] :delivery_stream_name
  # The name of the delivery stream. This name must be unique per AWS
- # account in the same region. If the delivery streams are in different
- # accounts or different regions, you can have multiple delivery streams
+ # account in the same Region. If the delivery streams are in different
+ # accounts or different Regions, you can have multiple delivery streams
  # with the same name.
  #
  # @option params [String] :delivery_stream_type
@@ -234,13 +236,14 @@ module Aws::Firehose
  # * `DirectPut`\: Provider applications access the delivery stream
  # directly.
  #
- # * `KinesisStreamAsSource`\: The delivery stream uses a Kinesis stream
- # as a source.
+ # * `KinesisStreamAsSource`\: The delivery stream uses a Kinesis data
+ # stream as a source.
  #
  # @option params [Types::KinesisStreamSourceConfiguration] :kinesis_stream_source_configuration
- # When a Kinesis stream is used as the source for the delivery stream, a
- # KinesisStreamSourceConfiguration containing the Kinesis stream ARN and
- # the role ARN for the source stream.
+ # When a Kinesis data stream is used as the source for the delivery
+ # stream, a KinesisStreamSourceConfiguration containing the Kinesis data
+ # stream Amazon Resource Name (ARN) and the role ARN for the source
+ # stream.
  #
  # @option params [Types::S3DestinationConfiguration] :s3_destination_configuration
  # \[Deprecated\] The destination in Amazon S3. You can specify only one
@@ -582,8 +585,8 @@ module Aws::Firehose

  # Describes the specified delivery stream and gets the status. For
  # example, after your delivery stream is created, call
- # DescribeDeliveryStream to see if the delivery stream is `ACTIVE` and
- # therefore ready for data to be sent to it.
+ # `DescribeDeliveryStream` to see whether the delivery stream is
+ # `ACTIVE` and therefore ready for data to be sent to it.
  #
  # @option params [required, String] :delivery_stream_name
  # The name of the delivery stream.
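Per the reworded description, a caller typically polls DescribeDeliveryStream until the stream leaves CREATING. A minimal sketch (stream name and sleep interval are arbitrary):

    resp = client.describe_delivery_stream(delivery_stream_name: 'example-stream')
    status = resp.delivery_stream_description.delivery_stream_status

    # CreateDeliveryStream returns immediately, so wait for ACTIVE
    # before sending records.
    until status == 'ACTIVE'
      sleep 10
      status = client.describe_delivery_stream(delivery_stream_name: 'example-stream')
                     .delivery_stream_description.delivery_stream_status
    end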
@@ -594,8 +597,8 @@ module Aws::Firehose
  #
  # @option params [String] :exclusive_start_destination_id
  # The ID of the destination to start returning the destination
- # information. Currently, Kinesis Firehose supports one destination per
- # delivery stream.
+ # information. Currently, Kinesis Data Firehose supports one destination
+ # per delivery stream.
  #
  # @return [Types::DescribeDeliveryStreamOutput] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
  #
@@ -771,13 +774,13 @@ module Aws::Firehose
  # Lists your delivery streams.
  #
  # The number of delivery streams might be too large to return using a
- # single call to ListDeliveryStreams. You can limit the number of
+ # single call to `ListDeliveryStreams`. You can limit the number of
  # delivery streams returned, using the **Limit** parameter. To determine
  # whether there are more delivery streams to list, check the value of
- # **HasMoreDeliveryStreams** in the output. If there are more delivery
+ # `HasMoreDeliveryStreams` in the output. If there are more delivery
  # streams to list, you can request them by specifying the name of the
  # last delivery stream returned in the call in the
- # **ExclusiveStartDeliveryStreamName** parameter of a subsequent call.
+ # `ExclusiveStartDeliveryStreamName` parameter of a subsequent call.
  #
  # @option params [Integer] :limit
  # The maximum number of delivery streams to list. The default value is
@@ -789,8 +792,8 @@ module Aws::Firehose
  # * `DirectPut`\: Provider applications access the delivery stream
  # directly.
  #
- # * `KinesisStreamAsSource`\: The delivery stream uses a Kinesis stream
- # as a source.
+ # * `KinesisStreamAsSource`\: The delivery stream uses a Kinesis data
+ # stream as a source.
  #
  # This parameter is optional. If this parameter is omitted, delivery
  # streams of all types are returned.
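The pagination contract spelled out above (Limit, HasMoreDeliveryStreams, ExclusiveStartDeliveryStreamName) comes down to a loop like this sketch; the page size is arbitrary:

    names = []
    params = { limit: 10 }
    loop do
      resp = client.list_delivery_streams(params)
      names.concat(resp.delivery_stream_names)
      break unless resp.has_more_delivery_streams
      # Resume after the last stream returned by the previous call.
      params[:exclusive_start_delivery_stream_name] = resp.delivery_stream_names.last
    end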
@@ -826,43 +829,89 @@ module Aws::Firehose
  req.send_request(options)
  end

- # Writes a single data record into an Amazon Kinesis Firehose delivery
- # stream. To write multiple data records into a delivery stream, use
- # PutRecordBatch. Applications using these operations are referred to as
- # producers.
+ # Lists the tags for the specified delivery stream. This operation has a
+ # limit of five transactions per second per account.
+ #
+ # @option params [required, String] :delivery_stream_name
+ # The name of the delivery stream whose tags you want to list.
+ #
+ # @option params [String] :exclusive_start_tag_key
+ # The key to use as the starting point for the list of tags. If you set
+ # this parameter, `ListTagsForDeliveryStream` gets all tags that occur
+ # after `ExclusiveStartTagKey`.
+ #
+ # @option params [Integer] :limit
+ # The number of tags to return. If this number is less than the total
+ # number of tags associated with the delivery stream, `HasMoreTags` is
+ # set to `true` in the response. To list additional tags, set
+ # `ExclusiveStartTagKey` to the last key in the response.
+ #
+ # @return [Types::ListTagsForDeliveryStreamOutput] Returns a {Seahorse::Client::Response response} object which responds to the following methods:
+ #
+ # * {Types::ListTagsForDeliveryStreamOutput#tags #tags} => Array<Types::Tag>
+ # * {Types::ListTagsForDeliveryStreamOutput#has_more_tags #has_more_tags} => Boolean
+ #
+ # @example Request syntax with placeholder values
+ #
+ # resp = client.list_tags_for_delivery_stream({
+ # delivery_stream_name: "DeliveryStreamName", # required
+ # exclusive_start_tag_key: "TagKey",
+ # limit: 1,
+ # })
+ #
+ # @example Response structure
+ #
+ # resp.tags #=> Array
+ # resp.tags[0].key #=> String
+ # resp.tags[0].value #=> String
+ # resp.has_more_tags #=> Boolean
+ #
+ # @see http://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/ListTagsForDeliveryStream AWS API Documentation
+ #
+ # @overload list_tags_for_delivery_stream(params = {})
+ # @param [Hash] params ({})
+ def list_tags_for_delivery_stream(params = {}, options = {})
+ req = build_request(:list_tags_for_delivery_stream, params)
+ req.send_request(options)
+ end
+
+ # Writes a single data record into an Amazon Kinesis Data Firehose
+ # delivery stream. To write multiple data records into a delivery
+ # stream, use PutRecordBatch. Applications using these operations are
+ # referred to as producers.
  #
  # By default, each delivery stream can take in up to 2,000 transactions
  # per second, 5,000 records per second, or 5 MB per second. Note that if
- # you use PutRecord and PutRecordBatch, the limits are an aggregate
+ # you use `PutRecord` and PutRecordBatch, the limits are an aggregate
  # across these two operations for each delivery stream. For more
  # information about limits and how to request an increase, see [Amazon
- # Kinesis Firehose Limits][1].
+ # Kinesis Data Firehose Limits][1].
  #
  # You must specify the name of the delivery stream and the data record
- # when using PutRecord. The data record consists of a data blob that can
- # be up to 1,000 KB in size, and any kind of data, for example, a
- # segment from a log file, geographic location data, website clickstream
- # data, and so on.
+ # when using `PutRecord`. The data record consists of a data blob that
+ # can be up to 1,000 KB in size and any kind of data. For example, it
+ # can be a segment from a log file, geographic location data, website
+ # clickstream data, and so on.
  #
- # Kinesis Firehose buffers records before delivering them to the
+ # Kinesis Data Firehose buffers records before delivering them to the
  # destination. To disambiguate the data blobs at the destination, a
  # common solution is to use delimiters in the data, such as a newline
  # (`\n`) or some other character unique within the data. This allows the
  # consumer application to parse individual data items when reading the
  # data from the destination.
  #
- # The PutRecord operation returns a **RecordId**, which is a unique
+ # The `PutRecord` operation returns a `RecordId`, which is a unique
  # string assigned to each record. Producer applications can use this ID
  # for purposes such as auditability and investigation.
  #
- # If the PutRecord operation throws a **ServiceUnavailableException**,
+ # If the `PutRecord` operation throws a `ServiceUnavailableException`,
  # back off and retry. If the exception persists, it is possible that the
  # throughput limits have been exceeded for the delivery stream.
  #
- # Data records sent to Kinesis Firehose are stored for 24 hours from the
- # time they are added to a delivery stream as it attempts to send the
- # records to the destination. If the destination is unreachable for more
- # than 24 hours, the data is no longer available.
+ # Data records sent to Kinesis Data Firehose are stored for 24 hours
+ # from the time they are added to a delivery stream as it attempts to
+ # send the records to the destination. If the destination is unreachable
+ # for more than 24 hours, the data is no longer available.
  #
  #
  #
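Following the delimiter and retry guidance above, a producer sketch; the payload is illustrative and the backoff is deliberately naive:

    require 'json'

    event = { user_id: 123, action: 'click' }.to_json

    begin
      resp = client.put_record({
        delivery_stream_name: 'example-stream',
        record: { data: event + "\n" }, # newline delimiter disambiguates blobs downstream
      })
      resp.record_id #=> unique string, useful for auditing
    rescue Aws::Firehose::Errors::ServiceUnavailableException
      sleep 1 # back off, then retry; persistent failures suggest throttling
      retry
    end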
@@ -908,63 +957,61 @@ module Aws::Firehose
  #
  # By default, each delivery stream can take in up to 2,000 transactions
  # per second, 5,000 records per second, or 5 MB per second. If you use
- # PutRecord and PutRecordBatch, the limits are an aggregate across these
- # two operations for each delivery stream. For more information about
- # limits, see [Amazon Kinesis Firehose Limits][1].
+ # PutRecord and `PutRecordBatch`, the limits are an aggregate across
+ # these two operations for each delivery stream. For more information
+ # about limits, see [Amazon Kinesis Data Firehose Limits][1].
  #
- # Each PutRecordBatch request supports up to 500 records. Each record in
- # the request can be as large as 1,000 KB (before 64-bit encoding), up
- # to a limit of 4 MB for the entire request. These limits cannot be
+ # Each `PutRecordBatch` request supports up to 500 records. Each record
+ # in the request can be as large as 1,000 KB (before 64-bit encoding),
+ # up to a limit of 4 MB for the entire request. These limits cannot be
  # changed.
  #
  # You must specify the name of the delivery stream and the data record
  # when using PutRecord. The data record consists of a data blob that can
- # be up to 1,000 KB in size, and any kind of data. For example, it could
- # be a segment from a log file, geographic location data, web site
+ # be up to 1,000 KB in size and any kind of data. For example, it could
+ # be a segment from a log file, geographic location data, website
  # clickstream data, and so on.
  #
- # Kinesis Firehose buffers records before delivering them to the
+ # Kinesis Data Firehose buffers records before delivering them to the
  # destination. To disambiguate the data blobs at the destination, a
  # common solution is to use delimiters in the data, such as a newline
  # (`\n`) or some other character unique within the data. This allows the
  # consumer application to parse individual data items when reading the
  # data from the destination.
  #
- # The PutRecordBatch response includes a count of failed records,
- # **FailedPutCount**, and an array of responses, **RequestResponses**.
- # Each entry in the **RequestResponses** array provides additional
- # information about the processed record. It directly correlates with a
- # record in the request array using the same ordering, from the top to
- # the bottom. The response array always includes the same number of
- # records as the request array. **RequestResponses** includes both
- # successfully and unsuccessfully processed records. Kinesis Firehose
- # attempts to process all records in each PutRecordBatch request. A
- # single record failure does not stop the processing of subsequent
- # records.
- #
- # A successfully processed record includes a **RecordId** value, which
- # is unique for the record. An unsuccessfully processed record includes
- # **ErrorCode** and **ErrorMessage** values. **ErrorCode** reflects the
- # type of error, and is one of the following values:
- # `ServiceUnavailable` or `InternalFailure`. **ErrorMessage** provides
- # more detailed information about the error.
+ # The `PutRecordBatch` response includes a count of failed records,
+ # `FailedPutCount`, and an array of responses, `RequestResponses`. Each
+ # entry in the `RequestResponses` array provides additional information
+ # about the processed record. It directly correlates with a record in
+ # the request array using the same ordering, from the top to the bottom.
+ # The response array always includes the same number of records as the
+ # request array. `RequestResponses` includes both successfully and
+ # unsuccessfully processed records. Kinesis Data Firehose attempts to
+ # process all records in each `PutRecordBatch` request. A single record
+ # failure does not stop the processing of subsequent records.
+ #
+ # A successfully processed record includes a `RecordId` value, which is
+ # unique for the record. An unsuccessfully processed record includes
+ # `ErrorCode` and `ErrorMessage` values. `ErrorCode` reflects the type
+ # of error, and is one of the following values: `ServiceUnavailable` or
+ # `InternalFailure`. `ErrorMessage` provides more detailed information
+ # about the error.
  #
  # If there is an internal server error or a timeout, the write might
- # have completed or it might have failed. If **FailedPutCount** is
- # greater than 0, retry the request, resending only those records that
- # might have failed processing. This minimizes the possible duplicate
- # records and also reduces the total bytes sent (and corresponding
- # charges). We recommend that you handle any duplicates at the
- # destination.
+ # have completed or it might have failed. If `FailedPutCount` is greater
+ # than 0, retry the request, resending only those records that might
+ # have failed processing. This minimizes the possible duplicate records
+ # and also reduces the total bytes sent (and corresponding charges). We
+ # recommend that you handle any duplicates at the destination.
  #
- # If PutRecordBatch throws **ServiceUnavailableException**, back off and
+ # If `PutRecordBatch` throws `ServiceUnavailableException`, back off and
  # retry. If the exception persists, it is possible that the throughput
  # limits have been exceeded for the delivery stream.
  #
- # Data records sent to Kinesis Firehose are stored for 24 hours from the
- # time they are added to a delivery stream as it attempts to send the
- # records to the destination. If the destination is unreachable for more
- # than 24 hours, the data is no longer available.
+ # Data records sent to Kinesis Data Firehose are stored for 24 hours
+ # from the time they are added to a delivery stream as it attempts to
+ # send the records to the destination. If the destination is unreachable
+ # for more than 24 hours, the data is no longer available.
  #
  #
  #
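The FailedPutCount handling described above relies on request_responses preserving request order, so failed entries can be matched by index and resent. A sketch with placeholder payloads:

    records = ['{"id":1}', '{"id":2}'].map { |json| { data: json + "\n" } }

    resp = client.put_record_batch({
      delivery_stream_name: 'example-stream',
      records: records, # up to 500 records / 4 MB per request
    })

    if resp.failed_put_count > 0
      # Entries correlate one-to-one with the request array; only failed
      # entries carry error_code (ServiceUnavailable or InternalFailure).
      failed = records.zip(resp.request_responses)
                      .select { |_record, result| result.error_code }
                      .map(&:first)
      # Resend only `failed` (with backoff) and deduplicate at the destination.
    end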
@@ -1009,51 +1056,133 @@ module Aws::Firehose
  req.send_request(options)
  end

- # Updates the specified destination of the specified delivery stream.
+ # Adds or updates tags for the specified delivery stream. A tag is a
+ # key-value pair (the value is optional) that you can define and assign
+ # to AWS resources. If you specify a tag that already exists, the tag
+ # value is replaced with the value that you specify in the request. Tags
+ # are metadata. For example, you can add friendly names and descriptions
+ # or other types of information that can help you distinguish the
+ # delivery stream. For more information about tags, see [Using Cost
+ # Allocation Tags][1] in the *AWS Billing and Cost Management User
+ # Guide*.
  #
- # You can use this operation to change the destination type (for
- # example, to replace the Amazon S3 destination with Amazon Redshift) or
- # change the parameters associated with a destination (for example, to
- # change the bucket name of the Amazon S3 destination). The update might
- # not occur immediately. The target delivery stream remains active while
- # the configurations are updated, so data writes to the delivery stream
- # can continue during this process. The updated configurations are
- # usually effective within a few minutes.
+ # Each delivery stream can have up to 50 tags.
+ #
+ # This operation has a limit of five transactions per second per
+ # account.
+ #
+ #
+ #
+ # [1]: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
+ #
+ # @option params [required, String] :delivery_stream_name
+ # The name of the delivery stream to which you want to add the tags.
+ #
+ # @option params [required, Array<Types::Tag>] :tags
+ # A set of key-value pairs to use to create the tags.
+ #
+ # @return [Struct] Returns an empty {Seahorse::Client::Response response}.
  #
- # Note that switching between Amazon ES and other services is not
- # supported. For an Amazon ES destination, you can only update to
- # another Amazon ES destination.
+ # @example Request syntax with placeholder values
+ #
+ # resp = client.tag_delivery_stream({
+ # delivery_stream_name: "DeliveryStreamName", # required
+ # tags: [ # required
+ # {
+ # key: "TagKey", # required
+ # value: "TagValue",
+ # },
+ # ],
+ # })
+ #
+ # @see http://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/TagDeliveryStream AWS API Documentation
+ #
+ # @overload tag_delivery_stream(params = {})
+ # @param [Hash] params ({})
+ def tag_delivery_stream(params = {}, options = {})
+ req = build_request(:tag_delivery_stream, params)
+ req.send_request(options)
+ end
+
+ # Removes tags from the specified delivery stream. Removed tags are
+ # deleted, and you can't recover them after this operation successfully
+ # completes.
+ #
+ # If you specify a tag that doesn't exist, the operation ignores it.
+ #
+ # This operation has a limit of five transactions per second per
+ # account.
+ #
+ # @option params [required, String] :delivery_stream_name
+ # The name of the delivery stream.
+ #
+ # @option params [required, Array<String>] :tag_keys
+ # A list of tag keys. Each corresponding tag is removed from the
+ # delivery stream.
+ #
+ # @return [Struct] Returns an empty {Seahorse::Client::Response response}.
+ #
+ # @example Request syntax with placeholder values
+ #
+ # resp = client.untag_delivery_stream({
+ # delivery_stream_name: "DeliveryStreamName", # required
+ # tag_keys: ["TagKey"], # required
+ # })
+ #
+ # @see http://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UntagDeliveryStream AWS API Documentation
+ #
+ # @overload untag_delivery_stream(params = {})
+ # @param [Hash] params ({})
+ def untag_delivery_stream(params = {}, options = {})
+ req = build_request(:untag_delivery_stream, params)
+ req.send_request(options)
+ end
+
+ # Updates the specified destination of the specified delivery stream.
+ #
+ # Use this operation to change the destination type (for example, to
+ # replace the Amazon S3 destination with Amazon Redshift) or change the
+ # parameters associated with a destination (for example, to change the
+ # bucket name of the Amazon S3 destination). The update might not occur
+ # immediately. The target delivery stream remains active while the
+ # configurations are updated, so data writes to the delivery stream can
+ # continue during this process. The updated configurations are usually
+ # effective within a few minutes.
+ #
+ # Switching between Amazon ES and other services is not supported. For
+ # an Amazon ES destination, you can only update to another Amazon ES
+ # destination.
  #
- # If the destination type is the same, Kinesis Firehose merges the
+ # If the destination type is the same, Kinesis Data Firehose merges the
  # configuration parameters specified with the destination configuration
  # that already exists on the delivery stream. If any of the parameters
  # are not specified in the call, the existing values are retained. For
  # example, in the Amazon S3 destination, if EncryptionConfiguration is
- # not specified, then the existing EncryptionConfiguration is maintained
- # on the destination.
+ # not specified, then the existing `EncryptionConfiguration` is
+ # maintained on the destination.
  #
  # If the destination type is not the same, for example, changing the
- # destination from Amazon S3 to Amazon Redshift, Kinesis Firehose does
- # not merge any parameters. In this case, all parameters must be
+ # destination from Amazon S3 to Amazon Redshift, Kinesis Data Firehose
+ # does not merge any parameters. In this case, all parameters must be
  # specified.
  #
- # Kinesis Firehose uses **CurrentDeliveryStreamVersionId** to avoid race
- # conditions and conflicting merges. This is a required field, and the
- # service updates the configuration only if the existing configuration
- # has a version ID that matches. After the update is applied
- # successfully, the version ID is updated, and can be retrieved using
- # DescribeDeliveryStream. Use the new version ID to set
- # **CurrentDeliveryStreamVersionId** in the next call.
+ # Kinesis Data Firehose uses `CurrentDeliveryStreamVersionId` to avoid
+ # race conditions and conflicting merges. This is a required field, and
+ # the service updates the configuration only if the existing
+ # configuration has a version ID that matches. After the update is
+ # applied successfully, the version ID is updated, and you can retrieve
+ # it using DescribeDeliveryStream. Use the new version ID to set
+ # `CurrentDeliveryStreamVersionId` in the next call.
  #
  # @option params [required, String] :delivery_stream_name
  # The name of the delivery stream.
  #
  # @option params [required, String] :current_delivery_stream_version_id
- # Obtain this value from the **VersionId** result of
- # DeliveryStreamDescription. This value is required, and helps the
- # service to perform conditional operations. For example, if there is an
+ # Obtain this value from the `VersionId` result of
+ # DeliveryStreamDescription. This value is required, and it helps the
+ # service perform conditional operations. For example, if there is an
  # interleaving update and this value is null, then the update
- # destination fails. After the update is successful, the **VersionId**
+ # destination fails. After the update is successful, the `VersionId`
  # value is updated. The service then performs a merge of the old
  # configuration with the new configuration.
  #
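The CurrentDeliveryStreamVersionId handshake above translates to: read VersionId via DescribeDeliveryStream, pass it to UpdateDestination, then fetch the new ID before the next update. A sketch with a placeholder bucket change:

    desc = client.describe_delivery_stream(delivery_stream_name: 'example-stream')
                 .delivery_stream_description

    client.update_destination({
      delivery_stream_name: 'example-stream',
      current_delivery_stream_version_id: desc.version_id, # guards against conflicting merges
      destination_id: desc.destinations[0].destination_id,
      extended_s3_destination_update: {
        bucket_arn: 'arn:aws:s3:::new-bucket', # unspecified parameters keep their existing values
      },
    })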
@@ -1365,7 +1494,7 @@ module Aws::Firehose
  params: params,
  config: config)
  context[:gem_name] = 'aws-sdk-firehose'
- context[:gem_version] = '1.1.0'
+ context[:gem_version] = '1.2.0'
  Seahorse::Client::Request.new(handlers, context)
  end