google-apis-speech_v1 0.1.0

@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+ metadata.gz: 84c6e1d3368e57725598e7cef75aa5c21a5e2e4f290d7ec3a46526d98c3db1d9
+ data.tar.gz: 1e256ae64fdd9b0171510e4ca2eb99aafaa879473fe689bf075ec7d87e255450
+ SHA512:
+ metadata.gz: 7c05d47ec611979c35f2846d5dd8302c47b7d84bdf41bed47766221e75b5555a57b83dbe821b52353aa42138a1a1575f3e01226e22a4131733fe2c87a77a6056
+ data.tar.gz: e7a5fe6285dff07288a40ae389cfb848242848b3058132da9e33909d64b1348e9257eeb610d6a08e18bdd6507ea1a7b6d52fc1c898332b691cef8b4ae29cc955
@@ -0,0 +1,13 @@
+ --hide-void-return
+ --no-private
+ --verbose
+ --title=google-apis-speech_v1
+ --markup-provider=redcarpet
+ --markup=markdown
+ --main OVERVIEW.md
+ lib/google/apis/speech_v1/*.rb
+ lib/google/apis/speech_v1.rb
+ -
+ OVERVIEW.md
+ CHANGELOG.md
+ LICENSE.md
@@ -0,0 +1,7 @@
+ # Release history for google-apis-speech_v1
+
+ ### v0.1.0 (2021-01-07)
+
+ * Regenerated using generator version 0.1.1
+ * Regenerated from discovery document revision 20200824
+
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,96 @@
+ # Simple REST client for version V1 of the Cloud Speech-to-Text API
+
+ This is a simple client library for version V1 of the Cloud Speech-to-Text API. It provides:
+
+ * A client object that connects to the HTTP/JSON REST endpoint for the service.
+ * Ruby objects for data structures related to the service.
+ * Integration with the googleauth gem for authentication using OAuth, API keys, and service accounts.
+ * Control of retry, pagination, and timeouts.
+
+ Note that although this client library is supported and will continue to be updated to track changes to the service, it is otherwise considered complete and not under active development. Many Google services, especially Google Cloud Platform services, may provide a more modern client that is under more active development and improvement. See the section below titled *Which client should I use?* for more information.
+
+ ## Getting started
+
+ ### Before you begin
+
+ There are a few setup steps you need to complete before you can use this library:
+
+ 1. If you don't already have a Google account, [sign up](https://www.google.com/accounts).
+ 2. If you have never created a Google APIs Console project, read about [Managing Projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects) and create a project in the [Google API Console](https://console.cloud.google.com/).
+ 3. Most APIs need to be enabled for your project. [Enable it](https://console.cloud.google.com/apis/library/speech.googleapis.com) in the console.
+
+ ### Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'google-apis-speech_v1', '~> 0.1'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install google-apis-speech_v1
+ ```
+
+ ### Creating a client object
+
+ Once the gem is installed, you can load the client code and instantiate a client.
+
+ ```ruby
+ # Load the client
+ require "google/apis/speech_v1"
+
+ # Create a client object
+ client = Google::Apis::SpeechV1::SpeechService.new
+
+ # Authenticate calls
+ client.authorization = # ... use the googleauth gem to create credentials
+ ```
+
+ See the class reference docs for information on the methods you can call from a client.
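+
+ As a concrete (illustrative) sketch, the snippet below obtains Application Default Credentials with the googleauth gem and sends a short synchronous recognition request. The request and response classes (`RecognizeRequest`, `RecognitionConfig`, `RecognitionAudio`) are part of this gem; the `recognize_speech` method name and the `gs://` sample URI are assumptions here, so check the `SpeechService` class reference docs for the exact generated signatures.
+
+ ```ruby
+ require "google/apis/speech_v1"
+ require "googleauth"
+
+ speech = Google::Apis::SpeechV1::SpeechService.new
+
+ # Application Default Credentials, scoped for Cloud Platform (see the Auth Guide).
+ speech.authorization = Google::Auth.get_application_default(
+   [Google::Apis::SpeechV1::AUTH_CLOUD_PLATFORM]
+ )
+
+ # Build a synchronous recognition request from this gem's data classes.
+ request = Google::Apis::SpeechV1::RecognizeRequest.new(
+   config: Google::Apis::SpeechV1::RecognitionConfig.new(
+     encoding:          "LINEAR16",
+     sample_rate_hertz: 16_000,
+     language_code:     "en-US"
+   ),
+   audio: Google::Apis::SpeechV1::RecognitionAudio.new(
+     uri: "gs://my-bucket/my-audio.wav" # hypothetical Cloud Storage object
+   )
+ )
+
+ # Assumed generated method name for speech.recognize; see the reference docs.
+ response = speech.recognize_speech(request)
+ (response.results || []).each do |result|
+   puts result.alternatives.first.transcript
+ end
+ ```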
+
+ ## Documentation
+
+ More detailed descriptions of the Google simple REST clients are available in two documents.
+
+ * The [Usage Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/usage-guide.md) discusses how to make API calls, how to use the provided data structures, and how to work with the various features of the client library, including media upload and download, error handling, retries, pagination, and logging.
+ * The [Auth Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/auth-guide.md) discusses authentication in the client libraries, including API keys, OAuth 2.0, service accounts, and environment variables.
+
+ (Note: the above documents are written for the simple REST clients in general, and their examples may not reflect the Speech service in particular.)
+
+ For reference information on specific calls in the Cloud Speech-to-Text API, see the {Google::Apis::SpeechV1::SpeechService class reference docs}.
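+
+ For example, audio longer than about a minute is typically handled with the `LongRunningRecognize` flow, which returns an `Operation` that you poll until `done` is true. Continuing with the `speech` client from the snippet above, the sketch below assumes the generated method names `longrunningrecognize_speech` and `get_operation`; verify them against the class reference docs before relying on them.
+
+ ```ruby
+ # Start a long-running transcription job for audio stored in Cloud Storage.
+ request = Google::Apis::SpeechV1::LongRunningRecognizeRequest.new(
+   config: Google::Apis::SpeechV1::RecognitionConfig.new(
+     encoding: "FLAC", language_code: "en-US"
+   ),
+   audio: Google::Apis::SpeechV1::RecognitionAudio.new(
+     uri: "gs://my-bucket/long-recording.flac" # hypothetical object
+   )
+ )
+
+ operation = speech.longrunningrecognize_speech(request) # assumed method name
+
+ # Poll the returned Operation until the service marks it done.
+ until operation.done?
+   sleep 30
+   operation = speech.get_operation(operation.name) # assumed method name
+ end
+
+ # On completion, `response` carries the LongRunningRecognizeResponse payload.
+ puts operation.error ? operation.error.message : operation.response
+ ```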
+
+ ## Which client should I use?
+
+ Google provides two types of Ruby API client libraries: **simple REST clients** and **modern clients**.
+
+ This library, `google-apis-speech_v1`, is a simple REST client. You can identify these clients by their gem names, which are always in the form `google-apis-<servicename>_<serviceversion>`. The simple REST clients connect to HTTP/JSON REST endpoints and are automatically generated from service discovery documents. They support most API functionality, but their class interfaces are sometimes awkward.
+
+ Modern clients are produced by a modern code generator, sometimes combined with hand-crafted functionality. Most modern clients connect to high-performance gRPC endpoints, although a few are backed by REST services. Modern clients are available for many Google services, especially Google Cloud Platform services, but do not yet support all the services covered by the simple clients.
+
+ Gem names for modern clients are often of the form `google-cloud-<service_name>`. (For example, [google-cloud-pubsub](https://rubygems.org/gems/google-cloud-pubsub).) Note that most modern clients also have corresponding "versioned" gems with names like `google-cloud-<service_name>-<version>`. (For example, [google-cloud-pubsub-v1](https://rubygems.org/gems/google-cloud-pubsub-v1).) The "versioned" gems can be used directly, but often provide lower-level interfaces. In most cases, the main gem is recommended.
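+
+ As an illustration of this naming convention, both of the following are ways to depend on a Speech client; the modern client for this service is published as [google-cloud-speech](https://rubygems.org/gems/google-cloud-speech), but confirm its availability for your platform before choosing it:
+
+ ```ruby
+ # Gemfile
+
+ # Simple REST client for Speech-to-Text V1 (this gem).
+ gem "google-apis-speech_v1", "~> 0.1"
+
+ # Modern, gRPC-based client for the same service.
+ gem "google-cloud-speech"
+ ```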
+
+ **For most users, we recommend the modern client, if one is available.** Compared with simple clients, modern clients are generally much easier to use and more Ruby-like, support more advanced features such as streaming and long-running operations, and often provide much better performance. You may consider using a simple client instead, if a modern client is not yet available for the service you want to use, or if you are not able to use gRPC on your infrastructure.
+
+ The [product documentation](https://cloud.google.com/speech-to-text/docs/quickstart-protocol) may provide guidance regarding the preferred client library to use.
+
+ ## Supported Ruby versions
+
+ This library is supported on Ruby 2.5+.
+
+ Google provides official support for Ruby versions that are actively supported by Ruby Core -- that is, Ruby versions that are either in normal maintenance or in security maintenance, and not end of life. Currently, this means Ruby 2.5 and later. Older versions of Ruby _may_ still work, but are unsupported and not recommended. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.
+
+ ## License
+
+ This library is licensed under Apache 2.0. Full license text is available in the {file:LICENSE.md LICENSE}.
+
+ ## Support
+
+ Please [report bugs at the project on GitHub](https://github.com/google/google-api-ruby-client/issues). Don't hesitate to [ask questions](http://stackoverflow.com/questions/tagged/google-api-ruby-client) about the client or APIs on [StackOverflow](http://stackoverflow.com).
@@ -0,0 +1,15 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require "google/apis/speech_v1"
@@ -0,0 +1,36 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'google/apis/speech_v1/service.rb'
+ require 'google/apis/speech_v1/classes.rb'
+ require 'google/apis/speech_v1/representations.rb'
+ require 'google/apis/speech_v1/gem_version.rb'
+
+ module Google
+   module Apis
+     # Cloud Speech-to-Text API
+     #
+     # Converts audio to text by applying powerful neural network models.
+     #
+     # @see https://cloud.google.com/speech-to-text/docs/quickstart-protocol
+     module SpeechV1
+       # Version of the Cloud Speech-to-Text API this client connects to.
+       # This is NOT the gem version.
+       VERSION = 'V1'
+
+       # View and manage your data across Google Cloud Platform services
+       AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
+     end
+   end
+ end
@@ -0,0 +1,740 @@
1
+ # Copyright 2020 Google LLC
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ require 'date'
16
+ require 'google/apis/core/base_service'
17
+ require 'google/apis/core/json_representation'
18
+ require 'google/apis/core/hashable'
19
+ require 'google/apis/errors'
20
+
21
+ module Google
22
+ module Apis
23
+ module SpeechV1
24
+
25
+ # The response message for Operations.ListOperations.
26
+ class ListOperationsResponse
27
+ include Google::Apis::Core::Hashable
28
+
29
+ # The standard List next-page token.
30
+ # Corresponds to the JSON property `nextPageToken`
31
+ # @return [String]
32
+ attr_accessor :next_page_token
33
+
34
+ # A list of operations that matches the specified filter in the request.
35
+ # Corresponds to the JSON property `operations`
36
+ # @return [Array<Google::Apis::SpeechV1::Operation>]
37
+ attr_accessor :operations
38
+
39
+ def initialize(**args)
40
+ update!(**args)
41
+ end
42
+
43
+ # Update properties of this object
44
+ def update!(**args)
45
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
46
+ @operations = args[:operations] if args.key?(:operations)
47
+ end
48
+ end
49
+
50
+ # Describes the progress of a long-running `LongRunningRecognize` call. It is
51
+ # included in the `metadata` field of the `Operation` returned by the `
52
+ # GetOperation` call of the `google::longrunning::Operations` service.
53
+ class LongRunningRecognizeMetadata
54
+ include Google::Apis::Core::Hashable
55
+
56
+ # Time of the most recent processing update.
57
+ # Corresponds to the JSON property `lastUpdateTime`
58
+ # @return [String]
59
+ attr_accessor :last_update_time
60
+
61
+ # Approximate percentage of audio processed thus far. Guaranteed to be 100 when
62
+ # the audio is fully processed and the results are available.
63
+ # Corresponds to the JSON property `progressPercent`
64
+ # @return [Fixnum]
65
+ attr_accessor :progress_percent
66
+
67
+ # Time when the request was received.
68
+ # Corresponds to the JSON property `startTime`
69
+ # @return [String]
70
+ attr_accessor :start_time
71
+
72
+ # Output only. The URI of the audio file being transcribed. Empty if the audio
73
+ # was sent as byte content.
74
+ # Corresponds to the JSON property `uri`
75
+ # @return [String]
76
+ attr_accessor :uri
77
+
78
+ def initialize(**args)
79
+ update!(**args)
80
+ end
81
+
82
+ # Update properties of this object
83
+ def update!(**args)
84
+ @last_update_time = args[:last_update_time] if args.key?(:last_update_time)
85
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
86
+ @start_time = args[:start_time] if args.key?(:start_time)
87
+ @uri = args[:uri] if args.key?(:uri)
88
+ end
89
+ end
90
+
91
+ # The top-level message sent by the client for the `LongRunningRecognize` method.
92
+ class LongRunningRecognizeRequest
93
+ include Google::Apis::Core::Hashable
94
+
95
+ # Contains audio data in the encoding specified in the `RecognitionConfig`.
96
+ # Either `content` or `uri` must be supplied. Supplying both or neither returns
97
+ # google.rpc.Code.INVALID_ARGUMENT. See [content limits](https://cloud.google.
98
+ # com/speech-to-text/quotas#content).
99
+ # Corresponds to the JSON property `audio`
100
+ # @return [Google::Apis::SpeechV1::RecognitionAudio]
101
+ attr_accessor :audio
102
+
103
+ # Provides information to the recognizer that specifies how to process the
104
+ # request.
105
+ # Corresponds to the JSON property `config`
106
+ # @return [Google::Apis::SpeechV1::RecognitionConfig]
107
+ attr_accessor :config
108
+
109
+ def initialize(**args)
110
+ update!(**args)
111
+ end
112
+
113
+ # Update properties of this object
114
+ def update!(**args)
115
+ @audio = args[:audio] if args.key?(:audio)
116
+ @config = args[:config] if args.key?(:config)
117
+ end
118
+ end
119
+
120
+ # The only message returned to the client by the `LongRunningRecognize` method.
121
+ # It contains the result as zero or more sequential `SpeechRecognitionResult`
122
+ # messages. It is included in the `result.response` field of the `Operation`
123
+ # returned by the `GetOperation` call of the `google::longrunning::Operations`
124
+ # service.
125
+ class LongRunningRecognizeResponse
126
+ include Google::Apis::Core::Hashable
127
+
128
+ # Sequential list of transcription results corresponding to sequential portions
129
+ # of audio.
130
+ # Corresponds to the JSON property `results`
131
+ # @return [Array<Google::Apis::SpeechV1::SpeechRecognitionResult>]
132
+ attr_accessor :results
133
+
134
+ def initialize(**args)
135
+ update!(**args)
136
+ end
137
+
138
+ # Update properties of this object
139
+ def update!(**args)
140
+ @results = args[:results] if args.key?(:results)
141
+ end
142
+ end
143
+
144
+ # This resource represents a long-running operation that is the result of a
145
+ # network API call.
146
+ class Operation
147
+ include Google::Apis::Core::Hashable
148
+
149
+ # If the value is `false`, it means the operation is still in progress. If `true`
150
+ # , the operation is completed, and either `error` or `response` is available.
151
+ # Corresponds to the JSON property `done`
152
+ # @return [Boolean]
153
+ attr_accessor :done
154
+ alias_method :done?, :done
155
+
156
+ # The `Status` type defines a logical error model that is suitable for different
157
+ # programming environments, including REST APIs and RPC APIs. It is used by [
158
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
159
+ # data: error code, error message, and error details. You can find out more
160
+ # about this error model and how to work with it in the [API Design Guide](https:
161
+ # //cloud.google.com/apis/design/errors).
162
+ # Corresponds to the JSON property `error`
163
+ # @return [Google::Apis::SpeechV1::Status]
164
+ attr_accessor :error
165
+
166
+ # Service-specific metadata associated with the operation. It typically contains
167
+ # progress information and common metadata such as create time. Some services
168
+ # might not provide such metadata. Any method that returns a long-running
169
+ # operation should document the metadata type, if any.
170
+ # Corresponds to the JSON property `metadata`
171
+ # @return [Hash<String,Object>]
172
+ attr_accessor :metadata
173
+
174
+ # The server-assigned name, which is only unique within the same service that
175
+ # originally returns it. If you use the default HTTP mapping, the `name` should
176
+ # be a resource name ending with `operations/`unique_id``.
177
+ # Corresponds to the JSON property `name`
178
+ # @return [String]
179
+ attr_accessor :name
180
+
181
+ # The normal response of the operation in case of success. If the original
182
+ # method returns no data on success, such as `Delete`, the response is `google.
183
+ # protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`,
184
+ # the response should be the resource. For other methods, the response should
185
+ # have the type `XxxResponse`, where `Xxx` is the original method name. For
186
+ # example, if the original method name is `TakeSnapshot()`, the inferred
187
+ # response type is `TakeSnapshotResponse`.
188
+ # Corresponds to the JSON property `response`
189
+ # @return [Hash<String,Object>]
190
+ attr_accessor :response
191
+
192
+ def initialize(**args)
193
+ update!(**args)
194
+ end
195
+
196
+ # Update properties of this object
197
+ def update!(**args)
198
+ @done = args[:done] if args.key?(:done)
199
+ @error = args[:error] if args.key?(:error)
200
+ @metadata = args[:metadata] if args.key?(:metadata)
201
+ @name = args[:name] if args.key?(:name)
202
+ @response = args[:response] if args.key?(:response)
203
+ end
204
+ end
205
+
206
+ # Contains audio data in the encoding specified in the `RecognitionConfig`.
207
+ # Either `content` or `uri` must be supplied. Supplying both or neither returns
208
+ # google.rpc.Code.INVALID_ARGUMENT. See [content limits](https://cloud.google.
209
+ # com/speech-to-text/quotas#content).
210
+ class RecognitionAudio
211
+ include Google::Apis::Core::Hashable
212
+
213
+ # The audio data bytes encoded as specified in `RecognitionConfig`. Note: as
214
+ # with all bytes fields, proto buffers use a pure binary representation, whereas
215
+ # JSON representations use base64.
216
+ # Corresponds to the JSON property `content`
217
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
218
+ # @return [String]
219
+ attr_accessor :content
220
+
221
+ # URI that points to a file that contains audio data bytes as specified in `
222
+ # RecognitionConfig`. The file must not be compressed (for example, gzip).
223
+ # Currently, only Google Cloud Storage URIs are supported, which must be
224
+ # specified in the following format: `gs://bucket_name/object_name` (other URI
225
+ # formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see [
226
+ # Request URIs](https://cloud.google.com/storage/docs/reference-uris).
227
+ # Corresponds to the JSON property `uri`
228
+ # @return [String]
229
+ attr_accessor :uri
230
+
231
+ def initialize(**args)
232
+ update!(**args)
233
+ end
234
+
235
+ # Update properties of this object
236
+ def update!(**args)
237
+ @content = args[:content] if args.key?(:content)
238
+ @uri = args[:uri] if args.key?(:uri)
239
+ end
240
+ end
241
+
242
+ # Provides information to the recognizer that specifies how to process the
243
+ # request.
244
+ class RecognitionConfig
245
+ include Google::Apis::Core::Hashable
246
+
247
+ # The number of channels in the input audio data. ONLY set this for MULTI-
248
+ # CHANNEL recognition. Valid values for LINEAR16 and FLAC are `1`-`8`. Valid
249
+ # values for OGG_OPUS are '1'-'254'. Valid value for MULAW, AMR, AMR_WB and
250
+ # SPEEX_WITH_HEADER_BYTE is only `1`. If `0` or omitted, defaults to one channel
251
+ # (mono). Note: We only recognize the first channel by default. To perform
252
+ # independent recognition on each channel set `
253
+ # enable_separate_recognition_per_channel` to 'true'.
254
+ # Corresponds to the JSON property `audioChannelCount`
255
+ # @return [Fixnum]
256
+ attr_accessor :audio_channel_count
257
+
258
+ # Config to enable speaker diarization.
259
+ # Corresponds to the JSON property `diarizationConfig`
260
+ # @return [Google::Apis::SpeechV1::SpeakerDiarizationConfig]
261
+ attr_accessor :diarization_config
262
+
263
+ # If 'true', adds punctuation to recognition result hypotheses. This feature is
264
+ # only available in select languages. Setting this for requests in other
265
+ # languages has no effect at all. The default 'false' value does not add
266
+ # punctuation to result hypotheses.
267
+ # Corresponds to the JSON property `enableAutomaticPunctuation`
268
+ # @return [Boolean]
269
+ attr_accessor :enable_automatic_punctuation
270
+ alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
271
+
272
+ # This needs to be set to `true` explicitly and `audio_channel_count` > 1 to get
273
+ # each channel recognized separately. The recognition result will contain a `
274
+ # channel_tag` field to state which channel that result belongs to. If this is
275
+ # not true, we will only recognize the first channel. The request is billed
276
+ # cumulatively for all channels recognized: `audio_channel_count` multiplied by
277
+ # the length of the audio.
278
+ # Corresponds to the JSON property `enableSeparateRecognitionPerChannel`
279
+ # @return [Boolean]
280
+ attr_accessor :enable_separate_recognition_per_channel
281
+ alias_method :enable_separate_recognition_per_channel?, :enable_separate_recognition_per_channel
282
+
283
+ # If `true`, the top result includes a list of words and the start and end time
284
+ # offsets (timestamps) for those words. If `false`, no word-level time offset
285
+ # information is returned. The default is `false`.
286
+ # Corresponds to the JSON property `enableWordTimeOffsets`
287
+ # @return [Boolean]
288
+ attr_accessor :enable_word_time_offsets
289
+ alias_method :enable_word_time_offsets?, :enable_word_time_offsets
290
+
291
+ # Encoding of audio data sent in all `RecognitionAudio` messages. This field is
292
+ # optional for `FLAC` and `WAV` audio files and required for all other audio
293
+ # formats. For details, see AudioEncoding.
294
+ # Corresponds to the JSON property `encoding`
295
+ # @return [String]
296
+ attr_accessor :encoding
297
+
298
+ # Required. The language of the supplied audio as a [BCP-47](https://www.rfc-
299
+ # editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language
300
+ # Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of
301
+ # the currently supported language codes.
302
+ # Corresponds to the JSON property `languageCode`
303
+ # @return [String]
304
+ attr_accessor :language_code
305
+
306
+ # Maximum number of recognition hypotheses to be returned. Specifically, the
307
+ # maximum number of `SpeechRecognitionAlternative` messages within each `
308
+ # SpeechRecognitionResult`. The server may return fewer than `max_alternatives`.
309
+ # Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one.
310
+ # If omitted, will return a maximum of one.
311
+ # Corresponds to the JSON property `maxAlternatives`
312
+ # @return [Fixnum]
313
+ attr_accessor :max_alternatives
314
+
315
+ # Description of audio data to be recognized.
316
+ # Corresponds to the JSON property `metadata`
317
+ # @return [Google::Apis::SpeechV1::RecognitionMetadata]
318
+ attr_accessor :metadata
319
+
320
+ # Which model to select for the given request. Select the model best suited to
321
+ # your domain to get best results. If a model is not explicitly specified, then
322
+ # we auto-select a model based on the parameters in the RecognitionConfig. *
323
+ # Model* *Description* command_and_search Best for short queries such as voice
324
+ # commands or voice search. phone_call Best for audio that originated from a
325
+ # phone call (typically recorded at an 8khz sampling rate). video Best for audio
326
+ # that originated from video or includes multiple speakers. Ideally the
327
+ # audio is recorded at a 16khz or greater sampling rate. This is a premium model
328
+ # that costs more than the standard rate. default Best for audio that is not one
329
+ # of the specific audio models. For example, long-form audio. Ideally the audio
330
+ # is high-fidelity, recorded at a 16khz or greater sampling rate.
331
+ # Corresponds to the JSON property `model`
332
+ # @return [String]
333
+ attr_accessor :model
334
+
335
+ # If set to `true`, the server will attempt to filter out profanities, replacing
336
+ # all but the initial character in each filtered word with asterisks, e.g. "f***"
337
+ # . If set to `false` or omitted, profanities won't be filtered out.
338
+ # Corresponds to the JSON property `profanityFilter`
339
+ # @return [Boolean]
340
+ attr_accessor :profanity_filter
341
+ alias_method :profanity_filter?, :profanity_filter
342
+
343
+ # Sample rate in Hertz of the audio data sent in all `RecognitionAudio` messages.
344
+ # Valid values are: 8000-48000. 16000 is optimal. For best results, set the
345
+ # sampling rate of the audio source to 16000 Hz. If that's not possible, use the
346
+ # native sample rate of the audio source (instead of re-sampling). This field is
347
+ # optional for FLAC and WAV audio files, but is required for all other audio
348
+ # formats. For details, see AudioEncoding.
349
+ # Corresponds to the JSON property `sampleRateHertz`
350
+ # @return [Fixnum]
351
+ attr_accessor :sample_rate_hertz
352
+
353
+ # Array of SpeechContext. A means to provide context to assist the speech
354
+ # recognition. For more information, see [speech adaptation](https://cloud.
355
+ # google.com/speech-to-text/docs/context-strength).
356
+ # Corresponds to the JSON property `speechContexts`
357
+ # @return [Array<Google::Apis::SpeechV1::SpeechContext>]
358
+ attr_accessor :speech_contexts
359
+
360
+ # Set to true to use an enhanced model for speech recognition. If `use_enhanced`
361
+ # is set to true and the `model` field is not set, then an appropriate enhanced
362
+ # model is chosen if an enhanced model exists for the audio. If `use_enhanced`
363
+ # is true and an enhanced version of the specified model does not exist, then
364
+ # the speech is recognized using the standard version of the specified model.
365
+ # Corresponds to the JSON property `useEnhanced`
366
+ # @return [Boolean]
367
+ attr_accessor :use_enhanced
368
+ alias_method :use_enhanced?, :use_enhanced
369
+
370
+ def initialize(**args)
371
+ update!(**args)
372
+ end
373
+
374
+ # Update properties of this object
375
+ def update!(**args)
376
+ @audio_channel_count = args[:audio_channel_count] if args.key?(:audio_channel_count)
377
+ @diarization_config = args[:diarization_config] if args.key?(:diarization_config)
378
+ @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
379
+ @enable_separate_recognition_per_channel = args[:enable_separate_recognition_per_channel] if args.key?(:enable_separate_recognition_per_channel)
380
+ @enable_word_time_offsets = args[:enable_word_time_offsets] if args.key?(:enable_word_time_offsets)
381
+ @encoding = args[:encoding] if args.key?(:encoding)
382
+ @language_code = args[:language_code] if args.key?(:language_code)
383
+ @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
384
+ @metadata = args[:metadata] if args.key?(:metadata)
385
+ @model = args[:model] if args.key?(:model)
386
+ @profanity_filter = args[:profanity_filter] if args.key?(:profanity_filter)
387
+ @sample_rate_hertz = args[:sample_rate_hertz] if args.key?(:sample_rate_hertz)
388
+ @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
389
+ @use_enhanced = args[:use_enhanced] if args.key?(:use_enhanced)
390
+ end
391
+ end
392
+
393
+ # Description of audio data to be recognized.
394
+ class RecognitionMetadata
395
+ include Google::Apis::Core::Hashable
396
+
397
+ # Description of the content. E.g., "Recordings of federal supreme court hearings
398
+ # from 2012".
399
+ # Corresponds to the JSON property `audioTopic`
400
+ # @return [String]
401
+ attr_accessor :audio_topic
402
+
403
+ # The industry vertical to which this speech recognition request most closely
404
+ # applies. This is most indicative of the topics contained in the audio. Use the
405
+ # 6-digit NAICS code to identify the industry vertical - see https://www.naics.
406
+ # com/search/.
407
+ # Corresponds to the JSON property `industryNaicsCodeOfAudio`
408
+ # @return [Fixnum]
409
+ attr_accessor :industry_naics_code_of_audio
410
+
411
+ # The use case most closely describing the audio content to be recognized.
412
+ # Corresponds to the JSON property `interactionType`
413
+ # @return [String]
414
+ attr_accessor :interaction_type
415
+
416
+ # The audio type that most closely describes the audio being recognized.
417
+ # Corresponds to the JSON property `microphoneDistance`
418
+ # @return [String]
419
+ attr_accessor :microphone_distance
420
+
421
+ # The original media the speech was recorded on.
422
+ # Corresponds to the JSON property `originalMediaType`
423
+ # @return [String]
424
+ attr_accessor :original_media_type
425
+
426
+ # Mime type of the original audio file. For example `audio/m4a`, `audio/x-alaw-
427
+ # basic`, `audio/mp3`, `audio/3gpp`. A list of possible audio mime types is
428
+ # maintained at http://www.iana.org/assignments/media-types/media-types.xhtml#
429
+ # audio
430
+ # Corresponds to the JSON property `originalMimeType`
431
+ # @return [String]
432
+ attr_accessor :original_mime_type
433
+
434
+ # The device used to make the recording. Examples 'Nexus 5X' or 'Polycom
435
+ # SoundStation IP 6000' or 'POTS' or 'VoIP' or 'Cardioid Microphone'.
436
+ # Corresponds to the JSON property `recordingDeviceName`
437
+ # @return [String]
438
+ attr_accessor :recording_device_name
439
+
440
+ # The type of device the speech was recorded with.
441
+ # Corresponds to the JSON property `recordingDeviceType`
442
+ # @return [String]
443
+ attr_accessor :recording_device_type
444
+
445
+ def initialize(**args)
446
+ update!(**args)
447
+ end
448
+
449
+ # Update properties of this object
450
+ def update!(**args)
451
+ @audio_topic = args[:audio_topic] if args.key?(:audio_topic)
452
+ @industry_naics_code_of_audio = args[:industry_naics_code_of_audio] if args.key?(:industry_naics_code_of_audio)
453
+ @interaction_type = args[:interaction_type] if args.key?(:interaction_type)
454
+ @microphone_distance = args[:microphone_distance] if args.key?(:microphone_distance)
455
+ @original_media_type = args[:original_media_type] if args.key?(:original_media_type)
456
+ @original_mime_type = args[:original_mime_type] if args.key?(:original_mime_type)
457
+ @recording_device_name = args[:recording_device_name] if args.key?(:recording_device_name)
458
+ @recording_device_type = args[:recording_device_type] if args.key?(:recording_device_type)
459
+ end
460
+ end
461
+
462
+ # The top-level message sent by the client for the `Recognize` method.
463
+ class RecognizeRequest
464
+ include Google::Apis::Core::Hashable
465
+
466
+ # Contains audio data in the encoding specified in the `RecognitionConfig`.
467
+ # Either `content` or `uri` must be supplied. Supplying both or neither returns
468
+ # google.rpc.Code.INVALID_ARGUMENT. See [content limits](https://cloud.google.
469
+ # com/speech-to-text/quotas#content).
470
+ # Corresponds to the JSON property `audio`
471
+ # @return [Google::Apis::SpeechV1::RecognitionAudio]
472
+ attr_accessor :audio
473
+
474
+ # Provides information to the recognizer that specifies how to process the
475
+ # request.
476
+ # Corresponds to the JSON property `config`
477
+ # @return [Google::Apis::SpeechV1::RecognitionConfig]
478
+ attr_accessor :config
479
+
480
+ def initialize(**args)
481
+ update!(**args)
482
+ end
483
+
484
+ # Update properties of this object
485
+ def update!(**args)
486
+ @audio = args[:audio] if args.key?(:audio)
487
+ @config = args[:config] if args.key?(:config)
488
+ end
489
+ end
490
+
491
+ # The only message returned to the client by the `Recognize` method. It contains
492
+ # the result as zero or more sequential `SpeechRecognitionResult` messages.
493
+ class RecognizeResponse
494
+ include Google::Apis::Core::Hashable
495
+
496
+ # Sequential list of transcription results corresponding to sequential portions
497
+ # of audio.
498
+ # Corresponds to the JSON property `results`
499
+ # @return [Array<Google::Apis::SpeechV1::SpeechRecognitionResult>]
500
+ attr_accessor :results
501
+
502
+ def initialize(**args)
503
+ update!(**args)
504
+ end
505
+
506
+ # Update properties of this object
507
+ def update!(**args)
508
+ @results = args[:results] if args.key?(:results)
509
+ end
510
+ end
511
+
512
+ # Config to enable speaker diarization.
513
+ class SpeakerDiarizationConfig
514
+ include Google::Apis::Core::Hashable
515
+
516
+ # If 'true', enables speaker detection for each recognized word in the top
517
+ # alternative of the recognition result using a speaker_tag provided in the
518
+ # WordInfo.
519
+ # Corresponds to the JSON property `enableSpeakerDiarization`
520
+ # @return [Boolean]
521
+ attr_accessor :enable_speaker_diarization
522
+ alias_method :enable_speaker_diarization?, :enable_speaker_diarization
523
+
524
+ # Maximum number of speakers in the conversation. This range gives you more
525
+ # flexibility by allowing the system to automatically determine the correct
526
+ # number of speakers. If not set, the default value is 6.
527
+ # Corresponds to the JSON property `maxSpeakerCount`
528
+ # @return [Fixnum]
529
+ attr_accessor :max_speaker_count
530
+
531
+ # Minimum number of speakers in the conversation. This range gives you more
532
+ # flexibility by allowing the system to automatically determine the correct
533
+ # number of speakers. If not set, the default value is 2.
534
+ # Corresponds to the JSON property `minSpeakerCount`
535
+ # @return [Fixnum]
536
+ attr_accessor :min_speaker_count
537
+
538
+ # Output only. Unused.
539
+ # Corresponds to the JSON property `speakerTag`
540
+ # @return [Fixnum]
541
+ attr_accessor :speaker_tag
542
+
543
+ def initialize(**args)
544
+ update!(**args)
545
+ end
546
+
547
+ # Update properties of this object
548
+ def update!(**args)
549
+ @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
550
+ @max_speaker_count = args[:max_speaker_count] if args.key?(:max_speaker_count)
551
+ @min_speaker_count = args[:min_speaker_count] if args.key?(:min_speaker_count)
552
+ @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
553
+ end
554
+ end
555
+
556
+ # Provides "hints" to the speech recognizer to favor specific words and phrases
557
+ # in the results.
558
+ class SpeechContext
559
+ include Google::Apis::Core::Hashable
560
+
561
+ # A list of strings containing words and phrases "hints" so that the speech
562
+ # recognition is more likely to recognize them. This can be used to improve the
563
+ # accuracy for specific words and phrases, for example, if specific commands are
564
+ # typically spoken by the user. This can also be used to add additional words to
565
+ # the vocabulary of the recognizer. See [usage limits](https://cloud.google.com/
566
+ # speech-to-text/quotas#content). List items can also be set to classes for
567
+ # groups of words that represent common concepts that occur in natural language.
568
+ # For example, rather than providing phrase hints for every month of the year,
569
+ # using the $MONTH class improves the likelihood of correctly transcribing audio
570
+ # that includes months.
571
+ # Corresponds to the JSON property `phrases`
572
+ # @return [Array<String>]
573
+ attr_accessor :phrases
574
+
575
+ def initialize(**args)
576
+ update!(**args)
577
+ end
578
+
579
+ # Update properties of this object
580
+ def update!(**args)
581
+ @phrases = args[:phrases] if args.key?(:phrases)
582
+ end
583
+ end
584
+
585
+ # Alternative hypotheses (a.k.a. n-best list).
586
+ class SpeechRecognitionAlternative
587
+ include Google::Apis::Core::Hashable
588
+
589
+ # The confidence estimate between 0.0 and 1.0. A higher number indicates an
590
+ # estimated greater likelihood that the recognized words are correct. This field
591
+ # is set only for the top alternative of a non-streaming result or of a
592
+ # streaming result where `is_final=true`. This field is not guaranteed to be
593
+ # accurate and users should not rely on it to be always provided. The default of
594
+ # 0.0 is a sentinel value indicating `confidence` was not set.
595
+ # Corresponds to the JSON property `confidence`
596
+ # @return [Float]
597
+ attr_accessor :confidence
598
+
599
+ # Transcript text representing the words that the user spoke.
600
+ # Corresponds to the JSON property `transcript`
601
+ # @return [String]
602
+ attr_accessor :transcript
603
+
604
+ # A list of word-specific information for each recognized word. Note: When `
605
+ # enable_speaker_diarization` is true, you will see all the words from the
606
+ # beginning of the audio.
607
+ # Corresponds to the JSON property `words`
608
+ # @return [Array<Google::Apis::SpeechV1::WordInfo>]
609
+ attr_accessor :words
610
+
611
+ def initialize(**args)
612
+ update!(**args)
613
+ end
614
+
615
+ # Update properties of this object
616
+ def update!(**args)
617
+ @confidence = args[:confidence] if args.key?(:confidence)
618
+ @transcript = args[:transcript] if args.key?(:transcript)
619
+ @words = args[:words] if args.key?(:words)
620
+ end
621
+ end
622
+
623
+ # A speech recognition result corresponding to a portion of the audio.
624
+ class SpeechRecognitionResult
625
+ include Google::Apis::Core::Hashable
626
+
627
+ # May contain one or more recognition hypotheses (up to the maximum specified in
628
+ # `max_alternatives`). These alternatives are ordered in terms of accuracy, with
629
+ # the top (first) alternative being the most probable, as ranked by the
630
+ # recognizer.
631
+ # Corresponds to the JSON property `alternatives`
632
+ # @return [Array<Google::Apis::SpeechV1::SpeechRecognitionAlternative>]
633
+ attr_accessor :alternatives
634
+
635
+ # For multi-channel audio, this is the channel number corresponding to the
636
+ # recognized result for the audio from that channel. For audio_channel_count = N,
637
+ # its output values can range from '1' to 'N'.
638
+ # Corresponds to the JSON property `channelTag`
639
+ # @return [Fixnum]
640
+ attr_accessor :channel_tag
641
+
642
+ def initialize(**args)
643
+ update!(**args)
644
+ end
645
+
646
+ # Update properties of this object
647
+ def update!(**args)
648
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
649
+ @channel_tag = args[:channel_tag] if args.key?(:channel_tag)
650
+ end
651
+ end
652
+
653
+ # The `Status` type defines a logical error model that is suitable for different
654
+ # programming environments, including REST APIs and RPC APIs. It is used by [
655
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
656
+ # data: error code, error message, and error details. You can find out more
657
+ # about this error model and how to work with it in the [API Design Guide](https:
658
+ # //cloud.google.com/apis/design/errors).
659
+ class Status
660
+ include Google::Apis::Core::Hashable
661
+
662
+ # The status code, which should be an enum value of google.rpc.Code.
663
+ # Corresponds to the JSON property `code`
664
+ # @return [Fixnum]
665
+ attr_accessor :code
666
+
667
+ # A list of messages that carry the error details. There is a common set of
668
+ # message types for APIs to use.
669
+ # Corresponds to the JSON property `details`
670
+ # @return [Array<Hash<String,Object>>]
671
+ attr_accessor :details
672
+
673
+ # A developer-facing error message, which should be in English. Any user-facing
674
+ # error message should be localized and sent in the google.rpc.Status.details
675
+ # field, or localized by the client.
676
+ # Corresponds to the JSON property `message`
677
+ # @return [String]
678
+ attr_accessor :message
679
+
680
+ def initialize(**args)
681
+ update!(**args)
682
+ end
683
+
684
+ # Update properties of this object
685
+ def update!(**args)
686
+ @code = args[:code] if args.key?(:code)
687
+ @details = args[:details] if args.key?(:details)
688
+ @message = args[:message] if args.key?(:message)
689
+ end
690
+ end
691
+
692
+ # Word-specific information for recognized words.
693
+ class WordInfo
694
+ include Google::Apis::Core::Hashable
695
+
696
+ # Time offset relative to the beginning of the audio, and corresponding to the
697
+ # end of the spoken word. This field is only set if `enable_word_time_offsets=
698
+ # true` and only in the top hypothesis. This is an experimental feature and the
699
+ # accuracy of the time offset can vary.
700
+ # Corresponds to the JSON property `endTime`
701
+ # @return [String]
702
+ attr_accessor :end_time
703
+
704
+ # Output only. A distinct integer value is assigned for every speaker within the
705
+ # audio. This field specifies which one of those speakers was detected to have
706
+ # spoken this word. Value ranges from '1' to diarization_speaker_count.
707
+ # speaker_tag is set if enable_speaker_diarization = 'true' and only in the top
708
+ # alternative.
709
+ # Corresponds to the JSON property `speakerTag`
710
+ # @return [Fixnum]
711
+ attr_accessor :speaker_tag
712
+
713
+ # Time offset relative to the beginning of the audio, and corresponding to the
714
+ # start of the spoken word. This field is only set if `enable_word_time_offsets=
715
+ # true` and only in the top hypothesis. This is an experimental feature and the
716
+ # accuracy of the time offset can vary.
717
+ # Corresponds to the JSON property `startTime`
718
+ # @return [String]
719
+ attr_accessor :start_time
720
+
721
+ # The word corresponding to this set of information.
722
+ # Corresponds to the JSON property `word`
723
+ # @return [String]
724
+ attr_accessor :word
725
+
726
+ def initialize(**args)
727
+ update!(**args)
728
+ end
729
+
730
+ # Update properties of this object
731
+ def update!(**args)
732
+ @end_time = args[:end_time] if args.key?(:end_time)
733
+ @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
734
+ @start_time = args[:start_time] if args.key?(:start_time)
735
+ @word = args[:word] if args.key?(:word)
736
+ end
737
+ end
738
+ end
739
+ end
740
+ end