google-api-client 0.28.4 → 0.28.5

Files changed (426)
  1. checksums.yaml +5 -5
  2. data/.kokoro/build.sh +3 -5
  3. data/.kokoro/continuous/linux.cfg +1 -1
  4. data/.kokoro/presubmit/linux.cfg +1 -1
  5. data/CHANGELOG.md +147 -0
  6. data/generated/google/apis/accesscontextmanager_v1.rb +34 -0
  7. data/generated/google/apis/accesscontextmanager_v1/classes.rb +755 -0
  8. data/generated/google/apis/accesscontextmanager_v1/representations.rb +282 -0
  9. data/generated/google/apis/accesscontextmanager_v1/service.rb +788 -0
  10. data/generated/google/apis/accesscontextmanager_v1beta.rb +1 -1
  11. data/generated/google/apis/accesscontextmanager_v1beta/classes.rb +46 -30
  12. data/generated/google/apis/accesscontextmanager_v1beta/representations.rb +4 -0
  13. data/generated/google/apis/adexchangebuyer2_v2beta1.rb +1 -1
  14. data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +2 -2
  15. data/generated/google/apis/admin_directory_v1.rb +1 -1
  16. data/generated/google/apis/admin_directory_v1/classes.rb +5 -50
  17. data/generated/google/apis/admin_directory_v1/representations.rb +0 -2
  18. data/generated/google/apis/alertcenter_v1beta1.rb +1 -1
  19. data/generated/google/apis/alertcenter_v1beta1/classes.rb +3 -2
  20. data/generated/google/apis/alertcenter_v1beta1/service.rb +7 -7
  21. data/generated/google/apis/analyticsreporting_v4.rb +1 -1
  22. data/generated/google/apis/analyticsreporting_v4/classes.rb +638 -0
  23. data/generated/google/apis/analyticsreporting_v4/representations.rb +248 -0
  24. data/generated/google/apis/analyticsreporting_v4/service.rb +30 -0
  25. data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
  26. data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +10 -10
  27. data/generated/google/apis/androidenterprise_v1.rb +1 -1
  28. data/generated/google/apis/androidenterprise_v1/classes.rb +8 -0
  29. data/generated/google/apis/androidenterprise_v1/representations.rb +2 -0
  30. data/generated/google/apis/androidmanagement_v1.rb +1 -1
  31. data/generated/google/apis/androidmanagement_v1/classes.rb +59 -2
  32. data/generated/google/apis/androidmanagement_v1/representations.rb +33 -0
  33. data/generated/google/apis/appengine_v1.rb +1 -1
  34. data/generated/google/apis/appengine_v1/classes.rb +43 -98
  35. data/generated/google/apis/appengine_v1/representations.rb +17 -35
  36. data/generated/google/apis/appengine_v1alpha.rb +1 -1
  37. data/generated/google/apis/appengine_v1alpha/classes.rb +0 -97
  38. data/generated/google/apis/appengine_v1alpha/representations.rb +0 -35
  39. data/generated/google/apis/appengine_v1beta.rb +1 -1
  40. data/generated/google/apis/appengine_v1beta/classes.rb +3 -99
  41. data/generated/google/apis/appengine_v1beta/representations.rb +0 -35
  42. data/generated/google/apis/bigquery_v2.rb +1 -1
  43. data/generated/google/apis/bigquery_v2/classes.rb +244 -173
  44. data/generated/google/apis/bigquery_v2/representations.rb +79 -58
  45. data/generated/google/apis/bigquerydatatransfer_v1.rb +3 -3
  46. data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +10 -10
  47. data/generated/google/apis/bigquerydatatransfer_v1/service.rb +38 -6
  48. data/generated/google/apis/bigtableadmin_v2.rb +1 -1
  49. data/generated/google/apis/bigtableadmin_v2/classes.rb +4 -4
  50. data/generated/google/apis/binaryauthorization_v1beta1.rb +1 -1
  51. data/generated/google/apis/binaryauthorization_v1beta1/classes.rb +66 -6
  52. data/generated/google/apis/binaryauthorization_v1beta1/representations.rb +17 -0
  53. data/generated/google/apis/cloudasset_v1.rb +34 -0
  54. data/generated/google/apis/cloudasset_v1/classes.rb +805 -0
  55. data/generated/google/apis/cloudasset_v1/representations.rb +263 -0
  56. data/generated/google/apis/cloudasset_v1/service.rb +190 -0
  57. data/generated/google/apis/cloudasset_v1beta1.rb +1 -1
  58. data/generated/google/apis/cloudasset_v1beta1/classes.rb +20 -18
  59. data/generated/google/apis/cloudasset_v1beta1/service.rb +4 -4
  60. data/generated/google/apis/cloudbilling_v1.rb +1 -1
  61. data/generated/google/apis/cloudbilling_v1/classes.rb +1 -1
  62. data/generated/google/apis/cloudbuild_v1.rb +1 -1
  63. data/generated/google/apis/cloudbuild_v1/classes.rb +149 -10
  64. data/generated/google/apis/cloudbuild_v1/representations.rb +65 -0
  65. data/generated/google/apis/cloudbuild_v1alpha1.rb +1 -1
  66. data/generated/google/apis/cloudbuild_v1alpha1/classes.rb +6 -0
  67. data/generated/google/apis/cloudbuild_v1alpha1/representations.rb +2 -0
  68. data/generated/google/apis/cloudbuild_v1alpha1/service.rb +1 -1
  69. data/generated/google/apis/cloudfunctions_v1.rb +1 -1
  70. data/generated/google/apis/cloudfunctions_v1/classes.rb +11 -4
  71. data/generated/google/apis/cloudfunctions_v1/service.rb +8 -2
  72. data/generated/google/apis/cloudfunctions_v1beta2.rb +1 -1
  73. data/generated/google/apis/cloudfunctions_v1beta2/classes.rb +2 -1
  74. data/generated/google/apis/cloudfunctions_v1beta2/service.rb +6 -0
  75. data/generated/google/apis/cloudidentity_v1.rb +7 -1
  76. data/generated/google/apis/cloudidentity_v1/classes.rb +13 -13
  77. data/generated/google/apis/cloudidentity_v1/service.rb +6 -15
  78. data/generated/google/apis/cloudidentity_v1beta1.rb +7 -1
  79. data/generated/google/apis/cloudidentity_v1beta1/classes.rb +10 -10
  80. data/generated/google/apis/cloudidentity_v1beta1/service.rb +4 -10
  81. data/generated/google/apis/cloudiot_v1.rb +1 -1
  82. data/generated/google/apis/cloudiot_v1/classes.rb +11 -11
  83. data/generated/google/apis/cloudkms_v1.rb +1 -1
  84. data/generated/google/apis/cloudkms_v1/classes.rb +7 -3
  85. data/generated/google/apis/cloudprivatecatalog_v1beta1.rb +35 -0
  86. data/generated/google/apis/cloudprivatecatalog_v1beta1/classes.rb +358 -0
  87. data/generated/google/apis/cloudprivatecatalog_v1beta1/representations.rb +123 -0
  88. data/generated/google/apis/cloudprivatecatalog_v1beta1/service.rb +486 -0
  89. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1.rb +35 -0
  90. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/classes.rb +1212 -0
  91. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/representations.rb +399 -0
  92. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/service.rb +1073 -0
  93. data/generated/google/apis/cloudresourcemanager_v1.rb +1 -1
  94. data/generated/google/apis/cloudresourcemanager_v1/classes.rb +17 -16
  95. data/generated/google/apis/cloudresourcemanager_v1beta1.rb +1 -1
  96. data/generated/google/apis/cloudresourcemanager_v1beta1/classes.rb +3 -3
  97. data/generated/google/apis/cloudresourcemanager_v2.rb +1 -1
  98. data/generated/google/apis/cloudresourcemanager_v2/classes.rb +14 -15
  99. data/generated/google/apis/cloudresourcemanager_v2/service.rb +1 -1
  100. data/generated/google/apis/cloudresourcemanager_v2beta1.rb +1 -1
  101. data/generated/google/apis/cloudresourcemanager_v2beta1/classes.rb +14 -15
  102. data/generated/google/apis/cloudresourcemanager_v2beta1/service.rb +1 -1
  103. data/generated/google/apis/cloudscheduler_v1beta1.rb +1 -1
  104. data/generated/google/apis/cloudscheduler_v1beta1/classes.rb +60 -44
  105. data/generated/google/apis/cloudscheduler_v1beta1/service.rb +5 -2
  106. data/generated/google/apis/cloudsearch_v1.rb +1 -1
  107. data/generated/google/apis/cloudsearch_v1/classes.rb +220 -48
  108. data/generated/google/apis/cloudsearch_v1/representations.rb +91 -0
  109. data/generated/google/apis/cloudsearch_v1/service.rb +15 -13
  110. data/generated/google/apis/cloudshell_v1.rb +1 -1
  111. data/generated/google/apis/cloudshell_v1/classes.rb +10 -10
  112. data/generated/google/apis/cloudshell_v1alpha1.rb +1 -1
  113. data/generated/google/apis/cloudshell_v1alpha1/classes.rb +17 -10
  114. data/generated/google/apis/cloudshell_v1alpha1/representations.rb +1 -0
  115. data/generated/google/apis/{partners_v2.rb → cloudtasks_v2.rb} +11 -9
  116. data/generated/google/apis/cloudtasks_v2/classes.rb +1432 -0
  117. data/generated/google/apis/cloudtasks_v2/representations.rb +408 -0
  118. data/generated/google/apis/cloudtasks_v2/service.rb +856 -0
  119. data/generated/google/apis/cloudtasks_v2beta2.rb +1 -1
  120. data/generated/google/apis/cloudtasks_v2beta2/classes.rb +119 -88
  121. data/generated/google/apis/cloudtasks_v2beta2/service.rb +3 -2
  122. data/generated/google/apis/cloudtasks_v2beta3.rb +1 -1
  123. data/generated/google/apis/cloudtasks_v2beta3/classes.rb +122 -90
  124. data/generated/google/apis/cloudtasks_v2beta3/service.rb +3 -2
  125. data/generated/google/apis/cloudtrace_v2.rb +1 -1
  126. data/generated/google/apis/cloudtrace_v2/classes.rb +10 -10
  127. data/generated/google/apis/composer_v1.rb +1 -1
  128. data/generated/google/apis/composer_v1/classes.rb +21 -15
  129. data/generated/google/apis/composer_v1beta1.rb +1 -1
  130. data/generated/google/apis/composer_v1beta1/classes.rb +165 -29
  131. data/generated/google/apis/composer_v1beta1/representations.rb +50 -0
  132. data/generated/google/apis/compute_alpha.rb +1 -1
  133. data/generated/google/apis/compute_alpha/classes.rb +7147 -4656
  134. data/generated/google/apis/compute_alpha/representations.rb +1205 -236
  135. data/generated/google/apis/compute_alpha/service.rb +4338 -3274
  136. data/generated/google/apis/compute_beta.rb +1 -1
  137. data/generated/google/apis/compute_beta/classes.rb +5974 -4567
  138. data/generated/google/apis/compute_beta/representations.rb +846 -283
  139. data/generated/google/apis/compute_beta/service.rb +4274 -3153
  140. data/generated/google/apis/compute_v1.rb +1 -1
  141. data/generated/google/apis/compute_v1/classes.rb +325 -50
  142. data/generated/google/apis/compute_v1/representations.rb +104 -1
  143. data/generated/google/apis/compute_v1/service.rb +153 -2
  144. data/generated/google/apis/container_v1.rb +1 -1
  145. data/generated/google/apis/container_v1/classes.rb +1 -0
  146. data/generated/google/apis/container_v1/service.rb +4 -4
  147. data/generated/google/apis/container_v1beta1.rb +1 -1
  148. data/generated/google/apis/container_v1beta1/classes.rb +7 -0
  149. data/generated/google/apis/container_v1beta1/representations.rb +2 -0
  150. data/generated/google/apis/container_v1beta1/service.rb +4 -4
  151. data/generated/google/apis/containeranalysis_v1alpha1.rb +1 -1
  152. data/generated/google/apis/containeranalysis_v1alpha1/classes.rb +25 -17
  153. data/generated/google/apis/containeranalysis_v1alpha1/representations.rb +1 -0
  154. data/generated/google/apis/containeranalysis_v1beta1.rb +1 -1
  155. data/generated/google/apis/containeranalysis_v1beta1/classes.rb +137 -12
  156. data/generated/google/apis/containeranalysis_v1beta1/representations.rb +33 -0
  157. data/generated/google/apis/content_v2.rb +1 -1
  158. data/generated/google/apis/content_v2/classes.rb +204 -93
  159. data/generated/google/apis/content_v2/representations.rb +49 -0
  160. data/generated/google/apis/content_v2/service.rb +82 -41
  161. data/generated/google/apis/content_v2_1.rb +1 -1
  162. data/generated/google/apis/content_v2_1/classes.rb +360 -209
  163. data/generated/google/apis/content_v2_1/representations.rb +129 -56
  164. data/generated/google/apis/content_v2_1/service.rb +97 -10
  165. data/generated/google/apis/dataflow_v1b3.rb +1 -1
  166. data/generated/google/apis/dataflow_v1b3/classes.rb +51 -19
  167. data/generated/google/apis/dataflow_v1b3/representations.rb +2 -0
  168. data/generated/google/apis/dataflow_v1b3/service.rb +133 -25
  169. data/generated/google/apis/dataproc_v1.rb +1 -1
  170. data/generated/google/apis/dataproc_v1/classes.rb +20 -15
  171. data/generated/google/apis/dataproc_v1/representations.rb +1 -0
  172. data/generated/google/apis/dataproc_v1beta2.rb +1 -1
  173. data/generated/google/apis/dataproc_v1beta2/classes.rb +516 -45
  174. data/generated/google/apis/dataproc_v1beta2/representations.rb +185 -7
  175. data/generated/google/apis/dataproc_v1beta2/service.rb +575 -6
  176. data/generated/google/apis/dfareporting_v3_3.rb +1 -1
  177. data/generated/google/apis/dfareporting_v3_3/classes.rb +3 -3
  178. data/generated/google/apis/dialogflow_v2.rb +1 -1
  179. data/generated/google/apis/dialogflow_v2/classes.rb +126 -77
  180. data/generated/google/apis/dialogflow_v2/service.rb +40 -24
  181. data/generated/google/apis/dialogflow_v2beta1.rb +1 -1
  182. data/generated/google/apis/dialogflow_v2beta1/classes.rb +126 -77
  183. data/generated/google/apis/dialogflow_v2beta1/service.rb +40 -24
  184. data/generated/google/apis/dlp_v2.rb +1 -1
  185. data/generated/google/apis/dlp_v2/classes.rb +44 -41
  186. data/generated/google/apis/dlp_v2/representations.rb +12 -0
  187. data/generated/google/apis/dlp_v2/service.rb +35 -0
  188. data/generated/google/apis/dns_v1.rb +1 -1
  189. data/generated/google/apis/dns_v1/classes.rb +163 -190
  190. data/generated/google/apis/dns_v1/representations.rb +34 -0
  191. data/generated/google/apis/dns_v1/service.rb +15 -110
  192. data/generated/google/apis/dns_v1beta2.rb +1 -1
  193. data/generated/google/apis/dns_v1beta2/classes.rb +117 -248
  194. data/generated/google/apis/dns_v1beta2/service.rb +21 -141
  195. data/generated/google/apis/dns_v2beta1.rb +1 -1
  196. data/generated/google/apis/dns_v2beta1/classes.rb +163 -190
  197. data/generated/google/apis/dns_v2beta1/representations.rb +34 -0
  198. data/generated/google/apis/dns_v2beta1/service.rb +15 -110
  199. data/generated/google/apis/docs_v1.rb +1 -1
  200. data/generated/google/apis/docs_v1/classes.rb +118 -47
  201. data/generated/google/apis/docs_v1/representations.rb +39 -0
  202. data/generated/google/apis/drive_v2.rb +1 -1
  203. data/generated/google/apis/drive_v2/service.rb +3 -1
  204. data/generated/google/apis/drive_v3.rb +1 -1
  205. data/generated/google/apis/drive_v3/service.rb +3 -2
  206. data/generated/google/apis/factchecktools_v1alpha1.rb +34 -0
  207. data/generated/google/apis/factchecktools_v1alpha1/classes.rb +459 -0
  208. data/generated/google/apis/factchecktools_v1alpha1/representations.rb +207 -0
  209. data/generated/google/apis/factchecktools_v1alpha1/service.rb +300 -0
  210. data/generated/google/apis/file_v1.rb +1 -1
  211. data/generated/google/apis/file_v1/classes.rb +203 -10
  212. data/generated/google/apis/file_v1/representations.rb +70 -0
  213. data/generated/google/apis/file_v1/service.rb +190 -0
  214. data/generated/google/apis/file_v1beta1.rb +1 -1
  215. data/generated/google/apis/file_v1beta1/classes.rb +10 -10
  216. data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
  217. data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +13 -10
  218. data/generated/google/apis/firebaserules_v1.rb +1 -1
  219. data/generated/google/apis/firebaserules_v1/service.rb +1 -1
  220. data/generated/google/apis/fitness_v1.rb +1 -1
  221. data/generated/google/apis/fitness_v1/classes.rb +3 -0
  222. data/generated/google/apis/fitness_v1/service.rb +1 -45
  223. data/generated/google/apis/games_management_v1management.rb +2 -2
  224. data/generated/google/apis/games_v1.rb +2 -2
  225. data/generated/google/apis/genomics_v1.rb +1 -10
  226. data/generated/google/apis/genomics_v1/classes.rb +190 -3321
  227. data/generated/google/apis/genomics_v1/representations.rb +128 -1265
  228. data/generated/google/apis/genomics_v1/service.rb +75 -1982
  229. data/generated/google/apis/genomics_v1alpha2.rb +1 -1
  230. data/generated/google/apis/genomics_v1alpha2/classes.rb +11 -51
  231. data/generated/google/apis/genomics_v1alpha2/representations.rb +0 -26
  232. data/generated/google/apis/genomics_v1alpha2/service.rb +1 -2
  233. data/generated/google/apis/genomics_v2alpha1.rb +1 -1
  234. data/generated/google/apis/genomics_v2alpha1/classes.rb +19 -58
  235. data/generated/google/apis/genomics_v2alpha1/representations.rb +0 -26
  236. data/generated/google/apis/genomics_v2alpha1/service.rb +1 -2
  237. data/generated/google/apis/groupssettings_v1.rb +2 -2
  238. data/generated/google/apis/groupssettings_v1/classes.rb +126 -1
  239. data/generated/google/apis/groupssettings_v1/representations.rb +18 -0
  240. data/generated/google/apis/groupssettings_v1/service.rb +1 -1
  241. data/generated/google/apis/iam_v1.rb +1 -1
  242. data/generated/google/apis/iam_v1/classes.rb +123 -1
  243. data/generated/google/apis/iam_v1/representations.rb +67 -0
  244. data/generated/google/apis/iam_v1/service.rb +198 -5
  245. data/generated/google/apis/iamcredentials_v1.rb +1 -1
  246. data/generated/google/apis/iamcredentials_v1/classes.rb +8 -4
  247. data/generated/google/apis/iamcredentials_v1/service.rb +10 -5
  248. data/generated/google/apis/iap_v1.rb +1 -1
  249. data/generated/google/apis/iap_v1/classes.rb +1 -1
  250. data/generated/google/apis/iap_v1beta1.rb +1 -1
  251. data/generated/google/apis/iap_v1beta1/classes.rb +1 -1
  252. data/generated/google/apis/jobs_v2.rb +1 -1
  253. data/generated/google/apis/jobs_v2/classes.rb +7 -9
  254. data/generated/google/apis/jobs_v3.rb +1 -1
  255. data/generated/google/apis/jobs_v3/classes.rb +1 -1
  256. data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
  257. data/generated/google/apis/jobs_v3p1beta1/classes.rb +11 -11
  258. data/generated/google/apis/language_v1.rb +1 -1
  259. data/generated/google/apis/language_v1/classes.rb +5 -5
  260. data/generated/google/apis/language_v1beta1.rb +1 -1
  261. data/generated/google/apis/language_v1beta1/classes.rb +5 -5
  262. data/generated/google/apis/language_v1beta2.rb +1 -1
  263. data/generated/google/apis/language_v1beta2/classes.rb +5 -5
  264. data/generated/google/apis/logging_v2.rb +1 -1
  265. data/generated/google/apis/logging_v2/classes.rb +2 -3
  266. data/generated/google/apis/logging_v2beta1.rb +1 -1
  267. data/generated/google/apis/logging_v2beta1/classes.rb +2 -3
  268. data/generated/google/apis/ml_v1.rb +1 -1
  269. data/generated/google/apis/ml_v1/classes.rb +158 -36
  270. data/generated/google/apis/ml_v1/representations.rb +23 -2
  271. data/generated/google/apis/monitoring_v3.rb +1 -1
  272. data/generated/google/apis/monitoring_v3/classes.rb +8 -7
  273. data/generated/google/apis/monitoring_v3/service.rb +6 -1
  274. data/generated/google/apis/oauth2_v1.rb +2 -5
  275. data/generated/google/apis/oauth2_v1/classes.rb +0 -124
  276. data/generated/google/apis/oauth2_v1/representations.rb +0 -62
  277. data/generated/google/apis/oauth2_v1/service.rb +0 -159
  278. data/generated/google/apis/oauth2_v2.rb +2 -5
  279. data/generated/google/apis/people_v1.rb +3 -3
  280. data/generated/google/apis/people_v1/classes.rb +19 -18
  281. data/generated/google/apis/people_v1/service.rb +4 -0
  282. data/generated/google/apis/plus_domains_v1.rb +3 -3
  283. data/generated/google/apis/plus_v1.rb +3 -3
  284. data/generated/google/apis/poly_v1.rb +1 -1
  285. data/generated/google/apis/poly_v1/classes.rb +5 -4
  286. data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
  287. data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +8 -6
  288. data/generated/google/apis/pubsub_v1.rb +1 -1
  289. data/generated/google/apis/pubsub_v1/classes.rb +53 -38
  290. data/generated/google/apis/pubsub_v1/representations.rb +16 -0
  291. data/generated/google/apis/pubsub_v1/service.rb +6 -29
  292. data/generated/google/apis/pubsub_v1beta2.rb +1 -1
  293. data/generated/google/apis/pubsub_v1beta2/classes.rb +45 -1
  294. data/generated/google/apis/pubsub_v1beta2/representations.rb +16 -0
  295. data/generated/google/apis/redis_v1.rb +1 -1
  296. data/generated/google/apis/redis_v1beta1.rb +1 -1
  297. data/generated/google/apis/redis_v1beta1/classes.rb +0 -10
  298. data/generated/google/apis/redis_v1beta1/representations.rb +0 -1
  299. data/generated/google/apis/remotebuildexecution_v1.rb +1 -1
  300. data/generated/google/apis/remotebuildexecution_v1/classes.rb +42 -28
  301. data/generated/google/apis/remotebuildexecution_v1/representations.rb +2 -0
  302. data/generated/google/apis/remotebuildexecution_v1alpha.rb +1 -1
  303. data/generated/google/apis/remotebuildexecution_v1alpha/classes.rb +42 -28
  304. data/generated/google/apis/remotebuildexecution_v1alpha/representations.rb +2 -0
  305. data/generated/google/apis/remotebuildexecution_v2.rb +1 -1
  306. data/generated/google/apis/remotebuildexecution_v2/classes.rb +52 -38
  307. data/generated/google/apis/remotebuildexecution_v2/representations.rb +2 -0
  308. data/generated/google/apis/reseller_v1.rb +1 -1
  309. data/generated/google/apis/reseller_v1/classes.rb +32 -39
  310. data/generated/google/apis/reseller_v1/service.rb +1 -1
  311. data/generated/google/apis/runtimeconfig_v1.rb +1 -1
  312. data/generated/google/apis/runtimeconfig_v1/classes.rb +10 -10
  313. data/generated/google/apis/runtimeconfig_v1beta1.rb +1 -1
  314. data/generated/google/apis/runtimeconfig_v1beta1/classes.rb +25 -24
  315. data/generated/google/apis/script_v1.rb +1 -1
  316. data/generated/google/apis/script_v1/classes.rb +0 -6
  317. data/generated/google/apis/script_v1/representations.rb +0 -1
  318. data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
  319. data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +73 -151
  320. data/generated/google/apis/serviceconsumermanagement_v1/service.rb +48 -50
  321. data/generated/google/apis/servicecontrol_v1.rb +1 -1
  322. data/generated/google/apis/servicecontrol_v1/classes.rb +108 -24
  323. data/generated/google/apis/servicecontrol_v1/representations.rb +45 -0
  324. data/generated/google/apis/servicemanagement_v1.rb +1 -1
  325. data/generated/google/apis/servicemanagement_v1/classes.rb +35 -113
  326. data/generated/google/apis/servicemanagement_v1/service.rb +6 -3
  327. data/generated/google/apis/servicenetworking_v1.rb +38 -0
  328. data/generated/google/apis/servicenetworking_v1/classes.rb +3591 -0
  329. data/generated/google/apis/servicenetworking_v1/representations.rb +1082 -0
  330. data/generated/google/apis/servicenetworking_v1/service.rb +440 -0
  331. data/generated/google/apis/servicenetworking_v1beta.rb +1 -1
  332. data/generated/google/apis/servicenetworking_v1beta/classes.rb +32 -110
  333. data/generated/google/apis/serviceusage_v1.rb +1 -1
  334. data/generated/google/apis/serviceusage_v1/classes.rb +33 -150
  335. data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
  336. data/generated/google/apis/serviceusage_v1beta1/classes.rb +34 -190
  337. data/generated/google/apis/sheets_v4.rb +1 -1
  338. data/generated/google/apis/sheets_v4/classes.rb +115 -26
  339. data/generated/google/apis/slides_v1.rb +1 -1
  340. data/generated/google/apis/slides_v1/classes.rb +2 -2
  341. data/generated/google/apis/sourcerepo_v1.rb +1 -1
  342. data/generated/google/apis/sourcerepo_v1/classes.rb +1 -1
  343. data/generated/google/apis/spanner_v1.rb +1 -1
  344. data/generated/google/apis/spanner_v1/classes.rb +171 -0
  345. data/generated/google/apis/spanner_v1/representations.rb +49 -0
  346. data/generated/google/apis/spanner_v1/service.rb +51 -1
  347. data/generated/google/apis/speech_v1.rb +1 -1
  348. data/generated/google/apis/speech_v1/classes.rb +107 -10
  349. data/generated/google/apis/speech_v1/representations.rb +24 -0
  350. data/generated/google/apis/speech_v1p1beta1.rb +1 -1
  351. data/generated/google/apis/speech_v1p1beta1/classes.rb +16 -10
  352. data/generated/google/apis/speech_v1p1beta1/representations.rb +1 -0
  353. data/generated/google/apis/sqladmin_v1beta4.rb +1 -1
  354. data/generated/google/apis/sqladmin_v1beta4/classes.rb +11 -15
  355. data/generated/google/apis/sqladmin_v1beta4/representations.rb +1 -0
  356. data/generated/google/apis/storage_v1.rb +1 -1
  357. data/generated/google/apis/storage_v1/classes.rb +57 -4
  358. data/generated/google/apis/storage_v1/representations.rb +19 -1
  359. data/generated/google/apis/storagetransfer_v1.rb +2 -2
  360. data/generated/google/apis/storagetransfer_v1/classes.rb +28 -21
  361. data/generated/google/apis/storagetransfer_v1/service.rb +4 -4
  362. data/generated/google/apis/streetviewpublish_v1.rb +1 -1
  363. data/generated/google/apis/streetviewpublish_v1/classes.rb +26 -26
  364. data/generated/google/apis/streetviewpublish_v1/service.rb +27 -31
  365. data/generated/google/apis/tagmanager_v1.rb +1 -1
  366. data/generated/google/apis/tagmanager_v1/service.rb +0 -46
  367. data/generated/google/apis/tagmanager_v2.rb +1 -1
  368. data/generated/google/apis/tagmanager_v2/classes.rb +197 -292
  369. data/generated/google/apis/tagmanager_v2/representations.rb +62 -103
  370. data/generated/google/apis/tagmanager_v2/service.rb +219 -181
  371. data/generated/google/apis/tasks_v1.rb +2 -2
  372. data/generated/google/apis/tasks_v1/service.rb +5 -5
  373. data/generated/google/apis/testing_v1.rb +1 -1
  374. data/generated/google/apis/testing_v1/classes.rb +13 -13
  375. data/generated/google/apis/toolresults_v1beta3.rb +1 -1
  376. data/generated/google/apis/toolresults_v1beta3/classes.rb +92 -0
  377. data/generated/google/apis/toolresults_v1beta3/representations.rb +47 -0
  378. data/generated/google/apis/tpu_v1.rb +1 -1
  379. data/generated/google/apis/tpu_v1/classes.rb +10 -10
  380. data/generated/google/apis/tpu_v1alpha1.rb +1 -1
  381. data/generated/google/apis/tpu_v1alpha1/classes.rb +10 -10
  382. data/generated/google/apis/vault_v1.rb +1 -1
  383. data/generated/google/apis/vault_v1/classes.rb +7 -0
  384. data/generated/google/apis/vault_v1/representations.rb +1 -0
  385. data/generated/google/apis/videointelligence_v1.rb +3 -2
  386. data/generated/google/apis/videointelligence_v1/classes.rb +2193 -350
  387. data/generated/google/apis/videointelligence_v1/representations.rb +805 -6
  388. data/generated/google/apis/videointelligence_v1/service.rb +2 -1
  389. data/generated/google/apis/videointelligence_v1beta2.rb +3 -2
  390. data/generated/google/apis/videointelligence_v1beta2/classes.rb +2448 -605
  391. data/generated/google/apis/videointelligence_v1beta2/representations.rb +806 -7
  392. data/generated/google/apis/videointelligence_v1beta2/service.rb +2 -1
  393. data/generated/google/apis/videointelligence_v1p1beta1.rb +3 -2
  394. data/generated/google/apis/videointelligence_v1p1beta1/classes.rb +2422 -579
  395. data/generated/google/apis/videointelligence_v1p1beta1/representations.rb +806 -7
  396. data/generated/google/apis/videointelligence_v1p1beta1/service.rb +2 -1
  397. data/generated/google/apis/videointelligence_v1p2beta1.rb +3 -2
  398. data/generated/google/apis/videointelligence_v1p2beta1/classes.rb +2645 -830
  399. data/generated/google/apis/videointelligence_v1p2beta1/representations.rb +796 -12
  400. data/generated/google/apis/videointelligence_v1p2beta1/service.rb +2 -1
  401. data/generated/google/apis/videointelligence_v1p3beta1.rb +36 -0
  402. data/generated/google/apis/videointelligence_v1p3beta1/classes.rb +4687 -0
  403. data/generated/google/apis/videointelligence_v1p3beta1/representations.rb +2005 -0
  404. data/generated/google/apis/videointelligence_v1p3beta1/service.rb +94 -0
  405. data/generated/google/apis/vision_v1.rb +1 -1
  406. data/generated/google/apis/vision_v1/classes.rb +1977 -40
  407. data/generated/google/apis/vision_v1/representations.rb +833 -0
  408. data/generated/google/apis/vision_v1p1beta1.rb +1 -1
  409. data/generated/google/apis/vision_v1p1beta1/classes.rb +1972 -35
  410. data/generated/google/apis/vision_v1p1beta1/representations.rb +833 -0
  411. data/generated/google/apis/vision_v1p2beta1.rb +1 -1
  412. data/generated/google/apis/vision_v1p2beta1/classes.rb +1972 -35
  413. data/generated/google/apis/vision_v1p2beta1/representations.rb +833 -0
  414. data/generated/google/apis/websecurityscanner_v1beta.rb +34 -0
  415. data/generated/google/apis/websecurityscanner_v1beta/classes.rb +973 -0
  416. data/generated/google/apis/websecurityscanner_v1beta/representations.rb +452 -0
  417. data/generated/google/apis/websecurityscanner_v1beta/service.rb +548 -0
  418. data/generated/google/apis/youtube_partner_v1.rb +1 -1
  419. data/generated/google/apis/youtubereporting_v1.rb +1 -1
  420. data/lib/google/apis/core/http_command.rb +1 -0
  421. data/lib/google/apis/generator/model.rb +1 -1
  422. data/lib/google/apis/version.rb +1 -1
  423. metadata +39 -8
  424. data/generated/google/apis/partners_v2/classes.rb +0 -2260
  425. data/generated/google/apis/partners_v2/representations.rb +0 -905
  426. data/generated/google/apis/partners_v2/service.rb +0 -1077
data/generated/google/apis/vault_v1.rb
@@ -25,7 +25,7 @@ module Google
     # @see https://developers.google.com/vault
     module VaultV1
       VERSION = 'V1'
-      REVISION = '20181128'
+      REVISION = '20190312'

      # Manage your eDiscovery data
      AUTH_EDISCOVERY = 'https://www.googleapis.com/auth/ediscovery'
data/generated/google/apis/vault_v1/classes.rb
@@ -962,6 +962,12 @@ module Google
        # @return [String]
        attr_accessor :export_format

+       # Set to true to export confidential mode content.
+       # Corresponds to the JSON property `showConfidentialModeContent`
+       # @return [Boolean]
+       attr_accessor :show_confidential_mode_content
+       alias_method :show_confidential_mode_content?, :show_confidential_mode_content
+
        def initialize(**args)
          update!(**args)
        end
@@ -969,6 +975,7 @@ module Google
        # Update properties of this object
        def update!(**args)
          @export_format = args[:export_format] if args.key?(:export_format)
+         @show_confidential_mode_content = args[:show_confidential_mode_content] if args.key?(:show_confidential_mode_content)
        end
      end

data/generated/google/apis/vault_v1/representations.rb
@@ -605,6 +605,7 @@ module Google
        # @private
        class Representation < Google::Apis::Core::JsonRepresentation
          property :export_format, as: 'exportFormat'
+         property :show_confidential_mode_content, as: 'showConfidentialModeContent'
        end
      end

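The new `show_confidential_mode_content` flag follows the pattern used throughout the generated classes: a plain `attr_accessor`, a `?`-suffixed alias for boolean reads, and a keyword-driven `update!` that only touches keys that were actually passed. A minimal standalone sketch of that pattern (`ExportOptions` is a hypothetical stand-in, not the gem's generated class):

```ruby
# Illustrates the generated-class idiom from the diff above:
# attr_accessor + alias_method for the boolean, and an update!
# that only assigns attributes explicitly present in args.
class ExportOptions
  attr_accessor :export_format
  attr_accessor :show_confidential_mode_content
  alias_method :show_confidential_mode_content?, :show_confidential_mode_content

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object; untouched keys keep their value.
  def update!(**args)
    @export_format = args[:export_format] if args.key?(:export_format)
    @show_confidential_mode_content = args[:show_confidential_mode_content] if args.key?(:show_confidential_mode_content)
  end
end

opts = ExportOptions.new(export_format: 'MBOX', show_confidential_mode_content: true)
puts opts.show_confidential_mode_content?  # => true
```

The `args.key?` guard is what lets `update!` distinguish "not provided" from "explicitly set to nil/false", which matters when the object is round-tripped through the JSON representation.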
data/generated/google/apis/videointelligence_v1.rb
@@ -21,12 +21,13 @@ module Google
    # Cloud Video Intelligence API
    #
    # Detects objects, explicit content, and scene changes in videos. It also
-   # specifies the region for annotation and transcribes speech to text.
+   # specifies the region for annotation and transcribes speech to text. Supports
+   # both asynchronous API and streaming API.
    #
    # @see https://cloud.google.com/video-intelligence/docs/
    module VideointelligenceV1
      VERSION = 'V1'
-     REVISION = '20190112'
+     REVISION = '20190308'

      # View and manage your data across Google Cloud Platform services
      AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
@@ -277,6 +277,16 @@ module Google
277
277
  class GoogleCloudVideointelligenceV1LabelDetectionConfig
278
278
  include Google::Apis::Core::Hashable
279
279
 
280
+ # The confidence threshold we perform filtering on the labels from
281
+ # frame-level detection. If not set, it is set to 0.4 by default. The valid
282
+ # range for this threshold is [0.1, 0.9]. Any value set outside of this
283
+ # range will be clipped.
284
+ # Note: for best results please follow the default threshold. We will update
285
+ # the default threshold everytime when we release a new model.
286
+ # Corresponds to the JSON property `frameConfidenceThreshold`
287
+ # @return [Float]
288
+ attr_accessor :frame_confidence_threshold
289
+
280
290
  # What labels should be detected with LABEL_DETECTION, in addition to
281
291
  # video-level labels or segment-level labels.
282
292
  # If unspecified, defaults to `SHOT_MODE`.
@@ -299,15 +309,27 @@ module Google
299
309
  attr_accessor :stationary_camera
300
310
  alias_method :stationary_camera?, :stationary_camera
301
311
 
312
+ # The confidence threshold we perform filtering on the labels from
313
+ # video-level and shot-level detections. If not set, it is set to 0.3 by
314
+ # default. The valid range for this threshold is [0.1, 0.9]. Any value set
315
+ # outside of this range will be clipped.
316
+ # Note: for best results please follow the default threshold. We will update
317
+ # the default threshold everytime when we release a new model.
318
+ # Corresponds to the JSON property `videoConfidenceThreshold`
319
+ # @return [Float]
320
+ attr_accessor :video_confidence_threshold
321
+
302
322
  def initialize(**args)
303
323
  update!(**args)
304
324
  end
305
325
 
306
326
  # Update properties of this object
307
327
  def update!(**args)
328
+ @frame_confidence_threshold = args[:frame_confidence_threshold] if args.key?(:frame_confidence_threshold)
308
329
  @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
309
330
  @model = args[:model] if args.key?(:model)
310
331
  @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
332
+ @video_confidence_threshold = args[:video_confidence_threshold] if args.key?(:video_confidence_threshold)
311
333
  end
312
334
  end
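The two thresholds added above share the same documented semantics: defaults of 0.4 (frame-level) and 0.3 (video/shot-level), a valid range of [0.1, 0.9], and out-of-range values clipped by the service. A plain-Ruby sketch of that clipping rule; `effective_threshold` is a hypothetical helper here, since the clipping actually happens server-side:

```ruby
# Documented defaults for the two new thresholds.
FRAME_DEFAULT = 0.4
VIDEO_DEFAULT = 0.3

# Mirrors the documented server-side behavior: nil falls back to the
# default, and any value outside [0.1, 0.9] is clipped to that range.
def effective_threshold(value, default)
  return default if value.nil?
  value.clamp(0.1, 0.9)
end
```

For example, `effective_threshold(0.95, FRAME_DEFAULT)` yields the clipped value 0.9, and `effective_threshold(nil, VIDEO_DEFAULT)` yields the 0.3 default.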

@@ -362,6 +384,184 @@ module Google
  end
  end

+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1NormalizedBoundingBox
+ include Google::Apis::Core::Hashable
+
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
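Because the `left`/`top`/`right`/`bottom` values above are normalized to [0, 1] relative to the original frame, a client typically scales them back to pixels. A minimal sketch with a hypothetical `to_pixel_rect` helper operating on a plain hash rather than the gem's class:

```ruby
# Converts normalized bounding-box edges into an integer pixel rectangle
# for a frame of the given dimensions. `box` is a plain hash standing in
# for a NormalizedBoundingBox.
def to_pixel_rect(box, width:, height:)
  {
    x: (box[:left] * width).round,
    y: (box[:top] * height).round,
    w: ((box[:right] - box[:left]) * width).round,
    h: ((box[:bottom] - box[:top]) * height).round
  }
end
```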
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
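The comment block above explains that vertex order survives rotation while coordinates may leave [0, 1]. A sketch of that 180-degree rotation around the first (top-left) vertex, using plain `[x, y]` pairs in place of NormalizedVertex objects:

```ruby
# Rotates each vertex 180 degrees around the polygon's first vertex
# (cx, cy): (x, y) -> (2*cx - x, 2*cy - y). The vertex *order* in the
# returned array is unchanged, matching the documented behavior.
def rotate_180_around_first(vertices)
  cx, cy = vertices.first
  vertices.map { |x, y| [2 * cx - x, 2 * cy - y] }
end
```

Rotating `[[0.1, 0.1], [0.9, 0.1], [0.9, 0.3], [0.1, 0.3]]` this way sends vertex 1 to roughly `[-0.7, 0.1]`, showing how a rotated polygon's coordinates can fall below 0.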
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
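As the comments above note, streaming responses carry at most one ObjectTrackingFrame per annotation and no VideoSegment, so a client reassembles a track by correlating on `track_id`. A sketch using plain hashes in place of ObjectTrackingAnnotation messages; `merge_streaming_tracks` is a hypothetical client-side helper, not a gem API:

```ruby
# Groups per-response annotations by track_id and concatenates their
# frames in arrival order, reconstructing one frame list per track.
def merge_streaming_tracks(annotations)
  annotations.group_by { |a| a[:track_id] }
             .transform_values { |anns| anns.flat_map { |a| a[:frames] } }
end
```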
+
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
  # Config for SHOT_CHANGE_DETECTION.
  class GoogleCloudVideointelligenceV1ShotChangeDetectionConfig
  include Google::Apis::Core::Hashable
@@ -574,31 +774,44 @@ module Google
  end
  end

- # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1TextAnnotation
  include Google::Apis::Core::Hashable

- # Video file location in
- # [Google Cloud Storage](https://cloud.google.com/storage/).
- # Corresponds to the JSON property `inputUri`
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextSegment>]
+ attr_accessor :segments
+
+ # The detected text.
+ # Corresponds to the JSON property `text`
  # @return [String]
- attr_accessor :input_uri
+ attr_accessor :text

- # Approximate percentage processed thus far. Guaranteed to be
- # 100 when fully processed.
- # Corresponds to the JSON property `progressPercent`
- # @return [Fixnum]
- attr_accessor :progress_percent
+ def initialize(**args)
+ update!(**args)
+ end

- # Time when the request was received.
- # Corresponds to the JSON property `startTime`
- # @return [String]
- attr_accessor :start_time
+ # Update properties of this object
+ def update!(**args)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
+ end
+ end

- # Time of the most recent update.
- # Corresponds to the JSON property `updateTime`
- # @return [String]
- attr_accessor :update_time
+ # Config for TEXT_DETECTION.
+ class GoogleCloudVideointelligenceV1TextDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # Language hint can be specified if the language to be detected is known a
+ # priori. It can increase the accuracy of the detection. Language hint must
+ # be language code in BCP-47 format.
+ # Automatic language detection is performed if no hint is provided.
+ # Corresponds to the JSON property `languageHints`
+ # @return [Array<String>]
+ attr_accessor :language_hints

  def initialize(**args)
  update!(**args)
@@ -606,54 +819,163 @@ module Google

  # Update properties of this object
  def update!(**args)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
- @start_time = args[:start_time] if args.key?(:start_time)
- @update_time = args[:update_time] if args.key?(:update_time)
+ @language_hints = args[:language_hints] if args.key?(:language_hints)
  end
  end
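The `languageHints` field above expects BCP-47 codes such as "en-US", and the service falls back to automatic language detection when no hint is supplied. A hypothetical request-fragment builder showing that shape with plain hashes (not the gem's classes):

```ruby
# Builds a video-context fragment for TEXT_DETECTION. An empty hint list
# is omitted entirely, leaving language detection to the service.
def text_detection_context(hints = [])
  ctx = { text_detection_config: {} }
  ctx[:text_detection_config][:language_hints] = hints unless hints.empty?
  ctx
end
```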

- # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1VideoAnnotationResults
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1TextFrame
  include Google::Apis::Core::Hashable

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
- # - Simple to use and understand for most users
- # - Flexible enough to meet unexpected needs
- # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
- # google.rpc.Code, but it may accept additional error codes if needed. The
- # error message should be a developer-facing English message that helps
- # developers *understand* and *resolve* the error. If a localized user-facing
- # error message is needed, put the localized message in the error details or
- # localize it in the client. The optional error details may contain arbitrary
- # information about the error. There is a predefined set of error detail types
- # in the package `google.rpc` that can be used for common error conditions.
- # # Language mapping
- # The `Status` message is the logical representation of the error model, but it
- # is not necessarily the actual wire format. When the `Status` message is
- # exposed in different client libraries and different wire protocols, it can be
- # mapped differently. For example, it will likely be mapped to some exceptions
- # in Java, but more likely mapped to some error codes in C.
- # # Other uses
- # The error model and the `Status` message can be used in a variety of
- # environments, either with or without APIs, to provide a
- # consistent developer experience across different environments.
- # Example uses of this error model include:
- # - Partial errors. If a service needs to return partial errors to the client,
- # it may embed the `Status` in the normal response to indicate the partial
- # errors.
- # - Workflow errors. A typical workflow has multiple steps. Each step may
- # have a `Status` message for error reporting.
- # - Batch operations. If a client uses batch request and batch response, the
- # `Status` message should be used directly inside batch response, one for
- # each error sub-response.
- # - Asynchronous operations. If an API call embeds asynchronous operation
- # results in its response, the status of those operations should be
- # represented directly using the `Status` message.
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
+
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1TextSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
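The TextSegment confidence above is documented as the highest per-frame OCR confidence; the per-frame values themselves are not exposed on TextFrame. A sketch of that aggregation over hypothetical frame confidences, purely to illustrate the documented rule:

```ruby
# Aggregates hypothetical per-frame OCR confidences the way the service
# documents it for TextSegment: take the maximum across all frames.
def segment_confidence(frame_confidences)
  frame_confidences.max
end
```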
+
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+ include Google::Apis::Core::Hashable
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Approximate percentage processed thus far. Guaranteed to be
+ # 100 when fully processed.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time when the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # Time of the most recent update.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @update_time = args[:update_time] if args.key?(:update_time)
+ end
+ end
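Since `progressPercent` above is guaranteed to reach 100 once processing finishes, a client can poll until that value appears. A sketch where the block passed as `fetch_progress` stands in for re-fetching the long-running operation (not a gem API):

```ruby
# Polls up to max_polls times, returning the first progress value that
# reaches 100, or nil if processing never completed within the budget.
def poll_until_done(max_polls:, &fetch_progress)
  max_polls.times do
    percent = fetch_progress.call
    return percent if percent >= 100
  end
  nil
end
```

In real use each call would re-fetch the operation and a sleep between polls would be added.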
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
  # - Logging. If some API errors are stored in logs, the message `Status` could
  # be used directly after any stripping needed for security/privacy reasons.
  # Corresponds to the JSON property `error`
@@ -679,6 +1001,11 @@ module Google
  # @return [String]
  attr_accessor :input_uri

+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
@@ -701,6 +1028,13 @@ module Google
  # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscription>]
  attr_accessor :speech_transcriptions

+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -711,10 +1045,12 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end

@@ -749,6 +1085,11 @@ module Google
  # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscriptionConfig]
  attr_accessor :speech_transcription_config

+ # Config for TEXT_DETECTION.
+ # Corresponds to the JSON property `textDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextDetectionConfig]
+ attr_accessor :text_detection_config
+
  def initialize(**args)
  update!(**args)
  end
@@ -760,6 +1101,7 @@ module Google
  @segments = args[:segments] if args.key?(:segments)
  @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
  @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+ @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
  end
  end
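With `textDetectionConfig` now accepted on the video context, a request's context JSON can carry OCR options alongside the existing per-feature configs. A plain-hash sketch of that wire shape, with keys as they appear in the JSON and the hint values purely illustrative:

```ruby
# Wire-format shape of a VideoContext after this change: the new
# textDetectionConfig sits next to the existing per-feature configs.
video_context = {
  'textDetectionConfig' => { 'languageHints' => ['en-US', 'fr-FR'] },
  'speechTranscriptionConfig' => { 'languageCode' => 'en-US' }
}
```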
765
1107
 
@@ -1062,29 +1404,31 @@ module Google
1062
1404
  end
1063
1405
  end
1064
1406
 
1065
- # Alternative hypotheses (a.k.a. n-best list).
1066
- class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
1407
+ # Normalized bounding box.
1408
+ # The normalized vertex coordinates are relative to the original image.
1409
+ # Range: [0, 1].
1410
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox
1067
1411
  include Google::Apis::Core::Hashable
1068
1412
 
1069
- # The confidence estimate between 0.0 and 1.0. A higher number
1070
- # indicates an estimated greater likelihood that the recognized words are
1071
- # correct. This field is typically provided only for the top hypothesis, and
1072
- # only for `is_final=true` results. Clients should not rely on the
1073
- # `confidence` field as it is not guaranteed to be accurate or consistent.
1074
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
1075
- # Corresponds to the JSON property `confidence`
1413
+ # Bottom Y coordinate.
1414
+ # Corresponds to the JSON property `bottom`
1076
1415
  # @return [Float]
1077
- attr_accessor :confidence
1416
+ attr_accessor :bottom
1078
1417
 
1079
- # Transcript text representing the words that the user spoke.
1080
- # Corresponds to the JSON property `transcript`
1081
- # @return [String]
1082
- attr_accessor :transcript
1418
+ # Left X coordinate.
1419
+ # Corresponds to the JSON property `left`
1420
+ # @return [Float]
1421
+ attr_accessor :left
1083
1422
 
1084
- # A list of word-specific information for each recognized word.
1085
- # Corresponds to the JSON property `words`
1086
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2WordInfo>]
1087
- attr_accessor :words
1423
+ # Right X coordinate.
1424
+ # Corresponds to the JSON property `right`
1425
+ # @return [Float]
1426
+ attr_accessor :right
1427
+
1428
+ # Top Y coordinate.
1429
+ # Corresponds to the JSON property `top`
1430
+ # @return [Float]
1431
+ attr_accessor :top
1088
1432
 
1089
1433
  def initialize(**args)
1090
1434
  update!(**args)
@@ -1092,31 +1436,35 @@ module Google
1092
1436
 
1093
1437
  # Update properties of this object
1094
1438
  def update!(**args)
1095
- @confidence = args[:confidence] if args.key?(:confidence)
1096
- @transcript = args[:transcript] if args.key?(:transcript)
1097
- @words = args[:words] if args.key?(:words)
1439
+ @bottom = args[:bottom] if args.key?(:bottom)
1440
+ @left = args[:left] if args.key?(:left)
1441
+ @right = args[:right] if args.key?(:right)
1442
+ @top = args[:top] if args.key?(:top)
1098
1443
  end
1099
1444
  end
1100
1445
 
1101
- # A speech recognition result corresponding to a portion of the audio.
1102
- class GoogleCloudVideointelligenceV1beta2SpeechTranscription
1446
+ # Normalized bounding polygon for text (that might not be aligned with axis).
1447
+ # Contains list of the corner points in clockwise order starting from
1448
+ # top-left corner. For example, for a rectangular bounding box:
1449
+ # When the text is horizontal it might look like:
1450
+ # 0----1
1451
+ # | |
1452
+ # 3----2
1453
+ # When it's clockwise rotated 180 degrees around the top-left corner it
1454
+ # becomes:
1455
+ # 2----3
1456
+ # | |
1457
+ # 1----0
1458
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
1459
+ # than 0, or greater than 1 due to trignometric calculations for location of
1460
+ # the box.
1461
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly
1103
1462
  include Google::Apis::Core::Hashable
1104
1463
 
1105
- # May contain one or more recognition hypotheses (up to the maximum specified
1106
- # in `max_alternatives`). These alternatives are ordered in terms of
1107
- # accuracy, with the top (first) alternative being the most probable, as
1108
- # ranked by the recognizer.
1109
- # Corresponds to the JSON property `alternatives`
1110
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
1111
- attr_accessor :alternatives
1112
-
1113
- # Output only. The
1114
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
1115
- # language in this result. This language code was detected to have the most
1116
- # likelihood of being spoken in the audio.
1117
- # Corresponds to the JSON property `languageCode`
1118
- # @return [String]
1119
- attr_accessor :language_code
1464
+ # Normalized vertices of the bounding polygon.
1465
+ # Corresponds to the JSON property `vertices`
1466
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedVertex>]
1467
+ attr_accessor :vertices
1120
1468
 
1121
1469
  def initialize(**args)
1122
1470
  update!(**args)
@@ -1124,29 +1472,1103 @@ module Google
1124
1472
 
1125
1473
  # Update properties of this object
1126
1474
  def update!(**args)
1127
- @alternatives = args[:alternatives] if args.key?(:alternatives)
1128
- @language_code = args[:language_code] if args.key?(:language_code)
1475
+ @vertices = args[:vertices] if args.key?(:vertices)
1129
1476
  end
1130
1477
  end
1131
1478
 
1132
- # Annotation progress for a single video.
1133
- class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
1479
+ # A vertex represents a 2D point in the image.
1480
+ # NOTE: the normalized vertex coordinates are relative to the original image
1481
+ # and range from 0 to 1.
1482
+ class GoogleCloudVideointelligenceV1beta2NormalizedVertex
1134
1483
  include Google::Apis::Core::Hashable
1135
1484
 
1136
- # Video file location in
1137
- # [Google Cloud Storage](https://cloud.google.com/storage/).
1138
- # Corresponds to the JSON property `inputUri`
1139
- # @return [String]
1140
- attr_accessor :input_uri
1141
-
1142
- # Approximate percentage processed thus far. Guaranteed to be
1143
- # 100 when fully processed.
1144
- # Corresponds to the JSON property `progressPercent`
1145
- # @return [Fixnum]
1146
- attr_accessor :progress_percent
1485
+ # X coordinate.
1486
+ # Corresponds to the JSON property `x`
1487
+ # @return [Float]
1488
+ attr_accessor :x
1147
1489
 
1148
- # Time when the request was received.
1149
- # Corresponds to the JSON property `startTime`
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
+
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
+ include Google::Apis::Core::Hashable
+
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
+
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2WordInfo>]
+ attr_accessor :words
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1beta2TextAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextSegment>]
+ attr_accessor :segments
+
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
+ end
+ end
+
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1beta2TextFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
+
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1beta2TextSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+ include Google::Apis::Core::Hashable
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Approximate percentage processed thus far. Guaranteed to be
+ # 100 when fully processed.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time when the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # Time of the most recent update.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @update_time = args[:update_time] if args.key?(:update_time)
+ end
+ end
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
+ # - Logging. If some API errors are stored in logs, the message `Status` could
+ # be used directly after any stripping needed for security/privacy reasons.
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::VideointelligenceV1::GoogleRpcStatus]
+ attr_accessor :error
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ # Corresponds to the JSON property `explicitAnnotation`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
+ attr_accessor :explicit_annotation
+
+ # Label annotations on frame level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `frameLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :frame_label_annotations
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
+ # Label annotations on video level or user specified segment level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `segmentLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :segment_label_annotations
+
+ # Shot annotations. Each shot is represented as a video segment.
+ # Corresponds to the JSON property `shotAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+ attr_accessor :shot_annotations
+
+ # Label annotations on shot level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `shotLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :shot_label_annotations
+
+ # Speech transcription.
+ # Corresponds to the JSON property `speechTranscriptions`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
+ attr_accessor :speech_transcriptions
+
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextAnnotation>]
+ attr_accessor :text_annotations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @error = args[:error] if args.key?(:error)
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+ @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+ @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+ @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+ @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+ end
+ end
+
+ # Video segment.
+ class GoogleCloudVideointelligenceV1beta2VideoSegment
+ include Google::Apis::Core::Hashable
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the end of the segment (inclusive).
+ # Corresponds to the JSON property `endTimeOffset`
+ # @return [String]
+ attr_accessor :end_time_offset
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the start of the segment (inclusive).
+ # Corresponds to the JSON property `startTimeOffset`
+ # @return [String]
+ attr_accessor :start_time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
+ @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+ end
+ end
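Every generated class in this diff follows the same `initialize`/`update!` pattern: the constructor forwards keyword arguments to `update!`, which assigns only the keys actually present, so omitted fields stay `nil` rather than being overwritten. A minimal self-contained sketch of that pattern, using a hypothetical `VideoSegmentSketch` stand-in (it does not require the google-api-client gem or `Google::Apis::Core::Hashable`):

```ruby
# Hypothetical stand-in mimicking the generated VideoSegment class above.
class VideoSegmentSketch
  attr_accessor :start_time_offset, :end_time_offset

  def initialize(**args)
    update!(**args)
  end

  # Only keys present in args are assigned; omitted fields are left untouched,
  # which is what lets update! be reused for partial updates.
  def update!(**args)
    @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
    @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
  end
end

segment = VideoSegmentSketch.new(start_time_offset: '0s', end_time_offset: '120s')
segment.update!(end_time_offset: '90s') # partial update leaves start_time_offset intact
```

The `args.key?` guard (rather than a plain `args[:x]` assignment) is what distinguishes "field omitted" from "field explicitly set to nil".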
+
+ # Word-specific information for recognized words. Word information is only
+ # included in the response when certain request parameters are set, such
+ # as `enable_word_time_offsets`.
+ class GoogleCloudVideointelligenceV1beta2WordInfo
+ include Google::Apis::Core::Hashable
+
+ # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is set only for the top alternative.
+ # This field is not guaranteed to be accurate and users should not rely on it
+ # to be always provided.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the end of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # Output only. A distinct integer value is assigned for every speaker within
+ # the audio. This field specifies which one of those speakers was detected to
+ # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
+ # and is only set if speaker diarization is enabled.
+ # Corresponds to the JSON property `speakerTag`
+ # @return [Fixnum]
+ attr_accessor :speaker_tag
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the start of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # The word corresponding to this set of information.
+ # Corresponds to the JSON property `word`
+ # @return [String]
+ attr_accessor :word
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @word = args[:word] if args.key?(:word)
+ end
+ end
+
+ # Video annotation progress. Included in the `metadata`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+ include Google::Apis::Core::Hashable
+
+ # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationProgress`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
+ attr_accessor :annotation_progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ end
+ end
+
+ # Video annotation response. Included in the `response`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+ include Google::Apis::Core::Hashable
+
+ # Annotation results for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationResults`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
+ attr_accessor :annotation_results
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ end
+ end
+
+ # Detected entity from video analysis.
+ class GoogleCloudVideointelligenceV1p1beta1Entity
+ include Google::Apis::Core::Hashable
+
+ # Textual description, e.g. `Fixed-gear bicycle`.
+ # Corresponds to the JSON property `description`
+ # @return [String]
+ attr_accessor :description
+
+ # Opaque entity ID. Some IDs may be available in
+ # [Google Knowledge Graph Search
+ # API](https://developers.google.com/knowledge-graph/).
+ # Corresponds to the JSON property `entityId`
+ # @return [String]
+ attr_accessor :entity_id
+
+ # Language code for `description` in BCP-47 format.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @description = args[:description] if args.key?(:description)
+ @entity_id = args[:entity_id] if args.key?(:entity_id)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video frames where explicit content was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
+ attr_accessor :frames
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @frames = args[:frames] if args.key?(:frames)
+ end
+ end
+
+ # Video frame level annotation results for explicit content.
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+ include Google::Apis::Core::Hashable
+
+ # Likelihood of the pornography content.
+ # Corresponds to the JSON property `pornographyLikelihood`
+ # @return [String]
+ attr_accessor :pornography_likelihood
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Label annotation.
+ class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Common categories for the detected entity.
+ # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+ # cases there might be more than one category, e.g. `Terrier` could also be
+ # a `pet`.
+ # Corresponds to the JSON property `categoryEntities`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity>]
+ attr_accessor :category_entities
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
+ attr_accessor :entity
+
+ # All video frames where a label was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
+ attr_accessor :frames
+
+ # All video segments where a label was detected.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
+ attr_accessor :segments
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @category_entities = args[:category_entities] if args.key?(:category_entities)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segments = args[:segments] if args.key?(:segments)
+ end
+ end
+
+ # Video frame level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox
+ include Google::Apis::Core::Hashable
+
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
2305
+ end
2306
+ end
2307
+
2308
+ # Annotations corresponding to one tracked object.
2309
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation
2310
+ include Google::Apis::Core::Hashable
2311
+
2312
+ # Object category's labeling confidence of this track.
2313
+ # Corresponds to the JSON property `confidence`
2314
+ # @return [Float]
2315
+ attr_accessor :confidence
2316
+
2317
+ # Detected entity from video analysis.
2318
+ # Corresponds to the JSON property `entity`
2319
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
2320
+ attr_accessor :entity
2321
+
2322
+ # Information corresponding to all frames where this object track appears.
2323
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
2324
+ # messages in frames.
2325
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
2326
+ # Corresponds to the JSON property `frames`
2327
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame>]
2328
+ attr_accessor :frames
2329
+
2330
+ # Video segment.
2331
+ # Corresponds to the JSON property `segment`
2332
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
2333
+ attr_accessor :segment
2334
+
2335
+ # Streaming mode ONLY.
2336
+ # In streaming mode, we do not know the end time of a tracked object
2337
+ # before it is completed. Hence, there is no VideoSegment info returned.
2338
+ # Instead, we provide a unique identifiable integer track_id so that
2339
+ # the customers can correlate the results of the ongoing
2340
+ # ObjectTrackAnnotation of the same track_id over time.
2341
+ # Corresponds to the JSON property `trackId`
2342
+ # @return [Fixnum]
2343
+ attr_accessor :track_id
2344
+
2345
+ def initialize(**args)
2346
+ update!(**args)
2347
+ end
2348
+
2349
+ # Update properties of this object
2350
+ def update!(**args)
2351
+ @confidence = args[:confidence] if args.key?(:confidence)
2352
+ @entity = args[:entity] if args.key?(:entity)
2353
+ @frames = args[:frames] if args.key?(:frames)
2354
+ @segment = args[:segment] if args.key?(:segment)
2355
+ @track_id = args[:track_id] if args.key?(:track_id)
2356
+ end
2357
+ end
2358
+
2359
+ # Video frame level annotations for object detection and tracking. This field
2360
+ # stores per frame location, time offset, and confidence.
2361
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame
2362
+ include Google::Apis::Core::Hashable
2363
+
2364
+ # Normalized bounding box.
2365
+ # The normalized vertex coordinates are relative to the original image.
2366
+ # Range: [0, 1].
2367
+ # Corresponds to the JSON property `normalizedBoundingBox`
2368
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox]
2369
+ attr_accessor :normalized_bounding_box
2370
+
2371
+ # The timestamp of the frame in microseconds.
2372
+ # Corresponds to the JSON property `timeOffset`
2373
+ # @return [String]
2374
+ attr_accessor :time_offset
2375
+
2376
+ def initialize(**args)
2377
+ update!(**args)
2378
+ end
2379
+
2380
+ # Update properties of this object
2381
+ def update!(**args)
2382
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
2383
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
2384
+ end
2385
+ end
2386
+
2387
+ # Alternative hypotheses (a.k.a. n-best list).
2388
+ class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
2389
+ include Google::Apis::Core::Hashable
2390
+
2391
+ # The confidence estimate between 0.0 and 1.0. A higher number
2392
+ # indicates an estimated greater likelihood that the recognized words are
2393
+ # correct. This field is typically provided only for the top hypothesis, and
2394
+ # only for `is_final=true` results. Clients should not rely on the
2395
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
2396
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
2397
+ # Corresponds to the JSON property `confidence`
2398
+ # @return [Float]
2399
+ attr_accessor :confidence
2400
+
2401
+ # Transcript text representing the words that the user spoke.
2402
+ # Corresponds to the JSON property `transcript`
2403
+ # @return [String]
2404
+ attr_accessor :transcript
2405
+
2406
+ # A list of word-specific information for each recognized word.
2407
+ # Corresponds to the JSON property `words`
2408
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
2409
+ attr_accessor :words
2410
+
2411
+ def initialize(**args)
2412
+ update!(**args)
2413
+ end
2414
+
2415
+ # Update properties of this object
2416
+ def update!(**args)
2417
+ @confidence = args[:confidence] if args.key?(:confidence)
2418
+ @transcript = args[:transcript] if args.key?(:transcript)
2419
+ @words = args[:words] if args.key?(:words)
2420
+ end
2421
+ end
2422
+
2423
+ # A speech recognition result corresponding to a portion of the audio.
2424
+ class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
2425
+ include Google::Apis::Core::Hashable
2426
+
2427
+ # May contain one or more recognition hypotheses (up to the maximum specified
2428
+ # in `max_alternatives`). These alternatives are ordered in terms of
2429
+ # accuracy, with the top (first) alternative being the most probable, as
2430
+ # ranked by the recognizer.
2431
+ # Corresponds to the JSON property `alternatives`
2432
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
2433
+ attr_accessor :alternatives
2434
+
2435
+ # Output only. The
2436
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
2437
+ # language in this result. This language code was detected to have the most
2438
+ # likelihood of being spoken in the audio.
2439
+ # Corresponds to the JSON property `languageCode`
2440
+ # @return [String]
2441
+ attr_accessor :language_code
2442
+
2443
+ def initialize(**args)
2444
+ update!(**args)
2445
+ end
2446
+
2447
+ # Update properties of this object
2448
+ def update!(**args)
2449
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
2450
+ @language_code = args[:language_code] if args.key?(:language_code)
2451
+ end
2452
+ end
2453
+
2454
+ # Annotations related to one detected OCR text snippet. This will contain the
2455
+ # corresponding text, confidence value, and frame level information for each
2456
+ # detection.
2457
+ class GoogleCloudVideointelligenceV1p1beta1TextAnnotation
2458
+ include Google::Apis::Core::Hashable
2459
+
2460
+ # All video segments where OCR detected text appears.
2461
+ # Corresponds to the JSON property `segments`
2462
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextSegment>]
2463
+ attr_accessor :segments
2464
+
2465
+ # The detected text.
2466
+ # Corresponds to the JSON property `text`
2467
+ # @return [String]
2468
+ attr_accessor :text
2469
+
2470
+ def initialize(**args)
2471
+ update!(**args)
2472
+ end
2473
+
2474
+ # Update properties of this object
2475
+ def update!(**args)
2476
+ @segments = args[:segments] if args.key?(:segments)
2477
+ @text = args[:text] if args.key?(:text)
2478
+ end
2479
+ end
2480
+
2481
+ # Video frame level annotation results for text annotation (OCR).
2482
+ # Contains information regarding timestamp and bounding box locations for the
2483
+ # frames containing detected OCR text snippets.
2484
+ class GoogleCloudVideointelligenceV1p1beta1TextFrame
2485
+ include Google::Apis::Core::Hashable
2486
+
2487
+ # Normalized bounding polygon for text (that might not be aligned with axis).
2488
+ # Contains list of the corner points in clockwise order starting from
2489
+ # top-left corner. For example, for a rectangular bounding box:
2490
+ # When the text is horizontal it might look like:
2491
+ # 0----1
2492
+ # | |
2493
+ # 3----2
2494
+ # When it's clockwise rotated 180 degrees around the top-left corner it
2495
+ # becomes:
2496
+ # 2----3
2497
+ # | |
2498
+ # 1----0
2499
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
2500
+ # than 0, or greater than 1 due to trignometric calculations for location of
2501
+ # the box.
2502
+ # Corresponds to the JSON property `rotatedBoundingBox`
2503
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly]
2504
+ attr_accessor :rotated_bounding_box
2505
+
2506
+ # Timestamp of this frame.
2507
+ # Corresponds to the JSON property `timeOffset`
2508
+ # @return [String]
2509
+ attr_accessor :time_offset
2510
+
2511
+ def initialize(**args)
2512
+ update!(**args)
2513
+ end
2514
+
2515
+ # Update properties of this object
2516
+ def update!(**args)
2517
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
2518
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
2519
+ end
2520
+ end
2521
+
2522
+ # Video segment level annotation results for text detection.
2523
+ class GoogleCloudVideointelligenceV1p1beta1TextSegment
2524
+ include Google::Apis::Core::Hashable
2525
+
2526
+ # Confidence for the track of detected text. It is calculated as the highest
2527
+ # over all frames where OCR detected text appears.
2528
+ # Corresponds to the JSON property `confidence`
2529
+ # @return [Float]
2530
+ attr_accessor :confidence
2531
+
2532
+ # Information related to the frames where OCR detected text appears.
2533
+ # Corresponds to the JSON property `frames`
2534
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextFrame>]
2535
+ attr_accessor :frames
2536
+
2537
+ # Video segment.
2538
+ # Corresponds to the JSON property `segment`
2539
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
2540
+ attr_accessor :segment
2541
+
2542
+ def initialize(**args)
2543
+ update!(**args)
2544
+ end
2545
+
2546
+ # Update properties of this object
2547
+ def update!(**args)
2548
+ @confidence = args[:confidence] if args.key?(:confidence)
2549
+ @frames = args[:frames] if args.key?(:frames)
2550
+ @segment = args[:segment] if args.key?(:segment)
2551
+ end
2552
+ end
2553
+
2554
+ # Annotation progress for a single video.
2555
+ class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
2556
+ include Google::Apis::Core::Hashable
2557
+
2558
+ # Video file location in
2559
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
2560
+ # Corresponds to the JSON property `inputUri`
2561
+ # @return [String]
2562
+ attr_accessor :input_uri
2563
+
2564
+ # Approximate percentage processed thus far. Guaranteed to be
2565
+ # 100 when fully processed.
2566
+ # Corresponds to the JSON property `progressPercent`
2567
+ # @return [Fixnum]
2568
+ attr_accessor :progress_percent
2569
+
2570
+ # Time when the request was received.
2571
+ # Corresponds to the JSON property `startTime`
1150
2572
  # @return [String]
1151
2573
  attr_accessor :start_time
1152
2574
 
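Every generated class in this diff follows the same `Google::Apis::Core::Hashable` pattern: `initialize(**args)` delegates to `update!(**args)`, and `update!` assigns each property only when its key is present in `args`, so a later call can patch one field without clobbering the others. A minimal dependency-free sketch of that pattern (the `NormalizedVertex` name mirrors the generated class above; nothing here requires the google-api-client gem):

```ruby
# Minimal stand-in for the generated pattern: keyword args flow through
# initialize into update!, and each property is assigned only when its
# key is actually present in args.
class NormalizedVertex
  attr_accessor :x, :y

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object; absent keys leave the
  # corresponding attributes untouched.
  def update!(**args)
    @x = args[:x] if args.key?(:x)
    @y = args[:y] if args.key?(:y)
  end
end

v = NormalizedVertex.new(x: 0.25, y: 0.75)
v.update!(x: 0.5) # patches x only; y keeps its value
```

The `args.key?` guard is what distinguishes "key absent" from "key present with a nil value" — passing `y: nil` would overwrite `@y`, while omitting `y` would not.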
@@ -1169,17 +2591,17 @@ module Google
       end
 
       # Annotation results for a single video.
-      class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+      class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
         include Google::Apis::Core::Hashable
 
-        # The `Status` type defines a logical error model that is suitable for different
-        # programming environments, including REST APIs and RPC APIs. It is used by
-        # [gRPC](https://github.com/grpc). The error model is designed to be:
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
         # - Simple to use and understand for most users
         # - Flexible enough to meet unexpected needs
         # # Overview
-        # The `Status` message contains three pieces of data: error code, error message,
-        # and error details. The error code should be an enum value of
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
         # google.rpc.Code, but it may accept additional error codes if needed. The
         # error message should be a developer-facing English message that helps
         # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1219,13 +2641,13 @@ module Google
         # If no explicit content has been detected in a frame, no annotations are
         # present for that frame.
         # Corresponds to the JSON property `explicitAnnotation`
-        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
         attr_accessor :explicit_annotation
 
         # Label annotations on frame level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `frameLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :frame_label_annotations
 
         # Video file location in
@@ -1234,28 +2656,40 @@ module Google
         # @return [String]
         attr_accessor :input_uri
 
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :segment_label_annotations
 
         # Shot annotations. Each shot is represented as a video segment.
         # Corresponds to the JSON property `shotAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
         attr_accessor :shot_annotations
 
         # Label annotations on shot level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `shotLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :shot_label_annotations
 
         # Speech transcription.
         # Corresponds to the JSON property `speechTranscriptions`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
         attr_accessor :speech_transcriptions
 
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>]
+        attr_accessor :text_annotations
+
         def initialize(**args)
           update!(**args)
         end
@@ -1266,15 +2700,17 @@ module Google
           @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
           @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
           @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
           @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
           @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
           @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
           @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
         end
       end
 
       # Video segment.
-      class GoogleCloudVideointelligenceV1beta2VideoSegment
+      class GoogleCloudVideointelligenceV1p1beta1VideoSegment
         include Google::Apis::Core::Hashable
 
         # Time-offset, relative to the beginning of the video,
@@ -1303,7 +2739,7 @@ module Google
       # Word-specific information for recognized words. Word information is only
       # included in the response when certain request parameters are set, such
       # as `enable_word_time_offsets`.
-      class GoogleCloudVideointelligenceV1beta2WordInfo
+      class GoogleCloudVideointelligenceV1p1beta1WordInfo
         include Google::Apis::Core::Hashable
 
         # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1362,13 +2798,241 @@ module Google
       # Video annotation progress. Included in the `metadata`
       # field of the `Operation` returned by the `GetOperation`
       # call of the `google::longrunning::Operations` service.
-      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+        include Google::Apis::Core::Hashable
+
+        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationProgress`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+        attr_accessor :annotation_progress
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+        end
+      end
+
+      # Video annotation response. Included in the `response`
+      # field of the `Operation` returned by the `GetOperation`
+      # call of the `google::longrunning::Operations` service.
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+        include Google::Apis::Core::Hashable
+
+        # Annotation results for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationResults`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+        attr_accessor :annotation_results
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+        end
+      end
+
+      # Detected entity from video analysis.
+      class GoogleCloudVideointelligenceV1p2beta1Entity
+        include Google::Apis::Core::Hashable
+
+        # Textual description, e.g. `Fixed-gear bicycle`.
+        # Corresponds to the JSON property `description`
+        # @return [String]
+        attr_accessor :description
+
+        # Opaque entity ID. Some IDs may be available in
+        # [Google Knowledge Graph Search
+        # API](https://developers.google.com/knowledge-graph/).
+        # Corresponds to the JSON property `entityId`
+        # @return [String]
+        attr_accessor :entity_id
+
+        # Language code for `description` in BCP-47 format.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @description = args[:description] if args.key?(:description)
+          @entity_id = args[:entity_id] if args.key?(:entity_id)
+          @language_code = args[:language_code] if args.key?(:language_code)
+        end
+      end
+
+      # Explicit content annotation (based on per-frame visual signals only).
+      # If no explicit content has been detected in a frame, no annotations are
+      # present for that frame.
+      class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+        include Google::Apis::Core::Hashable
+
+        # All video frames where explicit content was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+        attr_accessor :frames
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @frames = args[:frames] if args.key?(:frames)
+        end
+      end
+
+      # Video frame level annotation results for explicit content.
+      class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+        include Google::Apis::Core::Hashable
+
+        # Likelihood of the pornography content.
+        # Corresponds to the JSON property `pornographyLikelihood`
+        # @return [String]
+        attr_accessor :pornography_likelihood
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Label annotation.
+      class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Common categories for the detected entity.
+        # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+        # cases there might be more than one category, e.g. `Terrier` could also be
+        # a `pet`.
+        # Corresponds to the JSON property `categoryEntities`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity>]
+        attr_accessor :category_entities
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+        attr_accessor :entity
+
+        # All video frames where a label was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+        attr_accessor :frames
+
+        # All video segments where a label was detected.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+        attr_accessor :segments
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @category_entities = args[:category_entities] if args.key?(:category_entities)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segments = args[:segments] if args.key?(:segments)
+        end
+      end
+
+      # Video frame level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
         include Google::Apis::Core::Hashable
 
-        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
-        # Corresponds to the JSON property `annotationProgress`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
-        attr_accessor :annotation_progress
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
 
         def initialize(**args)
           update!(**args)
@@ -1376,20 +3040,35 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
         end
       end
 
-      # Video annotation response. Included in the `response`
-      # field of the `Operation` returned by the `GetOperation`
-      # call of the `google::longrunning::Operations` service.
-      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trigonometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
         include Google::Apis::Core::Hashable
 
-        # Annotation results for all videos specified in `AnnotateVideoRequest`.
-        # Corresponds to the JSON property `annotationResults`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
-        attr_accessor :annotation_results
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+        attr_accessor :vertices
 
         def initialize(**args)
           update!(**args)
@@ -1397,30 +3076,25 @@ module Google
 
         # Update properties of this object
        def update!(**args)
-          @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+          @vertices = args[:vertices] if args.key?(:vertices)
         end
       end
 
-      # Detected entity from video analysis.
-      class GoogleCloudVideointelligenceV1p1beta1Entity
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
         include Google::Apis::Core::Hashable
 
-        # Textual description, e.g. `Fixed-gear bicycle`.
-        # Corresponds to the JSON property `description`
-        # @return [String]
-        attr_accessor :description
-
-        # Opaque entity ID. Some IDs may be available in
-        # [Google Knowledge Graph Search
-        # API](https://developers.google.com/knowledge-graph/).
-        # Corresponds to the JSON property `entityId`
-        # @return [String]
-        attr_accessor :entity_id
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
 
-        # Language code for `description` in BCP-47 format.
-        # Corresponds to the JSON property `languageCode`
-        # @return [String]
-        attr_accessor :language_code
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
 
         def initialize(**args)
           update!(**args)
@@ -1428,44 +3102,75 @@ module Google
1428
3102
 
1429
3103
  # Update properties of this object
1430
3104
  def update!(**args)
1431
- @description = args[:description] if args.key?(:description)
1432
- @entity_id = args[:entity_id] if args.key?(:entity_id)
1433
- @language_code = args[:language_code] if args.key?(:language_code)
3105
+ @x = args[:x] if args.key?(:x)
3106
+ @y = args[:y] if args.key?(:y)
1434
3107
  end
1435
3108
  end
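The new `NormalizedVertex` class stores coordinates as fractions of the original image size. A minimal plain-Ruby sketch (a hypothetical stand-in, not the generated client class) of mapping such a vertex back to pixel coordinates:

```ruby
# Hypothetical stand-in for the generated NormalizedVertex: x and y are
# fractions of the original image dimensions, in the range [0, 1].
NormalizedVertex = Struct.new(:x, :y)

# Scale a normalized vertex to pixel coordinates for a given image size.
def to_pixels(vertex, width:, height:)
  [(vertex.x * width).round, (vertex.y * height).round]
end

v = NormalizedVertex.new(0.25, 0.5)
to_pixels(v, width: 1920, height: 1080)  # => [480, 540]
```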

- # Explicit content annotation (based on per-frame visual signals only).
- # If no explicit content has been detected in a frame, no annotations are
- # present for that frame.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable

- # All video frames where explicit content was detected.
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
  attr_accessor :frames

+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
  def initialize(**args)
  update!(**args)
  end

  # Update properties of this object
  def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
  @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
  end
  end
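Per the comments above, streaming-mode `ObjectTrackingAnnotation` messages carry one frame each and share a `track_id` across responses. A minimal sketch (hypothetical hash-shaped data, not the generated class) of correlating streamed annotations back into per-object tracks:

```ruby
# Hypothetical streamed annotations: each carries one frame and a track_id;
# results for the same tracked object arrive incrementally over time.
annotations = [
  { track_id: 1, frame: { time_offset: "0.1s" } },
  { track_id: 2, frame: { time_offset: "0.1s" } },
  { track_id: 1, frame: { time_offset: "0.2s" } }
]

# Group frames by track_id to rebuild each object's full track.
tracks = Hash.new { |h, k| h[k] = [] }
annotations.each { |a| tracks[a[:track_id]] << a[:frame] }

tracks[1].length  # => 2
```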

- # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable

- # Likelihood of the pornography content..
- # Corresponds to the JSON property `pornographyLikelihood`
- # @return [String]
- attr_accessor :pornography_likelihood
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box

- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
+ # The timestamp of the frame in microseconds.
  # Corresponds to the JSON property `timeOffset`
  # @return [String]
  attr_accessor :time_offset
@@ -1476,37 +3181,34 @@ module Google

  # Update properties of this object
  def update!(**args)
- @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
  @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end

- # Label annotation.
- class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable

- # Common categories for the detected entity.
- # E.g. when the label is `Terrier` the category is likely `dog`. And in some
- # cases there might be more than one categories e.g. `Terrier` could also be
- # a `pet`.
- # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity>]
- attr_accessor :category_entities
-
- # Detected entity from video analysis.
- # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
- attr_accessor :entity
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence

- # All video frames where a label was detected.
- # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
- attr_accessor :frames
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript

- # All video segments where a label was detected.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
- attr_accessor :segments
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+ attr_accessor :words

  def initialize(**args)
  update!(**args)
@@ -1514,27 +3216,31 @@ module Google

  # Update properties of this object
  def update!(**args)
- @category_entities = args[:category_entities] if args.key?(:category_entities)
- @entity = args[:entity] if args.key?(:entity)
- @frames = args[:frames] if args.key?(:frames)
- @segments = args[:segments] if args.key?(:segments)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
  end
  end
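The `SpeechRecognitionAlternative` docs note that alternatives arrive ordered most-probable first and that `confidence` may be the 0.0 "not set" sentinel. A small sketch (hypothetical hash-shaped data, not the generated class) of picking the transcript accordingly:

```ruby
# Hypothetical alternatives list, already ordered best-first by the
# recognizer. confidence == 0.0 is a sentinel for "not set", so do not
# filter or sort on it — just take the first entry.
alternatives = [
  { transcript: "hello world", confidence: 0.92 },
  { transcript: "hello word", confidence: 0.0 }  # 0.0 == not set
]

best = alternatives.first
best[:transcript]  # => "hello world"
```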

- # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
  include Google::Apis::Core::Hashable

- # Confidence that the label is accurate. Range: [0, 1].
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+ attr_accessor :alternatives

- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
- # Corresponds to the JSON property `timeOffset`
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
  # @return [String]
- attr_accessor :time_offset
+ attr_accessor :language_code

  def initialize(**args)
  update!(**args)
@@ -1542,24 +3248,26 @@ module Google

  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @time_offset = args[:time_offset] if args.key?(:time_offset)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
  end
  end

- # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
  include Google::Apis::Core::Hashable

- # Confidence that the label is accurate. Range: [0, 1].
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+ attr_accessor :segments

- # Video segment.
- # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
- attr_accessor :segment
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text

  def initialize(**args)
  update!(**args)
@@ -1567,34 +3275,40 @@ module Google

  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @segment = args[:segment] if args.key?(:segment)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
  end
  end

- # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1p2beta1TextFrame
  include Google::Apis::Core::Hashable

- # The confidence estimate between 0.0 and 1.0. A higher number
- # indicates an estimated greater likelihood that the recognized words are
- # correct. This field is typically provided only for the top hypothesis, and
- # only for `is_final=true` results. Clients should not rely on the
- # `confidence` field as it is not guaranteed to be accurate or consistent.
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box

- # Transcript text representing the words that the user spoke.
- # Corresponds to the JSON property `transcript`
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
  # @return [String]
- attr_accessor :transcript
-
- # A list of word-specific information for each recognized word.
- # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
- attr_accessor :words
+ attr_accessor :time_offset

  def initialize(**args)
  update!(**args)
@@ -1602,31 +3316,30 @@ module Google

  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @transcript = args[:transcript] if args.key?(:transcript)
- @words = args[:words] if args.key?(:words)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
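The `rotatedBoundingBox` comment explains that a 180-degree rotation around the top-left corner moves the vertices without changing their list order, and that coordinates may leave [0, 1]. A minimal sketch of that geometry (plain Ruby, integer coordinates chosen for exactness; not the generated class):

```ruby
# Rotating a point 180 degrees around (cx, cy) maps (x, y) to
# (2*cx - x, 2*cy - y). Applied to a corner list, each point moves but
# the list order stays (0, 1, 2, 3), as the API comment describes.
def rotate180(vertices, around:)
  cx, cy = around
  vertices.map { |(x, y)| [2 * cx - x, 2 * cy - y] }
end

box = [[1, 1], [4, 1], [4, 2], [1, 2]]  # corners 0,1,2,3 clockwise
rotate180(box, around: box.first)
# => [[1, 1], [-2, 1], [-2, 0], [1, 0]]
# The pivot vertex is unchanged; other coordinates can go negative,
# matching the "values can be less than 0" note.
```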

- # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextSegment
  include Google::Apis::Core::Hashable

- # May contain one or more recognition hypotheses (up to the maximum specified
- # in `max_alternatives`). These alternatives are ordered in terms of
- # accuracy, with the top (first) alternative being the most probable, as
- # ranked by the recognizer.
- # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
- attr_accessor :alternatives
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence

- # Output only. The
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
- # language in this result. This language code was detected to have the most
- # likelihood of being spoken in the audio.
- # Corresponds to the JSON property `languageCode`
- # @return [String]
- attr_accessor :language_code
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment

  def initialize(**args)
  update!(**args)
@@ -1634,13 +3347,14 @@ module Google

  # Update properties of this object
  def update!(**args)
- @alternatives = args[:alternatives] if args.key?(:alternatives)
- @language_code = args[:language_code] if args.key?(:language_code)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
  end
  end

  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
  include Google::Apis::Core::Hashable

  # Video file location in
@@ -1679,17 +3393,17 @@ module Google
  end

  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
  include Google::Apis::Core::Hashable

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1729,13 +3443,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
  attr_accessor :explicit_annotation

  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :frame_label_annotations

  # Video file location in
@@ -1744,28 +3458,40 @@ module Google
  # @return [String]
  attr_accessor :input_uri

+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations

  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
  attr_accessor :shot_annotations

  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :shot_label_annotations

  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
  attr_accessor :speech_transcriptions

+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -1776,15 +3502,17 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end
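This hunk adds the new `objectAnnotations` and `textAnnotations` fields alongside the existing label and speech fields of `VideoAnnotationResults`. A small sketch (hypothetical hash-shaped results, including an invented URI; not the generated class) of reading the new OCR field:

```ruby
# Hypothetical VideoAnnotationResults-shaped hash showing the new OCR
# text_annotations and object_annotations fields next to existing ones.
results = {
  input_uri: "gs://example-bucket/video.mp4",  # hypothetical location
  text_annotations: [
    { text: "STOP", segments: [{ confidence: 0.9 }] },
    { text: "EXIT", segments: [{ confidence: 0.8 }] }
  ],
  object_annotations: []
}

# Collect every distinct text snippet OCR found in the video.
detected_text = results[:text_annotations].map { |t| t[:text] }
detected_text  # => ["STOP", "EXIT"]
```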

  # Video segment.
- class GoogleCloudVideointelligenceV1p1beta1VideoSegment
+ class GoogleCloudVideointelligenceV1p2beta1VideoSegment
  include Google::Apis::Core::Hashable

  # Time-offset, relative to the beginning of the video,
@@ -1813,7 +3541,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1p1beta1WordInfo
+ class GoogleCloudVideointelligenceV1p2beta1WordInfo
  include Google::Apis::Core::Hashable

  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1872,12 +3600,12 @@ module Google
  # Video annotation progress. Included in the `metadata`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoProgress
  include Google::Apis::Core::Hashable

  # Progress metadata for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress>]
  attr_accessor :annotation_progress

  def initialize(**args)
@@ -1893,12 +3621,12 @@ module Google
  # Video annotation response. Included in the `response`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoResponse
  include Google::Apis::Core::Hashable

  # Annotation results for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults>]
  attr_accessor :annotation_results

  def initialize(**args)
@@ -1912,7 +3640,7 @@ module Google
  end

  # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1p2beta1Entity
+ class GoogleCloudVideointelligenceV1p3beta1Entity
  include Google::Apis::Core::Hashable

  # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1947,12 +3675,12 @@ module Google
  # Explicit content annotation (based on per-frame visual signals only).
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
  include Google::Apis::Core::Hashable

  # All video frames where explicit content was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame>]
  attr_accessor :frames

  def initialize(**args)
@@ -1966,7 +3694,7 @@ module Google
  end

  # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame
  include Google::Apis::Core::Hashable

  # Likelihood of the pornography content..
@@ -1992,7 +3720,7 @@ module Google
  end

  # Label annotation.
- class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1LabelAnnotation
  include Google::Apis::Core::Hashable

  # Common categories for the detected entity.
@@ -2000,22 +3728,22 @@ module Google
  # cases there might be more than one categories e.g. `Terrier` could also be
  # a `pet`.
  # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity>]
  attr_accessor :category_entities

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity

  # All video frames where a label was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelFrame>]
  attr_accessor :frames

  # All video segments where a label was detected.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelSegment>]
  attr_accessor :segments

  def initialize(**args)
@@ -2032,7 +3760,7 @@ module Google
  end

  # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+ class GoogleCloudVideointelligenceV1p3beta1LabelFrame
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -2058,7 +3786,7 @@ module Google
  end

  # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+ class GoogleCloudVideointelligenceV1p3beta1LabelSegment
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -2068,7 +3796,7 @@ module Google
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  def initialize(**args)
@@ -2085,7 +3813,7 @@ module Google
  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable

  # Bottom Y coordinate.
@@ -2136,12 +3864,12 @@ module Google
  # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
  # than 0, or greater than 1 due to trignometric calculations for location of
  # the box.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly
  include Google::Apis::Core::Hashable

  # Normalized vertices of the bounding polygon.
  # Corresponds to the JSON property `vertices`
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedVertex>]
  attr_accessor :vertices

  def initialize(**args)
@@ -2157,7 +3885,7 @@ module Google
  # A vertex represents a 2D point in the image.
  # NOTE: the normalized vertex coordinates are relative to the original image
  # and range from 0 to 1.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedVertex
  include Google::Apis::Core::Hashable

  # X coordinate.
@@ -2182,7 +3910,7 @@ module Google
  end

  # Annotations corresponding to one tracked object.
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable

  # Object category's labeling confidence of this track.
@@ -2192,7 +3920,7 @@ module Google

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity]
2196
3924
  attr_accessor :entity
2197
3925
 
2198
3926
  # Information corresponding to all frames where this object track appears.
@@ -2200,12 +3928,12 @@ module Google
2200
3928
  # messages in frames.
2201
3929
  # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
2202
3930
  # Corresponds to the JSON property `frames`
2203
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
3931
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame>]
2204
3932
  attr_accessor :frames
2205
3933
 
2206
3934
  # Video segment.
2207
3935
  # Corresponds to the JSON property `segment`
2208
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
3936
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
2209
3937
  attr_accessor :segment
2210
3938
 
2211
3939
  # Streaming mode ONLY.
@@ -2234,14 +3962,14 @@ module Google
2234
3962
 
2235
3963
  # Video frame level annotations for object detection and tracking. This field
2236
3964
  # stores per frame location, time offset, and confidence.
2237
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
3965
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame
2238
3966
  include Google::Apis::Core::Hashable
2239
3967
 
2240
3968
  # Normalized bounding box.
2241
3969
  # The normalized vertex coordinates are relative to the original image.
2242
3970
  # Range: [0, 1].
2243
3971
  # Corresponds to the JSON property `normalizedBoundingBox`
2244
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
3972
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox]
2245
3973
  attr_accessor :normalized_bounding_box
2246
3974
 
2247
3975
  # The timestamp of the frame in microseconds.
@@ -2261,7 +3989,7 @@ module Google
2261
3989
  end
2262
3990
 
2263
3991
  # Alternative hypotheses (a.k.a. n-best list).
2264
- class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
3992
+ class GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative
2265
3993
  include Google::Apis::Core::Hashable
2266
3994
 
2267
3995
  # The confidence estimate between 0.0 and 1.0. A higher number
@@ -2281,7 +4009,7 @@ module Google
2281
4009
 
2282
4010
  # A list of word-specific information for each recognized word.
2283
4011
  # Corresponds to the JSON property `words`
2284
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
4012
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1WordInfo>]
2285
4013
  attr_accessor :words
2286
4014
 
2287
4015
  def initialize(**args)
@@ -2297,7 +4025,7 @@ module Google
2297
4025
  end
2298
4026
 
2299
4027
  # A speech recognition result corresponding to a portion of the audio.
2300
- class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
4028
+ class GoogleCloudVideointelligenceV1p3beta1SpeechTranscription
2301
4029
  include Google::Apis::Core::Hashable
2302
4030
 
2303
4031
  # May contain one or more recognition hypotheses (up to the maximum specified
@@ -2305,7 +4033,7 @@ module Google
2305
4033
  # accuracy, with the top (first) alternative being the most probable, as
2306
4034
  # ranked by the recognizer.
2307
4035
  # Corresponds to the JSON property `alternatives`
2308
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
4036
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative>]
2309
4037
  attr_accessor :alternatives
2310
4038
 
2311
4039
  # Output only. The
@@ -2327,15 +4055,130 @@ module Google
2327
4055
  end
2328
4056
  end
2329
4057
 
4058
+ # `StreamingAnnotateVideoResponse` is the only message returned to the client
4059
+ # by `StreamingAnnotateVideo`. A series of zero or more
4060
+ # `StreamingAnnotateVideoResponse` messages are streamed back to the client.
4061
+ class GoogleCloudVideointelligenceV1p3beta1StreamingAnnotateVideoResponse
4062
+ include Google::Apis::Core::Hashable
4063
+
4064
+ # Streaming annotation results corresponding to a portion of the video
4065
+ # that is currently being processed.
4066
+ # Corresponds to the JSON property `annotationResults`
4067
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults]
4068
+ attr_accessor :annotation_results
4069
+
4070
+ # GCS URI that stores annotation results of one streaming session.
4071
+ # It is a directory that can hold multiple files in JSON format.
4072
+ # Example uri format:
4073
+ # gs://bucket_id/object_id/cloud_project_name-session_id
4074
+ # Corresponds to the JSON property `annotationResultsUri`
4075
+ # @return [String]
4076
+ attr_accessor :annotation_results_uri
4077
+
4078
+ # The `Status` type defines a logical error model that is suitable for
4079
+ # different programming environments, including REST APIs and RPC APIs. It is
4080
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
4081
+ # - Simple to use and understand for most users
4082
+ # - Flexible enough to meet unexpected needs
4083
+ # # Overview
4084
+ # The `Status` message contains three pieces of data: error code, error
4085
+ # message, and error details. The error code should be an enum value of
4086
+ # google.rpc.Code, but it may accept additional error codes if needed. The
4087
+ # error message should be a developer-facing English message that helps
4088
+ # developers *understand* and *resolve* the error. If a localized user-facing
4089
+ # error message is needed, put the localized message in the error details or
4090
+ # localize it in the client. The optional error details may contain arbitrary
4091
+ # information about the error. There is a predefined set of error detail types
4092
+ # in the package `google.rpc` that can be used for common error conditions.
4093
+ # # Language mapping
4094
+ # The `Status` message is the logical representation of the error model, but it
4095
+ # is not necessarily the actual wire format. When the `Status` message is
4096
+ # exposed in different client libraries and different wire protocols, it can be
4097
+ # mapped differently. For example, it will likely be mapped to some exceptions
4098
+ # in Java, but more likely mapped to some error codes in C.
4099
+ # # Other uses
4100
+ # The error model and the `Status` message can be used in a variety of
4101
+ # environments, either with or without APIs, to provide a
4102
+ # consistent developer experience across different environments.
4103
+ # Example uses of this error model include:
4104
+ # - Partial errors. If a service needs to return partial errors to the client,
4105
+ # it may embed the `Status` in the normal response to indicate the partial
4106
+ # errors.
4107
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
4108
+ # have a `Status` message for error reporting.
4109
+ # - Batch operations. If a client uses batch request and batch response, the
4110
+ # `Status` message should be used directly inside batch response, one for
4111
+ # each error sub-response.
4112
+ # - Asynchronous operations. If an API call embeds asynchronous operation
4113
+ # results in its response, the status of those operations should be
4114
+ # represented directly using the `Status` message.
4115
+ # - Logging. If some API errors are stored in logs, the message `Status` could
4116
+ # be used directly after any stripping needed for security/privacy reasons.
4117
+ # Corresponds to the JSON property `error`
4118
+ # @return [Google::Apis::VideointelligenceV1::GoogleRpcStatus]
4119
+ attr_accessor :error
4120
+
4121
+ def initialize(**args)
4122
+ update!(**args)
4123
+ end
4124
+
4125
+ # Update properties of this object
4126
+ def update!(**args)
4127
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
4128
+ @annotation_results_uri = args[:annotation_results_uri] if args.key?(:annotation_results_uri)
4129
+ @error = args[:error] if args.key?(:error)
4130
+ end
4131
+ end
4132
+
4133
+ # Streaming annotation results corresponding to a portion of the video
4134
+ # that is currently being processed.
4135
+ class GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults
4136
+ include Google::Apis::Core::Hashable
4137
+
4138
+ # Explicit content annotation (based on per-frame visual signals only).
4139
+ # If no explicit content has been detected in a frame, no annotations are
4140
+ # present for that frame.
4141
+ # Corresponds to the JSON property `explicitAnnotation`
4142
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
4143
+ attr_accessor :explicit_annotation
4144
+
4145
+ # Label annotation results.
4146
+ # Corresponds to the JSON property `labelAnnotations`
4147
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
4148
+ attr_accessor :label_annotations
4149
+
4150
+ # Object tracking results.
4151
+ # Corresponds to the JSON property `objectAnnotations`
4152
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
4153
+ attr_accessor :object_annotations
4154
+
4155
+ # Shot annotation results. Each shot is represented as a video segment.
4156
+ # Corresponds to the JSON property `shotAnnotations`
4157
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
4158
+ attr_accessor :shot_annotations
4159
+
4160
+ def initialize(**args)
4161
+ update!(**args)
4162
+ end
4163
+
4164
+ # Update properties of this object
4165
+ def update!(**args)
4166
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
4167
+ @label_annotations = args[:label_annotations] if args.key?(:label_annotations)
4168
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
4169
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
4170
+ end
4171
+ end
4172
+
2330
4173
  # Annotations related to one detected OCR text snippet. This will contain the
2331
4174
  # corresponding text, confidence value, and frame level information for each
2332
4175
  # detection.
2333
- class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
4176
+ class GoogleCloudVideointelligenceV1p3beta1TextAnnotation
2334
4177
  include Google::Apis::Core::Hashable
2335
4178
 
2336
4179
  # All video segments where OCR detected text appears.
2337
4180
  # Corresponds to the JSON property `segments`
2338
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
4181
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextSegment>]
2339
4182
  attr_accessor :segments
2340
4183
 
2341
4184
  # The detected text.
@@ -2357,7 +4200,7 @@ module Google
2357
4200
  # Video frame level annotation results for text annotation (OCR).
2358
4201
  # Contains information regarding timestamp and bounding box locations for the
2359
4202
  # frames containing detected OCR text snippets.
2360
- class GoogleCloudVideointelligenceV1p2beta1TextFrame
4203
+ class GoogleCloudVideointelligenceV1p3beta1TextFrame
2361
4204
  include Google::Apis::Core::Hashable
2362
4205
 
2363
4206
  # Normalized bounding polygon for text (that might not be aligned with axis).
@@ -2376,7 +4219,7 @@ module Google
2376
4219
  # than 0, or greater than 1 due to trignometric calculations for location of
2377
4220
  # the box.
2378
4221
  # Corresponds to the JSON property `rotatedBoundingBox`
2379
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
4222
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly]
2380
4223
  attr_accessor :rotated_bounding_box
2381
4224
 
2382
4225
  # Timestamp of this frame.
@@ -2396,7 +4239,7 @@ module Google
2396
4239
  end
2397
4240
 
2398
4241
  # Video segment level annotation results for text detection.
2399
- class GoogleCloudVideointelligenceV1p2beta1TextSegment
4242
+ class GoogleCloudVideointelligenceV1p3beta1TextSegment
2400
4243
  include Google::Apis::Core::Hashable
2401
4244
 
2402
4245
  # Confidence for the track of detected text. It is calculated as the highest
@@ -2407,12 +4250,12 @@ module Google
2407
4250
 
2408
4251
  # Information related to the frames where OCR detected text appears.
2409
4252
  # Corresponds to the JSON property `frames`
2410
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
4253
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextFrame>]
2411
4254
  attr_accessor :frames
2412
4255
 
2413
4256
  # Video segment.
2414
4257
  # Corresponds to the JSON property `segment`
2415
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
4258
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
2416
4259
  attr_accessor :segment
2417
4260
 
2418
4261
  def initialize(**args)
@@ -2428,7 +4271,7 @@ module Google
2428
4271
  end
2429
4272
 
2430
4273
  # Annotation progress for a single video.
2431
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
4274
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress
2432
4275
  include Google::Apis::Core::Hashable
2433
4276
 
2434
4277
  # Video file location in
@@ -2467,17 +4310,17 @@ module Google
2467
4310
  end
2468
4311
 
2469
4312
  # Annotation results for a single video.
2470
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
4313
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
2471
4314
  include Google::Apis::Core::Hashable
2472
4315
 
2473
- # The `Status` type defines a logical error model that is suitable for different
2474
- # programming environments, including REST APIs and RPC APIs. It is used by
2475
- # [gRPC](https://github.com/grpc). The error model is designed to be:
4316
+ # The `Status` type defines a logical error model that is suitable for
4317
+ # different programming environments, including REST APIs and RPC APIs. It is
4318
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
2476
4319
  # - Simple to use and understand for most users
2477
4320
  # - Flexible enough to meet unexpected needs
2478
4321
  # # Overview
2479
- # The `Status` message contains three pieces of data: error code, error message,
2480
- # and error details. The error code should be an enum value of
4322
+ # The `Status` message contains three pieces of data: error code, error
4323
+ # message, and error details. The error code should be an enum value of
2481
4324
  # google.rpc.Code, but it may accept additional error codes if needed. The
2482
4325
  # error message should be a developer-facing English message that helps
2483
4326
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2517,13 +4360,13 @@ module Google
2517
4360
  # If no explicit content has been detected in a frame, no annotations are
2518
4361
  # present for that frame.
2519
4362
  # Corresponds to the JSON property `explicitAnnotation`
2520
- # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
4363
+ # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
2521
4364
  attr_accessor :explicit_annotation
2522
4365
 
2523
4366
  # Label annotations on frame level.
2524
4367
  # There is exactly one element for each unique label.
2525
4368
  # Corresponds to the JSON property `frameLabelAnnotations`
2526
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
4369
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
2527
4370
  attr_accessor :frame_label_annotations
2528
4371
 
2529
4372
  # Video file location in
@@ -2534,36 +4377,36 @@ module Google
2534
4377
 
2535
4378
  # Annotations for list of objects detected and tracked in video.
2536
4379
  # Corresponds to the JSON property `objectAnnotations`
2537
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
4380
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
2538
4381
  attr_accessor :object_annotations
2539
4382
 
2540
4383
  # Label annotations on video level or user specified segment level.
2541
4384
  # There is exactly one element for each unique label.
2542
4385
  # Corresponds to the JSON property `segmentLabelAnnotations`
2543
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
4386
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
2544
4387
  attr_accessor :segment_label_annotations
2545
4388
 
2546
4389
  # Shot annotations. Each shot is represented as a video segment.
2547
4390
  # Corresponds to the JSON property `shotAnnotations`
2548
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
4391
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
2549
4392
  attr_accessor :shot_annotations
2550
4393
 
2551
4394
  # Label annotations on shot level.
2552
4395
  # There is exactly one element for each unique label.
2553
4396
  # Corresponds to the JSON property `shotLabelAnnotations`
2554
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
4397
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
2555
4398
  attr_accessor :shot_label_annotations
2556
4399
 
2557
4400
  # Speech transcription.
2558
4401
  # Corresponds to the JSON property `speechTranscriptions`
2559
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
4402
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]
2560
4403
  attr_accessor :speech_transcriptions
2561
4404
 
2562
4405
  # OCR text detection and tracking.
2563
4406
  # Annotations for list of detected text snippets. Each will have list of
2564
4407
  # frame information associated with it.
2565
4408
  # Corresponds to the JSON property `textAnnotations`
2566
- # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
4409
+ # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
2567
4410
  attr_accessor :text_annotations
2568
4411
 
2569
4412
  def initialize(**args)
@@ -2586,7 +4429,7 @@ module Google
2586
4429
  end
2587
4430
 
2588
4431
  # Video segment.
2589
- class GoogleCloudVideointelligenceV1p2beta1VideoSegment
4432
+ class GoogleCloudVideointelligenceV1p3beta1VideoSegment
2590
4433
  include Google::Apis::Core::Hashable
2591
4434
 
2592
4435
  # Time-offset, relative to the beginning of the video,
@@ -2615,7 +4458,7 @@ module Google
2615
4458
  # Word-specific information for recognized words. Word information is only
2616
4459
  # included in the response when certain request parameters are set, such
2617
4460
  # as `enable_word_time_offsets`.
2618
- class GoogleCloudVideointelligenceV1p2beta1WordInfo
4461
+ class GoogleCloudVideointelligenceV1p3beta1WordInfo
2619
4462
  include Google::Apis::Core::Hashable
2620
4463
 
2621
4464
  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -2722,14 +4565,14 @@ module Google
2722
4565
  attr_accessor :done
2723
4566
  alias_method :done?, :done
2724
4567
 
2725
- # The `Status` type defines a logical error model that is suitable for different
2726
- # programming environments, including REST APIs and RPC APIs. It is used by
2727
- # [gRPC](https://github.com/grpc). The error model is designed to be:
4568
+ # The `Status` type defines a logical error model that is suitable for
4569
+ # different programming environments, including REST APIs and RPC APIs. It is
4570
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
2728
4571
  # - Simple to use and understand for most users
2729
4572
  # - Flexible enough to meet unexpected needs
2730
4573
  # # Overview
2731
- # The `Status` message contains three pieces of data: error code, error message,
2732
- # and error details. The error code should be an enum value of
4574
+ # The `Status` message contains three pieces of data: error code, error
4575
+ # message, and error details. The error code should be an enum value of
2733
4576
  # google.rpc.Code, but it may accept additional error codes if needed. The
2734
4577
  # error message should be a developer-facing English message that helps
2735
4578
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2825,14 +4668,14 @@ module Google
2825
4668
  end
2826
4669
  end
2827
4670
 
2828
- # The `Status` type defines a logical error model that is suitable for different
2829
- # programming environments, including REST APIs and RPC APIs. It is used by
2830
- # [gRPC](https://github.com/grpc). The error model is designed to be:
4671
+ # The `Status` type defines a logical error model that is suitable for
4672
+ # different programming environments, including REST APIs and RPC APIs. It is
4673
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
2831
4674
  # - Simple to use and understand for most users
2832
4675
  # - Flexible enough to meet unexpected needs
2833
4676
  # # Overview
2834
- # The `Status` message contains three pieces of data: error code, error message,
2835
- # and error details. The error code should be an enum value of
4677
+ # The `Status` message contains three pieces of data: error code, error
4678
+ # message, and error details. The error code should be an enum value of
2836
4679
  # google.rpc.Code, but it may accept additional error codes if needed. The
2837
4680
  # error message should be a developer-facing English message that helps
2838
4681
  # developers *understand* and *resolve* the error. If a localized user-facing
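Every generated class in the diff above follows the same `Hashable` pattern: `initialize(**args)` delegates to `update!(**args)`, and `update!` assigns an instance variable only when the corresponding key is present, so partial updates leave other fields untouched. A minimal self-contained sketch of that pattern (illustrative only; the real classes live under `Google::Apis::VideointelligenceV1` and mix in `Google::Apis::Core::Hashable`, which is not reproduced here, and `VideoSegmentSketch` is a hypothetical name):

```ruby
# Sketch of the keyword-args initialization pattern used by the
# generated google-api-client classes (assumption: simplified, no
# Hashable mixin or JSON representation support).
class VideoSegmentSketch
  attr_accessor :start_time_offset, :end_time_offset

  def initialize(**args)
    update!(**args)
  end

  # Assign each property only if its key was passed, so an update
  # with a subset of keys preserves the other fields.
  def update!(**args)
    @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
    @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
  end
end

segment = VideoSegmentSketch.new(start_time_offset: '0s', end_time_offset: '10s')
segment.update!(end_time_offset: '12s') # partial update: start_time_offset is preserved
```

This key-presence check is why the generated `update!` bodies in the diff all read `@x = args[:x] if args.key?(:x)` rather than assigning unconditionally.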