google-api-client 0.28.4 → 0.29.2

This diff shows the changes between two publicly released versions of the package, as published to a supported public registry. It is provided for informational purposes only.
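To pick up the newer release in a Bundler-managed project, the dependency constraint can be bumped accordingly. This is a minimal, hypothetical Gemfile entry (the constraint style is an assumption; adjust it to your project's versioning policy):

```ruby
# Hypothetical Gemfile entry updating to the 0.29.x line;
# '~> 0.29.2' allows patch releases but not 0.30.
gem 'google-api-client', '~> 0.29.2'
```

After editing the Gemfile, `bundle update google-api-client` refreshes the lockfile to the new version.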
Files changed (750)
  1. checksums.yaml +4 -4
  2. data/.kokoro/build.bat +9 -6
  3. data/.kokoro/build.sh +2 -34
  4. data/.kokoro/continuous/common.cfg +6 -1
  5. data/.kokoro/continuous/linux.cfg +1 -1
  6. data/.kokoro/continuous/windows.cfg +17 -1
  7. data/.kokoro/osx.sh +2 -33
  8. data/.kokoro/presubmit/common.cfg +6 -1
  9. data/.kokoro/presubmit/linux.cfg +1 -1
  10. data/.kokoro/presubmit/windows.cfg +17 -1
  11. data/.kokoro/trampoline.bat +10 -0
  12. data/.kokoro/trampoline.sh +3 -23
  13. data/CHANGELOG.md +460 -0
  14. data/README.md +1 -1
  15. data/Rakefile +31 -0
  16. data/bin/generate-api +4 -2
  17. data/generated/google/apis/abusiveexperiencereport_v1/service.rb +2 -2
  18. data/generated/google/apis/acceleratedmobilepageurl_v1/service.rb +1 -1
  19. data/generated/google/apis/accessapproval_v1beta1/classes.rb +333 -0
  20. data/generated/google/apis/accessapproval_v1beta1/representations.rb +174 -0
  21. data/generated/google/apis/accessapproval_v1beta1/service.rb +728 -0
  22. data/generated/google/apis/accessapproval_v1beta1.rb +34 -0
  23. data/generated/google/apis/accesscontextmanager_v1/classes.rb +755 -0
  24. data/generated/google/apis/accesscontextmanager_v1/representations.rb +282 -0
  25. data/generated/google/apis/accesscontextmanager_v1/service.rb +788 -0
  26. data/generated/google/apis/accesscontextmanager_v1.rb +34 -0
  27. data/generated/google/apis/accesscontextmanager_v1beta/classes.rb +47 -31
  28. data/generated/google/apis/accesscontextmanager_v1beta/representations.rb +4 -0
  29. data/generated/google/apis/accesscontextmanager_v1beta/service.rb +16 -16
  30. data/generated/google/apis/accesscontextmanager_v1beta.rb +1 -1
  31. data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +95 -200
  32. data/generated/google/apis/adexchangebuyer2_v2beta1/representations.rb +0 -32
  33. data/generated/google/apis/adexchangebuyer2_v2beta1/service.rb +64 -104
  34. data/generated/google/apis/adexchangebuyer2_v2beta1.rb +1 -1
  35. data/generated/google/apis/adexchangebuyer_v1_2/service.rb +7 -7
  36. data/generated/google/apis/adexchangebuyer_v1_3/service.rb +21 -21
  37. data/generated/google/apis/adexchangebuyer_v1_4/service.rb +38 -38
  38. data/generated/google/apis/adexperiencereport_v1/service.rb +2 -2
  39. data/generated/google/apis/admin_datatransfer_v1/service.rb +5 -5
  40. data/generated/google/apis/admin_directory_v1/classes.rb +5 -50
  41. data/generated/google/apis/admin_directory_v1/representations.rb +0 -2
  42. data/generated/google/apis/admin_directory_v1/service.rb +113 -113
  43. data/generated/google/apis/admin_directory_v1.rb +1 -1
  44. data/generated/google/apis/admin_reports_v1/service.rb +6 -6
  45. data/generated/google/apis/admin_reports_v1.rb +1 -1
  46. data/generated/google/apis/adsense_v1_4/service.rb +39 -39
  47. data/generated/google/apis/adsensehost_v4_1/service.rb +26 -26
  48. data/generated/google/apis/alertcenter_v1beta1/classes.rb +101 -2
  49. data/generated/google/apis/alertcenter_v1beta1/representations.rb +25 -0
  50. data/generated/google/apis/alertcenter_v1beta1/service.rb +17 -16
  51. data/generated/google/apis/alertcenter_v1beta1.rb +1 -1
  52. data/generated/google/apis/analytics_v2_4/service.rb +6 -6
  53. data/generated/google/apis/analytics_v3/service.rb +88 -88
  54. data/generated/google/apis/analyticsreporting_v4/classes.rb +638 -0
  55. data/generated/google/apis/analyticsreporting_v4/representations.rb +248 -0
  56. data/generated/google/apis/analyticsreporting_v4/service.rb +31 -1
  57. data/generated/google/apis/analyticsreporting_v4.rb +1 -1
  58. data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +51 -11
  59. data/generated/google/apis/androiddeviceprovisioning_v1/representations.rb +6 -0
  60. data/generated/google/apis/androiddeviceprovisioning_v1/service.rb +26 -26
  61. data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
  62. data/generated/google/apis/androidenterprise_v1/classes.rb +26 -30
  63. data/generated/google/apis/androidenterprise_v1/representations.rb +2 -14
  64. data/generated/google/apis/androidenterprise_v1/service.rb +85 -121
  65. data/generated/google/apis/androidenterprise_v1.rb +1 -1
  66. data/generated/google/apis/androidmanagement_v1/classes.rb +358 -4
  67. data/generated/google/apis/androidmanagement_v1/representations.rb +163 -0
  68. data/generated/google/apis/androidmanagement_v1/service.rb +191 -21
  69. data/generated/google/apis/androidmanagement_v1.rb +1 -1
  70. data/generated/google/apis/androidpublisher_v1/service.rb +2 -2
  71. data/generated/google/apis/androidpublisher_v1_1/service.rb +3 -3
  72. data/generated/google/apis/androidpublisher_v2/service.rb +64 -70
  73. data/generated/google/apis/androidpublisher_v2.rb +1 -1
  74. data/generated/google/apis/androidpublisher_v3/classes.rb +113 -0
  75. data/generated/google/apis/androidpublisher_v3/representations.rb +58 -0
  76. data/generated/google/apis/androidpublisher_v3/service.rb +234 -64
  77. data/generated/google/apis/androidpublisher_v3.rb +1 -1
  78. data/generated/google/apis/appengine_v1/classes.rb +45 -100
  79. data/generated/google/apis/appengine_v1/representations.rb +17 -35
  80. data/generated/google/apis/appengine_v1/service.rb +45 -39
  81. data/generated/google/apis/appengine_v1.rb +1 -1
  82. data/generated/google/apis/appengine_v1alpha/classes.rb +2 -99
  83. data/generated/google/apis/appengine_v1alpha/representations.rb +0 -35
  84. data/generated/google/apis/appengine_v1alpha/service.rb +15 -15
  85. data/generated/google/apis/appengine_v1alpha.rb +1 -1
  86. data/generated/google/apis/appengine_v1beta/classes.rb +7 -102
  87. data/generated/google/apis/appengine_v1beta/representations.rb +0 -35
  88. data/generated/google/apis/appengine_v1beta/service.rb +45 -39
  89. data/generated/google/apis/appengine_v1beta.rb +1 -1
  90. data/generated/google/apis/appengine_v1beta4/service.rb +20 -20
  91. data/generated/google/apis/appengine_v1beta5/service.rb +20 -20
  92. data/generated/google/apis/appsactivity_v1/service.rb +5 -4
  93. data/generated/google/apis/appsactivity_v1.rb +1 -1
  94. data/generated/google/apis/appsmarket_v2/service.rb +3 -3
  95. data/generated/google/apis/appstate_v1/service.rb +5 -5
  96. data/generated/google/apis/bigquery_v2/classes.rb +1121 -114
  97. data/generated/google/apis/bigquery_v2/representations.rb +414 -26
  98. data/generated/google/apis/bigquery_v2/service.rb +184 -22
  99. data/generated/google/apis/bigquery_v2.rb +1 -1
  100. data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +88 -10
  101. data/generated/google/apis/bigquerydatatransfer_v1/representations.rb +43 -0
  102. data/generated/google/apis/bigquerydatatransfer_v1/service.rb +142 -34
  103. data/generated/google/apis/bigquerydatatransfer_v1.rb +3 -3
  104. data/generated/google/apis/bigtableadmin_v1/service.rb +3 -3
  105. data/generated/google/apis/bigtableadmin_v1.rb +2 -2
  106. data/generated/google/apis/bigtableadmin_v2/classes.rb +14 -14
  107. data/generated/google/apis/bigtableadmin_v2/service.rb +142 -33
  108. data/generated/google/apis/bigtableadmin_v2.rb +2 -2
  109. data/generated/google/apis/binaryauthorization_v1beta1/classes.rb +66 -6
  110. data/generated/google/apis/binaryauthorization_v1beta1/representations.rb +17 -0
  111. data/generated/google/apis/binaryauthorization_v1beta1/service.rb +17 -13
  112. data/generated/google/apis/binaryauthorization_v1beta1.rb +1 -1
  113. data/generated/google/apis/blogger_v2/service.rb +9 -9
  114. data/generated/google/apis/blogger_v3/service.rb +33 -33
  115. data/generated/google/apis/books_v1/service.rb +51 -51
  116. data/generated/google/apis/calendar_v3/classes.rb +1 -1
  117. data/generated/google/apis/calendar_v3/service.rb +47 -47
  118. data/generated/google/apis/calendar_v3.rb +1 -1
  119. data/generated/google/apis/chat_v1/service.rb +8 -8
  120. data/generated/google/apis/civicinfo_v2/service.rb +5 -5
  121. data/generated/google/apis/classroom_v1/classes.rb +77 -0
  122. data/generated/google/apis/classroom_v1/representations.rb +32 -0
  123. data/generated/google/apis/classroom_v1/service.rb +276 -51
  124. data/generated/google/apis/classroom_v1.rb +7 -1
  125. data/generated/google/apis/cloudasset_v1/classes.rb +818 -0
  126. data/generated/google/apis/cloudasset_v1/representations.rb +264 -0
  127. data/generated/google/apis/cloudasset_v1/service.rb +191 -0
  128. data/generated/google/apis/cloudasset_v1.rb +34 -0
  129. data/generated/google/apis/cloudasset_v1beta1/classes.rb +33 -18
  130. data/generated/google/apis/cloudasset_v1beta1/representations.rb +1 -0
  131. data/generated/google/apis/cloudasset_v1beta1/service.rb +13 -13
  132. data/generated/google/apis/cloudasset_v1beta1.rb +2 -2
  133. data/generated/google/apis/cloudbilling_v1/classes.rb +1 -1
  134. data/generated/google/apis/cloudbilling_v1/service.rb +14 -14
  135. data/generated/google/apis/cloudbilling_v1.rb +1 -1
  136. data/generated/google/apis/cloudbuild_v1/classes.rb +162 -11
  137. data/generated/google/apis/cloudbuild_v1/representations.rb +67 -0
  138. data/generated/google/apis/cloudbuild_v1/service.rb +21 -15
  139. data/generated/google/apis/cloudbuild_v1.rb +1 -1
  140. data/generated/google/apis/cloudbuild_v1alpha1/classes.rb +7 -1
  141. data/generated/google/apis/cloudbuild_v1alpha1/representations.rb +2 -0
  142. data/generated/google/apis/cloudbuild_v1alpha1/service.rb +6 -6
  143. data/generated/google/apis/cloudbuild_v1alpha1.rb +1 -1
  144. data/generated/google/apis/clouddebugger_v2/service.rb +8 -8
  145. data/generated/google/apis/clouderrorreporting_v1beta1/classes.rb +19 -16
  146. data/generated/google/apis/clouderrorreporting_v1beta1/service.rb +12 -11
  147. data/generated/google/apis/clouderrorreporting_v1beta1.rb +1 -1
  148. data/generated/google/apis/cloudfunctions_v1/classes.rb +21 -17
  149. data/generated/google/apis/cloudfunctions_v1/service.rb +22 -16
  150. data/generated/google/apis/cloudfunctions_v1.rb +1 -1
  151. data/generated/google/apis/cloudfunctions_v1beta2/classes.rb +20 -16
  152. data/generated/google/apis/cloudfunctions_v1beta2/service.rb +17 -11
  153. data/generated/google/apis/cloudfunctions_v1beta2.rb +1 -1
  154. data/generated/google/apis/cloudidentity_v1/classes.rb +14 -14
  155. data/generated/google/apis/cloudidentity_v1/service.rb +18 -27
  156. data/generated/google/apis/cloudidentity_v1.rb +7 -1
  157. data/generated/google/apis/cloudidentity_v1beta1/classes.rb +11 -11
  158. data/generated/google/apis/cloudidentity_v1beta1/service.rb +15 -21
  159. data/generated/google/apis/cloudidentity_v1beta1.rb +7 -1
  160. data/generated/google/apis/cloudiot_v1/classes.rb +11 -11
  161. data/generated/google/apis/cloudiot_v1/service.rb +23 -330
  162. data/generated/google/apis/cloudiot_v1.rb +1 -1
  163. data/generated/google/apis/cloudkms_v1/classes.rb +7 -3
  164. data/generated/google/apis/cloudkms_v1/service.rb +30 -30
  165. data/generated/google/apis/cloudkms_v1.rb +1 -1
  166. data/generated/google/apis/cloudprivatecatalog_v1beta1/classes.rb +358 -0
  167. data/generated/google/apis/cloudprivatecatalog_v1beta1/representations.rb +123 -0
  168. data/generated/google/apis/cloudprivatecatalog_v1beta1/service.rb +486 -0
  169. data/generated/google/apis/cloudprivatecatalog_v1beta1.rb +35 -0
  170. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/classes.rb +1212 -0
  171. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/representations.rb +399 -0
  172. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/service.rb +1073 -0
  173. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1.rb +35 -0
  174. data/generated/google/apis/cloudprofiler_v2/service.rb +3 -3
  175. data/generated/google/apis/cloudresourcemanager_v1/classes.rb +24 -22
  176. data/generated/google/apis/cloudresourcemanager_v1/service.rb +68 -59
  177. data/generated/google/apis/cloudresourcemanager_v1.rb +1 -1
  178. data/generated/google/apis/cloudresourcemanager_v1beta1/classes.rb +3 -3
  179. data/generated/google/apis/cloudresourcemanager_v1beta1/service.rb +53 -42
  180. data/generated/google/apis/cloudresourcemanager_v1beta1.rb +1 -1
  181. data/generated/google/apis/cloudresourcemanager_v2/classes.rb +15 -16
  182. data/generated/google/apis/cloudresourcemanager_v2/service.rb +13 -13
  183. data/generated/google/apis/cloudresourcemanager_v2.rb +1 -1
  184. data/generated/google/apis/cloudresourcemanager_v2beta1/classes.rb +15 -16
  185. data/generated/google/apis/cloudresourcemanager_v2beta1/service.rb +13 -13
  186. data/generated/google/apis/cloudresourcemanager_v2beta1.rb +1 -1
  187. data/generated/google/apis/cloudscheduler_v1/classes.rb +994 -0
  188. data/generated/google/apis/cloudscheduler_v1/representations.rb +297 -0
  189. data/generated/google/apis/cloudscheduler_v1/service.rb +448 -0
  190. data/generated/google/apis/cloudscheduler_v1.rb +34 -0
  191. data/generated/google/apis/cloudscheduler_v1beta1/classes.rb +160 -44
  192. data/generated/google/apis/cloudscheduler_v1beta1/representations.rb +33 -0
  193. data/generated/google/apis/cloudscheduler_v1beta1/service.rb +15 -12
  194. data/generated/google/apis/cloudscheduler_v1beta1.rb +1 -1
  195. data/generated/google/apis/cloudsearch_v1/classes.rb +245 -59
  196. data/generated/google/apis/cloudsearch_v1/representations.rb +91 -0
  197. data/generated/google/apis/cloudsearch_v1/service.rb +86 -80
  198. data/generated/google/apis/cloudsearch_v1.rb +1 -1
  199. data/generated/google/apis/cloudshell_v1/classes.rb +11 -11
  200. data/generated/google/apis/cloudshell_v1/service.rb +4 -4
  201. data/generated/google/apis/cloudshell_v1.rb +1 -1
  202. data/generated/google/apis/cloudshell_v1alpha1/classes.rb +24 -11
  203. data/generated/google/apis/cloudshell_v1alpha1/representations.rb +2 -0
  204. data/generated/google/apis/cloudshell_v1alpha1/service.rb +11 -10
  205. data/generated/google/apis/cloudshell_v1alpha1.rb +1 -1
  206. data/generated/google/apis/cloudtasks_v2/classes.rb +1436 -0
  207. data/generated/google/apis/cloudtasks_v2/representations.rb +408 -0
  208. data/generated/google/apis/cloudtasks_v2/service.rb +856 -0
  209. data/generated/google/apis/{partners_v2.rb → cloudtasks_v2.rb} +11 -9
  210. data/generated/google/apis/cloudtasks_v2beta2/classes.rb +141 -102
  211. data/generated/google/apis/cloudtasks_v2beta2/service.rb +44 -43
  212. data/generated/google/apis/cloudtasks_v2beta2.rb +1 -1
  213. data/generated/google/apis/cloudtasks_v2beta3/classes.rb +388 -108
  214. data/generated/google/apis/cloudtasks_v2beta3/representations.rb +65 -0
  215. data/generated/google/apis/cloudtasks_v2beta3/service.rb +40 -39
  216. data/generated/google/apis/cloudtasks_v2beta3.rb +1 -1
  217. data/generated/google/apis/cloudtrace_v1/service.rb +3 -3
  218. data/generated/google/apis/cloudtrace_v2/classes.rb +10 -10
  219. data/generated/google/apis/cloudtrace_v2/service.rb +2 -2
  220. data/generated/google/apis/cloudtrace_v2.rb +1 -1
  221. data/generated/google/apis/commentanalyzer_v1alpha1/classes.rb +484 -0
  222. data/generated/google/apis/commentanalyzer_v1alpha1/representations.rb +210 -0
  223. data/generated/google/apis/commentanalyzer_v1alpha1/service.rb +124 -0
  224. data/generated/google/apis/commentanalyzer_v1alpha1.rb +39 -0
  225. data/generated/google/apis/composer_v1/classes.rb +21 -15
  226. data/generated/google/apis/composer_v1/service.rb +9 -9
  227. data/generated/google/apis/composer_v1.rb +1 -1
  228. data/generated/google/apis/composer_v1beta1/classes.rb +175 -36
  229. data/generated/google/apis/composer_v1beta1/representations.rb +50 -0
  230. data/generated/google/apis/composer_v1beta1/service.rb +9 -9
  231. data/generated/google/apis/composer_v1beta1.rb +1 -1
  232. data/generated/google/apis/compute_alpha/classes.rb +10112 -7289
  233. data/generated/google/apis/compute_alpha/representations.rb +1337 -219
  234. data/generated/google/apis/compute_alpha/service.rb +4259 -2728
  235. data/generated/google/apis/compute_alpha.rb +1 -1
  236. data/generated/google/apis/compute_beta/classes.rb +4254 -2781
  237. data/generated/google/apis/compute_beta/representations.rb +853 -283
  238. data/generated/google/apis/compute_beta/service.rb +7077 -5955
  239. data/generated/google/apis/compute_beta.rb +1 -1
  240. data/generated/google/apis/compute_v1/classes.rb +1259 -93
  241. data/generated/google/apis/compute_v1/representations.rb +450 -1
  242. data/generated/google/apis/compute_v1/service.rb +1085 -400
  243. data/generated/google/apis/compute_v1.rb +1 -1
  244. data/generated/google/apis/container_v1/classes.rb +201 -22
  245. data/generated/google/apis/container_v1/representations.rb +69 -0
  246. data/generated/google/apis/container_v1/service.rb +151 -102
  247. data/generated/google/apis/container_v1.rb +1 -1
  248. data/generated/google/apis/container_v1beta1/classes.rb +215 -25
  249. data/generated/google/apis/container_v1beta1/representations.rb +86 -0
  250. data/generated/google/apis/container_v1beta1/service.rb +106 -106
  251. data/generated/google/apis/container_v1beta1.rb +1 -1
  252. data/generated/google/apis/containeranalysis_v1alpha1/classes.rb +26 -18
  253. data/generated/google/apis/containeranalysis_v1alpha1/representations.rb +1 -0
  254. data/generated/google/apis/containeranalysis_v1alpha1/service.rb +33 -33
  255. data/generated/google/apis/containeranalysis_v1alpha1.rb +1 -1
  256. data/generated/google/apis/containeranalysis_v1beta1/classes.rb +226 -12
  257. data/generated/google/apis/containeranalysis_v1beta1/representations.rb +58 -0
  258. data/generated/google/apis/containeranalysis_v1beta1/service.rb +24 -24
  259. data/generated/google/apis/containeranalysis_v1beta1.rb +1 -1
  260. data/generated/google/apis/content_v2/classes.rb +218 -101
  261. data/generated/google/apis/content_v2/representations.rb +49 -0
  262. data/generated/google/apis/content_v2/service.rb +189 -152
  263. data/generated/google/apis/content_v2.rb +1 -1
  264. data/generated/google/apis/content_v2_1/classes.rb +387 -216
  265. data/generated/google/apis/content_v2_1/representations.rb +131 -56
  266. data/generated/google/apis/content_v2_1/service.rb +190 -107
  267. data/generated/google/apis/content_v2_1.rb +1 -1
  268. data/generated/google/apis/customsearch_v1/service.rb +2 -2
  269. data/generated/google/apis/dataflow_v1b3/classes.rb +148 -31
  270. data/generated/google/apis/dataflow_v1b3/representations.rb +45 -0
  271. data/generated/google/apis/dataflow_v1b3/service.rb +415 -56
  272. data/generated/google/apis/dataflow_v1b3.rb +1 -1
  273. data/generated/google/apis/datafusion_v1beta1/classes.rb +1304 -0
  274. data/generated/google/apis/datafusion_v1beta1/representations.rb +469 -0
  275. data/generated/google/apis/datafusion_v1beta1/service.rb +657 -0
  276. data/generated/google/apis/datafusion_v1beta1.rb +43 -0
  277. data/generated/google/apis/dataproc_v1/classes.rb +27 -22
  278. data/generated/google/apis/dataproc_v1/representations.rb +1 -0
  279. data/generated/google/apis/dataproc_v1/service.rb +261 -45
  280. data/generated/google/apis/dataproc_v1.rb +1 -1
  281. data/generated/google/apis/dataproc_v1beta2/classes.rb +534 -50
  282. data/generated/google/apis/dataproc_v1beta2/representations.rb +185 -7
  283. data/generated/google/apis/dataproc_v1beta2/service.rb +617 -51
  284. data/generated/google/apis/dataproc_v1beta2.rb +1 -1
  285. data/generated/google/apis/datastore_v1/classes.rb +20 -16
  286. data/generated/google/apis/datastore_v1/service.rb +15 -15
  287. data/generated/google/apis/datastore_v1.rb +1 -1
  288. data/generated/google/apis/datastore_v1beta1/classes.rb +10 -10
  289. data/generated/google/apis/datastore_v1beta1/service.rb +2 -2
  290. data/generated/google/apis/datastore_v1beta1.rb +1 -1
  291. data/generated/google/apis/datastore_v1beta3/classes.rb +10 -6
  292. data/generated/google/apis/datastore_v1beta3/service.rb +7 -7
  293. data/generated/google/apis/datastore_v1beta3.rb +1 -1
  294. data/generated/google/apis/deploymentmanager_alpha/service.rb +37 -37
  295. data/generated/google/apis/deploymentmanager_v2/service.rb +18 -18
  296. data/generated/google/apis/deploymentmanager_v2beta/service.rb +32 -32
  297. data/generated/google/apis/dfareporting_v3_1/service.rb +206 -206
  298. data/generated/google/apis/dfareporting_v3_2/service.rb +206 -206
  299. data/generated/google/apis/dfareporting_v3_3/classes.rb +3 -3
  300. data/generated/google/apis/dfareporting_v3_3/service.rb +204 -204
  301. data/generated/google/apis/dfareporting_v3_3.rb +1 -1
  302. data/generated/google/apis/dialogflow_v2/classes.rb +367 -82
  303. data/generated/google/apis/dialogflow_v2/representations.rb +99 -0
  304. data/generated/google/apis/dialogflow_v2/service.rb +76 -60
  305. data/generated/google/apis/dialogflow_v2.rb +1 -1
  306. data/generated/google/apis/dialogflow_v2beta1/classes.rb +199 -88
  307. data/generated/google/apis/dialogflow_v2beta1/representations.rb +31 -0
  308. data/generated/google/apis/dialogflow_v2beta1/service.rb +154 -94
  309. data/generated/google/apis/dialogflow_v2beta1.rb +1 -1
  310. data/generated/google/apis/digitalassetlinks_v1/service.rb +7 -6
  311. data/generated/google/apis/digitalassetlinks_v1.rb +1 -1
  312. data/generated/google/apis/discovery_v1/service.rb +2 -2
  313. data/generated/google/apis/dlp_v2/classes.rb +116 -45
  314. data/generated/google/apis/dlp_v2/representations.rb +32 -0
  315. data/generated/google/apis/dlp_v2/service.rb +85 -45
  316. data/generated/google/apis/dlp_v2.rb +1 -1
  317. data/generated/google/apis/dns_v1/classes.rb +83 -1
  318. data/generated/google/apis/dns_v1/representations.rb +34 -0
  319. data/generated/google/apis/dns_v1/service.rb +15 -15
  320. data/generated/google/apis/dns_v1.rb +1 -1
  321. data/generated/google/apis/dns_v1beta2/classes.rb +81 -1
  322. data/generated/google/apis/dns_v1beta2/representations.rb +33 -0
  323. data/generated/google/apis/dns_v1beta2/service.rb +21 -21
  324. data/generated/google/apis/dns_v1beta2.rb +1 -1
  325. data/generated/google/apis/dns_v2beta1/classes.rb +83 -1
  326. data/generated/google/apis/dns_v2beta1/representations.rb +34 -0
  327. data/generated/google/apis/dns_v2beta1/service.rb +16 -16
  328. data/generated/google/apis/dns_v2beta1.rb +1 -1
  329. data/generated/google/apis/docs_v1/classes.rb +265 -47
  330. data/generated/google/apis/docs_v1/representations.rb +96 -0
  331. data/generated/google/apis/docs_v1/service.rb +3 -3
  332. data/generated/google/apis/docs_v1.rb +1 -1
  333. data/generated/google/apis/doubleclickbidmanager_v1/classes.rb +6 -4
  334. data/generated/google/apis/doubleclickbidmanager_v1/service.rb +9 -9
  335. data/generated/google/apis/doubleclickbidmanager_v1.rb +1 -1
  336. data/generated/google/apis/doubleclicksearch_v2/service.rb +10 -10
  337. data/generated/google/apis/drive_v2/classes.rb +601 -80
  338. data/generated/google/apis/drive_v2/representations.rb +152 -0
  339. data/generated/google/apis/drive_v2/service.rb +574 -164
  340. data/generated/google/apis/drive_v2.rb +1 -1
  341. data/generated/google/apis/drive_v3/classes.rb +591 -75
  342. data/generated/google/apis/drive_v3/representations.rb +151 -0
  343. data/generated/google/apis/drive_v3/service.rb +483 -116
  344. data/generated/google/apis/drive_v3.rb +1 -1
  345. data/generated/google/apis/driveactivity_v2/classes.rb +149 -17
  346. data/generated/google/apis/driveactivity_v2/representations.rb +69 -0
  347. data/generated/google/apis/driveactivity_v2/service.rb +1 -1
  348. data/generated/google/apis/driveactivity_v2.rb +1 -1
  349. data/generated/google/apis/factchecktools_v1alpha1/classes.rb +459 -0
  350. data/generated/google/apis/factchecktools_v1alpha1/representations.rb +207 -0
  351. data/generated/google/apis/factchecktools_v1alpha1/service.rb +300 -0
  352. data/generated/google/apis/factchecktools_v1alpha1.rb +34 -0
  353. data/generated/google/apis/fcm_v1/classes.rb +424 -0
  354. data/generated/google/apis/fcm_v1/representations.rb +167 -0
  355. data/generated/google/apis/fcm_v1/service.rb +97 -0
  356. data/generated/google/apis/fcm_v1.rb +35 -0
  357. data/generated/google/apis/file_v1/classes.rb +646 -11
  358. data/generated/google/apis/file_v1/representations.rb +207 -0
  359. data/generated/google/apis/file_v1/service.rb +196 -6
  360. data/generated/google/apis/file_v1.rb +1 -1
  361. data/generated/google/apis/file_v1beta1/classes.rb +461 -19
  362. data/generated/google/apis/file_v1beta1/representations.rb +137 -0
  363. data/generated/google/apis/file_v1beta1/service.rb +11 -11
  364. data/generated/google/apis/file_v1beta1.rb +1 -1
  365. data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +41 -14
  366. data/generated/google/apis/firebasedynamiclinks_v1/representations.rb +4 -0
  367. data/generated/google/apis/firebasedynamiclinks_v1/service.rb +5 -5
  368. data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
  369. data/generated/google/apis/firebasehosting_v1beta1/classes.rb +13 -13
  370. data/generated/google/apis/firebasehosting_v1beta1/service.rb +14 -14
  371. data/generated/google/apis/firebasehosting_v1beta1.rb +1 -1
  372. data/generated/google/apis/firebaserules_v1/classes.rb +10 -2
  373. data/generated/google/apis/firebaserules_v1/service.rb +12 -12
  374. data/generated/google/apis/firebaserules_v1.rb +1 -1
  375. data/generated/google/apis/firestore_v1/classes.rb +15 -15
  376. data/generated/google/apis/firestore_v1/service.rb +28 -28
  377. data/generated/google/apis/firestore_v1.rb +1 -1
  378. data/generated/google/apis/firestore_v1beta1/classes.rb +15 -15
  379. data/generated/google/apis/firestore_v1beta1/service.rb +19 -19
  380. data/generated/google/apis/firestore_v1beta1.rb +1 -1
  381. data/generated/google/apis/firestore_v1beta2/classes.rb +10 -10
  382. data/generated/google/apis/firestore_v1beta2/service.rb +9 -9
  383. data/generated/google/apis/firestore_v1beta2.rb +1 -1
  384. data/generated/google/apis/fitness_v1/classes.rb +4 -1
  385. data/generated/google/apis/fitness_v1/service.rb +14 -58
  386. data/generated/google/apis/fitness_v1.rb +1 -1
  387. data/generated/google/apis/fusiontables_v1/service.rb +32 -32
  388. data/generated/google/apis/fusiontables_v2/service.rb +34 -34
  389. data/generated/google/apis/games_configuration_v1configuration/service.rb +13 -13
  390. data/generated/google/apis/games_management_v1management/service.rb +27 -27
  391. data/generated/google/apis/games_management_v1management.rb +2 -2
  392. data/generated/google/apis/games_v1/service.rb +53 -53
  393. data/generated/google/apis/games_v1.rb +3 -3
  394. data/generated/google/apis/genomics_v1/classes.rb +190 -3321
  395. data/generated/google/apis/genomics_v1/representations.rb +128 -1265
  396. data/generated/google/apis/genomics_v1/service.rb +75 -1982
  397. data/generated/google/apis/genomics_v1.rb +1 -10
  398. data/generated/google/apis/genomics_v1alpha2/classes.rb +13 -53
  399. data/generated/google/apis/genomics_v1alpha2/representations.rb +0 -26
  400. data/generated/google/apis/genomics_v1alpha2/service.rb +11 -12
  401. data/generated/google/apis/genomics_v1alpha2.rb +1 -1
  402. data/generated/google/apis/genomics_v2alpha1/classes.rb +26 -58
  403. data/generated/google/apis/genomics_v2alpha1/representations.rb +1 -26
  404. data/generated/google/apis/genomics_v2alpha1/service.rb +6 -7
  405. data/generated/google/apis/genomics_v2alpha1.rb +1 -1
  406. data/generated/google/apis/gmail_v1/classes.rb +29 -0
  407. data/generated/google/apis/gmail_v1/representations.rb +13 -0
  408. data/generated/google/apis/gmail_v1/service.rb +142 -66
  409. data/generated/google/apis/gmail_v1.rb +1 -1
  410. data/generated/google/apis/groupsmigration_v1/service.rb +1 -1
  411. data/generated/google/apis/groupssettings_v1/classes.rb +126 -1
  412. data/generated/google/apis/groupssettings_v1/representations.rb +18 -0
  413. data/generated/google/apis/groupssettings_v1/service.rb +4 -4
  414. data/generated/google/apis/groupssettings_v1.rb +2 -2
  415. data/generated/google/apis/healthcare_v1alpha2/classes.rb +2849 -0
  416. data/generated/google/apis/healthcare_v1alpha2/representations.rb +1260 -0
  417. data/generated/google/apis/healthcare_v1alpha2/service.rb +4011 -0
  418. data/generated/google/apis/healthcare_v1alpha2.rb +34 -0
  419. data/generated/google/apis/healthcare_v1beta1/classes.rb +2464 -0
  420. data/generated/google/apis/healthcare_v1beta1/representations.rb +1042 -0
  421. data/generated/google/apis/healthcare_v1beta1/service.rb +3413 -0
  422. data/generated/google/apis/healthcare_v1beta1.rb +34 -0
  423. data/generated/google/apis/iam_v1/classes.rb +171 -1
  424. data/generated/google/apis/iam_v1/representations.rb +95 -0
  425. data/generated/google/apis/iam_v1/service.rb +249 -39
  426. data/generated/google/apis/iam_v1.rb +1 -1
  427. data/generated/google/apis/iamcredentials_v1/classes.rb +8 -4
  428. data/generated/google/apis/iamcredentials_v1/service.rb +15 -10
  429. data/generated/google/apis/iamcredentials_v1.rb +1 -1
  430. data/generated/google/apis/iap_v1/classes.rb +1 -1
  431. data/generated/google/apis/iap_v1/service.rb +3 -3
  432. data/generated/google/apis/iap_v1.rb +1 -1
  433. data/generated/google/apis/iap_v1beta1/classes.rb +1 -1
  434. data/generated/google/apis/iap_v1beta1/service.rb +3 -3
  435. data/generated/google/apis/iap_v1beta1.rb +1 -1
  436. data/generated/google/apis/identitytoolkit_v3/service.rb +20 -20
  437. data/generated/google/apis/indexing_v3/service.rb +2 -2
  438. data/generated/google/apis/jobs_v2/classes.rb +16 -17
  439. data/generated/google/apis/jobs_v2/service.rb +17 -17
  440. data/generated/google/apis/jobs_v2.rb +1 -1
  441. data/generated/google/apis/jobs_v3/classes.rb +14 -8
  442. data/generated/google/apis/jobs_v3/service.rb +16 -17
  443. data/generated/google/apis/jobs_v3.rb +1 -1
  444. data/generated/google/apis/jobs_v3p1beta1/classes.rb +26 -20
  445. data/generated/google/apis/jobs_v3p1beta1/service.rb +17 -18
  446. data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
  447. data/generated/google/apis/kgsearch_v1/service.rb +1 -1
  448. data/generated/google/apis/language_v1/classes.rb +8 -7
  449. data/generated/google/apis/language_v1/service.rb +6 -6
  450. data/generated/google/apis/language_v1.rb +1 -1
  451. data/generated/google/apis/language_v1beta1/classes.rb +5 -5
  452. data/generated/google/apis/language_v1beta1/service.rb +4 -4
  453. data/generated/google/apis/language_v1beta1.rb +1 -1
  454. data/generated/google/apis/language_v1beta2/classes.rb +8 -7
  455. data/generated/google/apis/language_v1beta2/service.rb +6 -6
  456. data/generated/google/apis/language_v1beta2.rb +1 -1
  457. data/generated/google/apis/libraryagent_v1/service.rb +6 -6
  458. data/generated/google/apis/licensing_v1/service.rb +7 -7
  459. data/generated/google/apis/logging_v2/classes.rb +8 -3
  460. data/generated/google/apis/logging_v2/representations.rb +1 -0
  461. data/generated/google/apis/logging_v2/service.rb +72 -72
  462. data/generated/google/apis/logging_v2.rb +1 -1
  463. data/generated/google/apis/manufacturers_v1/service.rb +4 -4
  464. data/generated/google/apis/mirror_v1/service.rb +24 -24
  465. data/generated/google/apis/ml_v1/classes.rb +240 -52
  466. data/generated/google/apis/ml_v1/representations.rb +25 -2
  467. data/generated/google/apis/ml_v1/service.rb +36 -36
  468. data/generated/google/apis/ml_v1.rb +1 -1
  469. data/generated/google/apis/monitoring_v3/classes.rb +22 -18
  470. data/generated/google/apis/monitoring_v3/representations.rb +2 -1
  471. data/generated/google/apis/monitoring_v3/service.rb +42 -37
  472. data/generated/google/apis/monitoring_v3.rb +1 -1
  473. data/generated/google/apis/oauth2_v1/classes.rb +0 -124
  474. data/generated/google/apis/oauth2_v1/representations.rb +0 -62
  475. data/generated/google/apis/oauth2_v1/service.rb +3 -162
  476. data/generated/google/apis/oauth2_v1.rb +3 -6
  477. data/generated/google/apis/oauth2_v2/service.rb +4 -4
  478. data/generated/google/apis/oauth2_v2.rb +3 -6
  479. data/generated/google/apis/oslogin_v1/service.rb +8 -7
  480. data/generated/google/apis/oslogin_v1.rb +3 -2
  481. data/generated/google/apis/oslogin_v1alpha/service.rb +8 -7
  482. data/generated/google/apis/oslogin_v1alpha.rb +3 -2
  483. data/generated/google/apis/oslogin_v1beta/service.rb +8 -7
  484. data/generated/google/apis/oslogin_v1beta.rb +3 -2
  485. data/generated/google/apis/pagespeedonline_v1/service.rb +1 -1
  486. data/generated/google/apis/pagespeedonline_v2/service.rb +1 -1
  487. data/generated/google/apis/pagespeedonline_v4/service.rb +1 -1
  488. data/generated/google/apis/pagespeedonline_v5/classes.rb +43 -0
  489. data/generated/google/apis/pagespeedonline_v5/representations.rb +18 -0
  490. data/generated/google/apis/pagespeedonline_v5/service.rb +1 -1
  491. data/generated/google/apis/pagespeedonline_v5.rb +1 -1
  492. data/generated/google/apis/people_v1/classes.rb +38 -29
  493. data/generated/google/apis/people_v1/representations.rb +1 -0
  494. data/generated/google/apis/people_v1/service.rb +18 -13
  495. data/generated/google/apis/people_v1.rb +2 -5
  496. data/generated/google/apis/playcustomapp_v1/service.rb +1 -1
  497. data/generated/google/apis/plus_domains_v1/service.rb +18 -392
  498. data/generated/google/apis/plus_domains_v1.rb +4 -10
  499. data/generated/google/apis/plus_v1/service.rb +16 -16
  500. data/generated/google/apis/plus_v1.rb +4 -4
  501. data/generated/google/apis/poly_v1/classes.rb +8 -6
  502. data/generated/google/apis/poly_v1/service.rb +15 -12
  503. data/generated/google/apis/poly_v1.rb +1 -1
  504. data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +8 -6
  505. data/generated/google/apis/proximitybeacon_v1beta1/service.rb +17 -17
  506. data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
  507. data/generated/google/apis/pubsub_v1/classes.rb +55 -39
  508. data/generated/google/apis/pubsub_v1/representations.rb +16 -0
  509. data/generated/google/apis/pubsub_v1/service.rb +46 -69
  510. data/generated/google/apis/pubsub_v1.rb +1 -1
  511. data/generated/google/apis/pubsub_v1beta1a/service.rb +15 -15
  512. data/generated/google/apis/pubsub_v1beta2/classes.rb +45 -1
  513. data/generated/google/apis/pubsub_v1beta2/representations.rb +16 -0
  514. data/generated/google/apis/pubsub_v1beta2/service.rb +20 -20
  515. data/generated/google/apis/pubsub_v1beta2.rb +1 -1
  516. data/generated/google/apis/redis_v1/classes.rb +30 -10
  517. data/generated/google/apis/redis_v1/representations.rb +13 -0
  518. data/generated/google/apis/redis_v1/service.rb +51 -15
  519. data/generated/google/apis/redis_v1.rb +1 -1
  520. data/generated/google/apis/redis_v1beta1/classes.rb +18 -21
  521. data/generated/google/apis/redis_v1beta1/representations.rb +0 -1
  522. data/generated/google/apis/redis_v1beta1/service.rb +15 -15
  523. data/generated/google/apis/redis_v1beta1.rb +1 -1
  524. data/generated/google/apis/remotebuildexecution_v1/classes.rb +50 -35
  525. data/generated/google/apis/remotebuildexecution_v1/representations.rb +2 -0
  526. data/generated/google/apis/remotebuildexecution_v1/service.rb +7 -7
  527. data/generated/google/apis/remotebuildexecution_v1.rb +1 -1
  528. data/generated/google/apis/remotebuildexecution_v1alpha/classes.rb +48 -33
  529. data/generated/google/apis/remotebuildexecution_v1alpha/representations.rb +2 -0
  530. data/generated/google/apis/remotebuildexecution_v1alpha/service.rb +10 -10
  531. data/generated/google/apis/remotebuildexecution_v1alpha.rb +1 -1
  532. data/generated/google/apis/remotebuildexecution_v2/classes.rb +58 -43
  533. data/generated/google/apis/remotebuildexecution_v2/representations.rb +2 -0
  534. data/generated/google/apis/remotebuildexecution_v2/service.rb +9 -9
  535. data/generated/google/apis/remotebuildexecution_v2.rb +1 -1
  536. data/generated/google/apis/replicapool_v1beta1/service.rb +10 -10
  537. data/generated/google/apis/reseller_v1/classes.rb +32 -39
  538. data/generated/google/apis/reseller_v1/service.rb +18 -18
  539. data/generated/google/apis/reseller_v1.rb +1 -1
  540. data/generated/google/apis/run_v1/classes.rb +73 -0
  541. data/generated/google/apis/run_v1/representations.rb +43 -0
  542. data/generated/google/apis/run_v1/service.rb +90 -0
  543. data/generated/google/apis/run_v1.rb +35 -0
  544. data/generated/google/apis/run_v1alpha1/classes.rb +3882 -0
  545. data/generated/google/apis/run_v1alpha1/representations.rb +1425 -0
  546. data/generated/google/apis/run_v1alpha1/service.rb +2071 -0
  547. data/generated/google/apis/run_v1alpha1.rb +35 -0
  548. data/generated/google/apis/runtimeconfig_v1/classes.rb +11 -11
  549. data/generated/google/apis/runtimeconfig_v1/service.rb +3 -3
  550. data/generated/google/apis/runtimeconfig_v1.rb +1 -1
  551. data/generated/google/apis/runtimeconfig_v1beta1/classes.rb +26 -25
  552. data/generated/google/apis/runtimeconfig_v1beta1/service.rb +22 -22
  553. data/generated/google/apis/runtimeconfig_v1beta1.rb +1 -1
  554. data/generated/google/apis/safebrowsing_v4/service.rb +7 -7
  555. data/generated/google/apis/script_v1/classes.rb +167 -6
  556. data/generated/google/apis/script_v1/representations.rb +79 -1
  557. data/generated/google/apis/script_v1/service.rb +16 -16
  558. data/generated/google/apis/script_v1.rb +1 -1
  559. data/generated/google/apis/searchconsole_v1/service.rb +1 -1
  560. data/generated/google/apis/securitycenter_v1/classes.rb +1627 -0
  561. data/generated/google/apis/securitycenter_v1/representations.rb +569 -0
  562. data/generated/google/apis/securitycenter_v1/service.rb +1110 -0
  563. data/generated/google/apis/securitycenter_v1.rb +35 -0
  564. data/generated/google/apis/securitycenter_v1beta1/classes.rb +1514 -0
  565. data/generated/google/apis/securitycenter_v1beta1/representations.rb +548 -0
  566. data/generated/google/apis/securitycenter_v1beta1/service.rb +1035 -0
  567. data/generated/google/apis/securitycenter_v1beta1.rb +35 -0
  568. data/generated/google/apis/servicebroker_v1/classes.rb +1 -1
  569. data/generated/google/apis/servicebroker_v1/service.rb +3 -3
  570. data/generated/google/apis/servicebroker_v1.rb +1 -1
  571. data/generated/google/apis/servicebroker_v1alpha1/classes.rb +1 -1
  572. data/generated/google/apis/servicebroker_v1alpha1/service.rb +16 -16
  573. data/generated/google/apis/servicebroker_v1alpha1.rb +1 -1
  574. data/generated/google/apis/servicebroker_v1beta1/classes.rb +1 -1
  575. data/generated/google/apis/servicebroker_v1beta1/service.rb +21 -21
  576. data/generated/google/apis/servicebroker_v1beta1.rb +1 -1
  577. data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +453 -149
  578. data/generated/google/apis/serviceconsumermanagement_v1/representations.rb +202 -29
  579. data/generated/google/apis/serviceconsumermanagement_v1/service.rb +148 -62
  580. data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
  581. data/generated/google/apis/servicecontrol_v1/classes.rb +122 -25
  582. data/generated/google/apis/servicecontrol_v1/representations.rb +47 -0
  583. data/generated/google/apis/servicecontrol_v1/service.rb +3 -3
  584. data/generated/google/apis/servicecontrol_v1.rb +1 -1
  585. data/generated/google/apis/servicemanagement_v1/classes.rb +93 -110
  586. data/generated/google/apis/servicemanagement_v1/representations.rb +13 -26
  587. data/generated/google/apis/servicemanagement_v1/service.rb +30 -27
  588. data/generated/google/apis/servicemanagement_v1.rb +1 -1
  589. data/generated/google/apis/servicenetworking_v1/classes.rb +3626 -0
  590. data/generated/google/apis/servicenetworking_v1/representations.rb +1055 -0
  591. data/generated/google/apis/servicenetworking_v1/service.rb +440 -0
  592. data/generated/google/apis/servicenetworking_v1.rb +38 -0
  593. data/generated/google/apis/servicenetworking_v1beta/classes.rb +65 -108
  594. data/generated/google/apis/servicenetworking_v1beta/representations.rb +2 -29
  595. data/generated/google/apis/servicenetworking_v1beta/service.rb +6 -6
  596. data/generated/google/apis/servicenetworking_v1beta.rb +1 -1
  597. data/generated/google/apis/serviceusage_v1/classes.rb +160 -109
  598. data/generated/google/apis/serviceusage_v1/representations.rb +42 -26
  599. data/generated/google/apis/serviceusage_v1/service.rb +17 -19
  600. data/generated/google/apis/serviceusage_v1.rb +1 -1
  601. data/generated/google/apis/serviceusage_v1beta1/classes.rb +161 -110
  602. data/generated/google/apis/serviceusage_v1beta1/representations.rb +42 -26
  603. data/generated/google/apis/serviceusage_v1beta1/service.rb +7 -7
  604. data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
  605. data/generated/google/apis/sheets_v4/classes.rb +115 -26
  606. data/generated/google/apis/sheets_v4/service.rb +17 -17
  607. data/generated/google/apis/sheets_v4.rb +1 -1
  608. data/generated/google/apis/site_verification_v1/service.rb +7 -7
  609. data/generated/google/apis/slides_v1/classes.rb +2 -2
  610. data/generated/google/apis/slides_v1/service.rb +5 -5
  611. data/generated/google/apis/slides_v1.rb +1 -1
  612. data/generated/google/apis/sourcerepo_v1/classes.rb +183 -1
  613. data/generated/google/apis/sourcerepo_v1/representations.rb +45 -0
  614. data/generated/google/apis/sourcerepo_v1/service.rb +45 -10
  615. data/generated/google/apis/sourcerepo_v1.rb +1 -1
  616. data/generated/google/apis/spanner_v1/classes.rb +231 -17
  617. data/generated/google/apis/spanner_v1/representations.rb +66 -0
  618. data/generated/google/apis/spanner_v1/service.rb +92 -42
  619. data/generated/google/apis/spanner_v1.rb +1 -1
  620. data/generated/google/apis/speech_v1/classes.rb +110 -13
  621. data/generated/google/apis/speech_v1/representations.rb +24 -0
  622. data/generated/google/apis/speech_v1/service.rb +9 -7
  623. data/generated/google/apis/speech_v1.rb +1 -1
  624. data/generated/google/apis/speech_v1p1beta1/classes.rb +19 -13
  625. data/generated/google/apis/speech_v1p1beta1/representations.rb +1 -0
  626. data/generated/google/apis/speech_v1p1beta1/service.rb +9 -7
  627. data/generated/google/apis/speech_v1p1beta1.rb +1 -1
  628. data/generated/google/apis/sqladmin_v1beta4/classes.rb +94 -17
  629. data/generated/google/apis/sqladmin_v1beta4/representations.rb +36 -0
  630. data/generated/google/apis/sqladmin_v1beta4/service.rb +44 -44
  631. data/generated/google/apis/sqladmin_v1beta4.rb +1 -1
  632. data/generated/google/apis/storage_v1/classes.rb +201 -4
  633. data/generated/google/apis/storage_v1/representations.rb +76 -1
  634. data/generated/google/apis/storage_v1/service.rb +488 -93
  635. data/generated/google/apis/storage_v1.rb +1 -1
  636. data/generated/google/apis/storage_v1beta1/service.rb +24 -24
  637. data/generated/google/apis/storage_v1beta2/service.rb +34 -34
  638. data/generated/google/apis/storagetransfer_v1/classes.rb +44 -44
  639. data/generated/google/apis/storagetransfer_v1/service.rb +35 -36
  640. data/generated/google/apis/storagetransfer_v1.rb +2 -2
  641. data/generated/google/apis/streetviewpublish_v1/classes.rb +27 -27
  642. data/generated/google/apis/streetviewpublish_v1/service.rb +36 -40
  643. data/generated/google/apis/streetviewpublish_v1.rb +1 -1
  644. data/generated/google/apis/surveys_v2/service.rb +8 -8
  645. data/generated/google/apis/tagmanager_v1/service.rb +49 -95
  646. data/generated/google/apis/tagmanager_v1.rb +1 -1
  647. data/generated/google/apis/tagmanager_v2/classes.rb +197 -292
  648. data/generated/google/apis/tagmanager_v2/representations.rb +62 -103
  649. data/generated/google/apis/tagmanager_v2/service.rb +287 -249
  650. data/generated/google/apis/tagmanager_v2.rb +1 -1
  651. data/generated/google/apis/tasks_v1/service.rb +19 -19
  652. data/generated/google/apis/tasks_v1.rb +2 -2
  653. data/generated/google/apis/testing_v1/classes.rb +44 -39
  654. data/generated/google/apis/testing_v1/representations.rb +3 -1
  655. data/generated/google/apis/testing_v1/service.rb +5 -5
  656. data/generated/google/apis/testing_v1.rb +1 -1
  657. data/generated/google/apis/texttospeech_v1/service.rb +2 -2
  658. data/generated/google/apis/texttospeech_v1.rb +1 -1
  659. data/generated/google/apis/texttospeech_v1beta1/service.rb +2 -2
  660. data/generated/google/apis/texttospeech_v1beta1.rb +1 -1
  661. data/generated/google/apis/toolresults_v1beta3/classes.rb +340 -17
  662. data/generated/google/apis/toolresults_v1beta3/representations.rb +90 -0
  663. data/generated/google/apis/toolresults_v1beta3/service.rb +140 -24
  664. data/generated/google/apis/toolresults_v1beta3.rb +1 -1
  665. data/generated/google/apis/tpu_v1/classes.rb +21 -15
  666. data/generated/google/apis/tpu_v1/representations.rb +1 -0
  667. data/generated/google/apis/tpu_v1/service.rb +17 -17
  668. data/generated/google/apis/tpu_v1.rb +1 -1
  669. data/generated/google/apis/tpu_v1alpha1/classes.rb +21 -15
  670. data/generated/google/apis/tpu_v1alpha1/representations.rb +1 -0
  671. data/generated/google/apis/tpu_v1alpha1/service.rb +17 -17
  672. data/generated/google/apis/tpu_v1alpha1.rb +1 -1
  673. data/generated/google/apis/translate_v2/service.rb +5 -5
  674. data/generated/google/apis/urlshortener_v1/service.rb +3 -3
  675. data/generated/google/apis/vault_v1/classes.rb +44 -18
  676. data/generated/google/apis/vault_v1/representations.rb +4 -0
  677. data/generated/google/apis/vault_v1/service.rb +28 -28
  678. data/generated/google/apis/vault_v1.rb +1 -1
  679. data/generated/google/apis/videointelligence_v1/classes.rb +2193 -350
  680. data/generated/google/apis/videointelligence_v1/representations.rb +805 -6
  681. data/generated/google/apis/videointelligence_v1/service.rb +7 -6
  682. data/generated/google/apis/videointelligence_v1.rb +3 -2
  683. data/generated/google/apis/videointelligence_v1beta2/classes.rb +2448 -605
  684. data/generated/google/apis/videointelligence_v1beta2/representations.rb +806 -7
  685. data/generated/google/apis/videointelligence_v1beta2/service.rb +3 -2
  686. data/generated/google/apis/videointelligence_v1beta2.rb +3 -2
  687. data/generated/google/apis/videointelligence_v1p1beta1/classes.rb +2422 -579
  688. data/generated/google/apis/videointelligence_v1p1beta1/representations.rb +806 -7
  689. data/generated/google/apis/videointelligence_v1p1beta1/service.rb +3 -2
  690. data/generated/google/apis/videointelligence_v1p1beta1.rb +3 -2
  691. data/generated/google/apis/videointelligence_v1p2beta1/classes.rb +2645 -830
  692. data/generated/google/apis/videointelligence_v1p2beta1/representations.rb +796 -12
  693. data/generated/google/apis/videointelligence_v1p2beta1/service.rb +3 -2
  694. data/generated/google/apis/videointelligence_v1p2beta1.rb +3 -2
  695. data/generated/google/apis/videointelligence_v1p3beta1/classes.rb +4687 -0
  696. data/generated/google/apis/videointelligence_v1p3beta1/representations.rb +2005 -0
  697. data/generated/google/apis/videointelligence_v1p3beta1/service.rb +94 -0
  698. data/generated/google/apis/videointelligence_v1p3beta1.rb +36 -0
  699. data/generated/google/apis/vision_v1/classes.rb +4397 -124
  700. data/generated/google/apis/vision_v1/representations.rb +2366 -541
  701. data/generated/google/apis/vision_v1/service.rb +160 -33
  702. data/generated/google/apis/vision_v1.rb +1 -1
  703. data/generated/google/apis/vision_v1p1beta1/classes.rb +4451 -158
  704. data/generated/google/apis/vision_v1p1beta1/representations.rb +2415 -576
  705. data/generated/google/apis/vision_v1p1beta1/service.rb +73 -2
  706. data/generated/google/apis/vision_v1p1beta1.rb +1 -1
  707. data/generated/google/apis/vision_v1p2beta1/classes.rb +4451 -158
  708. data/generated/google/apis/vision_v1p2beta1/representations.rb +2443 -604
  709. data/generated/google/apis/vision_v1p2beta1/service.rb +73 -2
  710. data/generated/google/apis/vision_v1p2beta1.rb +1 -1
  711. data/generated/google/apis/webfonts_v1/service.rb +1 -1
  712. data/generated/google/apis/webmasters_v3/classes.rb +0 -166
  713. data/generated/google/apis/webmasters_v3/representations.rb +0 -93
  714. data/generated/google/apis/webmasters_v3/service.rb +9 -180
  715. data/generated/google/apis/webmasters_v3.rb +1 -1
  716. data/generated/google/apis/websecurityscanner_v1alpha/service.rb +13 -13
  717. data/generated/google/apis/websecurityscanner_v1beta/classes.rb +973 -0
  718. data/generated/google/apis/websecurityscanner_v1beta/representations.rb +452 -0
  719. data/generated/google/apis/websecurityscanner_v1beta/service.rb +548 -0
  720. data/generated/google/apis/websecurityscanner_v1beta.rb +34 -0
  721. data/generated/google/apis/youtube_analytics_v1/service.rb +8 -8
  722. data/generated/google/apis/youtube_analytics_v1beta1/service.rb +8 -8
  723. data/generated/google/apis/youtube_analytics_v2/service.rb +8 -8
  724. data/generated/google/apis/youtube_partner_v1/classes.rb +15 -34
  725. data/generated/google/apis/youtube_partner_v1/representations.rb +4 -17
  726. data/generated/google/apis/youtube_partner_v1/service.rb +74 -74
  727. data/generated/google/apis/youtube_partner_v1.rb +1 -1
  728. data/generated/google/apis/youtube_v3/service.rb +71 -71
  729. data/generated/google/apis/youtube_v3.rb +1 -1
  730. data/generated/google/apis/youtubereporting_v1/classes.rb +2 -2
  731. data/generated/google/apis/youtubereporting_v1/service.rb +8 -8
  732. data/generated/google/apis/youtubereporting_v1.rb +1 -1
  733. data/google-api-client.gemspec +2 -2
  734. data/lib/google/apis/core/http_command.rb +1 -0
  735. data/lib/google/apis/core/json_representation.rb +4 -0
  736. data/lib/google/apis/core/upload.rb +3 -3
  737. data/lib/google/apis/generator/model.rb +1 -1
  738. data/lib/google/apis/generator/templates/_method.tmpl +3 -3
  739. data/lib/google/apis/version.rb +1 -1
  740. metadata +86 -17
  741. data/.kokoro/common.cfg +0 -22
  742. data/.kokoro/windows.sh +0 -32
  743. data/generated/google/apis/logging_v2beta1/classes.rb +0 -1765
  744. data/generated/google/apis/logging_v2beta1/representations.rb +0 -537
  745. data/generated/google/apis/logging_v2beta1/service.rb +0 -570
  746. data/generated/google/apis/logging_v2beta1.rb +0 -46
  747. data/generated/google/apis/partners_v2/classes.rb +0 -2260
  748. data/generated/google/apis/partners_v2/representations.rb +0 -905
  749. data/generated/google/apis/partners_v2/service.rb +0 -1077
  750. data/samples/web/.env +0 -2
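Each entry in the listing above pairs a changed path with its added/removed line counts (`path +added -removed`). As a quick way to total churn across a subset of files, here is a small sketch; the `tally_diffstat` helper is illustrative only and not part of the gem:

```ruby
# Sum added/removed line counts from "path +added -removed" listing lines.
# Lines that do not end in "+N -M" (e.g. headers) are skipped.
def tally_diffstat(lines)
  lines.each_with_object(added: 0, removed: 0) do |line, totals|
    next unless (m = line.match(/\+(\d+)\s+-(\d+)\s*\z/))
    totals[:added]   += m[1].to_i
    totals[:removed] += m[2].to_i
  end
end

# Sample paths copied from the listing above.
sample = [
  "data/generated/google/apis/jobs_v2/classes.rb +16 -17",
  "data/generated/google/apis/jobs_v2/service.rb +17 -17",
  "data/generated/google/apis/jobs_v2.rb +1 -1",
]
totals = tally_diffstat(sample)
puts "+#{totals[:added]} -#{totals[:removed]}"  # => +34 -35
```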
@@ -235,6 +235,184 @@ module Google
         end
       end

+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
+
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1NormalizedVertex>]
+        attr_accessor :vertices
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @vertices = args[:vertices] if args.key?(:vertices)
+        end
+      end
+
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1NormalizedVertex
+        include Google::Apis::Core::Hashable
+
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
+
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
       # Alternative hypotheses (a.k.a. n-best list).
       class GoogleCloudVideointelligenceV1SpeechRecognitionAlternative
         include Google::Apis::Core::Hashable
@@ -302,31 +480,62 @@ module Google
         end
       end

-      # Annotation progress for a single video.
-      class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1TextAnnotation
         include Google::Apis::Core::Hashable

-        # Video file location in
-        # [Google Cloud Storage](https://cloud.google.com/storage/).
-        # Corresponds to the JSON property `inputUri`
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1TextSegment>]
+        attr_accessor :segments
+
+        # The detected text.
+        # Corresponds to the JSON property `text`
         # @return [String]
-        attr_accessor :input_uri
+        attr_accessor :text

-        # Approximate percentage processed thus far. Guaranteed to be
-        # 100 when fully processed.
-        # Corresponds to the JSON property `progressPercent`
-        # @return [Fixnum]
-        attr_accessor :progress_percent
+        def initialize(**args)
+          update!(**args)
+        end

-        # Time when the request was received.
-        # Corresponds to the JSON property `startTime`
-        # @return [String]
-        attr_accessor :start_time
+        # Update properties of this object
+        def update!(**args)
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
+        end
+      end

-        # Time of the most recent update.
-        # Corresponds to the JSON property `updateTime`
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1TextFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        #         0----1
+        #         |    |
+        #         3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        #         2----3
+        #         |    |
+        #         1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
+
+        # Timestamp of this frame.
+        # Corresponds to the JSON property `timeOffset`
         # @return [String]
-        attr_accessor :update_time
+        attr_accessor :time_offset

         def initialize(**args)
           update!(**args)
@@ -334,36 +543,105 @@ module Google
334
543
 
335
544
  # Update properties of this object
336
545
  def update!(**args)
337
- @input_uri = args[:input_uri] if args.key?(:input_uri)
338
- @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
339
- @start_time = args[:start_time] if args.key?(:start_time)
340
- @update_time = args[:update_time] if args.key?(:update_time)
546
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
547
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
341
548
  end
342
549
  end
343
550
 
344
- # Annotation results for a single video.
345
- class GoogleCloudVideointelligenceV1VideoAnnotationResults
551
+ # Video segment level annotation results for text detection.
552
+ class GoogleCloudVideointelligenceV1TextSegment
346
553
  include Google::Apis::Core::Hashable
347
554
 
348
- # The `Status` type defines a logical error model that is suitable for different
349
- # programming environments, including REST APIs and RPC APIs. It is used by
350
- # [gRPC](https://github.com/grpc). The error model is designed to be:
- # - Simple to use and understand for most users
- # - Flexible enough to meet unexpected needs
- # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
- # google.rpc.Code, but it may accept additional error codes if needed. The
- # error message should be a developer-facing English message that helps
- # developers *understand* and *resolve* the error. If a localized user-facing
- # error message is needed, put the localized message in the error details or
- # localize it in the client. The optional error details may contain arbitrary
- # information about the error. There is a predefined set of error detail types
- # in the package `google.rpc` that can be used for common error conditions.
- # # Language mapping
- # The `Status` message is the logical representation of the error model, but it
- # is not necessarily the actual wire format. When the `Status` message is
- # exposed in different client libraries and different wire protocols, it can be
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
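The generated classes added above all follow the same `initialize(**args)`/`update!(**args)` pattern, in which only keys actually present in `args` are assigned. A minimal stand-alone sketch of that pattern (the `TextSegmentStub` class and the sample values are hypothetical, not part of the gem):

```ruby
# Minimal stand-in mirroring the generated classes' update pattern:
# only keys present in args are assigned, so omitted fields keep
# their previous value (nil on first construction).
class TextSegmentStub
  attr_accessor :confidence, :frames, :segment

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @confidence = args[:confidence] if args.key?(:confidence)
    @frames = args[:frames] if args.key?(:frames)
    @segment = args[:segment] if args.key?(:segment)
  end
end

seg = TextSegmentStub.new(confidence: 0.92, frames: [])
seg.update!(segment: "0s..4s") # leaves confidence and frames untouched
```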
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+ include Google::Apis::Core::Hashable
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Approximate percentage processed thus far. Guaranteed to be
+ # 100 when fully processed.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time when the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # Time of the most recent update.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @update_time = args[:update_time] if args.key?(:update_time)
+ end
+ end
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
  # mapped differently. For example, it will likely be mapped to some exceptions
  # in Java, but more likely mapped to some error codes in C.
  # # Other uses
@@ -407,6 +685,11 @@ module Google
  # @return [String]
  attr_accessor :input_uri
 
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
@@ -429,6 +712,13 @@ module Google
  # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1SpeechTranscription>]
  attr_accessor :speech_transcriptions
 
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -439,10 +729,12 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end
 
@@ -745,29 +1037,31 @@ module Google
  end
  end
 
- # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox
  include Google::Apis::Core::Hashable
 
- # The confidence estimate between 0.0 and 1.0. A higher number
- # indicates an estimated greater likelihood that the recognized words are
- # correct. This field is typically provided only for the top hypothesis, and
- # only for `is_final=true` results. Clients should not rely on the
- # `confidence` field as it is not guaranteed to be accurate or consistent.
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
- # Corresponds to the JSON property `confidence`
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
  # @return [Float]
- attr_accessor :confidence
+ attr_accessor :bottom
 
- # Transcript text representing the words that the user spoke.
- # Corresponds to the JSON property `transcript`
- # @return [String]
- attr_accessor :transcript
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
 
- # A list of word-specific information for each recognized word.
- # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2WordInfo>]
- attr_accessor :words
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
 
  def initialize(**args)
  update!(**args)
@@ -775,31 +1069,35 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @transcript = args[:transcript] if args.key?(:transcript)
- @words = args[:words] if args.key?(:words)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
  end
  end
 
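The new NormalizedBoundingBox above stores its four edges as left/top/right/bottom fractions of the image, so derived quantities such as width and height are simple differences. A hypothetical plain-Ruby sketch (the `Box` struct is illustrative, not part of the gem):

```ruby
# Illustrative struct with the same left/top/right/bottom layout as the
# generated NormalizedBoundingBox; all values are fractions of image size.
Box = Struct.new(:left, :top, :right, :bottom) do
  def width
    right - left
  end

  def height
    bottom - top
  end
end

b = Box.new(0.25, 0.1, 0.75, 0.5)
# width and height stay normalized: 0.5 and 0.4 of the image here.
```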
- # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # |    |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # |    |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly
  include Google::Apis::Core::Hashable
 
- # May contain one or more recognition hypotheses (up to the maximum specified
- # in `max_alternatives`). These alternatives are ordered in terms of
- # accuracy, with the top (first) alternative being the most probable, as
- # ranked by the recognizer.
- # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
- attr_accessor :alternatives
-
- # Output only. The
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
- # language in this result. This language code was detected to have the most
- # likelihood of being spoken in the audio.
- # Corresponds to the JSON property `languageCode`
- # @return [String]
- attr_accessor :language_code
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2NormalizedVertex>]
+ attr_accessor :vertices
 
  def initialize(**args)
  update!(**args)
@@ -807,36 +1105,25 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @alternatives = args[:alternatives] if args.key?(:alternatives)
- @language_code = args[:language_code] if args.key?(:language_code)
+ @vertices = args[:vertices] if args.key?(:vertices)
  end
  end
 
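The vertex-order note in the new NormalizedBoundingPoly comment (the order stays (0, 1, 2, 3) even when rotation pushes coordinates outside [0, 1]) can be sketched in plain Ruby; the `Vertex` struct and `rotate_180` helper below are hypothetical, not gem API:

```ruby
Vertex = Struct.new(:x, :y)

# Rotating a point 180 degrees around (cx, cy) maps (x, y) to
# (2*cx - x, 2*cy - y); the order of the vertices is preserved.
def rotate_180(vertices, cx, cy)
  vertices.map { |v| Vertex.new(2 * cx - v.x, 2 * cy - v.y) }
end

# Corners 0..3 of a horizontal text box, clockwise from top-left.
box = [Vertex.new(0.1, 0.1), Vertex.new(0.4, 0.1),
       Vertex.new(0.4, 0.2), Vertex.new(0.1, 0.2)]

rotated = rotate_180(box, 0.1, 0.1)
# rotated[1].x is negative: normalized values may leave [0, 1],
# exactly as the comment above warns.
```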
- # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1beta2NormalizedVertex
  include Google::Apis::Core::Hashable
 
- # Video file location in
- # [Google Cloud Storage](https://cloud.google.com/storage/).
- # Corresponds to the JSON property `inputUri`
- # @return [String]
- attr_accessor :input_uri
-
- # Approximate percentage processed thus far. Guaranteed to be
- # 100 when fully processed.
- # Corresponds to the JSON property `progressPercent`
- # @return [Fixnum]
- attr_accessor :progress_percent
-
- # Time when the request was received.
- # Corresponds to the JSON property `startTime`
- # @return [String]
- attr_accessor :start_time
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
 
- # Time of the most recent update.
- # Corresponds to the JSON property `updateTime`
- # @return [String]
- attr_accessor :update_time
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
 
  def initialize(**args)
  update!(**args)
@@ -844,100 +1131,47 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
- @start_time = args[:start_time] if args.key?(:start_time)
- @update_time = args[:update_time] if args.key?(:update_time)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
  end
  end
 
- # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
- # - Simple to use and understand for most users
- # - Flexible enough to meet unexpected needs
- # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
- # google.rpc.Code, but it may accept additional error codes if needed. The
- # error message should be a developer-facing English message that helps
- # developers *understand* and *resolve* the error. If a localized user-facing
- # error message is needed, put the localized message in the error details or
- # localize it in the client. The optional error details may contain arbitrary
- # information about the error. There is a predefined set of error detail types
- # in the package `google.rpc` that can be used for common error conditions.
- # # Language mapping
- # The `Status` message is the logical representation of the error model, but it
- # is not necessarily the actual wire format. When the `Status` message is
- # exposed in different client libraries and different wire protocols, it can be
- # mapped differently. For example, it will likely be mapped to some exceptions
- # in Java, but more likely mapped to some error codes in C.
- # # Other uses
- # The error model and the `Status` message can be used in a variety of
- # environments, either with or without APIs, to provide a
- # consistent developer experience across different environments.
- # Example uses of this error model include:
- # - Partial errors. If a service needs to return partial errors to the client,
- # it may embed the `Status` in the normal response to indicate the partial
- # errors.
- # - Workflow errors. A typical workflow has multiple steps. Each step may
- # have a `Status` message for error reporting.
- # - Batch operations. If a client uses batch request and batch response, the
- # `Status` message should be used directly inside batch response, one for
- # each error sub-response.
- # - Asynchronous operations. If an API call embeds asynchronous operation
- # results in its response, the status of those operations should be
- # represented directly using the `Status` message.
- # - Logging. If some API errors are stored in logs, the message `Status` could
- # be used directly after any stripping needed for security/privacy reasons.
- # Corresponds to the JSON property `error`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus]
- attr_accessor :error
-
- # Explicit content annotation (based on per-frame visual signals only).
- # If no explicit content has been detected in a frame, no annotations are
- # present for that frame.
- # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
- attr_accessor :explicit_annotation
-
- # Label annotations on frame level.
- # There is exactly one element for each unique label.
- # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
- attr_accessor :frame_label_annotations
-
- # Video file location in
- # [Google Cloud Storage](https://cloud.google.com/storage/).
- # Corresponds to the JSON property `inputUri`
- # @return [String]
- attr_accessor :input_uri
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
 
- # Label annotations on video level or user specified segment level.
- # There is exactly one element for each unique label.
- # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
- attr_accessor :segment_label_annotations
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2Entity]
+ attr_accessor :entity
 
- # Shot annotations. Each shot is represented as a video segment.
- # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
- attr_accessor :shot_annotations
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame>]
+ attr_accessor :frames
 
- # Label annotations on shot level.
- # There is exactly one element for each unique label.
- # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
- attr_accessor :shot_label_annotations
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
 
- # Speech transcription.
- # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
- attr_accessor :speech_transcriptions
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
 
  def initialize(**args)
  update!(**args)
@@ -945,32 +1179,30 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @error = args[:error] if args.key?(:error)
- @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
- @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
- @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
- @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
- @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
  end
  end
 
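Per the streaming-mode note on `track_id` above, each streaming response carries a single frame per annotation, and results belonging to the same object are correlated by that id. A hypothetical sketch, with plain hashes standing in for streamed ObjectTrackingAnnotation messages (ids and offsets are made up for illustration):

```ruby
# Plain hashes standing in for streamed ObjectTrackingAnnotation results.
streamed = [
  { track_id: 7, time_offset: "0.1s" },
  { track_id: 9, time_offset: "0.1s" },
  { track_id: 7, time_offset: "0.2s" },
]

# Accumulate the ongoing annotations of each object by its track_id.
tracks = streamed.group_by { |a| a[:track_id] }
```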
- # Video segment.
- class GoogleCloudVideointelligenceV1beta2VideoSegment
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame
  include Google::Apis::Core::Hashable
 
- # Time-offset, relative to the beginning of the video,
- # corresponding to the end of the segment (inclusive).
- # Corresponds to the JSON property `endTimeOffset`
- # @return [String]
- attr_accessor :end_time_offset
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
 
- # Time-offset, relative to the beginning of the video,
- # corresponding to the start of the segment (inclusive).
- # Corresponds to the JSON property `startTimeOffset`
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
  # @return [String]
- attr_accessor :start_time_offset
+ attr_accessor :time_offset
 
  def initialize(**args)
  update!(**args)
@@ -978,55 +1210,66 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
- @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
- # Word-specific information for recognized words. Word information is only
- # included in the response when certain request parameters are set, such
- # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1beta2WordInfo
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable
 
- # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+ # The confidence estimate between 0.0 and 1.0. A higher number
  # indicates an estimated greater likelihood that the recognized words are
- # correct. This field is set only for the top alternative.
- # This field is not guaranteed to be accurate and users should not rely on it
- # to be always provided.
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
  # The default of 0.0 is a sentinel value indicating `confidence` was not set.
  # Corresponds to the JSON property `confidence`
  # @return [Float]
  attr_accessor :confidence
 
- # Time offset relative to the beginning of the audio, and
- # corresponding to the end of the spoken word. This field is only set if
- # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
- # experimental feature and the accuracy of the time offset can vary.
- # Corresponds to the JSON property `endTime`
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
  # @return [String]
- attr_accessor :end_time
+ attr_accessor :transcript
 
- # Output only. A distinct integer value is assigned for every speaker within
- # the audio. This field specifies which one of those speakers was detected to
- # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
- # and is only set if speaker diarization is enabled.
- # Corresponds to the JSON property `speakerTag`
- # @return [Fixnum]
- attr_accessor :speaker_tag
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2WordInfo>]
+ attr_accessor :words
 
- # Time offset relative to the beginning of the audio, and
- # corresponding to the start of the spoken word. This field is only set if
- # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
- # experimental feature and the accuracy of the time offset can vary.
- # Corresponds to the JSON property `startTime`
- # @return [String]
- attr_accessor :start_time
+ def initialize(**args)
+ update!(**args)
+ end
 
- # The word corresponding to this set of information.
- # Corresponds to the JSON property `word`
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
  # @return [String]
- attr_accessor :word
+ attr_accessor :language_code
 
  def initialize(**args)
  update!(**args)
@@ -1034,24 +1277,26 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @end_time = args[:end_time] if args.key?(:end_time)
- @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
- @start_time = args[:start_time] if args.key?(:start_time)
- @word = args[:word] if args.key?(:word)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
  end
  end
 
- # Video annotation progress. Included in the `metadata`
- # field of the `Operation` returned by the `GetOperation`
- # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1beta2TextAnnotation
  include Google::Apis::Core::Hashable
 
- # Progress metadata for all videos specified in `AnnotateVideoRequest`.
- # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
- attr_accessor :annotation_progress
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2TextSegment>]
+ attr_accessor :segments
+
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text
 
  def initialize(**args)
  update!(**args)
@@ -1059,20 +1304,40 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
  end
  end
 
- # Video annotation response. Included in the `response`
- # field of the `Operation` returned by the `GetOperation`
- # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1beta2TextFrame
  include Google::Apis::Core::Hashable
 
- # Annotation results for all videos specified in `AnnotateVideoRequest`.
- # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
- attr_accessor :annotation_results
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # |    |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # |    |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
+
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
 
  def initialize(**args)
  update!(**args)
@@ -1080,53 +1345,1822 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
- # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1p1beta1Entity
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1beta2TextSegment
  include Google::Apis::Core::Hashable
 
- # Textual description, e.g. `Fixed-gear bicycle`.
- # Corresponds to the JSON property `description`
- # @return [String]
- attr_accessor :description
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
 
- # Opaque entity ID. Some IDs may be available in
- # [Google Knowledge Graph Search
- # API](https://developers.google.com/knowledge-graph/).
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+ include Google::Apis::Core::Hashable
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Approximate percentage processed thus far. Guaranteed to be
+ # 100 when fully processed.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time when the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # Time of the most recent update.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @update_time = args[:update_time] if args.key?(:update_time)
+ end
+ end
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
+ # - Logging. If some API errors are stored in logs, the message `Status` could
+ # be used directly after any stripping needed for security/privacy reasons.
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus]
+ attr_accessor :error
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ # Corresponds to the JSON property `explicitAnnotation`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
+ attr_accessor :explicit_annotation
+
+ # Label annotations on frame level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `frameLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :frame_label_annotations
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
+ # Label annotations on video level or user specified segment level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `segmentLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :segment_label_annotations
+
+ # Shot annotations. Each shot is represented as a video segment.
+ # Corresponds to the JSON property `shotAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+ attr_accessor :shot_annotations
+
+ # Label annotations on shot level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `shotLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ attr_accessor :shot_label_annotations
+
+ # Speech transcription.
+ # Corresponds to the JSON property `speechTranscriptions`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
+ attr_accessor :speech_transcriptions
+
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1beta2TextAnnotation>]
+ attr_accessor :text_annotations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @error = args[:error] if args.key?(:error)
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+ @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+ @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+ @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+ @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+ end
+ end
+
+ # Video segment.
+ class GoogleCloudVideointelligenceV1beta2VideoSegment
+ include Google::Apis::Core::Hashable
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the end of the segment (inclusive).
+ # Corresponds to the JSON property `endTimeOffset`
+ # @return [String]
+ attr_accessor :end_time_offset
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the start of the segment (inclusive).
+ # Corresponds to the JSON property `startTimeOffset`
+ # @return [String]
+ attr_accessor :start_time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
+ @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+ end
+ end
+
+ # Word-specific information for recognized words. Word information is only
+ # included in the response when certain request parameters are set, such
+ # as `enable_word_time_offsets`.
+ class GoogleCloudVideointelligenceV1beta2WordInfo
+ include Google::Apis::Core::Hashable
+
+ # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is set only for the top alternative.
+ # This field is not guaranteed to be accurate and users should not rely on it
+ # to be always provided.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the end of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # Output only. A distinct integer value is assigned for every speaker within
+ # the audio. This field specifies which one of those speakers was detected to
+ # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
+ # and is only set if speaker diarization is enabled.
+ # Corresponds to the JSON property `speakerTag`
+ # @return [Fixnum]
+ attr_accessor :speaker_tag
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the start of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # The word corresponding to this set of information.
+ # Corresponds to the JSON property `word`
+ # @return [String]
+ attr_accessor :word
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @word = args[:word] if args.key?(:word)
+ end
+ end
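The `startTime`/`endTime`/`timeOffset` fields above are `String`s because the REST layer serializes protobuf `Duration` values as decimal seconds with an `s` suffix (e.g. `"12.5s"`). A minimal sketch of consuming such values, assuming that format and using plain hashes in place of the generated `WordInfo` objects (field names match the API; the helper names are our own):

```ruby
# Convert a JSON-mapped protobuf Duration string such as "12.5s"
# into a Float number of seconds. Assumes the "<decimal>s" shape
# used by the Video Intelligence REST responses.
def duration_to_seconds(duration)
  unless duration =~ /\A(-?\d+(?:\.\d+)?)s\z/
    raise ArgumentError, "unexpected duration: #{duration.inspect}"
  end
  Regexp.last_match(1).to_f
end

# Group recognized words by speaker_tag (set when diarization is on).
def words_by_speaker(words)
  words.group_by { |w| w[:speaker_tag] }
end

words = [
  { word: "hello", speaker_tag: 1, start_time: "0s",   end_time: "0.4s" },
  { word: "world", speaker_tag: 2, start_time: "0.5s", end_time: "1.1s" }
]
puts duration_to_seconds(words.last[:end_time]) # prints 1.1
```

This is a sketch, not part of the generated client; with the real gem the same fields are reached via accessors (`word_info.end_time`) rather than hash keys.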
+
+ # Video annotation progress. Included in the `metadata`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+ include Google::Apis::Core::Hashable
+
+ # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationProgress`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
+ attr_accessor :annotation_progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ end
+ end
+
+ # Video annotation response. Included in the `response`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+ include Google::Apis::Core::Hashable
+
+ # Annotation results for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationResults`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
+ attr_accessor :annotation_results
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ end
+ end
+
+ # Detected entity from video analysis.
+ class GoogleCloudVideointelligenceV1p1beta1Entity
+ include Google::Apis::Core::Hashable
+
+ # Textual description, e.g. `Fixed-gear bicycle`.
+ # Corresponds to the JSON property `description`
+ # @return [String]
+ attr_accessor :description
+
+ # Opaque entity ID. Some IDs may be available in
+ # [Google Knowledge Graph Search
+ # API](https://developers.google.com/knowledge-graph/).
+ # Corresponds to the JSON property `entityId`
+ # @return [String]
+ attr_accessor :entity_id
+
+ # Language code for `description` in BCP-47 format.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @description = args[:description] if args.key?(:description)
+ @entity_id = args[:entity_id] if args.key?(:entity_id)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video frames where explicit content was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
+ attr_accessor :frames
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @frames = args[:frames] if args.key?(:frames)
+ end
+ end
+
+ # Video frame level annotation results for explicit content.
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+ include Google::Apis::Core::Hashable
+
+ # Likelihood of the pornography content.
+ # Corresponds to the JSON property `pornographyLikelihood`
+ # @return [String]
+ attr_accessor :pornography_likelihood
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Label annotation.
+ class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Common categories for the detected entity.
+ # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+ # cases there might be more than one category, e.g. `Terrier` could also be
+ # a `pet`.
+ # Corresponds to the JSON property `categoryEntities`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1Entity>]
+ attr_accessor :category_entities
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1Entity]
+ attr_accessor :entity
+
+ # All video frames where a label was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
+ attr_accessor :frames
+
+ # All video segments where a label was detected.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
+ attr_accessor :segments
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @category_entities = args[:category_entities] if args.key?(:category_entities)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segments = args[:segments] if args.key?(:segments)
+ end
+ end
+
+ # Video frame level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox
+ include Google::Apis::Core::Hashable
+
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
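The vertex-order note in the `NormalizedBoundingPoly` docs can be checked directly: rotating the box 180 degrees about its top-left corner moves every vertex but keeps the (0, 1, 2, 3) order, and can push coordinates outside [0, 1]. An illustrative sketch in plain Ruby (not part of the generated client; vertices are `[x, y]` pairs rather than `NormalizedVertex` objects):

```ruby
# Rotate vertices 180 degrees around a pivot point.
# For a point (x, y) and pivot (px, py), the image is (2*px - x, 2*py - y).
def rotate180(vertices, pivot)
  px, py = pivot
  vertices.map { |x, y| [2 * px - x, 2 * py - y] }
end

# Axis-aligned box, clockwise from the top-left corner (vertex 0).
box = [[0.1, 0.1], [0.4, 0.1], [0.4, 0.2], [0.1, 0.2]]
rotated = rotate180(box, box.first)
# Vertex 0 stays at the pivot, while vertex 2 lands near (-0.2, 0.0),
# i.e. outside the [0, 1] range, just as the comment warns.
p rotated
```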
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
+
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
+ include Google::Apis::Core::Hashable
+
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
+
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
+ attr_accessor :words
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
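Because the alternatives arrive ranked by the recognizer and `confidence` is set only on the top hypothesis (with 0.0 as a not-set sentinel), a consumer typically just takes the first entry. A hedged sketch using plain hashes in place of the generated `SpeechRecognitionAlternative` objects (the `top_transcript` helper is our own, not part of the gem):

```ruby
# Pick the top transcript from an n-best list. The API returns the
# alternatives already ordered by rank, so the first element wins;
# a confidence of 0.0 is the not-set sentinel and is reported as nil.
def top_transcript(alternatives)
  best = alternatives.first
  return nil unless best
  confidence = best[:confidence].to_f
  { transcript: best[:transcript],
    confidence: confidence.zero? ? nil : confidence }
end

alternatives = [
  { transcript: "hello world", confidence: 0.92 },
  { transcript: "hello word" } # lower-ranked, confidence unset
]
p top_transcript(alternatives)
```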
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1p1beta1TextAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1TextSegment>]
+ attr_accessor :segments
+
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
+ end
+ end
+
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1p1beta1TextFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
2134
+ # the box.
2135
+ # Corresponds to the JSON property `rotatedBoundingBox`
2136
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly]
2137
+ attr_accessor :rotated_bounding_box
2138
+
2139
+ # Timestamp of this frame.
2140
+ # Corresponds to the JSON property `timeOffset`
2141
+ # @return [String]
2142
+ attr_accessor :time_offset
2143
+
2144
+ def initialize(**args)
2145
+ update!(**args)
2146
+ end
2147
+
2148
+ # Update properties of this object
2149
+ def update!(**args)
2150
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
2151
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
2152
+ end
2153
+ end
2154
+
2155
+ # Video segment level annotation results for text detection.
2156
+ class GoogleCloudVideointelligenceV1p1beta1TextSegment
2157
+ include Google::Apis::Core::Hashable
2158
+
2159
+ # Confidence for the track of detected text. It is calculated as the highest
2160
+ # over all frames where OCR detected text appears.
2161
+ # Corresponds to the JSON property `confidence`
2162
+ # @return [Float]
2163
+ attr_accessor :confidence
2164
+
2165
+ # Information related to the frames where OCR detected text appears.
2166
+ # Corresponds to the JSON property `frames`
2167
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1TextFrame>]
2168
+ attr_accessor :frames
2169
+
2170
+ # Video segment.
2171
+ # Corresponds to the JSON property `segment`
2172
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
2173
+ attr_accessor :segment
2174
+
2175
+ def initialize(**args)
2176
+ update!(**args)
2177
+ end
2178
+
2179
+ # Update properties of this object
2180
+ def update!(**args)
2181
+ @confidence = args[:confidence] if args.key?(:confidence)
2182
+ @frames = args[:frames] if args.key?(:frames)
2183
+ @segment = args[:segment] if args.key?(:segment)
2184
+ end
2185
+ end
2186
+
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
+ include Google::Apis::Core::Hashable
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Approximate percentage processed thus far. Guaranteed to be
+ # 100 when fully processed.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time when the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # Time of the most recent update.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @update_time = args[:update_time] if args.key?(:update_time)
+ end
+ end
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
+ # - Logging. If some API errors are stored in logs, the message `Status` could
+ # be used directly after any stripping needed for security/privacy reasons.
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus]
+ attr_accessor :error
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ # Corresponds to the JSON property `explicitAnnotation`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
+ attr_accessor :explicit_annotation
+
+ # Label annotations on frame level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `frameLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ attr_accessor :frame_label_annotations
+
+ # Video file location in
+ # [Google Cloud Storage](https://cloud.google.com/storage/).
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
+ # Label annotations on video level or user specified segment level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `segmentLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ attr_accessor :segment_label_annotations
+
+ # Shot annotations. Each shot is represented as a video segment.
+ # Corresponds to the JSON property `shotAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+ attr_accessor :shot_annotations
+
+ # Label annotations on shot level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `shotLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ attr_accessor :shot_label_annotations
+
+ # Speech transcription.
+ # Corresponds to the JSON property `speechTranscriptions`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+ attr_accessor :speech_transcriptions
+
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>]
+ attr_accessor :text_annotations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @error = args[:error] if args.key?(:error)
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+ @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+ @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+ @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+ @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+ end
+ end
+
+ # Video segment.
+ class GoogleCloudVideointelligenceV1p1beta1VideoSegment
+ include Google::Apis::Core::Hashable
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the end of the segment (inclusive).
+ # Corresponds to the JSON property `endTimeOffset`
+ # @return [String]
+ attr_accessor :end_time_offset
+
+ # Time-offset, relative to the beginning of the video,
+ # corresponding to the start of the segment (inclusive).
+ # Corresponds to the JSON property `startTimeOffset`
+ # @return [String]
+ attr_accessor :start_time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
+ @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+ end
+ end
+
+ # Word-specific information for recognized words. Word information is only
+ # included in the response when certain request parameters are set, such
+ # as `enable_word_time_offsets`.
+ class GoogleCloudVideointelligenceV1p1beta1WordInfo
+ include Google::Apis::Core::Hashable
+
+ # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is set only for the top alternative.
+ # This field is not guaranteed to be accurate and users should not rely on it
+ # to be always provided.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the end of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # Output only. A distinct integer value is assigned for every speaker within
+ # the audio. This field specifies which one of those speakers was detected to
+ # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
+ # and is only set if speaker diarization is enabled.
+ # Corresponds to the JSON property `speakerTag`
+ # @return [Fixnum]
+ attr_accessor :speaker_tag
+
+ # Time offset relative to the beginning of the audio, and
+ # corresponding to the start of the spoken word. This field is only set if
+ # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+ # experimental feature and the accuracy of the time offset can vary.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ # The word corresponding to this set of information.
+ # Corresponds to the JSON property `word`
+ # @return [String]
+ attr_accessor :word
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @word = args[:word] if args.key?(:word)
+ end
+ end
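All of the generated classes above share one construction pattern: `initialize` forwards keyword arguments to `update!`, which copies only the keys that were actually passed. A minimal, self-contained sketch of that pattern (the `WordInfoSketch` class name is illustrative, not part of the gem):

```ruby
# Stand-in for the generated Hashable pattern: initialize takes keyword
# args; update! assigns only the keys that were explicitly provided, so
# absent fields stay untouched on partial updates.
class WordInfoSketch
  attr_accessor :word, :confidence, :speaker_tag

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @word = args[:word] if args.key?(:word)
    @confidence = args[:confidence] if args.key?(:confidence)
    @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
  end
end

info = WordInfoSketch.new(word: 'hello', confidence: 0.92)
info.update!(speaker_tag: 1) # later partial update leaves other fields intact
```

The `args.key?` guard is what lets these objects distinguish an explicitly passed `nil` from an absent key, which matters when building sparse JSON payloads.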
+
+ # Video annotation progress. Included in the `metadata`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+ include Google::Apis::Core::Hashable
+
+ # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationProgress`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+ attr_accessor :annotation_progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ end
+ end
+
+ # Video annotation request.
+ class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoRequest
+ include Google::Apis::Core::Hashable
+
+ # Requested video annotation features.
+ # Corresponds to the JSON property `features`
+ # @return [Array<String>]
+ attr_accessor :features
+
+ # The video data bytes.
+ # If unset, the input video(s) should be specified via `input_uri`.
+ # If set, `input_uri` should be unset.
+ # Corresponds to the JSON property `inputContent`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :input_content
+
+ # Input video location. Currently, only
+ # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
+ # supported, which must be specified in the following format:
+ # `gs://bucket-id/object-id` (other URI formats return
+ # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+ # [Request URIs](/storage/docs/reference-uris).
+ # A video URI may include wildcards in `object-id`, and thus identify
+ # multiple videos. Supported wildcards: '*' to match 0 or more characters;
+ # '?' to match 1 character. If unset, the input video should be embedded
+ # in the request as `input_content`. If set, `input_content` should be unset.
+ # Corresponds to the JSON property `inputUri`
+ # @return [String]
+ attr_accessor :input_uri
+
+ # Optional cloud region where annotation should take place. Supported cloud
+ # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
+ # is specified, a region will be determined based on video file location.
+ # Corresponds to the JSON property `locationId`
+ # @return [String]
+ attr_accessor :location_id
+
+ # Optional location where the output (in JSON format) should be stored.
+ # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
+ # URIs are supported, which must be specified in the following format:
+ # `gs://bucket-id/object-id` (other URI formats return
+ # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+ # [Request URIs](/storage/docs/reference-uris).
+ # Corresponds to the JSON property `outputUri`
+ # @return [String]
+ attr_accessor :output_uri
+
+ # Video context and/or feature-specific parameters.
+ # Corresponds to the JSON property `videoContext`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoContext]
+ attr_accessor :video_context
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @features = args[:features] if args.key?(:features)
+ @input_content = args[:input_content] if args.key?(:input_content)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @location_id = args[:location_id] if args.key?(:location_id)
+ @output_uri = args[:output_uri] if args.key?(:output_uri)
+ @video_context = args[:video_context] if args.key?(:video_context)
+ end
+ end
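Per the field comments on `AnnotateVideoRequest`, exactly one of `input_uri` or `input_content` should be set. A hedged sketch of that client-side rule (the helper name is illustrative, not part of the generated client):

```ruby
# Returns true only when exactly one video source is present, mirroring
# the documented input_uri / input_content exclusivity.
def video_source_valid?(input_uri: nil, input_content: nil)
  input_uri.nil? ^ input_content.nil? # boolean XOR: exactly one must be set
end

video_source_valid?(input_uri: 'gs://bucket-id/object-id')  # => true
video_source_valid?(input_uri: 'gs://bucket-id/object-id',
                    input_content: 'dmlkZW8=')              # => false
```

Supplying neither source is also invalid, which the XOR form covers for free.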
+
+ # Video annotation response. Included in the `response`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+ include Google::Apis::Core::Hashable
+
+ # Annotation results for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationResults`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+ attr_accessor :annotation_results
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ end
+ end
+
+ # Detected entity from video analysis.
+ class GoogleCloudVideointelligenceV1p2beta1Entity
+ include Google::Apis::Core::Hashable
+
+ # Textual description, e.g. `Fixed-gear bicycle`.
+ # Corresponds to the JSON property `description`
+ # @return [String]
+ attr_accessor :description
+
+ # Opaque entity ID. Some IDs may be available in
+ # [Google Knowledge Graph Search
+ # API](https://developers.google.com/knowledge-graph/).
  # Corresponds to the JSON property `entityId`
  # @return [String]
- attr_accessor :entity_id
+ attr_accessor :entity_id
+
+ # Language code for `description` in BCP-47 format.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @description = args[:description] if args.key?(:description)
+ @entity_id = args[:entity_id] if args.key?(:entity_id)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video frames where explicit content was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+ attr_accessor :frames
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @frames = args[:frames] if args.key?(:frames)
+ end
+ end
+
+ # Config for EXPLICIT_CONTENT_DETECTION.
+ class GoogleCloudVideointelligenceV1p2beta1ExplicitContentDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # Model to use for explicit content detection.
+ # Supported values: "builtin/stable" (the default if unset) and
+ # "builtin/latest".
+ # Corresponds to the JSON property `model`
+ # @return [String]
+ attr_accessor :model
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @model = args[:model] if args.key?(:model)
+ end
+ end
+
+ # Video frame level annotation results for explicit content.
+ class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+ include Google::Apis::Core::Hashable
+
+ # Likelihood of the pornography content.
+ # Corresponds to the JSON property `pornographyLikelihood`
+ # @return [String]
+ attr_accessor :pornography_likelihood
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Label annotation.
+ class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Common categories for the detected entity.
+ # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+ # cases there might be more than one category, e.g. `Terrier` could also be
+ # a `pet`.
+ # Corresponds to the JSON property `categoryEntities`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity>]
+ attr_accessor :category_entities
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ attr_accessor :entity
+
+ # All video frames where a label was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+ attr_accessor :frames
+
+ # All video segments where a label was detected.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+ attr_accessor :segments
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @category_entities = args[:category_entities] if args.key?(:category_entities)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segments = args[:segments] if args.key?(:segments)
+ end
+ end
+
+ # Config for LABEL_DETECTION.
+ class GoogleCloudVideointelligenceV1p2beta1LabelDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # The confidence threshold used to filter labels from
+ # frame-level detection. If not set, it is set to 0.4 by default. The valid
+ # range for this threshold is [0.1, 0.9]. Any value set outside of this
+ # range will be clipped.
+ # Note: for best results please follow the default threshold. We will update
+ # the default threshold every time we release a new model.
+ # Corresponds to the JSON property `frameConfidenceThreshold`
+ # @return [Float]
+ attr_accessor :frame_confidence_threshold
+
+ # What labels should be detected with LABEL_DETECTION, in addition to
+ # video-level labels or segment-level labels.
+ # If unspecified, defaults to `SHOT_MODE`.
+ # Corresponds to the JSON property `labelDetectionMode`
+ # @return [String]
+ attr_accessor :label_detection_mode
+
+ # Model to use for label detection.
+ # Supported values: "builtin/stable" (the default if unset) and
+ # "builtin/latest".
+ # Corresponds to the JSON property `model`
+ # @return [String]
+ attr_accessor :model
+
+ # Whether the video has been shot from a stationary (i.e. non-moving) camera.
+ # When set to true, might improve detection accuracy for moving objects.
+ # Should be used with `SHOT_AND_FRAME_MODE` enabled.
+ # Corresponds to the JSON property `stationaryCamera`
+ # @return [Boolean]
+ attr_accessor :stationary_camera
+ alias_method :stationary_camera?, :stationary_camera
+
+ # The confidence threshold used to filter labels from
+ # video-level and shot-level detections. If not set, it is set to 0.3 by
+ # default. The valid range for this threshold is [0.1, 0.9]. Any value set
+ # outside of this range will be clipped.
+ # Note: for best results please follow the default threshold. We will update
+ # the default threshold every time we release a new model.
+ # Corresponds to the JSON property `videoConfidenceThreshold`
+ # @return [Float]
+ attr_accessor :video_confidence_threshold
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @frame_confidence_threshold = args[:frame_confidence_threshold] if args.key?(:frame_confidence_threshold)
+ @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
+ @model = args[:model] if args.key?(:model)
+ @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
+ @video_confidence_threshold = args[:video_confidence_threshold] if args.key?(:video_confidence_threshold)
+ end
+ end
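Both threshold fields above document the same resolution rule: an unset value falls back to a model-specific default, and anything outside [0.1, 0.9] is clipped. An illustrative clamp of that documented behavior (the helper name is not part of the gem):

```ruby
# Resolve a confidence threshold the way the config comments describe:
# nil -> documented default; out-of-range values clipped to [0.1, 0.9].
def resolve_threshold(value, default)
  return default if value.nil?
  value.clamp(0.1, 0.9)
end

resolve_threshold(nil, 0.4)  # => 0.4 (frame-level default)
resolve_threshold(0.05, 0.4) # => 0.1 (clipped up)
resolve_threshold(0.95, 0.3) # => 0.9 (clipped down)
```

`Comparable#clamp` (Ruby 2.4+) keeps in-range values untouched, matching the "will be clipped" wording.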
+
+ # Video frame level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
+ include Google::Apis::Core::Hashable
+
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
+
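The `NormalizedBoundingBox` fields added above are fractions of the frame size, so consumers typically scale them by the frame dimensions. A minimal sketch of that conversion (the helper name `to_pixel_box` and the sample values are hypothetical; only the `left`/`top`/`right`/`bottom` field names come from the class above):

```ruby
# Convert a normalized bounding box (left/top/right/bottom in [0, 1],
# as in NormalizedBoundingBox) into pixel coordinates for a frame.
def to_pixel_box(box, frame_width, frame_height)
  {
    left:   (box[:left]   * frame_width).round,
    top:    (box[:top]    * frame_height).round,
    right:  (box[:right]  * frame_width).round,
    bottom: (box[:bottom] * frame_height).round
  }
end

box = { left: 0.1, top: 0.25, right: 0.5, bottom: 0.75 }
to_pixel_box(box, 1920, 1080)
# => {:left=>192, :top=>270, :right=>960, :bottom=>810}
```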
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
+
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
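An `ObjectTrackingAnnotation` carries a per-track confidence plus the per-frame locations listed above. A sketch of how results shaped like that JSON might be consumed, using plain hashes (the sample data is hypothetical; the `confidence`, `entity`, `frames`, and `timeOffset` keys come from the classes above):

```ruby
# Pick the most confident object track from a batch result and list
# the time offsets of the frames where it appears.
tracks = [
  { 'entity' => { 'description' => 'dog' }, 'confidence' => 0.91,
    'frames' => [{ 'timeOffset' => '0s' }, { 'timeOffset' => '0.5s' }] },
  { 'entity' => { 'description' => 'cat' }, 'confidence' => 0.40,
    'frames' => [{ 'timeOffset' => '1s' }] }
]

best = tracks.max_by { |t| t['confidence'] }
offsets = best['frames'].map { |f| f['timeOffset'] }
# best['entity']['description'] is "dog"; offsets is ["0s", "0.5s"]
```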
+ # Config for SHOT_CHANGE_DETECTION.
+ class GoogleCloudVideointelligenceV1p2beta1ShotChangeDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # Model to use for shot change detection.
+ # Supported values: "builtin/stable" (the default if unset) and
+ # "builtin/latest".
+ # Corresponds to the JSON property `model`
+ # @return [String]
+ attr_accessor :model
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @model = args[:model] if args.key?(:model)
+ end
+ end
+
+ # Provides "hints" to the speech recognizer to favor specific words and phrases
+ # in the results.
+ class GoogleCloudVideointelligenceV1p2beta1SpeechContext
+ include Google::Apis::Core::Hashable
+
+ # *Optional* A list of strings containing words and phrases "hints" so that
+ # the speech recognition is more likely to recognize them. This can be used
+ # to improve the accuracy for specific words and phrases, for example, if
+ # specific commands are typically spoken by the user. This can also be used
+ # to add additional words to the vocabulary of the recognizer. See
+ # [usage limits](https://cloud.google.com/speech/limits#content).
+ # Corresponds to the JSON property `phrases`
+ # @return [Array<String>]
+ attr_accessor :phrases
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @phrases = args[:phrases] if args.key?(:phrases)
+ end
+ end
+
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
+ include Google::Apis::Core::Hashable
+
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
+
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+ attr_accessor :words
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Config for SPEECH_TRANSCRIPTION.
+ class GoogleCloudVideointelligenceV1p2beta1SpeechTranscriptionConfig
+ include Google::Apis::Core::Hashable
+
+ # *Optional* For file formats, such as MXF or MKV, supporting multiple audio
+ # tracks, specify up to two tracks. Default: track 0.
+ # Corresponds to the JSON property `audioTracks`
+ # @return [Array<Fixnum>]
+ attr_accessor :audio_tracks
+
+ # *Optional*
+ # If set, specifies the estimated number of speakers in the conversation.
+ # If not set, defaults to '2'.
+ # Ignored unless enable_speaker_diarization is set to true.
+ # Corresponds to the JSON property `diarizationSpeakerCount`
+ # @return [Fixnum]
+ attr_accessor :diarization_speaker_count
+
+ # *Optional* If 'true', adds punctuation to recognition result hypotheses.
+ # This feature is only available in select languages. Setting this for
+ # requests in other languages has no effect at all. The default 'false' value
+ # does not add punctuation to result hypotheses. NOTE: "This is currently
+ # offered as an experimental service, complimentary to all users. In the
+ # future this may be exclusively available as a premium feature."
+ # Corresponds to the JSON property `enableAutomaticPunctuation`
+ # @return [Boolean]
+ attr_accessor :enable_automatic_punctuation
+ alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+
+ # *Optional* If 'true', enables speaker detection for each recognized word in
+ # the top alternative of the recognition result using a speaker_tag provided
+ # in the WordInfo.
+ # Note: When this is true, we send all the words from the beginning of the
+ # audio for the top alternative in every consecutive responses.
+ # This is done in order to improve our speaker tags as our models learn to
+ # identify the speakers in the conversation over time.
+ # Corresponds to the JSON property `enableSpeakerDiarization`
+ # @return [Boolean]
+ attr_accessor :enable_speaker_diarization
+ alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+
+ # *Optional* If `true`, the top result includes a list of words and the
+ # confidence for those words. If `false`, no word-level confidence
+ # information is returned. The default is `false`.
+ # Corresponds to the JSON property `enableWordConfidence`
+ # @return [Boolean]
+ attr_accessor :enable_word_confidence
+ alias_method :enable_word_confidence?, :enable_word_confidence
+
+ # *Optional* If set to `true`, the server will attempt to filter out
+ # profanities, replacing all but the initial character in each filtered word
+ # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
+ # won't be filtered out.
+ # Corresponds to the JSON property `filterProfanity`
+ # @return [Boolean]
+ attr_accessor :filter_profanity
+ alias_method :filter_profanity?, :filter_profanity
 
- # Language code for `description` in BCP-47 format.
+ # *Required* The language of the supplied audio as a
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
+ # Example: "en-US".
+ # See [Language Support](https://cloud.google.com/speech/docs/languages)
+ # for a list of the currently supported language codes.
  # Corresponds to the JSON property `languageCode`
  # @return [String]
  attr_accessor :language_code
 
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @description = args[:description] if args.key?(:description)
- @entity_id = args[:entity_id] if args.key?(:entity_id)
- @language_code = args[:language_code] if args.key?(:language_code)
- end
- end
-
- # Explicit content annotation (based on per-frame visual signals only).
- # If no explicit content has been detected in a frame, no annotations are
- # present for that frame.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
- include Google::Apis::Core::Hashable
+ # *Optional* Maximum number of recognition hypotheses to be returned.
+ # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
+ # within each `SpeechTranscription`. The server may return fewer than
+ # `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will
+ # return a maximum of one. If omitted, will return a maximum of one.
+ # Corresponds to the JSON property `maxAlternatives`
+ # @return [Fixnum]
+ attr_accessor :max_alternatives
 
- # All video frames where explicit content was detected.
- # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
- attr_accessor :frames
+ # *Optional* A means to provide context to assist the speech recognition.
+ # Corresponds to the JSON property `speechContexts`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechContext>]
+ attr_accessor :speech_contexts
 
  def initialize(**args)
  update!(**args)
@@ -1134,24 +3168,33 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @frames = args[:frames] if args.key?(:frames)
+ @audio_tracks = args[:audio_tracks] if args.key?(:audio_tracks)
+ @diarization_speaker_count = args[:diarization_speaker_count] if args.key?(:diarization_speaker_count)
+ @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
+ @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
+ @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
+ @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
+ @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
  end
  end
 
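The `SpeechTranscriptionConfig` fields above serialize to camelCase JSON properties. A sketch of the JSON shape a request might carry (property names are taken from the "Corresponds to the JSON property" comments above; the values are purely illustrative):

```ruby
require 'json'

# Illustrative speechTranscriptionConfig payload; only languageCode is
# documented as *Required* above.
speech_config = {
  'languageCode' => 'en-US',
  'audioTracks' => [0],
  'enableAutomaticPunctuation' => true,
  'maxAlternatives' => 2,
  'speechContexts' => [{ 'phrases' => ['VideoIntelligence'] }]
}

JSON.generate(speech_config).include?('"languageCode":"en-US"')
# => true
```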
- # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
  include Google::Apis::Core::Hashable
 
- # Likelihood of the pornography content..
- # Corresponds to the JSON property `pornographyLikelihood`
- # @return [String]
- attr_accessor :pornography_likelihood
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+ attr_accessor :segments
 
- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
- # Corresponds to the JSON property `timeOffset`
+ # The detected text.
+ # Corresponds to the JSON property `text`
  # @return [String]
- attr_accessor :time_offset
+ attr_accessor :text
 
  def initialize(**args)
  update!(**args)
@@ -1159,37 +3202,22 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
- @time_offset = args[:time_offset] if args.key?(:time_offset)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
  end
  end
 
- # Label annotation.
- class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+ # Config for TEXT_DETECTION.
+ class GoogleCloudVideointelligenceV1p2beta1TextDetectionConfig
  include Google::Apis::Core::Hashable
 
- # Common categories for the detected entity.
- # E.g. when the label is `Terrier` the category is likely `dog`. And in some
- # cases there might be more than one categories e.g. `Terrier` could also be
- # a `pet`.
- # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1Entity>]
- attr_accessor :category_entities
-
- # Detected entity from video analysis.
- # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1Entity]
- attr_accessor :entity
-
- # All video frames where a label was detected.
- # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
- attr_accessor :frames
-
- # All video segments where a label was detected.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
- attr_accessor :segments
+ # Language hint can be specified if the language to be detected is known a
+ # priori. It can increase the accuracy of the detection. Language hint must
+ # be language code in BCP-47 format.
+ # Automatic language detection is performed if no hint is provided.
+ # Corresponds to the JSON property `languageHints`
+ # @return [Array<String>]
+ attr_accessor :language_hints
 
  def initialize(**args)
  update!(**args)
@@ -1197,24 +3225,36 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @category_entities = args[:category_entities] if args.key?(:category_entities)
- @entity = args[:entity] if args.key?(:entity)
- @frames = args[:frames] if args.key?(:frames)
- @segments = args[:segments] if args.key?(:segments)
+ @language_hints = args[:language_hints] if args.key?(:language_hints)
  end
  end
 
- # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1p2beta1TextFrame
  include Google::Apis::Core::Hashable
 
- # Confidence that the label is accurate. Range: [0, 1].
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
 
- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
+ # Timestamp of this frame.
  # Corresponds to the JSON property `timeOffset`
  # @return [String]
  attr_accessor :time_offset
@@ -1225,23 +3265,29 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
  @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
- # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextSegment
  include Google::Apis::Core::Hashable
 
- # Confidence that the label is accurate. Range: [0, 1].
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
  # Corresponds to the JSON property `confidence`
  # @return [Float]
  attr_accessor :confidence
 
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+ attr_accessor :frames
+
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
  attr_accessor :segment
 
  def initialize(**args)
@@ -1251,79 +3297,13 @@ module Google
  # Update properties of this object
  def update!(**args)
  @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
  @segment = args[:segment] if args.key?(:segment)
  end
  end
 
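A `TextAnnotation` groups one detected snippet's `text` with its `segments`, each of which carries a segment-level `confidence`. A sketch of walking results shaped like that JSON with plain hashes (the sample strings and scores are hypothetical; the key names come from the classes above):

```ruby
# Collect each detected text snippet with the best confidence among
# its segments.
annotations = [
  { 'text' => 'EXIT', 'segments' => [{ 'confidence' => 0.97 }, { 'confidence' => 0.88 }] },
  { 'text' => 'OPEN', 'segments' => [{ 'confidence' => 0.75 }] }
]

best = annotations.map do |a|
  [a['text'], a['segments'].map { |s| s['confidence'] }.max]
end
# => [["EXIT", 0.97], ["OPEN", 0.75]]
```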
1258
- # Alternative hypotheses (a.k.a. n-best list).
1259
- class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
1260
- include Google::Apis::Core::Hashable
1261
-
1262
- # The confidence estimate between 0.0 and 1.0. A higher number
1263
- # indicates an estimated greater likelihood that the recognized words are
1264
- # correct. This field is typically provided only for the top hypothesis, and
1265
- # only for `is_final=true` results. Clients should not rely on the
1266
- # `confidence` field as it is not guaranteed to be accurate or consistent.
1267
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
1268
- # Corresponds to the JSON property `confidence`
1269
- # @return [Float]
1270
- attr_accessor :confidence
1271
-
1272
- # Transcript text representing the words that the user spoke.
1273
- # Corresponds to the JSON property `transcript`
1274
- # @return [String]
1275
- attr_accessor :transcript
1276
-
1277
- # A list of word-specific information for each recognized word.
1278
- # Corresponds to the JSON property `words`
1279
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
1280
- attr_accessor :words
1281
-
1282
- def initialize(**args)
1283
- update!(**args)
1284
- end
1285
-
1286
- # Update properties of this object
1287
- def update!(**args)
1288
- @confidence = args[:confidence] if args.key?(:confidence)
1289
- @transcript = args[:transcript] if args.key?(:transcript)
1290
- @words = args[:words] if args.key?(:words)
1291
- end
1292
- end
1293
-
1294
- # A speech recognition result corresponding to a portion of the audio.
1295
- class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
1296
- include Google::Apis::Core::Hashable
1297
-
1298
- # May contain one or more recognition hypotheses (up to the maximum specified
1299
- # in `max_alternatives`). These alternatives are ordered in terms of
1300
- # accuracy, with the top (first) alternative being the most probable, as
1301
- # ranked by the recognizer.
1302
- # Corresponds to the JSON property `alternatives`
1303
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
1304
- attr_accessor :alternatives
1305
-
1306
- # Output only. The
1307
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
1308
- # language in this result. This language code was detected to have the most
1309
- # likelihood of being spoken in the audio.
1310
- # Corresponds to the JSON property `languageCode`
1311
- # @return [String]
1312
- attr_accessor :language_code
1313
-
1314
- def initialize(**args)
1315
- update!(**args)
1316
- end
1317
-
1318
- # Update properties of this object
1319
- def update!(**args)
1320
- @alternatives = args[:alternatives] if args.key?(:alternatives)
1321
- @language_code = args[:language_code] if args.key?(:language_code)
1322
- end
1323
- end
1324
-
1325
3305
  # Annotation progress for a single video.
1326
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
3306
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
1327
3307
  include Google::Apis::Core::Hashable
1328
3308
 
1329
3309
  # Video file location in
@@ -1362,17 +3342,17 @@ module Google
1362
3342
  end
1363
3343
 
1364
3344
  # Annotation results for a single video.
1365
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
3345
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
1366
3346
  include Google::Apis::Core::Hashable
1367
3347
 
1368
- # The `Status` type defines a logical error model that is suitable for different
1369
- # programming environments, including REST APIs and RPC APIs. It is used by
1370
- # [gRPC](https://github.com/grpc). The error model is designed to be:
3348
+ # The `Status` type defines a logical error model that is suitable for
3349
+ # different programming environments, including REST APIs and RPC APIs. It is
3350
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
1371
3351
  # - Simple to use and understand for most users
1372
3352
  # - Flexible enough to meet unexpected needs
1373
3353
  # # Overview
1374
- # The `Status` message contains three pieces of data: error code, error message,
1375
- # and error details. The error code should be an enum value of
3354
+ # The `Status` message contains three pieces of data: error code, error
3355
+ # message, and error details. The error code should be an enum value of
1376
3356
  # google.rpc.Code, but it may accept additional error codes if needed. The
1377
3357
  # error message should be a developer-facing English message that helps
1378
3358
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1412,13 +3392,13 @@ module Google
1412
3392
  # If no explicit content has been detected in a frame, no annotations are
1413
3393
  # present for that frame.
1414
3394
  # Corresponds to the JSON property `explicitAnnotation`
1415
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
3395
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
1416
3396
  attr_accessor :explicit_annotation
1417
3397
 
1418
3398
  # Label annotations on frame level.
1419
3399
  # There is exactly one element for each unique label.
1420
3400
  # Corresponds to the JSON property `frameLabelAnnotations`
1421
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
3401
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
1422
3402
  attr_accessor :frame_label_annotations
1423
3403
 
1424
3404
  # Video file location in
@@ -1427,28 +3407,40 @@ module Google
  # @return [String]
  attr_accessor :input_uri

+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations

  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
  attr_accessor :shot_annotations

  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :shot_label_annotations

  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
  attr_accessor :speech_transcriptions

+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -1459,15 +3451,68 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+ end
+ end
+
+ # Video context and/or feature-specific parameters.
+ class GoogleCloudVideointelligenceV1p2beta1VideoContext
+ include Google::Apis::Core::Hashable
+
+ # Config for EXPLICIT_CONTENT_DETECTION.
+ # Corresponds to the JSON property `explicitContentDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentDetectionConfig]
+ attr_accessor :explicit_content_detection_config
+
+ # Config for LABEL_DETECTION.
+ # Corresponds to the JSON property `labelDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelDetectionConfig]
+ attr_accessor :label_detection_config
+
+ # Video segments to annotate. The segments may overlap and are not required
+ # to be contiguous or span the whole video. If unspecified, each video is
+ # treated as a single segment.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
+ attr_accessor :segments
+
+ # Config for SHOT_CHANGE_DETECTION.
+ # Corresponds to the JSON property `shotChangeDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ShotChangeDetectionConfig]
+ attr_accessor :shot_change_detection_config
+
+ # Config for SPEECH_TRANSCRIPTION.
+ # Corresponds to the JSON property `speechTranscriptionConfig`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscriptionConfig]
+ attr_accessor :speech_transcription_config
+
+ # Config for TEXT_DETECTION.
+ # Corresponds to the JSON property `textDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextDetectionConfig]
+ attr_accessor :text_detection_config
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
+ @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
+ @segments = args[:segments] if args.key?(:segments)
+ @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
+ @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+ @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
  end
  end

  # Video segment.
- class GoogleCloudVideointelligenceV1p1beta1VideoSegment
+ class GoogleCloudVideointelligenceV1p2beta1VideoSegment
  include Google::Apis::Core::Hashable

  # Time-offset, relative to the beginning of the video,
@@ -1496,7 +3541,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1p1beta1WordInfo
+ class GoogleCloudVideointelligenceV1p2beta1WordInfo
  include Google::Apis::Core::Hashable

  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1555,12 +3600,12 @@ module Google
  # Video annotation progress. Included in the `metadata`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoProgress
  include Google::Apis::Core::Hashable

  # Progress metadata for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress>]
  attr_accessor :annotation_progress

  def initialize(**args)
@@ -1573,83 +3618,15 @@ module Google
  end
  end

- # Video annotation request.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoRequest
- include Google::Apis::Core::Hashable
-
- # Requested video annotation features.
- # Corresponds to the JSON property `features`
- # @return [Array<String>]
- attr_accessor :features
-
- # The video data bytes.
- # If unset, the input video(s) should be specified via `input_uri`.
- # If set, `input_uri` should be unset.
- # Corresponds to the JSON property `inputContent`
- # NOTE: Values are automatically base64 encoded/decoded in the client library.
- # @return [String]
- attr_accessor :input_content
-
- # Input video location. Currently, only
- # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
- # supported, which must be specified in the following format:
- # `gs://bucket-id/object-id` (other URI formats return
- # google.rpc.Code.INVALID_ARGUMENT). For more information, see
- # [Request URIs](/storage/docs/reference-uris).
- # A video URI may include wildcards in `object-id`, and thus identify
- # multiple videos. Supported wildcards: '*' to match 0 or more characters;
- # '?' to match 1 character. If unset, the input video should be embedded
- # in the request as `input_content`. If set, `input_content` should be unset.
- # Corresponds to the JSON property `inputUri`
- # @return [String]
- attr_accessor :input_uri
-
- # Optional cloud region where annotation should take place. Supported cloud
- # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
- # is specified, a region will be determined based on video file location.
- # Corresponds to the JSON property `locationId`
- # @return [String]
- attr_accessor :location_id
-
- # Optional location where the output (in JSON format) should be stored.
- # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
- # URIs are supported, which must be specified in the following format:
- # `gs://bucket-id/object-id` (other URI formats return
- # google.rpc.Code.INVALID_ARGUMENT). For more information, see
- # [Request URIs](/storage/docs/reference-uris).
- # Corresponds to the JSON property `outputUri`
- # @return [String]
- attr_accessor :output_uri
-
- # Video context and/or feature-specific parameters.
- # Corresponds to the JSON property `videoContext`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoContext]
- attr_accessor :video_context
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @features = args[:features] if args.key?(:features)
- @input_content = args[:input_content] if args.key?(:input_content)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @location_id = args[:location_id] if args.key?(:location_id)
- @output_uri = args[:output_uri] if args.key?(:output_uri)
- @video_context = args[:video_context] if args.key?(:video_context)
- end
- end
-
  # Video annotation response. Included in the `response`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoResponse
  include Google::Apis::Core::Hashable

  # Annotation results for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults>]
  attr_accessor :annotation_results

  def initialize(**args)
@@ -1663,7 +3640,7 @@ module Google
  end

  # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1p2beta1Entity
+ class GoogleCloudVideointelligenceV1p3beta1Entity
  include Google::Apis::Core::Hashable

  # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1698,12 +3675,12 @@ module Google
  # Explicit content annotation (based on per-frame visual signals only).
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
  include Google::Apis::Core::Hashable

  # All video frames where explicit content was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame>]
  attr_accessor :frames

  def initialize(**args)
@@ -1716,29 +3693,8 @@ module Google
  end
  end

- # Config for EXPLICIT_CONTENT_DETECTION.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentDetectionConfig
- include Google::Apis::Core::Hashable
-
- # Model to use for explicit content detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @model = args[:model] if args.key?(:model)
- end
- end
-
  # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame
  include Google::Apis::Core::Hashable

  # Likelihood of the pornography content..
@@ -1764,7 +3720,7 @@ module Google
  end

  # Label annotation.
- class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1LabelAnnotation
  include Google::Apis::Core::Hashable

  # Common categories for the detected entity.
@@ -1772,22 +3728,22 @@ module Google
  # cases there might be more than one categories e.g. `Terrier` could also be
  # a `pet`.
  # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1Entity>]
  attr_accessor :category_entities

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity

  # All video frames where a label was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelFrame>]
  attr_accessor :frames

  # All video segments where a label was detected.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelSegment>]
  attr_accessor :segments

  def initialize(**args)
@@ -1803,46 +3759,8 @@ module Google
  end
  end

- # Config for LABEL_DETECTION.
- class GoogleCloudVideointelligenceV1p2beta1LabelDetectionConfig
- include Google::Apis::Core::Hashable
-
- # What labels should be detected with LABEL_DETECTION, in addition to
- # video-level labels or segment-level labels.
- # If unspecified, defaults to `SHOT_MODE`.
- # Corresponds to the JSON property `labelDetectionMode`
- # @return [String]
- attr_accessor :label_detection_mode
-
- # Model to use for label detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
-
- # Whether the video has been shot from a stationary (i.e. non-moving) camera.
- # When set to true, might improve detection accuracy for moving objects.
- # Should be used with `SHOT_AND_FRAME_MODE` enabled.
- # Corresponds to the JSON property `stationaryCamera`
- # @return [Boolean]
- attr_accessor :stationary_camera
- alias_method :stationary_camera?, :stationary_camera
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
- @model = args[:model] if args.key?(:model)
- @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
- end
- end
-
  # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+ class GoogleCloudVideointelligenceV1p3beta1LabelFrame
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -1868,7 +3786,7 @@ module Google
  end

  # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+ class GoogleCloudVideointelligenceV1p3beta1LabelSegment
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -1878,7 +3796,7 @@ module Google

  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  def initialize(**args)
@@ -1895,7 +3813,7 @@ module Google
  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable

  # Bottom Y coordinate.
@@ -1946,12 +3864,12 @@ module Google
  # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
  # than 0, or greater than 1 due to trignometric calculations for location of
  # the box.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly
  include Google::Apis::Core::Hashable

  # Normalized vertices of the bounding polygon.
  # Corresponds to the JSON property `vertices`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedVertex>]
  attr_accessor :vertices

  def initialize(**args)
@@ -1967,7 +3885,7 @@ module Google
  # A vertex represents a 2D point in the image.
  # NOTE: the normalized vertex coordinates are relative to the original image
  # and range from 0 to 1.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedVertex
  include Google::Apis::Core::Hashable

  # X coordinate.
@@ -1992,7 +3910,7 @@ module Google
  end

  # Annotations corresponding to one tracked object.
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable

  # Object category's labeling confidence of this track.
@@ -2002,7 +3920,7 @@ module Google

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity

  # Information corresponding to all frames where this object track appears.
@@ -2010,12 +3928,12 @@ module Google
  # messages in frames.
  # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame>]
  attr_accessor :frames

  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  # Streaming mode ONLY.
@@ -2044,67 +3962,20 @@ module Google

  # Video frame level annotations for object detection and tracking. This field
  # stores per frame location, time offset, and confidence.
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable

  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
- # Corresponds to the JSON property `normalizedBoundingBox`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
- attr_accessor :normalized_bounding_box
-
- # The timestamp of the frame in microseconds.
- # Corresponds to the JSON property `timeOffset`
- # @return [String]
- attr_accessor :time_offset
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
- @time_offset = args[:time_offset] if args.key?(:time_offset)
- end
- end
-
- # Config for SHOT_CHANGE_DETECTION.
- class GoogleCloudVideointelligenceV1p2beta1ShotChangeDetectionConfig
- include Google::Apis::Core::Hashable
-
- # Model to use for shot change detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @model = args[:model] if args.key?(:model)
- end
- end
-
- # Provides "hints" to the speech recognizer to favor specific words and phrases
- # in the results.
- class GoogleCloudVideointelligenceV1p2beta1SpeechContext
- include Google::Apis::Core::Hashable
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box

- # *Optional* A list of strings containing words and phrases "hints" so that
- # the speech recognition is more likely to recognize them. This can be used
- # to improve the accuracy for specific words and phrases, for example, if
- # specific commands are typically spoken by the user. This can also be used
- # to add additional words to the vocabulary of the recognizer. See
- # [usage limits](https://cloud.google.com/speech/limits#content).
- # Corresponds to the JSON property `phrases`
- # @return [Array<String>]
- attr_accessor :phrases
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset

  def initialize(**args)
  update!(**args)
@@ -2112,12 +3983,13 @@ module Google

  # Update properties of this object
  def update!(**args)
- @phrases = args[:phrases] if args.key?(:phrases)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end

  # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
+ class GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable

  # The confidence estimate between 0.0 and 1.0. A higher number
@@ -2137,7 +4009,7 @@ module Google

  # A list of word-specific information for each recognized word.
  # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1WordInfo>]
  attr_accessor :words

  def initialize(**args)
@@ -2153,7 +4025,7 @@ module Google
  end

  # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
+ class GoogleCloudVideointelligenceV1p3beta1SpeechTranscription
  include Google::Apis::Core::Hashable

  # May contain one or more recognition hypotheses (up to the maximum specified
@@ -2161,7 +4033,7 @@ module Google
  # accuracy, with the top (first) alternative being the most probable, as
  # ranked by the recognizer.
  # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative>]
  attr_accessor :alternatives

  # Output only. The
@@ -2183,86 +4055,107 @@ module Google
2183
4055
  end
2184
4056
  end
2185
4057
 
2186
- # Config for SPEECH_TRANSCRIPTION.
2187
- class GoogleCloudVideointelligenceV1p2beta1SpeechTranscriptionConfig
4058
+ # `StreamingAnnotateVideoResponse` is the only message returned to the client
4059
+ # by `StreamingAnnotateVideo`. A series of zero or more
4060
+ # `StreamingAnnotateVideoResponse` messages are streamed back to the client.
4061
+ class GoogleCloudVideointelligenceV1p3beta1StreamingAnnotateVideoResponse
2188
4062
  include Google::Apis::Core::Hashable
2189
4063
 
2190
- # *Optional* For file formats, such as MXF or MKV, supporting multiple audio
2191
- # tracks, specify up to two tracks. Default: track 0.
2192
- # Corresponds to the JSON property `audioTracks`
2193
- # @return [Array<Fixnum>]
2194
- attr_accessor :audio_tracks
4064
+ # Streaming annotation results corresponding to a portion of the video
4065
+ # that is currently being processed.
4066
+ # Corresponds to the JSON property `annotationResults`
4067
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults]
4068
+ attr_accessor :annotation_results
2195
4069
 
2196
- # *Optional*
2197
- # If set, specifies the estimated number of speakers in the conversation.
2198
- # If not set, defaults to '2'.
2199
- # Ignored unless enable_speaker_diarization is set to true.
2200
- # Corresponds to the JSON property `diarizationSpeakerCount`
2201
- # @return [Fixnum]
2202
- attr_accessor :diarization_speaker_count
4070
+ # GCS URI that stores annotation results of one streaming session.
4071
+ # It is a directory that can hold multiple files in JSON format.
+ # Example uri format:
+ # gs://bucket_id/object_id/cloud_project_name-session_id
+ # Corresponds to the JSON property `annotationResultsUri`
+ # @return [String]
+ attr_accessor :annotation_results_uri
 
- # *Optional* If 'true', adds punctuation to recognition result hypotheses.
- # This feature is only available in select languages. Setting this for
- # requests in other languages has no effect at all. The default 'false' value
- # does not add punctuation to result hypotheses. NOTE: "This is currently
- # offered as an experimental service, complimentary to all users. In the
- # future this may be exclusively available as a premium feature."
- # Corresponds to the JSON property `enableAutomaticPunctuation`
- # @return [Boolean]
- attr_accessor :enable_automatic_punctuation
- alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
+ # - Logging. If some API errors are stored in logs, the message `Status` could
+ # be used directly after any stripping needed for security/privacy reasons.
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus]
+ attr_accessor :error
 
- # *Optional* If 'true', enables speaker detection for each recognized word in
- # the top alternative of the recognition result using a speaker_tag provided
- # in the WordInfo.
- # Note: When this is true, we send all the words from the beginning of the
- # audio for the top alternative in every consecutive responses.
- # This is done in order to improve our speaker tags as our models learn to
- # identify the speakers in the conversation over time.
- # Corresponds to the JSON property `enableSpeakerDiarization`
- # @return [Boolean]
- attr_accessor :enable_speaker_diarization
- alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+ def initialize(**args)
+ update!(**args)
+ end
 
- # *Optional* If `true`, the top result includes a list of words and the
- # confidence for those words. If `false`, no word-level confidence
- # information is returned. The default is `false`.
- # Corresponds to the JSON property `enableWordConfidence`
- # @return [Boolean]
- attr_accessor :enable_word_confidence
- alias_method :enable_word_confidence?, :enable_word_confidence
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ @annotation_results_uri = args[:annotation_results_uri] if args.key?(:annotation_results_uri)
+ @error = args[:error] if args.key?(:error)
+ end
+ end
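The `annotationResultsUri` comment above describes a Cloud Storage directory path of the form `gs://bucket_id/object_id/cloud_project_name-session_id`. The generated client returns this URI as a plain String; a minimal standalone sketch (plain Ruby, no gem dependency, helper name hypothetical) of splitting such a URI into its bucket and object components:

```ruby
# Split a gs:// URI of the form shown above into [bucket, object].
# The generated client does no parsing itself; this is illustrative only.
def split_gcs_uri(uri)
  raise ArgumentError, "not a gs:// URI: #{uri}" unless uri.start_with?("gs://")
  # partition splits on the first "/" only, so the object keeps its own slashes
  bucket, _, object = uri.delete_prefix("gs://").partition("/")
  [bucket, object]
end

bucket, object = split_gcs_uri("gs://bucket_id/object_id/cloud_project_name-session_id")
```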
 
- # *Optional* If set to `true`, the server will attempt to filter out
- # profanities, replacing all but the initial character in each filtered word
- # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
- # won't be filtered out.
- # Corresponds to the JSON property `filterProfanity`
- # @return [Boolean]
- attr_accessor :filter_profanity
- alias_method :filter_profanity?, :filter_profanity
+ # Streaming annotation results corresponding to a portion of the video
+ # that is currently being processed.
+ class GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults
+ include Google::Apis::Core::Hashable
 
- # *Required* The language of the supplied audio as a
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
- # Example: "en-US".
- # See [Language Support](https://cloud.google.com/speech/docs/languages)
- # for a list of the currently supported language codes.
- # Corresponds to the JSON property `languageCode`
- # @return [String]
- attr_accessor :language_code
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ # Corresponds to the JSON property `explicitAnnotation`
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
+ attr_accessor :explicit_annotation
 
- # *Optional* Maximum number of recognition hypotheses to be returned.
- # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
- # within each `SpeechTranscription`. The server may return fewer than
- # `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will
- # return a maximum of one. If omitted, will return a maximum of one.
- # Corresponds to the JSON property `maxAlternatives`
- # @return [Fixnum]
- attr_accessor :max_alternatives
+ # Label annotation results.
+ # Corresponds to the JSON property `labelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
+ attr_accessor :label_annotations
 
- # *Optional* A means to provide context to assist the speech recognition.
- # Corresponds to the JSON property `speechContexts`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechContext>]
- attr_accessor :speech_contexts
+ # Object tracking results.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
+ # Shot annotation results. Each shot is represented as a video segment.
+ # Corresponds to the JSON property `shotAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
+ attr_accessor :shot_annotations
 
  def initialize(**args)
  update!(**args)
@@ -2270,27 +4163,22 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @audio_tracks = args[:audio_tracks] if args.key?(:audio_tracks)
- @diarization_speaker_count = args[:diarization_speaker_count] if args.key?(:diarization_speaker_count)
- @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
- @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
- @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
- @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
- @language_code = args[:language_code] if args.key?(:language_code)
- @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
- @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+ @label_annotations = args[:label_annotations] if args.key?(:label_annotations)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  end
  end
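Every generated class in this file follows the same `Google::Apis::Core::Hashable` construction pattern shown above: `initialize` forwards keyword arguments to `update!`, which assigns only the keys actually supplied, so omitted fields keep their previous value (or stay nil). A minimal standalone sketch of that pattern (plain Ruby, class name hypothetical):

```ruby
# Minimal reimplementation of the generated construction pattern:
# only keys present in **args are assigned; everything else is untouched.
class StreamingResultsSketch
  attr_accessor :label_annotations, :shot_annotations

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @label_annotations = args[:label_annotations] if args.key?(:label_annotations)
    @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  end
end

r = StreamingResultsSketch.new(label_annotations: ["cat"])
```

Note that `args.key?` (rather than a truthiness check) is what lets a caller explicitly set a field to `nil` or `false` while leaving unmentioned fields alone.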
 
  # Annotations related to one detected OCR text snippet. This will contain the
  # corresponding text, confidence value, and frame level information for each
  # detection.
- class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1TextAnnotation
  include Google::Apis::Core::Hashable
 
  # All video segments where OCR detected text appears.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1TextSegment>]
  attr_accessor :segments
 
  # The detected text.
@@ -2309,32 +4197,10 @@ module Google
  end
  end
 
- # Config for TEXT_DETECTION.
- class GoogleCloudVideointelligenceV1p2beta1TextDetectionConfig
- include Google::Apis::Core::Hashable
-
- # Language hint can be specified if the language to be detected is known a
- # priori. It can increase the accuracy of the detection. Language hint must
- # be language code in BCP-47 format.
- # Automatic language detection is performed if no hint is provided.
- # Corresponds to the JSON property `languageHints`
- # @return [Array<String>]
- attr_accessor :language_hints
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @language_hints = args[:language_hints] if args.key?(:language_hints)
- end
- end
-
  # Video frame level annotation results for text annotation (OCR).
  # Contains information regarding timestamp and bounding box locations for the
  # frames containing detected OCR text snippets.
- class GoogleCloudVideointelligenceV1p2beta1TextFrame
+ class GoogleCloudVideointelligenceV1p3beta1TextFrame
  include Google::Apis::Core::Hashable
 
  # Normalized bounding polygon for text (that might not be aligned with axis).
@@ -2353,7 +4219,7 @@ module Google
  # than 0, or greater than 1 due to trigonometric calculations for location of
  # the box.
  # Corresponds to the JSON property `rotatedBoundingBox`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly]
  attr_accessor :rotated_bounding_box
 
  # Timestamp of this frame.
@@ -2373,7 +4239,7 @@ module Google
  end
 
  # Video segment level annotation results for text detection.
- class GoogleCloudVideointelligenceV1p2beta1TextSegment
+ class GoogleCloudVideointelligenceV1p3beta1TextSegment
  include Google::Apis::Core::Hashable
 
  # Confidence for the track of detected text. It is calculated as the highest
@@ -2384,12 +4250,12 @@ module Google
 
  # Information related to the frames where OCR detected text appears.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1TextFrame>]
  attr_accessor :frames
 
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment
 
  def initialize(**args)
@@ -2405,7 +4271,7 @@ module Google
  end
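The `TextSegment` doc comment above states that the segment's confidence "is calculated as the highest over all frames." A standalone sketch of that aggregation (plain Ruby; frames are represented as hashes here, where the real client uses `TextFrame` objects):

```ruby
# Aggregate frame-level confidences the way the TextSegment doc describes:
# the segment confidence is the highest confidence over all of its frames.
def segment_confidence(frames)
  # max returns nil for an empty collection, so fall back to 0.0
  frames.map { |f| f[:confidence] }.max || 0.0
end
```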
 
 
  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress
  include Google::Apis::Core::Hashable
 
  # Video file location in
@@ -2444,17 +4310,17 @@ module Google
  end
 
  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
  include Google::Apis::Core::Hashable
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2494,13 +4360,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
  attr_accessor :explicit_annotation
 
  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :frame_label_annotations
 
  # Video file location in
@@ -2511,36 +4377,36 @@ module Google
 
  # Annotations for list of objects detected and tracked in video.
  # Corresponds to the JSON property `objectAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
  attr_accessor :object_annotations
 
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations
 
  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
  attr_accessor :shot_annotations
 
  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :shot_label_annotations
 
  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]
  attr_accessor :speech_transcriptions
 
  # OCR text detection and tracking.
  # Annotations for list of detected text snippets. Each will have list of
  # frame information associated with it.
  # Corresponds to the JSON property `textAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
  attr_accessor :text_annotations
 
  def initialize(**args)
@@ -2562,59 +4428,8 @@ module Google
  end
  end
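The `Status` comment embedded in `VideoAnnotationResults` above describes the code/message/details triple and its "partial errors" use: a per-item result may embed a `Status` alongside its data. A standalone sketch of that check (plain Ruby `Struct`s; `Result` and its fields are hypothetical, loosely mirroring `GoogleRpcStatus` and the per-video results):

```ruby
# A Status-like value: code 0 conventionally means OK in google.rpc.Code.
Status = Struct.new(:code, :message, :details)

# One per-video result that may carry a partial error alongside its data,
# as the "Partial errors" bullet in the comment above describes.
Result = Struct.new(:input_uri, :error) do
  def failed?
    !error.nil? && error.code != 0
  end
end

results = [
  Result.new("gs://bucket/ok.mp4", nil),
  Result.new("gs://bucket/bad.mp4", Status.new(3, "invalid argument", [])),
]
failed = results.select(&:failed?).map(&:input_uri)
```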
 
- # Video context and/or feature-specific parameters.
- class GoogleCloudVideointelligenceV1p2beta1VideoContext
- include Google::Apis::Core::Hashable
-
- # Config for EXPLICIT_CONTENT_DETECTION.
- # Corresponds to the JSON property `explicitContentDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentDetectionConfig]
- attr_accessor :explicit_content_detection_config
-
- # Config for LABEL_DETECTION.
- # Corresponds to the JSON property `labelDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelDetectionConfig]
- attr_accessor :label_detection_config
-
- # Video segments to annotate. The segments may overlap and are not required
- # to be contiguous or span the whole video. If unspecified, each video is
- # treated as a single segment.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
- attr_accessor :segments
-
- # Config for SHOT_CHANGE_DETECTION.
- # Corresponds to the JSON property `shotChangeDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ShotChangeDetectionConfig]
- attr_accessor :shot_change_detection_config
-
- # Config for SPEECH_TRANSCRIPTION.
- # Corresponds to the JSON property `speechTranscriptionConfig`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscriptionConfig]
- attr_accessor :speech_transcription_config
-
- # Config for TEXT_DETECTION.
- # Corresponds to the JSON property `textDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextDetectionConfig]
- attr_accessor :text_detection_config
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
- @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
- @segments = args[:segments] if args.key?(:segments)
- @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
- @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
- @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
- end
- end
-
  # Video segment.
- class GoogleCloudVideointelligenceV1p2beta1VideoSegment
+ class GoogleCloudVideointelligenceV1p3beta1VideoSegment
  include Google::Apis::Core::Hashable
 
  # Time-offset, relative to the beginning of the video,
@@ -2643,7 +4458,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1p2beta1WordInfo
+ class GoogleCloudVideointelligenceV1p3beta1WordInfo
  include Google::Apis::Core::Hashable
 
  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -2712,14 +4527,14 @@ module Google
  attr_accessor :done
  alias_method :done?, :done
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2796,14 +4611,14 @@ module Google
  end
  end
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing