google-api-client 0.28.4 → 0.29.2

This diff compares the contents of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between versions as they appear in their respective public registries.
Files changed (750)
  1. checksums.yaml +4 -4
  2. data/.kokoro/build.bat +9 -6
  3. data/.kokoro/build.sh +2 -34
  4. data/.kokoro/continuous/common.cfg +6 -1
  5. data/.kokoro/continuous/linux.cfg +1 -1
  6. data/.kokoro/continuous/windows.cfg +17 -1
  7. data/.kokoro/osx.sh +2 -33
  8. data/.kokoro/presubmit/common.cfg +6 -1
  9. data/.kokoro/presubmit/linux.cfg +1 -1
  10. data/.kokoro/presubmit/windows.cfg +17 -1
  11. data/.kokoro/trampoline.bat +10 -0
  12. data/.kokoro/trampoline.sh +3 -23
  13. data/CHANGELOG.md +460 -0
  14. data/README.md +1 -1
  15. data/Rakefile +31 -0
  16. data/bin/generate-api +4 -2
  17. data/generated/google/apis/abusiveexperiencereport_v1/service.rb +2 -2
  18. data/generated/google/apis/acceleratedmobilepageurl_v1/service.rb +1 -1
  19. data/generated/google/apis/accessapproval_v1beta1/classes.rb +333 -0
  20. data/generated/google/apis/accessapproval_v1beta1/representations.rb +174 -0
  21. data/generated/google/apis/accessapproval_v1beta1/service.rb +728 -0
  22. data/generated/google/apis/accessapproval_v1beta1.rb +34 -0
  23. data/generated/google/apis/accesscontextmanager_v1/classes.rb +755 -0
  24. data/generated/google/apis/accesscontextmanager_v1/representations.rb +282 -0
  25. data/generated/google/apis/accesscontextmanager_v1/service.rb +788 -0
  26. data/generated/google/apis/accesscontextmanager_v1.rb +34 -0
  27. data/generated/google/apis/accesscontextmanager_v1beta/classes.rb +47 -31
  28. data/generated/google/apis/accesscontextmanager_v1beta/representations.rb +4 -0
  29. data/generated/google/apis/accesscontextmanager_v1beta/service.rb +16 -16
  30. data/generated/google/apis/accesscontextmanager_v1beta.rb +1 -1
  31. data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +95 -200
  32. data/generated/google/apis/adexchangebuyer2_v2beta1/representations.rb +0 -32
  33. data/generated/google/apis/adexchangebuyer2_v2beta1/service.rb +64 -104
  34. data/generated/google/apis/adexchangebuyer2_v2beta1.rb +1 -1
  35. data/generated/google/apis/adexchangebuyer_v1_2/service.rb +7 -7
  36. data/generated/google/apis/adexchangebuyer_v1_3/service.rb +21 -21
  37. data/generated/google/apis/adexchangebuyer_v1_4/service.rb +38 -38
  38. data/generated/google/apis/adexperiencereport_v1/service.rb +2 -2
  39. data/generated/google/apis/admin_datatransfer_v1/service.rb +5 -5
  40. data/generated/google/apis/admin_directory_v1/classes.rb +5 -50
  41. data/generated/google/apis/admin_directory_v1/representations.rb +0 -2
  42. data/generated/google/apis/admin_directory_v1/service.rb +113 -113
  43. data/generated/google/apis/admin_directory_v1.rb +1 -1
  44. data/generated/google/apis/admin_reports_v1/service.rb +6 -6
  45. data/generated/google/apis/admin_reports_v1.rb +1 -1
  46. data/generated/google/apis/adsense_v1_4/service.rb +39 -39
  47. data/generated/google/apis/adsensehost_v4_1/service.rb +26 -26
  48. data/generated/google/apis/alertcenter_v1beta1/classes.rb +101 -2
  49. data/generated/google/apis/alertcenter_v1beta1/representations.rb +25 -0
  50. data/generated/google/apis/alertcenter_v1beta1/service.rb +17 -16
  51. data/generated/google/apis/alertcenter_v1beta1.rb +1 -1
  52. data/generated/google/apis/analytics_v2_4/service.rb +6 -6
  53. data/generated/google/apis/analytics_v3/service.rb +88 -88
  54. data/generated/google/apis/analyticsreporting_v4/classes.rb +638 -0
  55. data/generated/google/apis/analyticsreporting_v4/representations.rb +248 -0
  56. data/generated/google/apis/analyticsreporting_v4/service.rb +31 -1
  57. data/generated/google/apis/analyticsreporting_v4.rb +1 -1
  58. data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +51 -11
  59. data/generated/google/apis/androiddeviceprovisioning_v1/representations.rb +6 -0
  60. data/generated/google/apis/androiddeviceprovisioning_v1/service.rb +26 -26
  61. data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
  62. data/generated/google/apis/androidenterprise_v1/classes.rb +26 -30
  63. data/generated/google/apis/androidenterprise_v1/representations.rb +2 -14
  64. data/generated/google/apis/androidenterprise_v1/service.rb +85 -121
  65. data/generated/google/apis/androidenterprise_v1.rb +1 -1
  66. data/generated/google/apis/androidmanagement_v1/classes.rb +358 -4
  67. data/generated/google/apis/androidmanagement_v1/representations.rb +163 -0
  68. data/generated/google/apis/androidmanagement_v1/service.rb +191 -21
  69. data/generated/google/apis/androidmanagement_v1.rb +1 -1
  70. data/generated/google/apis/androidpublisher_v1/service.rb +2 -2
  71. data/generated/google/apis/androidpublisher_v1_1/service.rb +3 -3
  72. data/generated/google/apis/androidpublisher_v2/service.rb +64 -70
  73. data/generated/google/apis/androidpublisher_v2.rb +1 -1
  74. data/generated/google/apis/androidpublisher_v3/classes.rb +113 -0
  75. data/generated/google/apis/androidpublisher_v3/representations.rb +58 -0
  76. data/generated/google/apis/androidpublisher_v3/service.rb +234 -64
  77. data/generated/google/apis/androidpublisher_v3.rb +1 -1
  78. data/generated/google/apis/appengine_v1/classes.rb +45 -100
  79. data/generated/google/apis/appengine_v1/representations.rb +17 -35
  80. data/generated/google/apis/appengine_v1/service.rb +45 -39
  81. data/generated/google/apis/appengine_v1.rb +1 -1
  82. data/generated/google/apis/appengine_v1alpha/classes.rb +2 -99
  83. data/generated/google/apis/appengine_v1alpha/representations.rb +0 -35
  84. data/generated/google/apis/appengine_v1alpha/service.rb +15 -15
  85. data/generated/google/apis/appengine_v1alpha.rb +1 -1
  86. data/generated/google/apis/appengine_v1beta/classes.rb +7 -102
  87. data/generated/google/apis/appengine_v1beta/representations.rb +0 -35
  88. data/generated/google/apis/appengine_v1beta/service.rb +45 -39
  89. data/generated/google/apis/appengine_v1beta.rb +1 -1
  90. data/generated/google/apis/appengine_v1beta4/service.rb +20 -20
  91. data/generated/google/apis/appengine_v1beta5/service.rb +20 -20
  92. data/generated/google/apis/appsactivity_v1/service.rb +5 -4
  93. data/generated/google/apis/appsactivity_v1.rb +1 -1
  94. data/generated/google/apis/appsmarket_v2/service.rb +3 -3
  95. data/generated/google/apis/appstate_v1/service.rb +5 -5
  96. data/generated/google/apis/bigquery_v2/classes.rb +1121 -114
  97. data/generated/google/apis/bigquery_v2/representations.rb +414 -26
  98. data/generated/google/apis/bigquery_v2/service.rb +184 -22
  99. data/generated/google/apis/bigquery_v2.rb +1 -1
  100. data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +88 -10
  101. data/generated/google/apis/bigquerydatatransfer_v1/representations.rb +43 -0
  102. data/generated/google/apis/bigquerydatatransfer_v1/service.rb +142 -34
  103. data/generated/google/apis/bigquerydatatransfer_v1.rb +3 -3
  104. data/generated/google/apis/bigtableadmin_v1/service.rb +3 -3
  105. data/generated/google/apis/bigtableadmin_v1.rb +2 -2
  106. data/generated/google/apis/bigtableadmin_v2/classes.rb +14 -14
  107. data/generated/google/apis/bigtableadmin_v2/service.rb +142 -33
  108. data/generated/google/apis/bigtableadmin_v2.rb +2 -2
  109. data/generated/google/apis/binaryauthorization_v1beta1/classes.rb +66 -6
  110. data/generated/google/apis/binaryauthorization_v1beta1/representations.rb +17 -0
  111. data/generated/google/apis/binaryauthorization_v1beta1/service.rb +17 -13
  112. data/generated/google/apis/binaryauthorization_v1beta1.rb +1 -1
  113. data/generated/google/apis/blogger_v2/service.rb +9 -9
  114. data/generated/google/apis/blogger_v3/service.rb +33 -33
  115. data/generated/google/apis/books_v1/service.rb +51 -51
  116. data/generated/google/apis/calendar_v3/classes.rb +1 -1
  117. data/generated/google/apis/calendar_v3/service.rb +47 -47
  118. data/generated/google/apis/calendar_v3.rb +1 -1
  119. data/generated/google/apis/chat_v1/service.rb +8 -8
  120. data/generated/google/apis/civicinfo_v2/service.rb +5 -5
  121. data/generated/google/apis/classroom_v1/classes.rb +77 -0
  122. data/generated/google/apis/classroom_v1/representations.rb +32 -0
  123. data/generated/google/apis/classroom_v1/service.rb +276 -51
  124. data/generated/google/apis/classroom_v1.rb +7 -1
  125. data/generated/google/apis/cloudasset_v1/classes.rb +818 -0
  126. data/generated/google/apis/cloudasset_v1/representations.rb +264 -0
  127. data/generated/google/apis/cloudasset_v1/service.rb +191 -0
  128. data/generated/google/apis/cloudasset_v1.rb +34 -0
  129. data/generated/google/apis/cloudasset_v1beta1/classes.rb +33 -18
  130. data/generated/google/apis/cloudasset_v1beta1/representations.rb +1 -0
  131. data/generated/google/apis/cloudasset_v1beta1/service.rb +13 -13
  132. data/generated/google/apis/cloudasset_v1beta1.rb +2 -2
  133. data/generated/google/apis/cloudbilling_v1/classes.rb +1 -1
  134. data/generated/google/apis/cloudbilling_v1/service.rb +14 -14
  135. data/generated/google/apis/cloudbilling_v1.rb +1 -1
  136. data/generated/google/apis/cloudbuild_v1/classes.rb +162 -11
  137. data/generated/google/apis/cloudbuild_v1/representations.rb +67 -0
  138. data/generated/google/apis/cloudbuild_v1/service.rb +21 -15
  139. data/generated/google/apis/cloudbuild_v1.rb +1 -1
  140. data/generated/google/apis/cloudbuild_v1alpha1/classes.rb +7 -1
  141. data/generated/google/apis/cloudbuild_v1alpha1/representations.rb +2 -0
  142. data/generated/google/apis/cloudbuild_v1alpha1/service.rb +6 -6
  143. data/generated/google/apis/cloudbuild_v1alpha1.rb +1 -1
  144. data/generated/google/apis/clouddebugger_v2/service.rb +8 -8
  145. data/generated/google/apis/clouderrorreporting_v1beta1/classes.rb +19 -16
  146. data/generated/google/apis/clouderrorreporting_v1beta1/service.rb +12 -11
  147. data/generated/google/apis/clouderrorreporting_v1beta1.rb +1 -1
  148. data/generated/google/apis/cloudfunctions_v1/classes.rb +21 -17
  149. data/generated/google/apis/cloudfunctions_v1/service.rb +22 -16
  150. data/generated/google/apis/cloudfunctions_v1.rb +1 -1
  151. data/generated/google/apis/cloudfunctions_v1beta2/classes.rb +20 -16
  152. data/generated/google/apis/cloudfunctions_v1beta2/service.rb +17 -11
  153. data/generated/google/apis/cloudfunctions_v1beta2.rb +1 -1
  154. data/generated/google/apis/cloudidentity_v1/classes.rb +14 -14
  155. data/generated/google/apis/cloudidentity_v1/service.rb +18 -27
  156. data/generated/google/apis/cloudidentity_v1.rb +7 -1
  157. data/generated/google/apis/cloudidentity_v1beta1/classes.rb +11 -11
  158. data/generated/google/apis/cloudidentity_v1beta1/service.rb +15 -21
  159. data/generated/google/apis/cloudidentity_v1beta1.rb +7 -1
  160. data/generated/google/apis/cloudiot_v1/classes.rb +11 -11
  161. data/generated/google/apis/cloudiot_v1/service.rb +23 -330
  162. data/generated/google/apis/cloudiot_v1.rb +1 -1
  163. data/generated/google/apis/cloudkms_v1/classes.rb +7 -3
  164. data/generated/google/apis/cloudkms_v1/service.rb +30 -30
  165. data/generated/google/apis/cloudkms_v1.rb +1 -1
  166. data/generated/google/apis/cloudprivatecatalog_v1beta1/classes.rb +358 -0
  167. data/generated/google/apis/cloudprivatecatalog_v1beta1/representations.rb +123 -0
  168. data/generated/google/apis/cloudprivatecatalog_v1beta1/service.rb +486 -0
  169. data/generated/google/apis/cloudprivatecatalog_v1beta1.rb +35 -0
  170. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/classes.rb +1212 -0
  171. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/representations.rb +399 -0
  172. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/service.rb +1073 -0
  173. data/generated/google/apis/cloudprivatecatalogproducer_v1beta1.rb +35 -0
  174. data/generated/google/apis/cloudprofiler_v2/service.rb +3 -3
  175. data/generated/google/apis/cloudresourcemanager_v1/classes.rb +24 -22
  176. data/generated/google/apis/cloudresourcemanager_v1/service.rb +68 -59
  177. data/generated/google/apis/cloudresourcemanager_v1.rb +1 -1
  178. data/generated/google/apis/cloudresourcemanager_v1beta1/classes.rb +3 -3
  179. data/generated/google/apis/cloudresourcemanager_v1beta1/service.rb +53 -42
  180. data/generated/google/apis/cloudresourcemanager_v1beta1.rb +1 -1
  181. data/generated/google/apis/cloudresourcemanager_v2/classes.rb +15 -16
  182. data/generated/google/apis/cloudresourcemanager_v2/service.rb +13 -13
  183. data/generated/google/apis/cloudresourcemanager_v2.rb +1 -1
  184. data/generated/google/apis/cloudresourcemanager_v2beta1/classes.rb +15 -16
  185. data/generated/google/apis/cloudresourcemanager_v2beta1/service.rb +13 -13
  186. data/generated/google/apis/cloudresourcemanager_v2beta1.rb +1 -1
  187. data/generated/google/apis/cloudscheduler_v1/classes.rb +994 -0
  188. data/generated/google/apis/cloudscheduler_v1/representations.rb +297 -0
  189. data/generated/google/apis/cloudscheduler_v1/service.rb +448 -0
  190. data/generated/google/apis/cloudscheduler_v1.rb +34 -0
  191. data/generated/google/apis/cloudscheduler_v1beta1/classes.rb +160 -44
  192. data/generated/google/apis/cloudscheduler_v1beta1/representations.rb +33 -0
  193. data/generated/google/apis/cloudscheduler_v1beta1/service.rb +15 -12
  194. data/generated/google/apis/cloudscheduler_v1beta1.rb +1 -1
  195. data/generated/google/apis/cloudsearch_v1/classes.rb +245 -59
  196. data/generated/google/apis/cloudsearch_v1/representations.rb +91 -0
  197. data/generated/google/apis/cloudsearch_v1/service.rb +86 -80
  198. data/generated/google/apis/cloudsearch_v1.rb +1 -1
  199. data/generated/google/apis/cloudshell_v1/classes.rb +11 -11
  200. data/generated/google/apis/cloudshell_v1/service.rb +4 -4
  201. data/generated/google/apis/cloudshell_v1.rb +1 -1
  202. data/generated/google/apis/cloudshell_v1alpha1/classes.rb +24 -11
  203. data/generated/google/apis/cloudshell_v1alpha1/representations.rb +2 -0
  204. data/generated/google/apis/cloudshell_v1alpha1/service.rb +11 -10
  205. data/generated/google/apis/cloudshell_v1alpha1.rb +1 -1
  206. data/generated/google/apis/cloudtasks_v2/classes.rb +1436 -0
  207. data/generated/google/apis/cloudtasks_v2/representations.rb +408 -0
  208. data/generated/google/apis/cloudtasks_v2/service.rb +856 -0
  209. data/generated/google/apis/{partners_v2.rb → cloudtasks_v2.rb} +11 -9
  210. data/generated/google/apis/cloudtasks_v2beta2/classes.rb +141 -102
  211. data/generated/google/apis/cloudtasks_v2beta2/service.rb +44 -43
  212. data/generated/google/apis/cloudtasks_v2beta2.rb +1 -1
  213. data/generated/google/apis/cloudtasks_v2beta3/classes.rb +388 -108
  214. data/generated/google/apis/cloudtasks_v2beta3/representations.rb +65 -0
  215. data/generated/google/apis/cloudtasks_v2beta3/service.rb +40 -39
  216. data/generated/google/apis/cloudtasks_v2beta3.rb +1 -1
  217. data/generated/google/apis/cloudtrace_v1/service.rb +3 -3
  218. data/generated/google/apis/cloudtrace_v2/classes.rb +10 -10
  219. data/generated/google/apis/cloudtrace_v2/service.rb +2 -2
  220. data/generated/google/apis/cloudtrace_v2.rb +1 -1
  221. data/generated/google/apis/commentanalyzer_v1alpha1/classes.rb +484 -0
  222. data/generated/google/apis/commentanalyzer_v1alpha1/representations.rb +210 -0
  223. data/generated/google/apis/commentanalyzer_v1alpha1/service.rb +124 -0
  224. data/generated/google/apis/commentanalyzer_v1alpha1.rb +39 -0
  225. data/generated/google/apis/composer_v1/classes.rb +21 -15
  226. data/generated/google/apis/composer_v1/service.rb +9 -9
  227. data/generated/google/apis/composer_v1.rb +1 -1
  228. data/generated/google/apis/composer_v1beta1/classes.rb +175 -36
  229. data/generated/google/apis/composer_v1beta1/representations.rb +50 -0
  230. data/generated/google/apis/composer_v1beta1/service.rb +9 -9
  231. data/generated/google/apis/composer_v1beta1.rb +1 -1
  232. data/generated/google/apis/compute_alpha/classes.rb +10112 -7289
  233. data/generated/google/apis/compute_alpha/representations.rb +1337 -219
  234. data/generated/google/apis/compute_alpha/service.rb +4259 -2728
  235. data/generated/google/apis/compute_alpha.rb +1 -1
  236. data/generated/google/apis/compute_beta/classes.rb +4254 -2781
  237. data/generated/google/apis/compute_beta/representations.rb +853 -283
  238. data/generated/google/apis/compute_beta/service.rb +7077 -5955
  239. data/generated/google/apis/compute_beta.rb +1 -1
  240. data/generated/google/apis/compute_v1/classes.rb +1259 -93
  241. data/generated/google/apis/compute_v1/representations.rb +450 -1
  242. data/generated/google/apis/compute_v1/service.rb +1085 -400
  243. data/generated/google/apis/compute_v1.rb +1 -1
  244. data/generated/google/apis/container_v1/classes.rb +201 -22
  245. data/generated/google/apis/container_v1/representations.rb +69 -0
  246. data/generated/google/apis/container_v1/service.rb +151 -102
  247. data/generated/google/apis/container_v1.rb +1 -1
  248. data/generated/google/apis/container_v1beta1/classes.rb +215 -25
  249. data/generated/google/apis/container_v1beta1/representations.rb +86 -0
  250. data/generated/google/apis/container_v1beta1/service.rb +106 -106
  251. data/generated/google/apis/container_v1beta1.rb +1 -1
  252. data/generated/google/apis/containeranalysis_v1alpha1/classes.rb +26 -18
  253. data/generated/google/apis/containeranalysis_v1alpha1/representations.rb +1 -0
  254. data/generated/google/apis/containeranalysis_v1alpha1/service.rb +33 -33
  255. data/generated/google/apis/containeranalysis_v1alpha1.rb +1 -1
  256. data/generated/google/apis/containeranalysis_v1beta1/classes.rb +226 -12
  257. data/generated/google/apis/containeranalysis_v1beta1/representations.rb +58 -0
  258. data/generated/google/apis/containeranalysis_v1beta1/service.rb +24 -24
  259. data/generated/google/apis/containeranalysis_v1beta1.rb +1 -1
  260. data/generated/google/apis/content_v2/classes.rb +218 -101
  261. data/generated/google/apis/content_v2/representations.rb +49 -0
  262. data/generated/google/apis/content_v2/service.rb +189 -152
  263. data/generated/google/apis/content_v2.rb +1 -1
  264. data/generated/google/apis/content_v2_1/classes.rb +387 -216
  265. data/generated/google/apis/content_v2_1/representations.rb +131 -56
  266. data/generated/google/apis/content_v2_1/service.rb +190 -107
  267. data/generated/google/apis/content_v2_1.rb +1 -1
  268. data/generated/google/apis/customsearch_v1/service.rb +2 -2
  269. data/generated/google/apis/dataflow_v1b3/classes.rb +148 -31
  270. data/generated/google/apis/dataflow_v1b3/representations.rb +45 -0
  271. data/generated/google/apis/dataflow_v1b3/service.rb +415 -56
  272. data/generated/google/apis/dataflow_v1b3.rb +1 -1
  273. data/generated/google/apis/datafusion_v1beta1/classes.rb +1304 -0
  274. data/generated/google/apis/datafusion_v1beta1/representations.rb +469 -0
  275. data/generated/google/apis/datafusion_v1beta1/service.rb +657 -0
  276. data/generated/google/apis/datafusion_v1beta1.rb +43 -0
  277. data/generated/google/apis/dataproc_v1/classes.rb +27 -22
  278. data/generated/google/apis/dataproc_v1/representations.rb +1 -0
  279. data/generated/google/apis/dataproc_v1/service.rb +261 -45
  280. data/generated/google/apis/dataproc_v1.rb +1 -1
  281. data/generated/google/apis/dataproc_v1beta2/classes.rb +534 -50
  282. data/generated/google/apis/dataproc_v1beta2/representations.rb +185 -7
  283. data/generated/google/apis/dataproc_v1beta2/service.rb +617 -51
  284. data/generated/google/apis/dataproc_v1beta2.rb +1 -1
  285. data/generated/google/apis/datastore_v1/classes.rb +20 -16
  286. data/generated/google/apis/datastore_v1/service.rb +15 -15
  287. data/generated/google/apis/datastore_v1.rb +1 -1
  288. data/generated/google/apis/datastore_v1beta1/classes.rb +10 -10
  289. data/generated/google/apis/datastore_v1beta1/service.rb +2 -2
  290. data/generated/google/apis/datastore_v1beta1.rb +1 -1
  291. data/generated/google/apis/datastore_v1beta3/classes.rb +10 -6
  292. data/generated/google/apis/datastore_v1beta3/service.rb +7 -7
  293. data/generated/google/apis/datastore_v1beta3.rb +1 -1
  294. data/generated/google/apis/deploymentmanager_alpha/service.rb +37 -37
  295. data/generated/google/apis/deploymentmanager_v2/service.rb +18 -18
  296. data/generated/google/apis/deploymentmanager_v2beta/service.rb +32 -32
  297. data/generated/google/apis/dfareporting_v3_1/service.rb +206 -206
  298. data/generated/google/apis/dfareporting_v3_2/service.rb +206 -206
  299. data/generated/google/apis/dfareporting_v3_3/classes.rb +3 -3
  300. data/generated/google/apis/dfareporting_v3_3/service.rb +204 -204
  301. data/generated/google/apis/dfareporting_v3_3.rb +1 -1
  302. data/generated/google/apis/dialogflow_v2/classes.rb +367 -82
  303. data/generated/google/apis/dialogflow_v2/representations.rb +99 -0
  304. data/generated/google/apis/dialogflow_v2/service.rb +76 -60
  305. data/generated/google/apis/dialogflow_v2.rb +1 -1
  306. data/generated/google/apis/dialogflow_v2beta1/classes.rb +199 -88
  307. data/generated/google/apis/dialogflow_v2beta1/representations.rb +31 -0
  308. data/generated/google/apis/dialogflow_v2beta1/service.rb +154 -94
  309. data/generated/google/apis/dialogflow_v2beta1.rb +1 -1
  310. data/generated/google/apis/digitalassetlinks_v1/service.rb +7 -6
  311. data/generated/google/apis/digitalassetlinks_v1.rb +1 -1
  312. data/generated/google/apis/discovery_v1/service.rb +2 -2
  313. data/generated/google/apis/dlp_v2/classes.rb +116 -45
  314. data/generated/google/apis/dlp_v2/representations.rb +32 -0
  315. data/generated/google/apis/dlp_v2/service.rb +85 -45
  316. data/generated/google/apis/dlp_v2.rb +1 -1
  317. data/generated/google/apis/dns_v1/classes.rb +83 -1
  318. data/generated/google/apis/dns_v1/representations.rb +34 -0
  319. data/generated/google/apis/dns_v1/service.rb +15 -15
  320. data/generated/google/apis/dns_v1.rb +1 -1
  321. data/generated/google/apis/dns_v1beta2/classes.rb +81 -1
  322. data/generated/google/apis/dns_v1beta2/representations.rb +33 -0
  323. data/generated/google/apis/dns_v1beta2/service.rb +21 -21
  324. data/generated/google/apis/dns_v1beta2.rb +1 -1
  325. data/generated/google/apis/dns_v2beta1/classes.rb +83 -1
  326. data/generated/google/apis/dns_v2beta1/representations.rb +34 -0
  327. data/generated/google/apis/dns_v2beta1/service.rb +16 -16
  328. data/generated/google/apis/dns_v2beta1.rb +1 -1
  329. data/generated/google/apis/docs_v1/classes.rb +265 -47
  330. data/generated/google/apis/docs_v1/representations.rb +96 -0
  331. data/generated/google/apis/docs_v1/service.rb +3 -3
  332. data/generated/google/apis/docs_v1.rb +1 -1
  333. data/generated/google/apis/doubleclickbidmanager_v1/classes.rb +6 -4
  334. data/generated/google/apis/doubleclickbidmanager_v1/service.rb +9 -9
  335. data/generated/google/apis/doubleclickbidmanager_v1.rb +1 -1
  336. data/generated/google/apis/doubleclicksearch_v2/service.rb +10 -10
  337. data/generated/google/apis/drive_v2/classes.rb +601 -80
  338. data/generated/google/apis/drive_v2/representations.rb +152 -0
  339. data/generated/google/apis/drive_v2/service.rb +574 -164
  340. data/generated/google/apis/drive_v2.rb +1 -1
  341. data/generated/google/apis/drive_v3/classes.rb +591 -75
  342. data/generated/google/apis/drive_v3/representations.rb +151 -0
  343. data/generated/google/apis/drive_v3/service.rb +483 -116
  344. data/generated/google/apis/drive_v3.rb +1 -1
  345. data/generated/google/apis/driveactivity_v2/classes.rb +149 -17
  346. data/generated/google/apis/driveactivity_v2/representations.rb +69 -0
  347. data/generated/google/apis/driveactivity_v2/service.rb +1 -1
  348. data/generated/google/apis/driveactivity_v2.rb +1 -1
  349. data/generated/google/apis/factchecktools_v1alpha1/classes.rb +459 -0
  350. data/generated/google/apis/factchecktools_v1alpha1/representations.rb +207 -0
  351. data/generated/google/apis/factchecktools_v1alpha1/service.rb +300 -0
  352. data/generated/google/apis/factchecktools_v1alpha1.rb +34 -0
  353. data/generated/google/apis/fcm_v1/classes.rb +424 -0
  354. data/generated/google/apis/fcm_v1/representations.rb +167 -0
  355. data/generated/google/apis/fcm_v1/service.rb +97 -0
  356. data/generated/google/apis/fcm_v1.rb +35 -0
  357. data/generated/google/apis/file_v1/classes.rb +646 -11
  358. data/generated/google/apis/file_v1/representations.rb +207 -0
  359. data/generated/google/apis/file_v1/service.rb +196 -6
  360. data/generated/google/apis/file_v1.rb +1 -1
  361. data/generated/google/apis/file_v1beta1/classes.rb +461 -19
  362. data/generated/google/apis/file_v1beta1/representations.rb +137 -0
  363. data/generated/google/apis/file_v1beta1/service.rb +11 -11
  364. data/generated/google/apis/file_v1beta1.rb +1 -1
  365. data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +41 -14
  366. data/generated/google/apis/firebasedynamiclinks_v1/representations.rb +4 -0
  367. data/generated/google/apis/firebasedynamiclinks_v1/service.rb +5 -5
  368. data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
  369. data/generated/google/apis/firebasehosting_v1beta1/classes.rb +13 -13
  370. data/generated/google/apis/firebasehosting_v1beta1/service.rb +14 -14
  371. data/generated/google/apis/firebasehosting_v1beta1.rb +1 -1
  372. data/generated/google/apis/firebaserules_v1/classes.rb +10 -2
  373. data/generated/google/apis/firebaserules_v1/service.rb +12 -12
  374. data/generated/google/apis/firebaserules_v1.rb +1 -1
  375. data/generated/google/apis/firestore_v1/classes.rb +15 -15
  376. data/generated/google/apis/firestore_v1/service.rb +28 -28
  377. data/generated/google/apis/firestore_v1.rb +1 -1
  378. data/generated/google/apis/firestore_v1beta1/classes.rb +15 -15
  379. data/generated/google/apis/firestore_v1beta1/service.rb +19 -19
  380. data/generated/google/apis/firestore_v1beta1.rb +1 -1
  381. data/generated/google/apis/firestore_v1beta2/classes.rb +10 -10
  382. data/generated/google/apis/firestore_v1beta2/service.rb +9 -9
  383. data/generated/google/apis/firestore_v1beta2.rb +1 -1
  384. data/generated/google/apis/fitness_v1/classes.rb +4 -1
  385. data/generated/google/apis/fitness_v1/service.rb +14 -58
  386. data/generated/google/apis/fitness_v1.rb +1 -1
  387. data/generated/google/apis/fusiontables_v1/service.rb +32 -32
  388. data/generated/google/apis/fusiontables_v2/service.rb +34 -34
  389. data/generated/google/apis/games_configuration_v1configuration/service.rb +13 -13
  390. data/generated/google/apis/games_management_v1management/service.rb +27 -27
  391. data/generated/google/apis/games_management_v1management.rb +2 -2
  392. data/generated/google/apis/games_v1/service.rb +53 -53
  393. data/generated/google/apis/games_v1.rb +3 -3
  394. data/generated/google/apis/genomics_v1/classes.rb +190 -3321
  395. data/generated/google/apis/genomics_v1/representations.rb +128 -1265
  396. data/generated/google/apis/genomics_v1/service.rb +75 -1982
  397. data/generated/google/apis/genomics_v1.rb +1 -10
  398. data/generated/google/apis/genomics_v1alpha2/classes.rb +13 -53
  399. data/generated/google/apis/genomics_v1alpha2/representations.rb +0 -26
  400. data/generated/google/apis/genomics_v1alpha2/service.rb +11 -12
  401. data/generated/google/apis/genomics_v1alpha2.rb +1 -1
  402. data/generated/google/apis/genomics_v2alpha1/classes.rb +26 -58
  403. data/generated/google/apis/genomics_v2alpha1/representations.rb +1 -26
  404. data/generated/google/apis/genomics_v2alpha1/service.rb +6 -7
  405. data/generated/google/apis/genomics_v2alpha1.rb +1 -1
  406. data/generated/google/apis/gmail_v1/classes.rb +29 -0
  407. data/generated/google/apis/gmail_v1/representations.rb +13 -0
  408. data/generated/google/apis/gmail_v1/service.rb +142 -66
  409. data/generated/google/apis/gmail_v1.rb +1 -1
  410. data/generated/google/apis/groupsmigration_v1/service.rb +1 -1
  411. data/generated/google/apis/groupssettings_v1/classes.rb +126 -1
  412. data/generated/google/apis/groupssettings_v1/representations.rb +18 -0
  413. data/generated/google/apis/groupssettings_v1/service.rb +4 -4
  414. data/generated/google/apis/groupssettings_v1.rb +2 -2
  415. data/generated/google/apis/healthcare_v1alpha2/classes.rb +2849 -0
  416. data/generated/google/apis/healthcare_v1alpha2/representations.rb +1260 -0
  417. data/generated/google/apis/healthcare_v1alpha2/service.rb +4011 -0
  418. data/generated/google/apis/healthcare_v1alpha2.rb +34 -0
  419. data/generated/google/apis/healthcare_v1beta1/classes.rb +2464 -0
  420. data/generated/google/apis/healthcare_v1beta1/representations.rb +1042 -0
  421. data/generated/google/apis/healthcare_v1beta1/service.rb +3413 -0
  422. data/generated/google/apis/healthcare_v1beta1.rb +34 -0
  423. data/generated/google/apis/iam_v1/classes.rb +171 -1
  424. data/generated/google/apis/iam_v1/representations.rb +95 -0
  425. data/generated/google/apis/iam_v1/service.rb +249 -39
  426. data/generated/google/apis/iam_v1.rb +1 -1
  427. data/generated/google/apis/iamcredentials_v1/classes.rb +8 -4
  428. data/generated/google/apis/iamcredentials_v1/service.rb +15 -10
  429. data/generated/google/apis/iamcredentials_v1.rb +1 -1
  430. data/generated/google/apis/iap_v1/classes.rb +1 -1
  431. data/generated/google/apis/iap_v1/service.rb +3 -3
  432. data/generated/google/apis/iap_v1.rb +1 -1
  433. data/generated/google/apis/iap_v1beta1/classes.rb +1 -1
  434. data/generated/google/apis/iap_v1beta1/service.rb +3 -3
  435. data/generated/google/apis/iap_v1beta1.rb +1 -1
  436. data/generated/google/apis/identitytoolkit_v3/service.rb +20 -20
  437. data/generated/google/apis/indexing_v3/service.rb +2 -2
  438. data/generated/google/apis/jobs_v2/classes.rb +16 -17
  439. data/generated/google/apis/jobs_v2/service.rb +17 -17
  440. data/generated/google/apis/jobs_v2.rb +1 -1
  441. data/generated/google/apis/jobs_v3/classes.rb +14 -8
  442. data/generated/google/apis/jobs_v3/service.rb +16 -17
  443. data/generated/google/apis/jobs_v3.rb +1 -1
  444. data/generated/google/apis/jobs_v3p1beta1/classes.rb +26 -20
  445. data/generated/google/apis/jobs_v3p1beta1/service.rb +17 -18
  446. data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
  447. data/generated/google/apis/kgsearch_v1/service.rb +1 -1
  448. data/generated/google/apis/language_v1/classes.rb +8 -7
  449. data/generated/google/apis/language_v1/service.rb +6 -6
  450. data/generated/google/apis/language_v1.rb +1 -1
  451. data/generated/google/apis/language_v1beta1/classes.rb +5 -5
  452. data/generated/google/apis/language_v1beta1/service.rb +4 -4
  453. data/generated/google/apis/language_v1beta1.rb +1 -1
  454. data/generated/google/apis/language_v1beta2/classes.rb +8 -7
  455. data/generated/google/apis/language_v1beta2/service.rb +6 -6
  456. data/generated/google/apis/language_v1beta2.rb +1 -1
  457. data/generated/google/apis/libraryagent_v1/service.rb +6 -6
  458. data/generated/google/apis/licensing_v1/service.rb +7 -7
  459. data/generated/google/apis/logging_v2/classes.rb +8 -3
  460. data/generated/google/apis/logging_v2/representations.rb +1 -0
  461. data/generated/google/apis/logging_v2/service.rb +72 -72
  462. data/generated/google/apis/logging_v2.rb +1 -1
  463. data/generated/google/apis/manufacturers_v1/service.rb +4 -4
  464. data/generated/google/apis/mirror_v1/service.rb +24 -24
  465. data/generated/google/apis/ml_v1/classes.rb +240 -52
  466. data/generated/google/apis/ml_v1/representations.rb +25 -2
  467. data/generated/google/apis/ml_v1/service.rb +36 -36
  468. data/generated/google/apis/ml_v1.rb +1 -1
  469. data/generated/google/apis/monitoring_v3/classes.rb +22 -18
  470. data/generated/google/apis/monitoring_v3/representations.rb +2 -1
  471. data/generated/google/apis/monitoring_v3/service.rb +42 -37
  472. data/generated/google/apis/monitoring_v3.rb +1 -1
  473. data/generated/google/apis/oauth2_v1/classes.rb +0 -124
  474. data/generated/google/apis/oauth2_v1/representations.rb +0 -62
  475. data/generated/google/apis/oauth2_v1/service.rb +3 -162
  476. data/generated/google/apis/oauth2_v1.rb +3 -6
  477. data/generated/google/apis/oauth2_v2/service.rb +4 -4
  478. data/generated/google/apis/oauth2_v2.rb +3 -6
  479. data/generated/google/apis/oslogin_v1/service.rb +8 -7
  480. data/generated/google/apis/oslogin_v1.rb +3 -2
  481. data/generated/google/apis/oslogin_v1alpha/service.rb +8 -7
  482. data/generated/google/apis/oslogin_v1alpha.rb +3 -2
  483. data/generated/google/apis/oslogin_v1beta/service.rb +8 -7
  484. data/generated/google/apis/oslogin_v1beta.rb +3 -2
  485. data/generated/google/apis/pagespeedonline_v1/service.rb +1 -1
  486. data/generated/google/apis/pagespeedonline_v2/service.rb +1 -1
  487. data/generated/google/apis/pagespeedonline_v4/service.rb +1 -1
  488. data/generated/google/apis/pagespeedonline_v5/classes.rb +43 -0
  489. data/generated/google/apis/pagespeedonline_v5/representations.rb +18 -0
  490. data/generated/google/apis/pagespeedonline_v5/service.rb +1 -1
  491. data/generated/google/apis/pagespeedonline_v5.rb +1 -1
  492. data/generated/google/apis/people_v1/classes.rb +38 -29
  493. data/generated/google/apis/people_v1/representations.rb +1 -0
  494. data/generated/google/apis/people_v1/service.rb +18 -13
  495. data/generated/google/apis/people_v1.rb +2 -5
  496. data/generated/google/apis/playcustomapp_v1/service.rb +1 -1
  497. data/generated/google/apis/plus_domains_v1/service.rb +18 -392
  498. data/generated/google/apis/plus_domains_v1.rb +4 -10
  499. data/generated/google/apis/plus_v1/service.rb +16 -16
  500. data/generated/google/apis/plus_v1.rb +4 -4
  501. data/generated/google/apis/poly_v1/classes.rb +8 -6
  502. data/generated/google/apis/poly_v1/service.rb +15 -12
  503. data/generated/google/apis/poly_v1.rb +1 -1
  504. data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +8 -6
  505. data/generated/google/apis/proximitybeacon_v1beta1/service.rb +17 -17
  506. data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
  507. data/generated/google/apis/pubsub_v1/classes.rb +55 -39
  508. data/generated/google/apis/pubsub_v1/representations.rb +16 -0
  509. data/generated/google/apis/pubsub_v1/service.rb +46 -69
  510. data/generated/google/apis/pubsub_v1.rb +1 -1
  511. data/generated/google/apis/pubsub_v1beta1a/service.rb +15 -15
  512. data/generated/google/apis/pubsub_v1beta2/classes.rb +45 -1
  513. data/generated/google/apis/pubsub_v1beta2/representations.rb +16 -0
  514. data/generated/google/apis/pubsub_v1beta2/service.rb +20 -20
  515. data/generated/google/apis/pubsub_v1beta2.rb +1 -1
  516. data/generated/google/apis/redis_v1/classes.rb +30 -10
  517. data/generated/google/apis/redis_v1/representations.rb +13 -0
  518. data/generated/google/apis/redis_v1/service.rb +51 -15
  519. data/generated/google/apis/redis_v1.rb +1 -1
  520. data/generated/google/apis/redis_v1beta1/classes.rb +18 -21
  521. data/generated/google/apis/redis_v1beta1/representations.rb +0 -1
  522. data/generated/google/apis/redis_v1beta1/service.rb +15 -15
  523. data/generated/google/apis/redis_v1beta1.rb +1 -1
  524. data/generated/google/apis/remotebuildexecution_v1/classes.rb +50 -35
  525. data/generated/google/apis/remotebuildexecution_v1/representations.rb +2 -0
  526. data/generated/google/apis/remotebuildexecution_v1/service.rb +7 -7
  527. data/generated/google/apis/remotebuildexecution_v1.rb +1 -1
  528. data/generated/google/apis/remotebuildexecution_v1alpha/classes.rb +48 -33
  529. data/generated/google/apis/remotebuildexecution_v1alpha/representations.rb +2 -0
  530. data/generated/google/apis/remotebuildexecution_v1alpha/service.rb +10 -10
  531. data/generated/google/apis/remotebuildexecution_v1alpha.rb +1 -1
  532. data/generated/google/apis/remotebuildexecution_v2/classes.rb +58 -43
  533. data/generated/google/apis/remotebuildexecution_v2/representations.rb +2 -0
  534. data/generated/google/apis/remotebuildexecution_v2/service.rb +9 -9
  535. data/generated/google/apis/remotebuildexecution_v2.rb +1 -1
  536. data/generated/google/apis/replicapool_v1beta1/service.rb +10 -10
  537. data/generated/google/apis/reseller_v1/classes.rb +32 -39
  538. data/generated/google/apis/reseller_v1/service.rb +18 -18
  539. data/generated/google/apis/reseller_v1.rb +1 -1
  540. data/generated/google/apis/run_v1/classes.rb +73 -0
  541. data/generated/google/apis/run_v1/representations.rb +43 -0
  542. data/generated/google/apis/run_v1/service.rb +90 -0
  543. data/generated/google/apis/run_v1.rb +35 -0
  544. data/generated/google/apis/run_v1alpha1/classes.rb +3882 -0
  545. data/generated/google/apis/run_v1alpha1/representations.rb +1425 -0
  546. data/generated/google/apis/run_v1alpha1/service.rb +2071 -0
  547. data/generated/google/apis/run_v1alpha1.rb +35 -0
  548. data/generated/google/apis/runtimeconfig_v1/classes.rb +11 -11
  549. data/generated/google/apis/runtimeconfig_v1/service.rb +3 -3
  550. data/generated/google/apis/runtimeconfig_v1.rb +1 -1
  551. data/generated/google/apis/runtimeconfig_v1beta1/classes.rb +26 -25
  552. data/generated/google/apis/runtimeconfig_v1beta1/service.rb +22 -22
  553. data/generated/google/apis/runtimeconfig_v1beta1.rb +1 -1
  554. data/generated/google/apis/safebrowsing_v4/service.rb +7 -7
  555. data/generated/google/apis/script_v1/classes.rb +167 -6
  556. data/generated/google/apis/script_v1/representations.rb +79 -1
  557. data/generated/google/apis/script_v1/service.rb +16 -16
  558. data/generated/google/apis/script_v1.rb +1 -1
  559. data/generated/google/apis/searchconsole_v1/service.rb +1 -1
  560. data/generated/google/apis/securitycenter_v1/classes.rb +1627 -0
  561. data/generated/google/apis/securitycenter_v1/representations.rb +569 -0
  562. data/generated/google/apis/securitycenter_v1/service.rb +1110 -0
  563. data/generated/google/apis/securitycenter_v1.rb +35 -0
  564. data/generated/google/apis/securitycenter_v1beta1/classes.rb +1514 -0
  565. data/generated/google/apis/securitycenter_v1beta1/representations.rb +548 -0
  566. data/generated/google/apis/securitycenter_v1beta1/service.rb +1035 -0
  567. data/generated/google/apis/securitycenter_v1beta1.rb +35 -0
  568. data/generated/google/apis/servicebroker_v1/classes.rb +1 -1
  569. data/generated/google/apis/servicebroker_v1/service.rb +3 -3
  570. data/generated/google/apis/servicebroker_v1.rb +1 -1
  571. data/generated/google/apis/servicebroker_v1alpha1/classes.rb +1 -1
  572. data/generated/google/apis/servicebroker_v1alpha1/service.rb +16 -16
  573. data/generated/google/apis/servicebroker_v1alpha1.rb +1 -1
  574. data/generated/google/apis/servicebroker_v1beta1/classes.rb +1 -1
  575. data/generated/google/apis/servicebroker_v1beta1/service.rb +21 -21
  576. data/generated/google/apis/servicebroker_v1beta1.rb +1 -1
  577. data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +453 -149
  578. data/generated/google/apis/serviceconsumermanagement_v1/representations.rb +202 -29
  579. data/generated/google/apis/serviceconsumermanagement_v1/service.rb +148 -62
  580. data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
  581. data/generated/google/apis/servicecontrol_v1/classes.rb +122 -25
  582. data/generated/google/apis/servicecontrol_v1/representations.rb +47 -0
  583. data/generated/google/apis/servicecontrol_v1/service.rb +3 -3
  584. data/generated/google/apis/servicecontrol_v1.rb +1 -1
  585. data/generated/google/apis/servicemanagement_v1/classes.rb +93 -110
  586. data/generated/google/apis/servicemanagement_v1/representations.rb +13 -26
  587. data/generated/google/apis/servicemanagement_v1/service.rb +30 -27
  588. data/generated/google/apis/servicemanagement_v1.rb +1 -1
  589. data/generated/google/apis/servicenetworking_v1/classes.rb +3626 -0
  590. data/generated/google/apis/servicenetworking_v1/representations.rb +1055 -0
  591. data/generated/google/apis/servicenetworking_v1/service.rb +440 -0
  592. data/generated/google/apis/servicenetworking_v1.rb +38 -0
  593. data/generated/google/apis/servicenetworking_v1beta/classes.rb +65 -108
  594. data/generated/google/apis/servicenetworking_v1beta/representations.rb +2 -29
  595. data/generated/google/apis/servicenetworking_v1beta/service.rb +6 -6
  596. data/generated/google/apis/servicenetworking_v1beta.rb +1 -1
  597. data/generated/google/apis/serviceusage_v1/classes.rb +160 -109
  598. data/generated/google/apis/serviceusage_v1/representations.rb +42 -26
  599. data/generated/google/apis/serviceusage_v1/service.rb +17 -19
  600. data/generated/google/apis/serviceusage_v1.rb +1 -1
  601. data/generated/google/apis/serviceusage_v1beta1/classes.rb +161 -110
  602. data/generated/google/apis/serviceusage_v1beta1/representations.rb +42 -26
  603. data/generated/google/apis/serviceusage_v1beta1/service.rb +7 -7
  604. data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
  605. data/generated/google/apis/sheets_v4/classes.rb +115 -26
  606. data/generated/google/apis/sheets_v4/service.rb +17 -17
  607. data/generated/google/apis/sheets_v4.rb +1 -1
  608. data/generated/google/apis/site_verification_v1/service.rb +7 -7
  609. data/generated/google/apis/slides_v1/classes.rb +2 -2
  610. data/generated/google/apis/slides_v1/service.rb +5 -5
  611. data/generated/google/apis/slides_v1.rb +1 -1
  612. data/generated/google/apis/sourcerepo_v1/classes.rb +183 -1
  613. data/generated/google/apis/sourcerepo_v1/representations.rb +45 -0
  614. data/generated/google/apis/sourcerepo_v1/service.rb +45 -10
  615. data/generated/google/apis/sourcerepo_v1.rb +1 -1
  616. data/generated/google/apis/spanner_v1/classes.rb +231 -17
  617. data/generated/google/apis/spanner_v1/representations.rb +66 -0
  618. data/generated/google/apis/spanner_v1/service.rb +92 -42
  619. data/generated/google/apis/spanner_v1.rb +1 -1
  620. data/generated/google/apis/speech_v1/classes.rb +110 -13
  621. data/generated/google/apis/speech_v1/representations.rb +24 -0
  622. data/generated/google/apis/speech_v1/service.rb +9 -7
  623. data/generated/google/apis/speech_v1.rb +1 -1
  624. data/generated/google/apis/speech_v1p1beta1/classes.rb +19 -13
  625. data/generated/google/apis/speech_v1p1beta1/representations.rb +1 -0
  626. data/generated/google/apis/speech_v1p1beta1/service.rb +9 -7
  627. data/generated/google/apis/speech_v1p1beta1.rb +1 -1
  628. data/generated/google/apis/sqladmin_v1beta4/classes.rb +94 -17
  629. data/generated/google/apis/sqladmin_v1beta4/representations.rb +36 -0
  630. data/generated/google/apis/sqladmin_v1beta4/service.rb +44 -44
  631. data/generated/google/apis/sqladmin_v1beta4.rb +1 -1
  632. data/generated/google/apis/storage_v1/classes.rb +201 -4
  633. data/generated/google/apis/storage_v1/representations.rb +76 -1
  634. data/generated/google/apis/storage_v1/service.rb +488 -93
  635. data/generated/google/apis/storage_v1.rb +1 -1
  636. data/generated/google/apis/storage_v1beta1/service.rb +24 -24
  637. data/generated/google/apis/storage_v1beta2/service.rb +34 -34
  638. data/generated/google/apis/storagetransfer_v1/classes.rb +44 -44
  639. data/generated/google/apis/storagetransfer_v1/service.rb +35 -36
  640. data/generated/google/apis/storagetransfer_v1.rb +2 -2
  641. data/generated/google/apis/streetviewpublish_v1/classes.rb +27 -27
  642. data/generated/google/apis/streetviewpublish_v1/service.rb +36 -40
  643. data/generated/google/apis/streetviewpublish_v1.rb +1 -1
  644. data/generated/google/apis/surveys_v2/service.rb +8 -8
  645. data/generated/google/apis/tagmanager_v1/service.rb +49 -95
  646. data/generated/google/apis/tagmanager_v1.rb +1 -1
  647. data/generated/google/apis/tagmanager_v2/classes.rb +197 -292
  648. data/generated/google/apis/tagmanager_v2/representations.rb +62 -103
  649. data/generated/google/apis/tagmanager_v2/service.rb +287 -249
  650. data/generated/google/apis/tagmanager_v2.rb +1 -1
  651. data/generated/google/apis/tasks_v1/service.rb +19 -19
  652. data/generated/google/apis/tasks_v1.rb +2 -2
  653. data/generated/google/apis/testing_v1/classes.rb +44 -39
  654. data/generated/google/apis/testing_v1/representations.rb +3 -1
  655. data/generated/google/apis/testing_v1/service.rb +5 -5
  656. data/generated/google/apis/testing_v1.rb +1 -1
  657. data/generated/google/apis/texttospeech_v1/service.rb +2 -2
  658. data/generated/google/apis/texttospeech_v1.rb +1 -1
  659. data/generated/google/apis/texttospeech_v1beta1/service.rb +2 -2
  660. data/generated/google/apis/texttospeech_v1beta1.rb +1 -1
  661. data/generated/google/apis/toolresults_v1beta3/classes.rb +340 -17
  662. data/generated/google/apis/toolresults_v1beta3/representations.rb +90 -0
  663. data/generated/google/apis/toolresults_v1beta3/service.rb +140 -24
  664. data/generated/google/apis/toolresults_v1beta3.rb +1 -1
  665. data/generated/google/apis/tpu_v1/classes.rb +21 -15
  666. data/generated/google/apis/tpu_v1/representations.rb +1 -0
  667. data/generated/google/apis/tpu_v1/service.rb +17 -17
  668. data/generated/google/apis/tpu_v1.rb +1 -1
  669. data/generated/google/apis/tpu_v1alpha1/classes.rb +21 -15
  670. data/generated/google/apis/tpu_v1alpha1/representations.rb +1 -0
  671. data/generated/google/apis/tpu_v1alpha1/service.rb +17 -17
  672. data/generated/google/apis/tpu_v1alpha1.rb +1 -1
  673. data/generated/google/apis/translate_v2/service.rb +5 -5
  674. data/generated/google/apis/urlshortener_v1/service.rb +3 -3
  675. data/generated/google/apis/vault_v1/classes.rb +44 -18
  676. data/generated/google/apis/vault_v1/representations.rb +4 -0
  677. data/generated/google/apis/vault_v1/service.rb +28 -28
  678. data/generated/google/apis/vault_v1.rb +1 -1
  679. data/generated/google/apis/videointelligence_v1/classes.rb +2193 -350
  680. data/generated/google/apis/videointelligence_v1/representations.rb +805 -6
  681. data/generated/google/apis/videointelligence_v1/service.rb +7 -6
  682. data/generated/google/apis/videointelligence_v1.rb +3 -2
  683. data/generated/google/apis/videointelligence_v1beta2/classes.rb +2448 -605
  684. data/generated/google/apis/videointelligence_v1beta2/representations.rb +806 -7
  685. data/generated/google/apis/videointelligence_v1beta2/service.rb +3 -2
  686. data/generated/google/apis/videointelligence_v1beta2.rb +3 -2
  687. data/generated/google/apis/videointelligence_v1p1beta1/classes.rb +2422 -579
  688. data/generated/google/apis/videointelligence_v1p1beta1/representations.rb +806 -7
  689. data/generated/google/apis/videointelligence_v1p1beta1/service.rb +3 -2
  690. data/generated/google/apis/videointelligence_v1p1beta1.rb +3 -2
  691. data/generated/google/apis/videointelligence_v1p2beta1/classes.rb +2645 -830
  692. data/generated/google/apis/videointelligence_v1p2beta1/representations.rb +796 -12
  693. data/generated/google/apis/videointelligence_v1p2beta1/service.rb +3 -2
  694. data/generated/google/apis/videointelligence_v1p2beta1.rb +3 -2
  695. data/generated/google/apis/videointelligence_v1p3beta1/classes.rb +4687 -0
  696. data/generated/google/apis/videointelligence_v1p3beta1/representations.rb +2005 -0
  697. data/generated/google/apis/videointelligence_v1p3beta1/service.rb +94 -0
  698. data/generated/google/apis/videointelligence_v1p3beta1.rb +36 -0
  699. data/generated/google/apis/vision_v1/classes.rb +4397 -124
  700. data/generated/google/apis/vision_v1/representations.rb +2366 -541
  701. data/generated/google/apis/vision_v1/service.rb +160 -33
  702. data/generated/google/apis/vision_v1.rb +1 -1
  703. data/generated/google/apis/vision_v1p1beta1/classes.rb +4451 -158
  704. data/generated/google/apis/vision_v1p1beta1/representations.rb +2415 -576
  705. data/generated/google/apis/vision_v1p1beta1/service.rb +73 -2
  706. data/generated/google/apis/vision_v1p1beta1.rb +1 -1
  707. data/generated/google/apis/vision_v1p2beta1/classes.rb +4451 -158
  708. data/generated/google/apis/vision_v1p2beta1/representations.rb +2443 -604
  709. data/generated/google/apis/vision_v1p2beta1/service.rb +73 -2
  710. data/generated/google/apis/vision_v1p2beta1.rb +1 -1
  711. data/generated/google/apis/webfonts_v1/service.rb +1 -1
  712. data/generated/google/apis/webmasters_v3/classes.rb +0 -166
  713. data/generated/google/apis/webmasters_v3/representations.rb +0 -93
  714. data/generated/google/apis/webmasters_v3/service.rb +9 -180
  715. data/generated/google/apis/webmasters_v3.rb +1 -1
  716. data/generated/google/apis/websecurityscanner_v1alpha/service.rb +13 -13
  717. data/generated/google/apis/websecurityscanner_v1beta/classes.rb +973 -0
  718. data/generated/google/apis/websecurityscanner_v1beta/representations.rb +452 -0
  719. data/generated/google/apis/websecurityscanner_v1beta/service.rb +548 -0
  720. data/generated/google/apis/websecurityscanner_v1beta.rb +34 -0
  721. data/generated/google/apis/youtube_analytics_v1/service.rb +8 -8
  722. data/generated/google/apis/youtube_analytics_v1beta1/service.rb +8 -8
  723. data/generated/google/apis/youtube_analytics_v2/service.rb +8 -8
  724. data/generated/google/apis/youtube_partner_v1/classes.rb +15 -34
  725. data/generated/google/apis/youtube_partner_v1/representations.rb +4 -17
  726. data/generated/google/apis/youtube_partner_v1/service.rb +74 -74
  727. data/generated/google/apis/youtube_partner_v1.rb +1 -1
  728. data/generated/google/apis/youtube_v3/service.rb +71 -71
  729. data/generated/google/apis/youtube_v3.rb +1 -1
  730. data/generated/google/apis/youtubereporting_v1/classes.rb +2 -2
  731. data/generated/google/apis/youtubereporting_v1/service.rb +8 -8
  732. data/generated/google/apis/youtubereporting_v1.rb +1 -1
  733. data/google-api-client.gemspec +2 -2
  734. data/lib/google/apis/core/http_command.rb +1 -0
  735. data/lib/google/apis/core/json_representation.rb +4 -0
  736. data/lib/google/apis/core/upload.rb +3 -3
  737. data/lib/google/apis/generator/model.rb +1 -1
  738. data/lib/google/apis/generator/templates/_method.tmpl +3 -3
  739. data/lib/google/apis/version.rb +1 -1
  740. metadata +86 -17
  741. data/.kokoro/common.cfg +0 -22
  742. data/.kokoro/windows.sh +0 -32
  743. data/generated/google/apis/logging_v2beta1/classes.rb +0 -1765
  744. data/generated/google/apis/logging_v2beta1/representations.rb +0 -537
  745. data/generated/google/apis/logging_v2beta1/service.rb +0 -570
  746. data/generated/google/apis/logging_v2beta1.rb +0 -46
  747. data/generated/google/apis/partners_v2/classes.rb +0 -2260
  748. data/generated/google/apis/partners_v2/representations.rb +0 -905
  749. data/generated/google/apis/partners_v2/service.rb +0 -1077
  750. data/samples/web/.env +0 -2
@@ -235,29 +235,1385 @@ module Google
  end
  end
 
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1NormalizedBoundingBox
+ include Google::Apis::Core::Hashable
+
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
+
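The added model classes in this hunk all follow the same `Google::Apis::Core::Hashable` pattern: `initialize(**args)` delegates to `update!`, which assigns only the keys actually passed. A minimal standalone sketch of that pattern (a hypothetical stand-in class, runnable without the gem; the real class lives under `Google::Apis::VideointelligenceV1beta2`):

```ruby
# Stand-in mimicking the generated NormalizedBoundingBox model.
class NormalizedBoundingBox
  attr_accessor :bottom, :left, :right, :top

  def initialize(**args)
    update!(**args)
  end

  # Only keys actually passed are assigned, so omitted fields stay nil.
  def update!(**args)
    @bottom = args[:bottom] if args.key?(:bottom)
    @left   = args[:left]   if args.key?(:left)
    @right  = args[:right]  if args.key?(:right)
    @top    = args[:top]    if args.key?(:top)
  end
end

box = NormalizedBoundingBox.new(left: 0.1, top: 0.2, right: 0.6, bottom: 0.9)
box.update!(right: 0.7)  # touches only :right; other fields keep their values
```

Because assignment is guarded by `args.key?`, `update!` can explicitly set a field to `nil` while leaving omitted fields untouched.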
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
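As the class comment notes, rotated text can yield normalized vertex values below 0 or above 1. Consumers commonly clamp coordinates back into range before drawing; a small illustrative sketch with made-up vertex data:

```ruby
# Hypothetical polygon vertices; rotation pushed some coordinates
# slightly outside the [0, 1] normalized range.
vertices = [
  { x: -0.02, y: 0.10 },
  { x: 0.55,  y: 0.08 },
  { x: 0.56,  y: 1.03 },
  { x: -0.01, y: 1.05 }
]

# Clamp each coordinate back into the valid normalized range.
clamped = vertices.map do |v|
  { x: v[:x].clamp(0.0, 1.0), y: v[:y].clamp(0.0, 1.0) }
end
```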
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
+
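The bounding box carried by each tracking frame is normalized to the original image, so rendering requires scaling by the frame dimensions. A sketch with invented frame size and coordinate values:

```ruby
# Convert a normalized box (range [0, 1], relative to the original
# frame) into a pixel rectangle for a given frame size.
def to_pixels(left:, top:, right:, bottom:, width:, height:)
  {
    x: (left * width).round,            # left edge in pixels
    y: (top * height).round,            # top edge in pixels
    w: ((right - left) * width).round,  # box width in pixels
    h: ((bottom - top) * height).round  # box height in pixels
  }
end

rect = to_pixels(left: 0.25, top: 0.1, right: 0.75, bottom: 0.5,
                 width: 1920, height: 1080)
# rect => { x: 480, y: 108, w: 960, h: 432 }
```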
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
  # Alternative hypotheses (a.k.a. n-best list).
  class GoogleCloudVideointelligenceV1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable
 
- # The confidence estimate between 0.0 and 1.0. A higher number
- # indicates an estimated greater likelihood that the recognized words are
- # correct. This field is typically provided only for the top hypothesis, and
- # only for `is_final=true` results. Clients should not rely on the
- # `confidence` field as it is not guaranteed to be accurate or consistent.
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
+
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1WordInfo>]
+ attr_accessor :words
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
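Since a transcription's alternatives arrive ordered by the recognizer, best first, picking the top hypothesis is just taking the first element. A sketch using `Struct` stand-ins (invented sample data) rather than the generated `Google::Apis` classes:

```ruby
# Stand-in for a SpeechRecognitionAlternative-style record.
Alternative = Struct.new(:transcript, :confidence, keyword_init: true)

# Alternatives as the recognizer would return them: most probable first.
alternatives = [
  Alternative.new(transcript: "hello world", confidence: 0.92),
  Alternative.new(transcript: "hello word",  confidence: 0.41)
]

# The top hypothesis is simply the first element of the ordered list.
best = alternatives.first
```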
483
+ # Annotations related to one detected OCR text snippet. This will contain the
484
+ # corresponding text, confidence value, and frame level information for each
485
+ # detection.
486
+ class GoogleCloudVideointelligenceV1TextAnnotation
487
+ include Google::Apis::Core::Hashable
488
+
489
+ # All video segments where OCR detected text appears.
490
+ # Corresponds to the JSON property `segments`
491
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1TextSegment>]
492
+ attr_accessor :segments
493
+
494
+ # The detected text.
495
+ # Corresponds to the JSON property `text`
496
+ # @return [String]
497
+ attr_accessor :text
498
+
499
+ def initialize(**args)
500
+ update!(**args)
501
+ end
502
+
503
+ # Update properties of this object
504
+ def update!(**args)
505
+ @segments = args[:segments] if args.key?(:segments)
506
+ @text = args[:text] if args.key?(:text)
507
+ end
508
+ end
509
+
510
+ # Video frame level annotation results for text annotation (OCR).
511
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1TextFrame
+   include Google::Apis::Core::Hashable
+
+   # Normalized bounding polygon for text (that might not be aligned with axis).
+   # Contains list of the corner points in clockwise order starting from
+   # top-left corner. For example, for a rectangular bounding box:
+   # When the text is horizontal it might look like:
+   #         0----1
+   #         |    |
+   #         3----2
+   # When it's clockwise rotated 180 degrees around the top-left corner it
+   # becomes:
+   #         2----3
+   #         |    |
+   #         1----0
+   # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+   # than 0, or greater than 1 due to trigonometric calculations for location of
+   # the box.
+   # Corresponds to the JSON property `rotatedBoundingBox`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1NormalizedBoundingPoly]
+   attr_accessor :rotated_bounding_box
+
+   # Timestamp of this frame.
+   # Corresponds to the JSON property `timeOffset`
+   # @return [String]
+   attr_accessor :time_offset
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+     @time_offset = args[:time_offset] if args.key?(:time_offset)
+   end
+ end
+
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1TextSegment
+   include Google::Apis::Core::Hashable
+
+   # Confidence for the track of detected text. It is calculated as the highest
+   # over all frames where OCR detected text appears.
+   # Corresponds to the JSON property `confidence`
+   # @return [Float]
+   attr_accessor :confidence
+
+   # Information related to the frames where OCR detected text appears.
+   # Corresponds to the JSON property `frames`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1TextFrame>]
+   attr_accessor :frames
+
+   # Video segment.
+   # Corresponds to the JSON property `segment`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1VideoSegment]
+   attr_accessor :segment
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @confidence = args[:confidence] if args.key?(:confidence)
+     @frames = args[:frames] if args.key?(:frames)
+     @segment = args[:segment] if args.key?(:segment)
+   end
+ end
+
+ # Annotation progress for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+   include Google::Apis::Core::Hashable
+
+   # Video file location in
+   # [Google Cloud Storage](https://cloud.google.com/storage/).
+   # Corresponds to the JSON property `inputUri`
+   # @return [String]
+   attr_accessor :input_uri
+
+   # Approximate percentage processed thus far. Guaranteed to be
+   # 100 when fully processed.
+   # Corresponds to the JSON property `progressPercent`
+   # @return [Fixnum]
+   attr_accessor :progress_percent
+
+   # Time when the request was received.
+   # Corresponds to the JSON property `startTime`
+   # @return [String]
+   attr_accessor :start_time
+
+   # Time of the most recent update.
+   # Corresponds to the JSON property `updateTime`
+   # @return [String]
+   attr_accessor :update_time
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @input_uri = args[:input_uri] if args.key?(:input_uri)
+     @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+     @start_time = args[:start_time] if args.key?(:start_time)
+     @update_time = args[:update_time] if args.key?(:update_time)
+   end
+ end
+
+ # Annotation results for a single video.
+ class GoogleCloudVideointelligenceV1VideoAnnotationResults
+   include Google::Apis::Core::Hashable
+
+   # The `Status` type defines a logical error model that is suitable for
+   # different programming environments, including REST APIs and RPC APIs. It is
+   # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+   # - Simple to use and understand for most users
+   # - Flexible enough to meet unexpected needs
+   # # Overview
+   # The `Status` message contains three pieces of data: error code, error
+   # message, and error details. The error code should be an enum value of
+   # google.rpc.Code, but it may accept additional error codes if needed. The
+   # error message should be a developer-facing English message that helps
+   # developers *understand* and *resolve* the error. If a localized user-facing
+   # error message is needed, put the localized message in the error details or
+   # localize it in the client. The optional error details may contain arbitrary
+   # information about the error. There is a predefined set of error detail types
+   # in the package `google.rpc` that can be used for common error conditions.
+   # # Language mapping
+   # The `Status` message is the logical representation of the error model, but it
+   # is not necessarily the actual wire format. When the `Status` message is
+   # exposed in different client libraries and different wire protocols, it can be
+   # mapped differently. For example, it will likely be mapped to some exceptions
+   # in Java, but more likely mapped to some error codes in C.
+   # # Other uses
+   # The error model and the `Status` message can be used in a variety of
+   # environments, either with or without APIs, to provide a
+   # consistent developer experience across different environments.
+   # Example uses of this error model include:
+   # - Partial errors. If a service needs to return partial errors to the client,
+   #   it may embed the `Status` in the normal response to indicate the partial
+   #   errors.
+   # - Workflow errors. A typical workflow has multiple steps. Each step may
+   #   have a `Status` message for error reporting.
+   # - Batch operations. If a client uses batch request and batch response, the
+   #   `Status` message should be used directly inside batch response, one for
+   #   each error sub-response.
+   # - Asynchronous operations. If an API call embeds asynchronous operation
+   #   results in its response, the status of those operations should be
+   #   represented directly using the `Status` message.
+   # - Logging. If some API errors are stored in logs, the message `Status` could
+   #   be used directly after any stripping needed for security/privacy reasons.
+   # Corresponds to the JSON property `error`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleRpcStatus]
+   attr_accessor :error
+
+   # Explicit content annotation (based on per-frame visual signals only).
+   # If no explicit content has been detected in a frame, no annotations are
+   # present for that frame.
+   # Corresponds to the JSON property `explicitAnnotation`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1ExplicitContentAnnotation]
+   attr_accessor :explicit_annotation
+
+   # Label annotations on frame level.
+   # There is exactly one element for each unique label.
+   # Corresponds to the JSON property `frameLabelAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+   attr_accessor :frame_label_annotations
+
+   # Video file location in
+   # [Google Cloud Storage](https://cloud.google.com/storage/).
+   # Corresponds to the JSON property `inputUri`
+   # @return [String]
+   attr_accessor :input_uri
+
+   # Annotations for list of objects detected and tracked in video.
+   # Corresponds to the JSON property `objectAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>]
+   attr_accessor :object_annotations
+
+   # Label annotations on video level or user specified segment level.
+   # There is exactly one element for each unique label.
+   # Corresponds to the JSON property `segmentLabelAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+   attr_accessor :segment_label_annotations
+
+   # Shot annotations. Each shot is represented as a video segment.
+   # Corresponds to the JSON property `shotAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1VideoSegment>]
+   attr_accessor :shot_annotations
+
+   # Label annotations on shot level.
+   # There is exactly one element for each unique label.
+   # Corresponds to the JSON property `shotLabelAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+   attr_accessor :shot_label_annotations
+
+   # Speech transcription.
+   # Corresponds to the JSON property `speechTranscriptions`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1SpeechTranscription>]
+   attr_accessor :speech_transcriptions
+
+   # OCR text detection and tracking.
+   # Annotations for list of detected text snippets. Each will have list of
+   # frame information associated with it.
+   # Corresponds to the JSON property `textAnnotations`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1TextAnnotation>]
+   attr_accessor :text_annotations
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @error = args[:error] if args.key?(:error)
+     @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+     @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+     @input_uri = args[:input_uri] if args.key?(:input_uri)
+     @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+     @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+     @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+     @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+     @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+     @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+   end
+ end
+
+ # Video segment.
+ class GoogleCloudVideointelligenceV1VideoSegment
+   include Google::Apis::Core::Hashable
+
+   # Time-offset, relative to the beginning of the video,
+   # corresponding to the end of the segment (inclusive).
+   # Corresponds to the JSON property `endTimeOffset`
+   # @return [String]
+   attr_accessor :end_time_offset
+
+   # Time-offset, relative to the beginning of the video,
+   # corresponding to the start of the segment (inclusive).
+   # Corresponds to the JSON property `startTimeOffset`
+   # @return [String]
+   attr_accessor :start_time_offset
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
+     @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+   end
+ end
+
+ # Word-specific information for recognized words. Word information is only
+ # included in the response when certain request parameters are set, such
+ # as `enable_word_time_offsets`.
+ class GoogleCloudVideointelligenceV1WordInfo
+   include Google::Apis::Core::Hashable
+
+   # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+   # indicates an estimated greater likelihood that the recognized words are
+   # correct. This field is set only for the top alternative.
+   # This field is not guaranteed to be accurate and users should not rely on it
+   # to be always provided.
+   # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+   # Corresponds to the JSON property `confidence`
+   # @return [Float]
+   attr_accessor :confidence
+
+   # Time offset relative to the beginning of the audio, and
+   # corresponding to the end of the spoken word. This field is only set if
+   # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+   # experimental feature and the accuracy of the time offset can vary.
+   # Corresponds to the JSON property `endTime`
+   # @return [String]
+   attr_accessor :end_time
+
+   # Output only. A distinct integer value is assigned for every speaker within
+   # the audio. This field specifies which one of those speakers was detected to
+   # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
+   # and is only set if speaker diarization is enabled.
+   # Corresponds to the JSON property `speakerTag`
+   # @return [Fixnum]
+   attr_accessor :speaker_tag
+
+   # Time offset relative to the beginning of the audio, and
+   # corresponding to the start of the spoken word. This field is only set if
+   # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+   # experimental feature and the accuracy of the time offset can vary.
+   # Corresponds to the JSON property `startTime`
+   # @return [String]
+   attr_accessor :start_time
+
+   # The word corresponding to this set of information.
+   # Corresponds to the JSON property `word`
+   # @return [String]
+   attr_accessor :word
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @confidence = args[:confidence] if args.key?(:confidence)
+     @end_time = args[:end_time] if args.key?(:end_time)
+     @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
+     @start_time = args[:start_time] if args.key?(:start_time)
+     @word = args[:word] if args.key?(:word)
+   end
+ end
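`GoogleCloudVideointelligenceV1WordInfo` carries a `speaker_tag` only when speaker diarization is enabled. A minimal sketch of client-side post-processing, using plain hashes in place of the generated class (the helper name is hypothetical, not part of the client):

```ruby
# Group recognized words by speaker_tag and join them back into
# per-speaker transcripts. `words` is an array of hashes mirroring
# the WordInfo fields used here (:speaker_tag, :word).
def words_by_speaker(words)
  words.group_by { |w| w[:speaker_tag] }
       .transform_values { |ws| ws.map { |w| w[:word] }.join(' ') }
end
```

The same traversal works on real `WordInfo` objects by calling the accessors instead of hash lookups.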
+
+ # Video annotation progress. Included in the `metadata`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1beta2AnnotateVideoProgress
+   include Google::Apis::Core::Hashable
+
+   # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+   # Corresponds to the JSON property `annotationProgress`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress>]
+   attr_accessor :annotation_progress
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+   end
+ end
+
+ # Video annotation request.
+ class GoogleCloudVideointelligenceV1beta2AnnotateVideoRequest
+   include Google::Apis::Core::Hashable
+
+   # Requested video annotation features.
+   # Corresponds to the JSON property `features`
+   # @return [Array<String>]
+   attr_accessor :features
+
+   # The video data bytes.
+   # If unset, the input video(s) should be specified via `input_uri`.
+   # If set, `input_uri` should be unset.
+   # Corresponds to the JSON property `inputContent`
+   # NOTE: Values are automatically base64 encoded/decoded in the client library.
+   # @return [String]
+   attr_accessor :input_content
+
+   # Input video location. Currently, only
+   # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
+   # supported, which must be specified in the following format:
+   # `gs://bucket-id/object-id` (other URI formats return
+   # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+   # [Request URIs](/storage/docs/reference-uris).
+   # A video URI may include wildcards in `object-id`, and thus identify
+   # multiple videos. Supported wildcards: '*' to match 0 or more characters;
+   # '?' to match 1 character. If unset, the input video should be embedded
+   # in the request as `input_content`. If set, `input_content` should be unset.
+   # Corresponds to the JSON property `inputUri`
+   # @return [String]
+   attr_accessor :input_uri
+
+   # Optional cloud region where annotation should take place. Supported cloud
+   # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
+   # is specified, a region will be determined based on video file location.
+   # Corresponds to the JSON property `locationId`
+   # @return [String]
+   attr_accessor :location_id
+
+   # Optional location where the output (in JSON format) should be stored.
+   # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
+   # URIs are supported, which must be specified in the following format:
+   # `gs://bucket-id/object-id` (other URI formats return
+   # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+   # [Request URIs](/storage/docs/reference-uris).
+   # Corresponds to the JSON property `outputUri`
+   # @return [String]
+   attr_accessor :output_uri
+
+   # Video context and/or feature-specific parameters.
+   # Corresponds to the JSON property `videoContext`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoContext]
+   attr_accessor :video_context
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @features = args[:features] if args.key?(:features)
+     @input_content = args[:input_content] if args.key?(:input_content)
+     @input_uri = args[:input_uri] if args.key?(:input_uri)
+     @location_id = args[:location_id] if args.key?(:location_id)
+     @output_uri = args[:output_uri] if args.key?(:output_uri)
+     @video_context = args[:video_context] if args.key?(:video_context)
+   end
+ end
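The doc comments on `AnnotateVideoRequest` say that `input_uri` and `input_content` are mutually exclusive, and that only `gs://` URIs are accepted (other formats return INVALID_ARGUMENT). A sketch of that rule as a standalone helper, assuming you want to catch it client-side before sending the request (the helper is hypothetical and not part of the generated client; the real check happens server-side):

```ruby
# Enforce the rule documented on AnnotateVideoRequest: exactly one of
# input_uri / input_content must be set, and input_uri must be a
# Google Cloud Storage URI of the form gs://bucket-id/object-id.
def validate_video_source!(input_uri: nil, input_content: nil)
  if input_uri.nil? == input_content.nil?
    raise ArgumentError, 'set exactly one of input_uri or input_content'
  end
  if input_uri && !input_uri.start_with?('gs://')
    raise ArgumentError, 'only gs://bucket-id/object-id URIs are supported'
  end
  true
end
```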
+
+ # Video annotation response. Included in the `response`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1beta2AnnotateVideoResponse
+   include Google::Apis::Core::Hashable
+
+   # Annotation results for all videos specified in `AnnotateVideoRequest`.
+   # Corresponds to the JSON property `annotationResults`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoAnnotationResults>]
+   attr_accessor :annotation_results
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+   end
+ end
+
+ # Detected entity from video analysis.
+ class GoogleCloudVideointelligenceV1beta2Entity
+   include Google::Apis::Core::Hashable
+
+   # Textual description, e.g. `Fixed-gear bicycle`.
+   # Corresponds to the JSON property `description`
+   # @return [String]
+   attr_accessor :description
+
+   # Opaque entity ID. Some IDs may be available in
+   # [Google Knowledge Graph Search
+   # API](https://developers.google.com/knowledge-graph/).
+   # Corresponds to the JSON property `entityId`
+   # @return [String]
+   attr_accessor :entity_id
+
+   # Language code for `description` in BCP-47 format.
+   # Corresponds to the JSON property `languageCode`
+   # @return [String]
+   attr_accessor :language_code
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @description = args[:description] if args.key?(:description)
+     @entity_id = args[:entity_id] if args.key?(:entity_id)
+     @language_code = args[:language_code] if args.key?(:language_code)
+   end
+ end
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ class GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation
+   include Google::Apis::Core::Hashable
+
+   # All video frames where explicit content was detected.
+   # Corresponds to the JSON property `frames`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentFrame>]
+   attr_accessor :frames
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @frames = args[:frames] if args.key?(:frames)
+   end
+ end
+
+ # Config for EXPLICIT_CONTENT_DETECTION.
+ class GoogleCloudVideointelligenceV1beta2ExplicitContentDetectionConfig
+   include Google::Apis::Core::Hashable
+
+   # Model to use for explicit content detection.
+   # Supported values: "builtin/stable" (the default if unset) and
+   # "builtin/latest".
+   # Corresponds to the JSON property `model`
+   # @return [String]
+   attr_accessor :model
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @model = args[:model] if args.key?(:model)
+   end
+ end
+
+ # Video frame level annotation results for explicit content.
+ class GoogleCloudVideointelligenceV1beta2ExplicitContentFrame
+   include Google::Apis::Core::Hashable
+
+   # Likelihood of the pornography content.
+   # Corresponds to the JSON property `pornographyLikelihood`
+   # @return [String]
+   attr_accessor :pornography_likelihood
+
+   # Time-offset, relative to the beginning of the video, corresponding to the
+   # video frame for this location.
+   # Corresponds to the JSON property `timeOffset`
+   # @return [String]
+   attr_accessor :time_offset
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+     @time_offset = args[:time_offset] if args.key?(:time_offset)
+   end
+ end
+
+ # Label annotation.
+ class GoogleCloudVideointelligenceV1beta2LabelAnnotation
+   include Google::Apis::Core::Hashable
+
+   # Common categories for the detected entity.
+   # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+   # cases there might be more than one category, e.g. `Terrier` could also be
+   # a `pet`.
+   # Corresponds to the JSON property `categoryEntities`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2Entity>]
+   attr_accessor :category_entities
+
+   # Detected entity from video analysis.
+   # Corresponds to the JSON property `entity`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2Entity]
+   attr_accessor :entity
+
+   # All video frames where a label was detected.
+   # Corresponds to the JSON property `frames`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelFrame>]
+   attr_accessor :frames
+
+   # All video segments where a label was detected.
+   # Corresponds to the JSON property `segments`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelSegment>]
+   attr_accessor :segments
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @category_entities = args[:category_entities] if args.key?(:category_entities)
+     @entity = args[:entity] if args.key?(:entity)
+     @frames = args[:frames] if args.key?(:frames)
+     @segments = args[:segments] if args.key?(:segments)
+   end
+ end
+
+ # Config for LABEL_DETECTION.
+ class GoogleCloudVideointelligenceV1beta2LabelDetectionConfig
+   include Google::Apis::Core::Hashable
+
+   # The confidence threshold used to filter the labels from
+   # frame-level detection. If not set, it is set to 0.4 by default. The valid
+   # range for this threshold is [0.1, 0.9]. Any value set outside of this
+   # range will be clipped.
+   # Note: for best results please follow the default threshold. We will update
+   # the default threshold every time we release a new model.
+   # Corresponds to the JSON property `frameConfidenceThreshold`
+   # @return [Float]
+   attr_accessor :frame_confidence_threshold
+
+   # What labels should be detected with LABEL_DETECTION, in addition to
+   # video-level labels or segment-level labels.
+   # If unspecified, defaults to `SHOT_MODE`.
+   # Corresponds to the JSON property `labelDetectionMode`
+   # @return [String]
+   attr_accessor :label_detection_mode
+
+   # Model to use for label detection.
+   # Supported values: "builtin/stable" (the default if unset) and
+   # "builtin/latest".
+   # Corresponds to the JSON property `model`
+   # @return [String]
+   attr_accessor :model
+
+   # Whether the video has been shot from a stationary (i.e. non-moving) camera.
+   # When set to true, might improve detection accuracy for moving objects.
+   # Should be used with `SHOT_AND_FRAME_MODE` enabled.
+   # Corresponds to the JSON property `stationaryCamera`
+   # @return [Boolean]
+   attr_accessor :stationary_camera
+   alias_method :stationary_camera?, :stationary_camera
+
+   # The confidence threshold used to filter the labels from
+   # video-level and shot-level detections. If not set, it is set to 0.3 by
+   # default. The valid range for this threshold is [0.1, 0.9]. Any value set
+   # outside of this range will be clipped.
+   # Note: for best results please follow the default threshold. We will update
+   # the default threshold every time we release a new model.
+   # Corresponds to the JSON property `videoConfidenceThreshold`
+   # @return [Float]
+   attr_accessor :video_confidence_threshold
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @frame_confidence_threshold = args[:frame_confidence_threshold] if args.key?(:frame_confidence_threshold)
+     @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
+     @model = args[:model] if args.key?(:model)
+     @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
+     @video_confidence_threshold = args[:video_confidence_threshold] if args.key?(:video_confidence_threshold)
+   end
+ end
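The two threshold fields on `LabelDetectionConfig` are documented as defaulting server-side (0.4 frame-level, 0.3 video/shot-level) and clipping out-of-range values into [0.1, 0.9]. A sketch of that rule in plain Ruby (the helper is illustrative only; the actual defaulting and clipping happen on the server):

```ruby
# Mirror the documented server-side behavior for the confidence
# thresholds: nil falls back to the default, and out-of-range values
# are clipped into the valid range [0.1, 0.9].
def clip_confidence_threshold(value, default)
  return default if value.nil?
  value.clamp(0.1, 0.9)
end
```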
+
+ # Video frame level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1beta2LabelFrame
+   include Google::Apis::Core::Hashable
+
+   # Confidence that the label is accurate. Range: [0, 1].
+   # Corresponds to the JSON property `confidence`
+   # @return [Float]
+   attr_accessor :confidence
+
+   # Time-offset, relative to the beginning of the video, corresponding to the
+   # video frame for this location.
+   # Corresponds to the JSON property `timeOffset`
+   # @return [String]
+   attr_accessor :time_offset
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @confidence = args[:confidence] if args.key?(:confidence)
+     @time_offset = args[:time_offset] if args.key?(:time_offset)
+   end
+ end
+
+ # Video segment level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1beta2LabelSegment
+   include Google::Apis::Core::Hashable
+
+   # Confidence that the label is accurate. Range: [0, 1].
+   # Corresponds to the JSON property `confidence`
+   # @return [Float]
+   attr_accessor :confidence
+
+   # Video segment.
+   # Corresponds to the JSON property `segment`
+   # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment]
+   attr_accessor :segment
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @confidence = args[:confidence] if args.key?(:confidence)
+     @segment = args[:segment] if args.key?(:segment)
+   end
+ end
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox
+   include Google::Apis::Core::Hashable
+
+   # Bottom Y coordinate.
+   # Corresponds to the JSON property `bottom`
+   # @return [Float]
+   attr_accessor :bottom
+
+   # Left X coordinate.
+   # Corresponds to the JSON property `left`
+   # @return [Float]
+   attr_accessor :left
+
+   # Right X coordinate.
+   # Corresponds to the JSON property `right`
+   # @return [Float]
+   attr_accessor :right
+
+   # Top Y coordinate.
+   # Corresponds to the JSON property `top`
+   # @return [Float]
+   attr_accessor :top
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @bottom = args[:bottom] if args.key?(:bottom)
+     @left = args[:left] if args.key?(:left)
+     @right = args[:right] if args.key?(:right)
+     @top = args[:top] if args.key?(:top)
+   end
+ end
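Because `NormalizedBoundingBox` coordinates are fractions of the frame, drawing or cropping requires scaling by the frame dimensions. A sketch of that conversion (the helper name and keyword signature are hypothetical):

```ruby
# Convert a normalized bounding box ([0, 1] fractions of the frame)
# to absolute pixel coordinates for a given frame size.
def to_pixels(left:, top:, right:, bottom:, width:, height:)
  {
    left:   (left   * width).round,
    top:    (top    * height).round,
    right:  (right  * width).round,
    bottom: (bottom * height).round
  }
end
```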
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ #         0----1
+ #         |    |
+ #         3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ #         2----3
+ #         |    |
+ #         1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trigonometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly
+   include Google::Apis::Core::Hashable
+
+   # Normalized vertices of the bounding polygon.
+   # Corresponds to the JSON property `vertices`
+   # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2NormalizedVertex>]
+   attr_accessor :vertices
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @vertices = args[:vertices] if args.key?(:vertices)
+   end
+ end
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1beta2NormalizedVertex
+   include Google::Apis::Core::Hashable
+
+   # X coordinate.
+   # Corresponds to the JSON property `x`
+   # @return [Float]
+   attr_accessor :x
+
+   # Y coordinate.
+   # Corresponds to the JSON property `y`
+   # @return [Float]
+   attr_accessor :y
+
+   def initialize(**args)
+     update!(**args)
+   end
+
+   # Update properties of this object
+   def update!(**args)
+     @x = args[:x] if args.key?(:x)
+     @y = args[:y] if args.key?(:y)
+   end
+ end
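The `NormalizedBoundingPoly` comment notes that rotation preserves vertex order but can push coordinates outside [0, 1]. A small demonstration with plain `[x, y]` pairs standing in for the generated `NormalizedVertex` class (the function is illustrative, not part of the client):

```ruby
# Rotate a polygon 180 degrees around its first vertex. Each point p
# maps to 2*c - p, where c is the pivot; the vertex order is
# unchanged, but coordinates can leave the [0, 1] range.
def rotate_180_about_first(vertices)
  cx, cy = vertices.first
  vertices.map { |x, y| [2 * cx - x, 2 * cy - y] }
end
```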
+
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
+ end
+ end
+
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame
+ include Google::Apis::Core::Hashable
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Config for SHOT_CHANGE_DETECTION.
+ class GoogleCloudVideointelligenceV1beta2ShotChangeDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # Model to use for shot change detection.
+ # Supported values: "builtin/stable" (the default if unset) and
+ # "builtin/latest".
+ # Corresponds to the JSON property `model`
+ # @return [String]
+ attr_accessor :model
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @model = args[:model] if args.key?(:model)
+ end
+ end
+
+ # Provides "hints" to the speech recognizer to favor specific words and phrases
+ # in the results.
+ class GoogleCloudVideointelligenceV1beta2SpeechContext
+ include Google::Apis::Core::Hashable
+
+ # *Optional* A list of strings containing words and phrases "hints" so that
+ # the speech recognition is more likely to recognize them. This can be used
+ # to improve the accuracy for specific words and phrases, for example, if
+ # specific commands are typically spoken by the user. This can also be used
+ # to add additional words to the vocabulary of the recognizer. See
+ # [usage limits](https://cloud.google.com/speech/limits#content).
+ # Corresponds to the JSON property `phrases`
+ # @return [Array<String>]
+ attr_accessor :phrases
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @phrases = args[:phrases] if args.key?(:phrases)
+ end
+ end
+
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
+ include Google::Apis::Core::Hashable
+
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
+
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2WordInfo>]
+ attr_accessor :words
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
+ end
+ end
+
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+ include Google::Apis::Core::Hashable
+
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
+
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Config for SPEECH_TRANSCRIPTION.
+ class GoogleCloudVideointelligenceV1beta2SpeechTranscriptionConfig
+ include Google::Apis::Core::Hashable
+
+ # *Optional* For file formats, such as MXF or MKV, supporting multiple audio
+ # tracks, specify up to two tracks. Default: track 0.
+ # Corresponds to the JSON property `audioTracks`
+ # @return [Array<Fixnum>]
+ attr_accessor :audio_tracks
+
+ # *Optional*
+ # If set, specifies the estimated number of speakers in the conversation.
+ # If not set, defaults to '2'.
+ # Ignored unless enable_speaker_diarization is set to true.
+ # Corresponds to the JSON property `diarizationSpeakerCount`
+ # @return [Fixnum]
+ attr_accessor :diarization_speaker_count
+
+ # *Optional* If 'true', adds punctuation to recognition result hypotheses.
+ # This feature is only available in select languages. Setting this for
+ # requests in other languages has no effect at all. The default 'false' value
+ # does not add punctuation to result hypotheses. NOTE: "This is currently
+ # offered as an experimental service, complimentary to all users. In the
+ # future this may be exclusively available as a premium feature."
+ # Corresponds to the JSON property `enableAutomaticPunctuation`
+ # @return [Boolean]
+ attr_accessor :enable_automatic_punctuation
+ alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+
+ # *Optional* If 'true', enables speaker detection for each recognized word in
+ # the top alternative of the recognition result using a speaker_tag provided
+ # in the WordInfo.
+ # Note: When this is true, we send all the words from the beginning of the
+ # audio for the top alternative in every consecutive responses.
+ # This is done in order to improve our speaker tags as our models learn to
+ # identify the speakers in the conversation over time.
+ # Corresponds to the JSON property `enableSpeakerDiarization`
+ # @return [Boolean]
+ attr_accessor :enable_speaker_diarization
+ alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+
+ # *Optional* If `true`, the top result includes a list of words and the
+ # confidence for those words. If `false`, no word-level confidence
+ # information is returned. The default is `false`.
+ # Corresponds to the JSON property `enableWordConfidence`
+ # @return [Boolean]
+ attr_accessor :enable_word_confidence
+ alias_method :enable_word_confidence?, :enable_word_confidence
+
+ # *Optional* If set to `true`, the server will attempt to filter out
+ # profanities, replacing all but the initial character in each filtered word
+ # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
+ # won't be filtered out.
+ # Corresponds to the JSON property `filterProfanity`
+ # @return [Boolean]
+ attr_accessor :filter_profanity
+ alias_method :filter_profanity?, :filter_profanity
+
+ # *Required* The language of the supplied audio as a
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
+ # Example: "en-US".
+ # See [Language Support](https://cloud.google.com/speech/docs/languages)
+ # for a list of the currently supported language codes.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ # *Optional* Maximum number of recognition hypotheses to be returned.
+ # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
+ # within each `SpeechTranscription`. The server may return fewer than
+ # `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will
+ # return a maximum of one. If omitted, will return a maximum of one.
+ # Corresponds to the JSON property `maxAlternatives`
+ # @return [Fixnum]
+ attr_accessor :max_alternatives
+
+ # *Optional* A means to provide context to assist the speech recognition.
+ # Corresponds to the JSON property `speechContexts`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechContext>]
+ attr_accessor :speech_contexts
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @audio_tracks = args[:audio_tracks] if args.key?(:audio_tracks)
+ @diarization_speaker_count = args[:diarization_speaker_count] if args.key?(:diarization_speaker_count)
+ @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
+ @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
+ @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
+ @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
+ @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
+ end
+ end
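The snake_case attributes of the config class above each note the camelCase JSON property they serialize to. A sketch of the JSON fragment a populated `SpeechTranscriptionConfig` would roughly correspond to, built by hand with only the standard library (the values are illustrative, and the hash shape is assumed from the "Corresponds to the JSON property" comments rather than produced by the gem's own representations):

```ruby
require 'json'

# Hand-built equivalent of a populated SpeechTranscriptionConfig;
# keys follow the camelCase JSON property names documented above.
config = {
  'languageCode'               => 'en-US',
  'maxAlternatives'            => 2,
  'enableAutomaticPunctuation' => true,
  'enableSpeakerDiarization'   => true,
  'diarizationSpeakerCount'    => 2,
  'speechContexts'             => [{ 'phrases' => ['video intelligence'] }]
}

puts JSON.pretty_generate(config)
```

In practice you would set the corresponding snake_case attributes on the generated class and let the client library handle the camelCase mapping; the hash above just makes the wire format visible.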
+
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1beta2TextAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2TextSegment>]
+ attr_accessor :segments

- # Transcript text representing the words that the user spoke.
- # Corresponds to the JSON property `transcript`
+ # The detected text.
+ # Corresponds to the JSON property `text`
  # @return [String]
- attr_accessor :transcript
+ attr_accessor :text

- # A list of word-specific information for each recognized word.
- # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1WordInfo>]
- attr_accessor :words
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
+ end
+ end
+
+ # Config for TEXT_DETECTION.
+ class GoogleCloudVideointelligenceV1beta2TextDetectionConfig
+ include Google::Apis::Core::Hashable
+
+ # Language hint can be specified if the language to be detected is known a
+ # priori. It can increase the accuracy of the detection. Language hint must
+ # be language code in BCP-47 format.
+ # Automatic language detection is performed if no hint is provided.
+ # Corresponds to the JSON property `languageHints`
+ # @return [Array<String>]
+ attr_accessor :language_hints

  def initialize(**args)
  update!(**args)
@@ -265,31 +1621,39 @@ module Google

  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @transcript = args[:transcript] if args.key?(:transcript)
- @words = args[:words] if args.key?(:words)
+ @language_hints = args[:language_hints] if args.key?(:language_hints)
 end
 end

- # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1SpeechTranscription
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1beta2TextFrame
  include Google::Apis::Core::Hashable

- # May contain one or more recognition hypotheses (up to the maximum specified
- # in `max_alternatives`). These alternatives are ordered in terms of
- # accuracy, with the top (first) alternative being the most probable, as
- # ranked by the recognizer.
- # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1SpeechRecognitionAlternative>]
- attr_accessor :alternatives
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box

- # Output only. The
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
- # language in this result. This language code was detected to have the most
- # likelihood of being spoken in the audio.
- # Corresponds to the JSON property `languageCode`
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
  # @return [String]
- attr_accessor :language_code
+ attr_accessor :time_offset

  def initialize(**args)
  update!(**args)
@@ -297,13 +1661,45 @@ module Google

  # Update properties of this object
  def update!(**args)
- @alternatives = args[:alternatives] if args.key?(:alternatives)
- @language_code = args[:language_code] if args.key?(:language_code)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1beta2TextSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
 end
  end
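The doc comment on `TextSegment#confidence` says the segment-level value is the highest confidence over all frames in which the text appears. A sketch of that aggregation using plain hashes; note the per-frame confidences here are purely illustrative stand-ins (the generated `TextFrame` message itself carries only a bounding box and a time offset, and the actual aggregation happens server-side):

```ruby
# Hypothetical per-frame detections for one text track; each hash
# stands in for a frame plus a server-side per-frame confidence.
frames = [
  { time_offset: '0.5s', confidence: 0.81 },
  { time_offset: '1.0s', confidence: 0.93 },
  { time_offset: '1.5s', confidence: 0.88 }
]

# Segment confidence = maximum over the frames, per the doc comment.
segment_confidence = frames.map { |f| f[:confidence] }.max
puts segment_confidence  # => 0.93
```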

  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
  include Google::Apis::Core::Hashable

  # Video file location in
@@ -342,17 +1738,17 @@ module Google
 end

  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
  include Google::Apis::Core::Hashable

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -392,13 +1788,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
  attr_accessor :explicit_annotation

  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
  attr_accessor :frame_label_annotations

  # Video file location in
@@ -407,28 +1803,40 @@ module Google
  # @return [String]
  attr_accessor :input_uri

+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
  attr_accessor :segment_label_annotations

  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment>]
  attr_accessor :shot_annotations

  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
  attr_accessor :shot_label_annotations

  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
  attr_accessor :speech_transcriptions

+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -439,15 +1847,68 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+ end
+ end
+
+ # Video context and/or feature-specific parameters.
+ class GoogleCloudVideointelligenceV1beta2VideoContext
+ include Google::Apis::Core::Hashable
+
+ # Config for EXPLICIT_CONTENT_DETECTION.
+ # Corresponds to the JSON property `explicitContentDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentDetectionConfig]
+ attr_accessor :explicit_content_detection_config
+
+ # Config for LABEL_DETECTION.
+ # Corresponds to the JSON property `labelDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelDetectionConfig]
+ attr_accessor :label_detection_config
+
+ # Video segments to annotate. The segments may overlap and are not required
+ # to be contiguous or span the whole video. If unspecified, each video is
+ # treated as a single segment.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+ attr_accessor :segments
+
+ # Config for SHOT_CHANGE_DETECTION.
+ # Corresponds to the JSON property `shotChangeDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ShotChangeDetectionConfig]
+ attr_accessor :shot_change_detection_config
+
+ # Config for SPEECH_TRANSCRIPTION.
+ # Corresponds to the JSON property `speechTranscriptionConfig`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechTranscriptionConfig]
+ attr_accessor :speech_transcription_config
+
+ # Config for TEXT_DETECTION.
+ # Corresponds to the JSON property `textDetectionConfig`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2TextDetectionConfig]
+ attr_accessor :text_detection_config
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
+ @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
+ @segments = args[:segments] if args.key?(:segments)
+ @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
+ @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+ @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
 end
  end
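`VideoContext` is the container that nests the per-feature configs added in this version. A sketch of the JSON fragment such a context corresponds to on the wire, built as a plain hash with only the standard library (all values are illustrative, and the `startTimeOffset`/`endTimeOffset` keys are assumed from the VideoSegment message rather than taken from this diff):

```ruby
require 'json'

# Hand-built VideoContext-shaped hash; top-level keys follow the
# camelCase JSON property names documented in the class above.
video_context = {
  'segments' => [
    { 'startTimeOffset' => '0s', 'endTimeOffset' => '30s' }  # assumed field names
  ],
  'shotChangeDetectionConfig' => { 'model' => 'builtin/stable' },
  'textDetectionConfig'       => { 'languageHints' => ['en'] },
  'speechTranscriptionConfig' => { 'languageCode' => 'en-US' }
}

puts JSON.generate(video_context)
```

With the gem itself, the equivalent would be built from the generated classes (`GoogleCloudVideointelligenceV1beta2VideoContext` and its config classes) and attached to an annotate-video request; the hash only shows how the pieces nest.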
448
1909
 
449
1910
  # Video segment.
450
- class GoogleCloudVideointelligenceV1VideoSegment
1911
+ class GoogleCloudVideointelligenceV1beta2VideoSegment
451
1912
  include Google::Apis::Core::Hashable
452
1913
 
453
1914
  # Time-offset, relative to the beginning of the video,
@@ -476,7 +1937,7 @@ module Google
476
1937
  # Word-specific information for recognized words. Word information is only
477
1938
  # included in the response when certain request parameters are set, such
478
1939
  # as `enable_word_time_offsets`.
479
- class GoogleCloudVideointelligenceV1WordInfo
1940
+ class GoogleCloudVideointelligenceV1beta2WordInfo
480
1941
  include Google::Apis::Core::Hashable
481
1942
 
482
1943
  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -527,84 +1988,21 @@ module Google
527
1988
  @confidence = args[:confidence] if args.key?(:confidence)
528
1989
  @end_time = args[:end_time] if args.key?(:end_time)
529
1990
  @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
530
- @start_time = args[:start_time] if args.key?(:start_time)
531
- @word = args[:word] if args.key?(:word)
532
- end
533
- end
534
-
535
- # Video annotation progress. Included in the `metadata`
536
- # field of the `Operation` returned by the `GetOperation`
537
- # call of the `google::longrunning::Operations` service.
538
- class GoogleCloudVideointelligenceV1beta2AnnotateVideoProgress
539
- include Google::Apis::Core::Hashable
540
-
541
- # Progress metadata for all videos specified in `AnnotateVideoRequest`.
542
- # Corresponds to the JSON property `annotationProgress`
543
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress>]
544
- attr_accessor :annotation_progress
545
-
546
- def initialize(**args)
547
- update!(**args)
548
- end
549
-
550
- # Update properties of this object
- def update!(**args)
- @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
- end
- end
-
- # Video annotation request.
- class GoogleCloudVideointelligenceV1beta2AnnotateVideoRequest
- include Google::Apis::Core::Hashable
-
- # Requested video annotation features.
- # Corresponds to the JSON property `features`
- # @return [Array<String>]
- attr_accessor :features
-
- # The video data bytes.
- # If unset, the input video(s) should be specified via `input_uri`.
- # If set, `input_uri` should be unset.
- # Corresponds to the JSON property `inputContent`
- # NOTE: Values are automatically base64 encoded/decoded in the client library.
- # @return [String]
- attr_accessor :input_content
-
- # Input video location. Currently, only
- # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
- # supported, which must be specified in the following format:
- # `gs://bucket-id/object-id` (other URI formats return
- # google.rpc.Code.INVALID_ARGUMENT). For more information, see
- # [Request URIs](/storage/docs/reference-uris).
- # A video URI may include wildcards in `object-id`, and thus identify
- # multiple videos. Supported wildcards: '*' to match 0 or more characters;
- # '?' to match 1 character. If unset, the input video should be embedded
- # in the request as `input_content`. If set, `input_content` should be unset.
- # Corresponds to the JSON property `inputUri`
- # @return [String]
- attr_accessor :input_uri
-
- # Optional cloud region where annotation should take place. Supported cloud
- # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
- # is specified, a region will be determined based on video file location.
- # Corresponds to the JSON property `locationId`
- # @return [String]
- attr_accessor :location_id
+ @start_time = args[:start_time] if args.key?(:start_time)
+ @word = args[:word] if args.key?(:word)
+ end
+ end
 
- # Optional location where the output (in JSON format) should be stored.
- # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
- # URIs are supported, which must be specified in the following format:
- # `gs://bucket-id/object-id` (other URI formats return
- # google.rpc.Code.INVALID_ARGUMENT). For more information, see
- # [Request URIs](/storage/docs/reference-uris).
- # Corresponds to the JSON property `outputUri`
- # @return [String]
- attr_accessor :output_uri
+ # Video annotation progress. Included in the `metadata`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+ include Google::Apis::Core::Hashable
 
- # Video context and/or feature-specific parameters.
- # Corresponds to the JSON property `videoContext`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoContext]
- attr_accessor :video_context
+ # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationProgress`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
+ attr_accessor :annotation_progress
 
  def initialize(**args)
  update!(**args)
@@ -612,24 +2010,19 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @features = args[:features] if args.key?(:features)
- @input_content = args[:input_content] if args.key?(:input_content)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @location_id = args[:location_id] if args.key?(:location_id)
- @output_uri = args[:output_uri] if args.key?(:output_uri)
- @video_context = args[:video_context] if args.key?(:video_context)
+ @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
  end
  end
 
  # Video annotation response. Included in the `response`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1beta2AnnotateVideoResponse
+ class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
  include Google::Apis::Core::Hashable
 
  # Annotation results for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoAnnotationResults>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
  attr_accessor :annotation_results
 
  def initialize(**args)
@@ -643,7 +2036,7 @@ module Google
  end
 
  # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1beta2Entity
+ class GoogleCloudVideointelligenceV1p1beta1Entity
  include Google::Apis::Core::Hashable
 
  # Textual description, e.g. `Fixed-gear bicycle`.
@@ -678,12 +2071,12 @@ module Google
  # Explicit content annotation (based on per-frame visual signals only).
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
- class GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
  include Google::Apis::Core::Hashable
 
  # All video frames where explicit content was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
  attr_accessor :frames
 
  def initialize(**args)
@@ -696,29 +2089,8 @@ module Google
  end
  end
 
- # Config for EXPLICIT_CONTENT_DETECTION.
- class GoogleCloudVideointelligenceV1beta2ExplicitContentDetectionConfig
- include Google::Apis::Core::Hashable
-
- # Model to use for explicit content detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @model = args[:model] if args.key?(:model)
- end
- end
-
  # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1beta2ExplicitContentFrame
+ class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
  include Google::Apis::Core::Hashable
 
  # Likelihood of the pornography content..
@@ -744,7 +2116,7 @@ module Google
  end
 
  # Label annotation.
- class GoogleCloudVideointelligenceV1beta2LabelAnnotation
+ class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
  include Google::Apis::Core::Hashable
 
  # Common categories for the detected entity.
@@ -752,22 +2124,22 @@ module Google
  # cases there might be more than one categories e.g. `Terrier` could also be
  # a `pet`.
  # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2Entity>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1Entity>]
  attr_accessor :category_entities
 
  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2Entity]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1Entity]
  attr_accessor :entity
 
  # All video frames where a label was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
  attr_accessor :frames
 
  # All video segments where a label was detected.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
  attr_accessor :segments
 
  def initialize(**args)
@@ -783,46 +2155,8 @@ module Google
  end
  end
 
- # Config for LABEL_DETECTION.
- class GoogleCloudVideointelligenceV1beta2LabelDetectionConfig
- include Google::Apis::Core::Hashable
-
- # What labels should be detected with LABEL_DETECTION, in addition to
- # video-level labels or segment-level labels.
- # If unspecified, defaults to `SHOT_MODE`.
- # Corresponds to the JSON property `labelDetectionMode`
- # @return [String]
- attr_accessor :label_detection_mode
-
- # Model to use for label detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
-
- # Whether the video has been shot from a stationary (i.e. non-moving) camera.
- # When set to true, might improve detection accuracy for moving objects.
- # Should be used with `SHOT_AND_FRAME_MODE` enabled.
- # Corresponds to the JSON property `stationaryCamera`
- # @return [Boolean]
- attr_accessor :stationary_camera
- alias_method :stationary_camera?, :stationary_camera
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
- @model = args[:model] if args.key?(:model)
- @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
- end
- end
-
  # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1beta2LabelFrame
+ class GoogleCloudVideointelligenceV1p1beta1LabelFrame
  include Google::Apis::Core::Hashable
 
  # Confidence that the label is accurate. Range: [0, 1].
@@ -848,7 +2182,7 @@ module Google
  end
 
  # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1beta2LabelSegment
+ class GoogleCloudVideointelligenceV1p1beta1LabelSegment
  include Google::Apis::Core::Hashable
 
  # Confidence that the label is accurate. Range: [0, 1].
@@ -858,7 +2192,7 @@ module Google
 
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
  attr_accessor :segment
 
  def initialize(**args)
@@ -872,16 +2206,31 @@ module Google
  end
  end
 
- # Config for SHOT_CHANGE_DETECTION.
- class GoogleCloudVideointelligenceV1beta2ShotChangeDetectionConfig
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable
 
- # Model to use for shot change detection.
- # Supported values: "builtin/stable" (the default if unset) and
- # "builtin/latest".
- # Corresponds to the JSON property `model`
- # @return [String]
- attr_accessor :model
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
 
  def initialize(**args)
  update!(**args)
@@ -889,24 +2238,140 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @model = args[:model] if args.key?(:model)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
+ end
+ end
+
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly
+ include Google::Apis::Core::Hashable
+
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1NormalizedVertex>]
+ attr_accessor :vertices
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @vertices = args[:vertices] if args.key?(:vertices)
+ end
+ end
+
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1p1beta1NormalizedVertex
+ include Google::Apis::Core::Hashable
+
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
+
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
+ end
+ end
+
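The new v1p1beta1 classes added above all follow the same generated pattern from `Google::Apis::Core::Hashable`: `initialize(**args)` delegates to `update!`, which assigns only the keys actually passed, so partial updates leave other fields untouched. A stdlib-only sketch of that pattern (the class name here is a stand-in for illustration, not the gem's actual class):

```ruby
# Stand-in mirroring the generated pattern of the v1p1beta1 classes above:
# initialize(**args) delegates to update!, which assigns only supplied keys.
class NormalizedBoundingBoxSketch
  attr_accessor :left, :top, :right, :bottom

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object, touching only keys that were passed.
  def update!(**args)
    @left   = args[:left]   if args.key?(:left)
    @top    = args[:top]    if args.key?(:top)
    @right  = args[:right]  if args.key?(:right)
    @bottom = args[:bottom] if args.key?(:bottom)
  end
end

box = NormalizedBoundingBoxSketch.new(left: 0.1, top: 0.2, right: 0.9, bottom: 0.8)
box.update!(right: 0.95)            # partial update leaves other fields alone
puts [box.left, box.right].inspect  # => [0.1, 0.95]
```

The real classes behave the same way, with the field names shown in the diff (`bottom`, `left`, `right`, `top`, all normalized to the [0, 1] range of the original image).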
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
  end
  end
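The `track_id` field documented above is how streaming clients are expected to stitch partial results together: each streamed annotation carries a single frame and a `track_id` rather than a `VideoSegment`. A hypothetical stdlib-only sketch (plain hashes stand in for the gem's ObjectTrackingAnnotation objects) of accumulating streamed frames per track:

```ruby
# Hypothetical sketch: group streamed object-tracking results by track_id.
# Each streamed annotation carries one frame and a track_id (no VideoSegment),
# so the client accumulates frames per track as responses arrive.
def group_frames_by_track(streamed_annotations)
  tracks = Hash.new { |h, k| h[k] = [] }
  streamed_annotations.each do |ann|
    tracks[ann[:track_id]].concat(ann[:frames])
  end
  tracks
end

stream = [
  { track_id: 7, frames: [{ time_offset: '0s' }] },
  { track_id: 9, frames: [{ time_offset: '0s' }] },
  { track_id: 7, frames: [{ time_offset: '0.1s' }] }
]
tracks = group_frames_by_track(stream)
puts tracks[7].length  # => 2
```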
 
- # Provides "hints" to the speech recognizer to favor specific words and phrases
- # in the results.
- class GoogleCloudVideointelligenceV1beta2SpeechContext
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable
 
- # *Optional* A list of strings containing words and phrases "hints" so that
- # the speech recognition is more likely to recognize them. This can be used
- # to improve the accuracy for specific words and phrases, for example, if
- # specific commands are typically spoken by the user. This can also be used
- # to add additional words to the vocabulary of the recognizer. See
- # [usage limits](https://cloud.google.com/speech/limits#content).
- # Corresponds to the JSON property `phrases`
- # @return [Array<String>]
- attr_accessor :phrases
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
+
+ # The timestamp of the frame in microseconds.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
 
  def initialize(**args)
  update!(**args)
@@ -914,12 +2379,13 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @phrases = args[:phrases] if args.key?(:phrases)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
  # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
+ class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable
 
  # The confidence estimate between 0.0 and 1.0. A higher number
@@ -939,7 +2405,7 @@ module Google
 
  # A list of word-specific information for each recognized word.
  # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2WordInfo>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
  attr_accessor :words
 
  def initialize(**args)
@@ -955,7 +2421,7 @@ module Google
  end
 
  # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+ class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
  include Google::Apis::Core::Hashable
 
  # May contain one or more recognition hypotheses (up to the maximum specified
@@ -963,7 +2429,7 @@ module Google
  # accuracy, with the top (first) alternative being the most probable, as
  # ranked by the recognizer.
  # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
  attr_accessor :alternatives
 
  # Output only. The
@@ -985,86 +2451,93 @@ module Google
  end
  end
 
- # Config for SPEECH_TRANSCRIPTION.
- class GoogleCloudVideointelligenceV1beta2SpeechTranscriptionConfig
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1p1beta1TextAnnotation
  include Google::Apis::Core::Hashable
 
- # *Optional* For file formats, such as MXF or MKV, supporting multiple audio
- # tracks, specify up to two tracks. Default: track 0.
- # Corresponds to the JSON property `audioTracks`
- # @return [Array<Fixnum>]
- attr_accessor :audio_tracks
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1TextSegment>]
+ attr_accessor :segments
 
- # *Optional*
- # If set, specifies the estimated number of speakers in the conversation.
- # If not set, defaults to '2'.
- # Ignored unless enable_speaker_diarization is set to true.
- # Corresponds to the JSON property `diarizationSpeakerCount`
- # @return [Fixnum]
- attr_accessor :diarization_speaker_count
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text
 
- # *Optional* If 'true', adds punctuation to recognition result hypotheses.
- # This feature is only available in select languages. Setting this for
- # requests in other languages has no effect at all. The default 'false' value
- # does not add punctuation to result hypotheses. NOTE: "This is currently
- # offered as an experimental service, complimentary to all users. In the
- # future this may be exclusively available as a premium feature."
- # Corresponds to the JSON property `enableAutomaticPunctuation`
- # @return [Boolean]
- attr_accessor :enable_automatic_punctuation
- alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+ def initialize(**args)
+ update!(**args)
+ end
 
- # *Optional* If 'true', enables speaker detection for each recognized word in
- # the top alternative of the recognition result using a speaker_tag provided
- # in the WordInfo.
- # Note: When this is true, we send all the words from the beginning of the
- # audio for the top alternative in every consecutive responses.
- # This is done in order to improve our speaker tags as our models learn to
- # identify the speakers in the conversation over time.
- # Corresponds to the JSON property `enableSpeakerDiarization`
- # @return [Boolean]
- attr_accessor :enable_speaker_diarization
- alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+ # Update properties of this object
+ def update!(**args)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
+ end
+ end
 
- # *Optional* If `true`, the top result includes a list of words and the
- # confidence for those words. If `false`, no word-level confidence
- # information is returned. The default is `false`.
- # Corresponds to the JSON property `enableWordConfidence`
- # @return [Boolean]
- attr_accessor :enable_word_confidence
- alias_method :enable_word_confidence?, :enable_word_confidence
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1p1beta1TextFrame
+ include Google::Apis::Core::Hashable
 
- # *Optional* If set to `true`, the server will attempt to filter out
- # profanities, replacing all but the initial character in each filtered word
- # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
- # won't be filtered out.
- # Corresponds to the JSON property `filterProfanity`
- # @return [Boolean]
- attr_accessor :filter_profanity
- alias_method :filter_profanity?, :filter_profanity
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
 
- # *Required* The language of the supplied audio as a
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
- # Example: "en-US".
- # See [Language Support](https://cloud.google.com/speech/docs/languages)
- # for a list of the currently supported language codes.
- # Corresponds to the JSON property `languageCode`
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
  # @return [String]
- attr_accessor :language_code
+ attr_accessor :time_offset
 
- # *Optional* Maximum number of recognition hypotheses to be returned.
- # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
- # within each `SpeechTranscription`. The server may return fewer than
- # `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will
- # return a maximum of one. If omitted, will return a maximum of one.
- # Corresponds to the JSON property `maxAlternatives`
- # @return [Fixnum]
- attr_accessor :max_alternatives
+ def initialize(**args)
+ update!(**args)
+ end
 
- # *Optional* A means to provide context to assist the speech recognition.
- # Corresponds to the JSON property `speechContexts`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechContext>]
- attr_accessor :speech_contexts
+ # Update properties of this object
+ def update!(**args)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1p1beta1TextSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+ attr_accessor :segment
 
  def initialize(**args)
  update!(**args)
@@ -1072,20 +2545,14 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @audio_tracks = args[:audio_tracks] if args.key?(:audio_tracks)
- @diarization_speaker_count = args[:diarization_speaker_count] if args.key?(:diarization_speaker_count)
- @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
- @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
- @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
- @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
- @language_code = args[:language_code] if args.key?(:language_code)
- @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
- @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
  end
  end
 
  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
  include Google::Apis::Core::Hashable
 
  # Video file location in
@@ -1124,17 +2591,17 @@ module Google
  end
 
  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
  include Google::Apis::Core::Hashable
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1174,13 +2641,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
  attr_accessor :explicit_annotation
 
  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
  attr_accessor :frame_label_annotations
 
  # Video file location in
@@ -1189,75 +2656,39 @@ module Google
  # @return [String]
  attr_accessor :input_uri
 
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations
 
  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment>]
- attr_accessor :shot_annotations
-
- # Label annotations on shot level.
- # There is exactly one element for each unique label.
- # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
- attr_accessor :shot_label_annotations
-
- # Speech transcription.
- # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
- attr_accessor :speech_transcriptions
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @error = args[:error] if args.key?(:error)
- @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
- @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
- @input_uri = args[:input_uri] if args.key?(:input_uri)
- @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
- @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
- @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
- @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
- end
- end
-
- # Video context and/or feature-specific parameters.
- class GoogleCloudVideointelligenceV1beta2VideoContext
- include Google::Apis::Core::Hashable
-
- # Config for EXPLICIT_CONTENT_DETECTION.
- # Corresponds to the JSON property `explicitContentDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ExplicitContentDetectionConfig]
- attr_accessor :explicit_content_detection_config
-
- # Config for LABEL_DETECTION.
- # Corresponds to the JSON property `labelDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2LabelDetectionConfig]
- attr_accessor :label_detection_config
-
- # Video segments to annotate. The segments may overlap and are not required
- # to be contiguous or span the whole video. If unspecified, each video is
- # treated as a single segment.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2VideoSegment>]
- attr_accessor :segments
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+ attr_accessor :shot_annotations
 
- # Config for SHOT_CHANGE_DETECTION.
- # Corresponds to the JSON property `shotChangeDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2ShotChangeDetectionConfig]
- attr_accessor :shot_change_detection_config
+ # Label annotations on shot level.
+ # There is exactly one element for each unique label.
+ # Corresponds to the JSON property `shotLabelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ attr_accessor :shot_label_annotations
 
- # Config for SPEECH_TRANSCRIPTION.
- # Corresponds to the JSON property `speechTranscriptionConfig`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1beta2SpeechTranscriptionConfig]
- attr_accessor :speech_transcription_config
+ # Speech transcription.
+ # Corresponds to the JSON property `speechTranscriptions`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+ attr_accessor :speech_transcriptions
+
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>]
+ attr_accessor :text_annotations
 
  def initialize(**args)
  update!(**args)
@@ -1265,16 +2696,21 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
- @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
- @segments = args[:segments] if args.key?(:segments)
- @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
- @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+ @error = args[:error] if args.key?(:error)
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+ @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+ @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+ @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+ @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+ @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end
 
  # Video segment.
- class GoogleCloudVideointelligenceV1beta2VideoSegment
+ class GoogleCloudVideointelligenceV1p1beta1VideoSegment
  include Google::Apis::Core::Hashable
 
  # Time-offset, relative to the beginning of the video,
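Every generated class in this diff follows the same `initialize`/`update!` convention. A simplified sketch of that pattern (this is an illustrative class, not the real `Google::Apis::Core::Hashable`): keyword arguments are copied into instance variables only when the key is actually present, so fields omitted from a later `update!` call are left untouched rather than reset to nil.

```ruby
# Sketch of the generated-class pattern: partial updates via args.key? guards.
class SketchVideoSegment
  attr_accessor :start_time_offset, :end_time_offset

  def initialize(**args)
    update!(**args)
  end

  # Copy only the keys that were actually passed.
  def update!(**args)
    @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
    @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
  end
end

seg = SketchVideoSegment.new(start_time_offset: "0s")
seg.update!(end_time_offset: "12.5s") # start_time_offset is preserved
```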
@@ -1303,7 +2739,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1beta2WordInfo
+ class GoogleCloudVideointelligenceV1p1beta1WordInfo
  include Google::Apis::Core::Hashable
 
  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1362,13 +2798,241 @@ module Google
  # Video annotation progress. Included in the `metadata`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+ class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+ include Google::Apis::Core::Hashable
+
+ # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationProgress`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+ attr_accessor :annotation_progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ end
+ end
+
+ # Video annotation response. Included in the `response`
+ # field of the `Operation` returned by the `GetOperation`
+ # call of the `google::longrunning::Operations` service.
+ class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+ include Google::Apis::Core::Hashable
+
+ # Annotation results for all videos specified in `AnnotateVideoRequest`.
+ # Corresponds to the JSON property `annotationResults`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+ attr_accessor :annotation_results
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ end
+ end
+
+ # Detected entity from video analysis.
+ class GoogleCloudVideointelligenceV1p2beta1Entity
+ include Google::Apis::Core::Hashable
+
+ # Textual description, e.g. `Fixed-gear bicycle`.
+ # Corresponds to the JSON property `description`
+ # @return [String]
+ attr_accessor :description
+
+ # Opaque entity ID. Some IDs may be available in
+ # [Google Knowledge Graph Search
+ # API](https://developers.google.com/knowledge-graph/).
+ # Corresponds to the JSON property `entityId`
+ # @return [String]
+ attr_accessor :entity_id
+
+ # Language code for `description` in BCP-47 format.
+ # Corresponds to the JSON property `languageCode`
+ # @return [String]
+ attr_accessor :language_code
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @description = args[:description] if args.key?(:description)
+ @entity_id = args[:entity_id] if args.key?(:entity_id)
+ @language_code = args[:language_code] if args.key?(:language_code)
+ end
+ end
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+ include Google::Apis::Core::Hashable
+
+ # All video frames where explicit content was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+ attr_accessor :frames
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @frames = args[:frames] if args.key?(:frames)
+ end
+ end
+
+ # Video frame level annotation results for explicit content.
+ class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+ include Google::Apis::Core::Hashable
+
+ # Likelihood of the pornography content..
+ # Corresponds to the JSON property `pornographyLikelihood`
+ # @return [String]
+ attr_accessor :pornography_likelihood
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Label annotation.
+ class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+ include Google::Apis::Core::Hashable
+
+ # Common categories for the detected entity.
+ # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+ # cases there might be more than one categories e.g. `Terrier` could also be
+ # a `pet`.
+ # Corresponds to the JSON property `categoryEntities`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity>]
+ attr_accessor :category_entities
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity]
+ attr_accessor :entity
+
+ # All video frames where a label was detected.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+ attr_accessor :frames
+
+ # All video segments where a label was detected.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+ attr_accessor :segments
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @category_entities = args[:category_entities] if args.key?(:category_entities)
+ @entity = args[:entity] if args.key?(:entity)
+ @frames = args[:frames] if args.key?(:frames)
+ @segments = args[:segments] if args.key?(:segments)
+ end
+ end
+
+ # Video frame level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Time-offset, relative to the beginning of the video, corresponding to the
+ # video frame for this location.
+ # Corresponds to the JSON property `timeOffset`
+ # @return [String]
+ attr_accessor :time_offset
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
+ end
+ end
+
+ # Video segment level annotation results for label detection.
+ class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+ include Google::Apis::Core::Hashable
+
+ # Confidence that the label is accurate. Range: [0, 1].
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @segment = args[:segment] if args.key?(:segment)
+ end
+ end
+
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable
 
- # Progress metadata for all videos specified in `AnnotateVideoRequest`.
- # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
- attr_accessor :annotation_progress
+ # Bottom Y coordinate.
+ # Corresponds to the JSON property `bottom`
+ # @return [Float]
+ attr_accessor :bottom
+
+ # Left X coordinate.
+ # Corresponds to the JSON property `left`
+ # @return [Float]
+ attr_accessor :left
+
+ # Right X coordinate.
+ # Corresponds to the JSON property `right`
+ # @return [Float]
+ attr_accessor :right
+
+ # Top Y coordinate.
+ # Corresponds to the JSON property `top`
+ # @return [Float]
+ attr_accessor :top
 
  def initialize(**args)
  update!(**args)
@@ -1376,20 +3040,35 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+ @bottom = args[:bottom] if args.key?(:bottom)
+ @left = args[:left] if args.key?(:left)
+ @right = args[:right] if args.key?(:right)
+ @top = args[:top] if args.key?(:top)
  end
  end
 
- # Video annotation response. Included in the `response`
- # field of the `Operation` returned by the `GetOperation`
- # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
  include Google::Apis::Core::Hashable
 
- # Annotation results for all videos specified in `AnnotateVideoRequest`.
- # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
- attr_accessor :annotation_results
+ # Normalized vertices of the bounding polygon.
+ # Corresponds to the JSON property `vertices`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+ attr_accessor :vertices
 
  def initialize(**args)
  update!(**args)
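The vertex-ordering note in the `NormalizedBoundingPoly` comments above can be sketched numerically: rotating a normalized box 180 degrees about its top-left corner keeps the emission order (0, 1, 2, 3) while the coordinates themselves can fall outside [0, 1]. Plain arrays of `[x, y]` pairs stand in for the generated vertex class.

```ruby
# A 180-degree rotation about (cx, cy) is a point reflection through (cx, cy).
def rotate180(vertices, cx, cy)
  vertices.map { |x, y| [2 * cx - x, 2 * cy - y] }
end

box = [[0.2, 0.2], [0.6, 0.2], [0.6, 0.4], [0.2, 0.4]] # vertices 0..3, clockwise
rotated = rotate180(box, 0.2, 0.2)
# Vertex 0 stays first in the list even though it is now the bottom-right
# corner, and vertex 1's x coordinate has gone negative (below 0).
```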
@@ -1397,30 +3076,25 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ @vertices = args[:vertices] if args.key?(:vertices)
  end
  end
 
- # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1p1beta1Entity
+ # A vertex represents a 2D point in the image.
+ # NOTE: the normalized vertex coordinates are relative to the original image
+ # and range from 0 to 1.
+ class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
  include Google::Apis::Core::Hashable
 
- # Textual description, e.g. `Fixed-gear bicycle`.
- # Corresponds to the JSON property `description`
- # @return [String]
- attr_accessor :description
-
- # Opaque entity ID. Some IDs may be available in
- # [Google Knowledge Graph Search
- # API](https://developers.google.com/knowledge-graph/).
- # Corresponds to the JSON property `entityId`
- # @return [String]
- attr_accessor :entity_id
+ # X coordinate.
+ # Corresponds to the JSON property `x`
+ # @return [Float]
+ attr_accessor :x
 
- # Language code for `description` in BCP-47 format.
- # Corresponds to the JSON property `languageCode`
- # @return [String]
- attr_accessor :language_code
+ # Y coordinate.
+ # Corresponds to the JSON property `y`
+ # @return [Float]
+ attr_accessor :y
 
  def initialize(**args)
  update!(**args)
@@ -1428,44 +3102,75 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @description = args[:description] if args.key?(:description)
- @entity_id = args[:entity_id] if args.key?(:entity_id)
- @language_code = args[:language_code] if args.key?(:language_code)
+ @x = args[:x] if args.key?(:x)
+ @y = args[:y] if args.key?(:y)
  end
  end
 
- # Explicit content annotation (based on per-frame visual signals only).
- # If no explicit content has been detected in a frame, no annotations are
- # present for that frame.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+ # Annotations corresponding to one tracked object.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable
 
- # All video frames where explicit content was detected.
+ # Object category's labeling confidence of this track.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
+
+ # Detected entity from video analysis.
+ # Corresponds to the JSON property `entity`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity]
+ attr_accessor :entity
+
+ # Information corresponding to all frames where this object track appears.
+ # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+ # messages in frames.
+ # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
  attr_accessor :frames
 
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
+
+ # Streaming mode ONLY.
+ # In streaming mode, we do not know the end time of a tracked object
+ # before it is completed. Hence, there is no VideoSegment info returned.
+ # Instead, we provide a unique identifiable integer track_id so that
+ # the customers can correlate the results of the ongoing
+ # ObjectTrackAnnotation of the same track_id over time.
+ # Corresponds to the JSON property `trackId`
+ # @return [Fixnum]
+ attr_accessor :track_id
+
  def initialize(**args)
  update!(**args)
  end
 
  # Update properties of this object
  def update!(**args)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @entity = args[:entity] if args.key?(:entity)
  @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
+ @track_id = args[:track_id] if args.key?(:track_id)
  end
  end
 
- # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+ # Video frame level annotations for object detection and tracking. This field
+ # stores per frame location, time offset, and confidence.
+ class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable
 
- # Likelihood of the pornography content..
- # Corresponds to the JSON property `pornographyLikelihood`
- # @return [String]
- attr_accessor :pornography_likelihood
+ # Normalized bounding box.
+ # The normalized vertex coordinates are relative to the original image.
+ # Range: [0, 1].
+ # Corresponds to the JSON property `normalizedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+ attr_accessor :normalized_bounding_box
 
- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
+ # The timestamp of the frame in microseconds.
  # Corresponds to the JSON property `timeOffset`
  # @return [String]
  attr_accessor :time_offset
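The streaming-mode note above means each streamed annotation carries a single frame plus a `track_id`, and the client stitches the track back together itself. A hedged sketch of that correlation, with a `Struct` standing in for the generated `ObjectTrackingAnnotation` class:

```ruby
# Stand-in for the streamed annotation: an integer track_id and its frames.
StreamedAnnotation = Struct.new(:track_id, :frames)

# Group incoming per-frame annotations by track_id, preserving arrival order.
def merge_tracks(annotations)
  tracks = Hash.new { |h, k| h[k] = [] }
  annotations.each { |ann| tracks[ann.track_id].concat(ann.frames) }
  tracks
end

stream = [
  StreamedAnnotation.new(7, ["frame@0s"]),
  StreamedAnnotation.new(9, ["frame@0s"]),
  StreamedAnnotation.new(7, ["frame@1s"])
]
tracks = merge_tracks(stream) # track 7 collects both of its frames in order
```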
@@ -1476,37 +3181,34 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+ @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
  @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
- # Label annotation.
- class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+ # Alternative hypotheses (a.k.a. n-best list).
+ class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable
 
- # Common categories for the detected entity.
- # E.g. when the label is `Terrier` the category is likely `dog`. And in some
- # cases there might be more than one categories e.g. `Terrier` could also be
- # a `pet`.
- # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1Entity>]
- attr_accessor :category_entities
-
- # Detected entity from video analysis.
- # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1Entity]
- attr_accessor :entity
+ # The confidence estimate between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. This field is typically provided only for the top hypothesis, and
+ # only for `is_final=true` results. Clients should not rely on the
+ # `confidence` field as it is not guaranteed to be accurate or consistent.
+ # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
 
- # All video frames where a label was detected.
- # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
- attr_accessor :frames
+ # Transcript text representing the words that the user spoke.
+ # Corresponds to the JSON property `transcript`
+ # @return [String]
+ attr_accessor :transcript
 
- # All video segments where a label was detected.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
- attr_accessor :segments
+ # A list of word-specific information for each recognized word.
+ # Corresponds to the JSON property `words`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+ attr_accessor :words
 
  def initialize(**args)
  update!(**args)
@@ -1514,27 +3216,31 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @category_entities = args[:category_entities] if args.key?(:category_entities)
- @entity = args[:entity] if args.key?(:entity)
- @frames = args[:frames] if args.key?(:frames)
- @segments = args[:segments] if args.key?(:segments)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @transcript = args[:transcript] if args.key?(:transcript)
+ @words = args[:words] if args.key?(:words)
  end
  end
 
- # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+ # A speech recognition result corresponding to a portion of the audio.
+ class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
  include Google::Apis::Core::Hashable
 
- # Confidence that the label is accurate. Range: [0, 1].
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # May contain one or more recognition hypotheses (up to the maximum specified
+ # in `max_alternatives`). These alternatives are ordered in terms of
+ # accuracy, with the top (first) alternative being the most probable, as
+ # ranked by the recognizer.
+ # Corresponds to the JSON property `alternatives`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+ attr_accessor :alternatives
 
- # Time-offset, relative to the beginning of the video, corresponding to the
- # video frame for this location.
- # Corresponds to the JSON property `timeOffset`
+ # Output only. The
+ # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+ # language in this result. This language code was detected to have the most
+ # likelihood of being spoken in the audio.
+ # Corresponds to the JSON property `languageCode`
  # @return [String]
- attr_accessor :time_offset
+ attr_accessor :language_code
 
  def initialize(**args)
  update!(**args)
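Since the `alternatives` list above arrives ranked by the recognizer, picking the most probable transcript is just a matter of taking the first entry. A sketch with `Struct`s standing in for the generated `SpeechRecognitionAlternative` and `SpeechTranscription` classes:

```ruby
# Stand-ins for the generated n-best-list classes.
Alt = Struct.new(:confidence, :transcript)
Transcription = Struct.new(:alternatives, :language_code)

# The first alternative is the recognizer's top-ranked hypothesis.
def best_transcript(transcription)
  top = transcription.alternatives.first
  top && top.transcript
end

t = Transcription.new(
  [Alt.new(0.92, "hello world"), Alt.new(0.41, "hollow world")],
  "en-US"
)
best = best_transcript(t) # most probable transcript; nil if the list is empty
```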
@@ -1542,24 +3248,26 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @time_offset = args[:time_offset] if args.key?(:time_offset)
+ @alternatives = args[:alternatives] if args.key?(:alternatives)
+ @language_code = args[:language_code] if args.key?(:language_code)
  end
  end
 
- # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+ # Annotations related to one detected OCR text snippet. This will contain the
+ # corresponding text, confidence value, and frame level information for each
+ # detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
  include Google::Apis::Core::Hashable
 
- # Confidence that the label is accurate. Range: [0, 1].
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # All video segments where OCR detected text appears.
+ # Corresponds to the JSON property `segments`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+ attr_accessor :segments
 
- # Video segment.
- # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
- attr_accessor :segment
+ # The detected text.
+ # Corresponds to the JSON property `text`
+ # @return [String]
+ attr_accessor :text
 
  def initialize(**args)
  update!(**args)
@@ -1567,34 +3275,40 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @segment = args[:segment] if args.key?(:segment)
+ @segments = args[:segments] if args.key?(:segments)
+ @text = args[:text] if args.key?(:text)
  end
  end
 
- # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
+ # Video frame level annotation results for text annotation (OCR).
+ # Contains information regarding timestamp and bounding box locations for the
+ # frames containing detected OCR text snippets.
+ class GoogleCloudVideointelligenceV1p2beta1TextFrame
  include Google::Apis::Core::Hashable
 
- # The confidence estimate between 0.0 and 1.0. A higher number
- # indicates an estimated greater likelihood that the recognized words are
- # correct. This field is typically provided only for the top hypothesis, and
- # only for `is_final=true` results. Clients should not rely on the
- # `confidence` field as it is not guaranteed to be accurate or consistent.
- # The default of 0.0 is a sentinel value indicating `confidence` was not set.
- # Corresponds to the JSON property `confidence`
- # @return [Float]
- attr_accessor :confidence
+ # Normalized bounding polygon for text (that might not be aligned with axis).
+ # Contains list of the corner points in clockwise order starting from
+ # top-left corner. For example, for a rectangular bounding box:
+ # When the text is horizontal it might look like:
+ # 0----1
+ # | |
+ # 3----2
+ # When it's clockwise rotated 180 degrees around the top-left corner it
+ # becomes:
+ # 2----3
+ # | |
+ # 1----0
+ # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+ # than 0, or greater than 1 due to trignometric calculations for location of
+ # the box.
+ # Corresponds to the JSON property `rotatedBoundingBox`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+ attr_accessor :rotated_bounding_box
 
- # Transcript text representing the words that the user spoke.
- # Corresponds to the JSON property `transcript`
+ # Timestamp of this frame.
+ # Corresponds to the JSON property `timeOffset`
  # @return [String]
- attr_accessor :transcript
-
- # A list of word-specific information for each recognized word.
- # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
- attr_accessor :words
+ attr_accessor :time_offset
 
  def initialize(**args)
  update!(**args)
@@ -1602,31 +3316,30 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @confidence = args[:confidence] if args.key?(:confidence)
- @transcript = args[:transcript] if args.key?(:transcript)
- @words = args[:words] if args.key?(:words)
+ @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+ @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
  end
 
- # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
+ # Video segment level annotation results for text detection.
+ class GoogleCloudVideointelligenceV1p2beta1TextSegment
  include Google::Apis::Core::Hashable
 
- # May contain one or more recognition hypotheses (up to the maximum specified
- # in `max_alternatives`). These alternatives are ordered in terms of
- # accuracy, with the top (first) alternative being the most probable, as
- # ranked by the recognizer.
- # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
- attr_accessor :alternatives
+ # Confidence for the track of detected text. It is calculated as the highest
+ # over all frames where OCR detected text appears.
+ # Corresponds to the JSON property `confidence`
+ # @return [Float]
+ attr_accessor :confidence
 
- # Output only. The
- # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
- # language in this result. This language code was detected to have the most
- # likelihood of being spoken in the audio.
- # Corresponds to the JSON property `languageCode`
- # @return [String]
- attr_accessor :language_code
+ # Information related to the frames where OCR detected text appears.
+ # Corresponds to the JSON property `frames`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+ attr_accessor :frames
+
+ # Video segment.
+ # Corresponds to the JSON property `segment`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ attr_accessor :segment
 
  def initialize(**args)
  update!(**args)
@@ -1634,13 +3347,14 @@ module Google
 
  # Update properties of this object
  def update!(**args)
- @alternatives = args[:alternatives] if args.key?(:alternatives)
- @language_code = args[:language_code] if args.key?(:language_code)
+ @confidence = args[:confidence] if args.key?(:confidence)
+ @frames = args[:frames] if args.key?(:frames)
+ @segment = args[:segment] if args.key?(:segment)
  end
  end
 
  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
  include Google::Apis::Core::Hashable
 
  # Video file location in
@@ -1679,17 +3393,17 @@ module Google
  end
 
  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
  include Google::Apis::Core::Hashable
 
- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1729,13 +3443,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
  attr_accessor :explicit_annotation
 
  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :frame_label_annotations
 
  # Video file location in
@@ -1744,28 +3458,40 @@ module Google
  # @return [String]
  attr_accessor :input_uri
 
+ # Annotations for list of objects detected and tracked in video.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations
 
  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
  attr_accessor :shot_annotations
 
  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
  attr_accessor :shot_label_annotations
 
  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
  attr_accessor :speech_transcriptions
 
+ # OCR text detection and tracking.
+ # Annotations for list of detected text snippets. Each will have list of
+ # frame information associated with it.
+ # Corresponds to the JSON property `textAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+ attr_accessor :text_annotations
+
  def initialize(**args)
  update!(**args)
  end
@@ -1776,15 +3502,17 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end
 
  # Video segment.
- class GoogleCloudVideointelligenceV1p1beta1VideoSegment
+ class GoogleCloudVideointelligenceV1p2beta1VideoSegment
  include Google::Apis::Core::Hashable
 
  # Time-offset, relative to the beginning of the video,
@@ -1813,7 +3541,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1p1beta1WordInfo
+ class GoogleCloudVideointelligenceV1p2beta1WordInfo
  include Google::Apis::Core::Hashable
 
  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1872,12 +3600,12 @@ module Google
  # Video annotation progress. Included in the `metadata`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoProgress
  include Google::Apis::Core::Hashable
 
  # Progress metadata for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress>]
  attr_accessor :annotation_progress
 
  def initialize(**args)
@@ -1893,12 +3621,12 @@ module Google
  # Video annotation response. Included in the `response`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoResponse
  include Google::Apis::Core::Hashable
 
  # Annotation results for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults>]
  attr_accessor :annotation_results
 
  def initialize(**args)
@@ -1912,7 +3640,7 @@ module Google
  end
 
  # Detected entity from video analysis.
- class GoogleCloudVideointelligenceV1p2beta1Entity
+ class GoogleCloudVideointelligenceV1p3beta1Entity
  include Google::Apis::Core::Hashable
 
  # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1947,12 +3675,12 @@ module Google
  # Explicit content annotation (based on per-frame visual signals only).
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
  include Google::Apis::Core::Hashable
 
  # All video frames where explicit content was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame>]
  attr_accessor :frames
 
  def initialize(**args)
@@ -1966,7 +3694,7 @@ module Google
  end
 
  # Video frame level annotation results for explicit content.
- class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame
  include Google::Apis::Core::Hashable
 
  # Likelihood of the pornography content..
@@ -1992,7 +3720,7 @@ module Google
  end
 
  # Label annotation.
- class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1LabelAnnotation
  include Google::Apis::Core::Hashable
 
  # Common categories for the detected entity.
@@ -2000,22 +3728,22 @@ module Google
  # cases there might be more than one categories e.g. `Terrier` could also be
  # a `pet`.
  # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1Entity>]
  attr_accessor :category_entities
 
  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity
 
  # All video frames where a label was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelFrame>]
  attr_accessor :frames
 
  # All video segments where a label was detected.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelSegment>]
  attr_accessor :segments
 
  def initialize(**args)
@@ -2032,7 +3760,7 @@ module Google
  end
 
  # Video frame level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+ class GoogleCloudVideointelligenceV1p3beta1LabelFrame
  include Google::Apis::Core::Hashable
 
  # Confidence that the label is accurate. Range: [0, 1].
@@ -2058,7 +3786,7 @@ module Google
  end
 
  # Video segment level annotation results for label detection.
- class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+ class GoogleCloudVideointelligenceV1p3beta1LabelSegment
  include Google::Apis::Core::Hashable
 
  # Confidence that the label is accurate. Range: [0, 1].
@@ -2068,7 +3796,7 @@ module Google
 
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment
 
  def initialize(**args)
@@ -2085,7 +3813,7 @@ module Google
  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable
 
  # Bottom Y coordinate.
@@ -2136,12 +3864,12 @@ module Google
  # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
  # than 0, or greater than 1 due to trignometric calculations for location of
  # the box.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly
  include Google::Apis::Core::Hashable
 
  # Normalized vertices of the bounding polygon.
  # Corresponds to the JSON property `vertices`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1NormalizedVertex>]
  attr_accessor :vertices
 
  def initialize(**args)
@@ -2157,7 +3885,7 @@ module Google
  # A vertex represents a 2D point in the image.
  # NOTE: the normalized vertex coordinates are relative to the original image
  # and range from 0 to 1.
- class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedVertex
  include Google::Apis::Core::Hashable
 
  # X coordinate.
@@ -2182,7 +3910,7 @@ module Google
  end
 
  # Annotations corresponding to one tracked object.
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable
 
  # Object category's labeling confidence of this track.
@@ -2192,7 +3920,7 @@ module Google
 
  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1Entity]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity
 
  # Information corresponding to all frames where this object track appears.
@@ -2200,12 +3928,12 @@ module Google
  # messages in frames.
  # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame>]
  attr_accessor :frames
 
  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment
 
  # Streaming mode ONLY.
@@ -2234,14 +3962,14 @@ module Google
 
  # Video frame level annotations for object detection and tracking. This field
  # stores per frame location, time offset, and confidence.
- class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable
 
  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
  # Corresponds to the JSON property `normalizedBoundingBox`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox]
  attr_accessor :normalized_bounding_box
 
  # The timestamp of the frame in microseconds.
@@ -2261,7 +3989,7 @@ module Google
  end
 
  # Alternative hypotheses (a.k.a. n-best list).
- class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
+ class GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable
 
  # The confidence estimate between 0.0 and 1.0. A higher number
@@ -2281,7 +4009,7 @@ module Google
 
  # A list of word-specific information for each recognized word.
  # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1WordInfo>]
  attr_accessor :words
 
  def initialize(**args)
@@ -2297,7 +4025,7 @@ module Google
  end
 
  # A speech recognition result corresponding to a portion of the audio.
- class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
+ class GoogleCloudVideointelligenceV1p3beta1SpeechTranscription
  include Google::Apis::Core::Hashable
 
  # May contain one or more recognition hypotheses (up to the maximum specified
@@ -2305,7 +4033,7 @@ module Google
  # accuracy, with the top (first) alternative being the most probable, as
  # ranked by the recognizer.
  # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative>]
  attr_accessor :alternatives
 
  # Output only. The
@@ -2327,15 +4055,130 @@ module Google
  end
  end
 
+ # `StreamingAnnotateVideoResponse` is the only message returned to the client
+ # by `StreamingAnnotateVideo`. A series of zero or more
+ # `StreamingAnnotateVideoResponse` messages are streamed back to the client.
+ class GoogleCloudVideointelligenceV1p3beta1StreamingAnnotateVideoResponse
+ include Google::Apis::Core::Hashable
+
+ # Streaming annotation results corresponding to a portion of the video
+ # that is currently being processed.
+ # Corresponds to the JSON property `annotationResults`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults]
+ attr_accessor :annotation_results
+
+ # GCS URI that stores annotation results of one streaming session.
+ # It is a directory that can hold multiple files in JSON format.
+ # Example uri format:
+ # gs://bucket_id/object_id/cloud_project_name-session_id
+ # Corresponds to the JSON property `annotationResultsUri`
+ # @return [String]
+ attr_accessor :annotation_results_uri
+
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+ # - Simple to use and understand for most users
+ # - Flexible enough to meet unexpected needs
+ # # Overview
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
+ # google.rpc.Code, but it may accept additional error codes if needed. The
+ # error message should be a developer-facing English message that helps
+ # developers *understand* and *resolve* the error. If a localized user-facing
+ # error message is needed, put the localized message in the error details or
+ # localize it in the client. The optional error details may contain arbitrary
+ # information about the error. There is a predefined set of error detail types
+ # in the package `google.rpc` that can be used for common error conditions.
+ # # Language mapping
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
+ # exposed in different client libraries and different wire protocols, it can be
+ # mapped differently. For example, it will likely be mapped to some exceptions
+ # in Java, but more likely mapped to some error codes in C.
+ # # Other uses
+ # The error model and the `Status` message can be used in a variety of
+ # environments, either with or without APIs, to provide a
+ # consistent developer experience across different environments.
+ # Example uses of this error model include:
+ # - Partial errors. If a service needs to return partial errors to the client,
+ # it may embed the `Status` in the normal response to indicate the partial
+ # errors.
+ # - Workflow errors. A typical workflow has multiple steps. Each step may
+ # have a `Status` message for error reporting.
+ # - Batch operations. If a client uses batch request and batch response, the
+ # `Status` message should be used directly inside batch response, one for
+ # each error sub-response.
+ # - Asynchronous operations. If an API call embeds asynchronous operation
+ # results in its response, the status of those operations should be
+ # represented directly using the `Status` message.
+ # - Logging. If some API errors are stored in logs, the message `Status` could
+ # be used directly after any stripping needed for security/privacy reasons.
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleRpcStatus]
+ attr_accessor :error
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+ @annotation_results_uri = args[:annotation_results_uri] if args.key?(:annotation_results_uri)
+ @error = args[:error] if args.key?(:error)
+ end
+ end
+
+ # Streaming annotation results corresponding to a portion of the video
+ # that is currently being processed.
+ class GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults
+ include Google::Apis::Core::Hashable
+
+ # Explicit content annotation (based on per-frame visual signals only).
+ # If no explicit content has been detected in a frame, no annotations are
+ # present for that frame.
+ # Corresponds to the JSON property `explicitAnnotation`
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
+ attr_accessor :explicit_annotation
+
+ # Label annotation results.
+ # Corresponds to the JSON property `labelAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
+ attr_accessor :label_annotations
+
+ # Object tracking results.
+ # Corresponds to the JSON property `objectAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
+ attr_accessor :object_annotations
+
+ # Shot annotation results. Each shot is represented as a video segment.
+ # Corresponds to the JSON property `shotAnnotations`
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
+ attr_accessor :shot_annotations
+
+ def initialize(**args)
+ update!(**args)
+
4164
+ # Update properties of this object
4165
+ def update!(**args)
4166
+ @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
4167
+ @label_annotations = args[:label_annotations] if args.key?(:label_annotations)
4168
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
4169
+ @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
4170
+ end
4171
+ end
4172
+
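The generated classes added above all follow the same keyword-arguments pattern: `initialize(**args)` forwards to `update!`, which assigns an instance variable only when the corresponding key was actually supplied, so omitted fields stay `nil` rather than being overwritten. A minimal standalone sketch of that pattern (the class and field names here are illustrative stand-ins, not part of the gem):

```ruby
# Minimal stand-in for the generated Hashable pattern shown in the diff:
# initialize(**args) forwards to update!, which copies only supplied keys.
class StreamingResultsSketch
  attr_accessor :label_annotations, :shot_annotations

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object, touching only the keys provided.
  def update!(**args)
    @label_annotations = args[:label_annotations] if args.key?(:label_annotations)
    @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  end
end

r = StreamingResultsSketch.new(label_annotations: ['cat'])
r.label_annotations # => ["cat"]
r.shot_annotations  # => nil, because the key was never supplied
```

This also makes `update!` safe for partial merges from parsed JSON: calling it again with only one key leaves the other fields untouched.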
  # Annotations related to one detected OCR text snippet. This will contain the
  # corresponding text, confidence value, and frame level information for each
  # detection.
- class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
+ class GoogleCloudVideointelligenceV1p3beta1TextAnnotation
  include Google::Apis::Core::Hashable

  # All video segments where OCR detected text appears.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1TextSegment>]
  attr_accessor :segments

  # The detected text.
@@ -2357,7 +4200,7 @@ module Google
  # Video frame level annotation results for text annotation (OCR).
  # Contains information regarding timestamp and bounding box locations for the
  # frames containing detected OCR text snippets.
- class GoogleCloudVideointelligenceV1p2beta1TextFrame
+ class GoogleCloudVideointelligenceV1p3beta1TextFrame
  include Google::Apis::Core::Hashable

  # Normalized bounding polygon for text (that might not be aligned with axis).
@@ -2376,7 +4219,7 @@ module Google
  # than 0, or greater than 1 due to trignometric calculations for location of
  # the box.
  # Corresponds to the JSON property `rotatedBoundingBox`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly]
  attr_accessor :rotated_bounding_box

  # Timestamp of this frame.
@@ -2396,7 +4239,7 @@ module Google
  end

  # Video segment level annotation results for text detection.
- class GoogleCloudVideointelligenceV1p2beta1TextSegment
+ class GoogleCloudVideointelligenceV1p3beta1TextSegment
  include Google::Apis::Core::Hashable

  # Confidence for the track of detected text. It is calculated as the highest
@@ -2407,12 +4250,12 @@ module Google

  # Information related to the frames where OCR detected text appears.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1TextFrame>]
  attr_accessor :frames

  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  def initialize(**args)
@@ -2428,7 +4271,7 @@ module Google
  end

  # Annotation progress for a single video.
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress
  include Google::Apis::Core::Hashable

  # Video file location in
@@ -2467,17 +4310,17 @@ module Google
  end

  # Annotation results for a single video.
- class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
+ class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
  include Google::Apis::Core::Hashable

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2517,13 +4360,13 @@ module Google
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
  # Corresponds to the JSON property `explicitAnnotation`
- # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
+ # @return [Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
  attr_accessor :explicit_annotation

  # Label annotations on frame level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `frameLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :frame_label_annotations

  # Video file location in
@@ -2534,36 +4377,36 @@ module Google

  # Annotations for list of objects detected and tracked in video.
  # Corresponds to the JSON property `objectAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
  attr_accessor :object_annotations

  # Label annotations on video level or user specified segment level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `segmentLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :segment_label_annotations

  # Shot annotations. Each shot is represented as a video segment.
  # Corresponds to the JSON property `shotAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
  attr_accessor :shot_annotations

  # Label annotations on shot level.
  # There is exactly one element for each unique label.
  # Corresponds to the JSON property `shotLabelAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
  attr_accessor :shot_label_annotations

  # Speech transcription.
  # Corresponds to the JSON property `speechTranscriptions`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]
  attr_accessor :speech_transcriptions

  # OCR text detection and tracking.
  # Annotations for list of detected text snippets. Each will have list of
  # frame information associated with it.
  # Corresponds to the JSON property `textAnnotations`
- # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+ # @return [Array<Google::Apis::VideointelligenceV1beta2::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
  attr_accessor :text_annotations

  def initialize(**args)
@@ -2586,7 +4429,7 @@ module Google
  end

  # Video segment.
- class GoogleCloudVideointelligenceV1p2beta1VideoSegment
+ class GoogleCloudVideointelligenceV1p3beta1VideoSegment
  include Google::Apis::Core::Hashable

  # Time-offset, relative to the beginning of the video,
@@ -2615,7 +4458,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class GoogleCloudVideointelligenceV1p2beta1WordInfo
+ class GoogleCloudVideointelligenceV1p3beta1WordInfo
  include Google::Apis::Core::Hashable

  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -2684,14 +4527,14 @@ module Google
  attr_accessor :done
  alias_method :done?, :done

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2768,14 +4611,14 @@ module Google
  end
  end

- # The `Status` type defines a logical error model that is suitable for different
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ # The `Status` type defines a logical error model that is suitable for
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). The error model is designed to be:
  # - Simple to use and understand for most users
  # - Flexible enough to meet unexpected needs
  # # Overview
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
+ # The `Status` message contains three pieces of data: error code, error
+ # message, and error details. The error code should be an enum value of
  # google.rpc.Code, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
  # developers *understand* and *resolve* the error. If a localized user-facing
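The `Status` documentation repeated through these hunks describes the google.rpc error triple: an integer code from google.rpc.Code, a developer-facing message, and a list of detail payloads. A hedged sketch of how a caller might consume that triple once a result arrives (the helper and struct below are illustrative, not part of the gem; the generated classes expose the same data via their `error` accessor, which is `nil` on success):

```ruby
# Illustrative stand-in for the google.rpc-style Status triple described
# in the comments above: (code, message, details).
Status = Struct.new(:code, :message, :details)

# Turn a Status into a developer-facing string, or return nil when there
# is no error -- mirroring the convention that `error` is nil on success.
def describe_error(status)
  return nil if status.nil?
  "rpc error #{status.code}: #{status.message}"
end

describe_error(nil)
# => nil (operation succeeded)
describe_error(Status.new(5, 'Requested video not found', []))
# => "rpc error 5: Requested video not found"
```

Branching on a nil `error` first, before touching annotation fields, matches how the `Status` comment says partial and asynchronous results should be reported: the error travels inside the normal response rather than as a transport-level failure.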