google-api-client 0.28.4 → 0.29.2
This diff shows the changes between publicly released versions of the package, as published to its registry, and is provided for informational purposes only.
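To pick up this release in a Bundler-managed project, the dependency can be bumped in the Gemfile. A minimal sketch (the pessimistic `~>` constraint is one common choice, not a requirement of the gem):

```ruby
# Gemfile — pin the upgraded release of google-api-client
source 'https://rubygems.org'

gem 'google-api-client', '~> 0.29.2'
```

Running `bundle update google-api-client` then re-resolves the lockfile and installs the new version.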
- checksums.yaml +4 -4
- data/.kokoro/build.bat +9 -6
- data/.kokoro/build.sh +2 -34
- data/.kokoro/continuous/common.cfg +6 -1
- data/.kokoro/continuous/linux.cfg +1 -1
- data/.kokoro/continuous/windows.cfg +17 -1
- data/.kokoro/osx.sh +2 -33
- data/.kokoro/presubmit/common.cfg +6 -1
- data/.kokoro/presubmit/linux.cfg +1 -1
- data/.kokoro/presubmit/windows.cfg +17 -1
- data/.kokoro/trampoline.bat +10 -0
- data/.kokoro/trampoline.sh +3 -23
- data/CHANGELOG.md +460 -0
- data/README.md +1 -1
- data/Rakefile +31 -0
- data/bin/generate-api +4 -2
- data/generated/google/apis/abusiveexperiencereport_v1/service.rb +2 -2
- data/generated/google/apis/acceleratedmobilepageurl_v1/service.rb +1 -1
- data/generated/google/apis/accessapproval_v1beta1/classes.rb +333 -0
- data/generated/google/apis/accessapproval_v1beta1/representations.rb +174 -0
- data/generated/google/apis/accessapproval_v1beta1/service.rb +728 -0
- data/generated/google/apis/accessapproval_v1beta1.rb +34 -0
- data/generated/google/apis/accesscontextmanager_v1/classes.rb +755 -0
- data/generated/google/apis/accesscontextmanager_v1/representations.rb +282 -0
- data/generated/google/apis/accesscontextmanager_v1/service.rb +788 -0
- data/generated/google/apis/accesscontextmanager_v1.rb +34 -0
- data/generated/google/apis/accesscontextmanager_v1beta/classes.rb +47 -31
- data/generated/google/apis/accesscontextmanager_v1beta/representations.rb +4 -0
- data/generated/google/apis/accesscontextmanager_v1beta/service.rb +16 -16
- data/generated/google/apis/accesscontextmanager_v1beta.rb +1 -1
- data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +95 -200
- data/generated/google/apis/adexchangebuyer2_v2beta1/representations.rb +0 -32
- data/generated/google/apis/adexchangebuyer2_v2beta1/service.rb +64 -104
- data/generated/google/apis/adexchangebuyer2_v2beta1.rb +1 -1
- data/generated/google/apis/adexchangebuyer_v1_2/service.rb +7 -7
- data/generated/google/apis/adexchangebuyer_v1_3/service.rb +21 -21
- data/generated/google/apis/adexchangebuyer_v1_4/service.rb +38 -38
- data/generated/google/apis/adexperiencereport_v1/service.rb +2 -2
- data/generated/google/apis/admin_datatransfer_v1/service.rb +5 -5
- data/generated/google/apis/admin_directory_v1/classes.rb +5 -50
- data/generated/google/apis/admin_directory_v1/representations.rb +0 -2
- data/generated/google/apis/admin_directory_v1/service.rb +113 -113
- data/generated/google/apis/admin_directory_v1.rb +1 -1
- data/generated/google/apis/admin_reports_v1/service.rb +6 -6
- data/generated/google/apis/admin_reports_v1.rb +1 -1
- data/generated/google/apis/adsense_v1_4/service.rb +39 -39
- data/generated/google/apis/adsensehost_v4_1/service.rb +26 -26
- data/generated/google/apis/alertcenter_v1beta1/classes.rb +101 -2
- data/generated/google/apis/alertcenter_v1beta1/representations.rb +25 -0
- data/generated/google/apis/alertcenter_v1beta1/service.rb +17 -16
- data/generated/google/apis/alertcenter_v1beta1.rb +1 -1
- data/generated/google/apis/analytics_v2_4/service.rb +6 -6
- data/generated/google/apis/analytics_v3/service.rb +88 -88
- data/generated/google/apis/analyticsreporting_v4/classes.rb +638 -0
- data/generated/google/apis/analyticsreporting_v4/representations.rb +248 -0
- data/generated/google/apis/analyticsreporting_v4/service.rb +31 -1
- data/generated/google/apis/analyticsreporting_v4.rb +1 -1
- data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +51 -11
- data/generated/google/apis/androiddeviceprovisioning_v1/representations.rb +6 -0
- data/generated/google/apis/androiddeviceprovisioning_v1/service.rb +26 -26
- data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
- data/generated/google/apis/androidenterprise_v1/classes.rb +26 -30
- data/generated/google/apis/androidenterprise_v1/representations.rb +2 -14
- data/generated/google/apis/androidenterprise_v1/service.rb +85 -121
- data/generated/google/apis/androidenterprise_v1.rb +1 -1
- data/generated/google/apis/androidmanagement_v1/classes.rb +358 -4
- data/generated/google/apis/androidmanagement_v1/representations.rb +163 -0
- data/generated/google/apis/androidmanagement_v1/service.rb +191 -21
- data/generated/google/apis/androidmanagement_v1.rb +1 -1
- data/generated/google/apis/androidpublisher_v1/service.rb +2 -2
- data/generated/google/apis/androidpublisher_v1_1/service.rb +3 -3
- data/generated/google/apis/androidpublisher_v2/service.rb +64 -70
- data/generated/google/apis/androidpublisher_v2.rb +1 -1
- data/generated/google/apis/androidpublisher_v3/classes.rb +113 -0
- data/generated/google/apis/androidpublisher_v3/representations.rb +58 -0
- data/generated/google/apis/androidpublisher_v3/service.rb +234 -64
- data/generated/google/apis/androidpublisher_v3.rb +1 -1
- data/generated/google/apis/appengine_v1/classes.rb +45 -100
- data/generated/google/apis/appengine_v1/representations.rb +17 -35
- data/generated/google/apis/appengine_v1/service.rb +45 -39
- data/generated/google/apis/appengine_v1.rb +1 -1
- data/generated/google/apis/appengine_v1alpha/classes.rb +2 -99
- data/generated/google/apis/appengine_v1alpha/representations.rb +0 -35
- data/generated/google/apis/appengine_v1alpha/service.rb +15 -15
- data/generated/google/apis/appengine_v1alpha.rb +1 -1
- data/generated/google/apis/appengine_v1beta/classes.rb +7 -102
- data/generated/google/apis/appengine_v1beta/representations.rb +0 -35
- data/generated/google/apis/appengine_v1beta/service.rb +45 -39
- data/generated/google/apis/appengine_v1beta.rb +1 -1
- data/generated/google/apis/appengine_v1beta4/service.rb +20 -20
- data/generated/google/apis/appengine_v1beta5/service.rb +20 -20
- data/generated/google/apis/appsactivity_v1/service.rb +5 -4
- data/generated/google/apis/appsactivity_v1.rb +1 -1
- data/generated/google/apis/appsmarket_v2/service.rb +3 -3
- data/generated/google/apis/appstate_v1/service.rb +5 -5
- data/generated/google/apis/bigquery_v2/classes.rb +1121 -114
- data/generated/google/apis/bigquery_v2/representations.rb +414 -26
- data/generated/google/apis/bigquery_v2/service.rb +184 -22
- data/generated/google/apis/bigquery_v2.rb +1 -1
- data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +88 -10
- data/generated/google/apis/bigquerydatatransfer_v1/representations.rb +43 -0
- data/generated/google/apis/bigquerydatatransfer_v1/service.rb +142 -34
- data/generated/google/apis/bigquerydatatransfer_v1.rb +3 -3
- data/generated/google/apis/bigtableadmin_v1/service.rb +3 -3
- data/generated/google/apis/bigtableadmin_v1.rb +2 -2
- data/generated/google/apis/bigtableadmin_v2/classes.rb +14 -14
- data/generated/google/apis/bigtableadmin_v2/service.rb +142 -33
- data/generated/google/apis/bigtableadmin_v2.rb +2 -2
- data/generated/google/apis/binaryauthorization_v1beta1/classes.rb +66 -6
- data/generated/google/apis/binaryauthorization_v1beta1/representations.rb +17 -0
- data/generated/google/apis/binaryauthorization_v1beta1/service.rb +17 -13
- data/generated/google/apis/binaryauthorization_v1beta1.rb +1 -1
- data/generated/google/apis/blogger_v2/service.rb +9 -9
- data/generated/google/apis/blogger_v3/service.rb +33 -33
- data/generated/google/apis/books_v1/service.rb +51 -51
- data/generated/google/apis/calendar_v3/classes.rb +1 -1
- data/generated/google/apis/calendar_v3/service.rb +47 -47
- data/generated/google/apis/calendar_v3.rb +1 -1
- data/generated/google/apis/chat_v1/service.rb +8 -8
- data/generated/google/apis/civicinfo_v2/service.rb +5 -5
- data/generated/google/apis/classroom_v1/classes.rb +77 -0
- data/generated/google/apis/classroom_v1/representations.rb +32 -0
- data/generated/google/apis/classroom_v1/service.rb +276 -51
- data/generated/google/apis/classroom_v1.rb +7 -1
- data/generated/google/apis/cloudasset_v1/classes.rb +818 -0
- data/generated/google/apis/cloudasset_v1/representations.rb +264 -0
- data/generated/google/apis/cloudasset_v1/service.rb +191 -0
- data/generated/google/apis/cloudasset_v1.rb +34 -0
- data/generated/google/apis/cloudasset_v1beta1/classes.rb +33 -18
- data/generated/google/apis/cloudasset_v1beta1/representations.rb +1 -0
- data/generated/google/apis/cloudasset_v1beta1/service.rb +13 -13
- data/generated/google/apis/cloudasset_v1beta1.rb +2 -2
- data/generated/google/apis/cloudbilling_v1/classes.rb +1 -1
- data/generated/google/apis/cloudbilling_v1/service.rb +14 -14
- data/generated/google/apis/cloudbilling_v1.rb +1 -1
- data/generated/google/apis/cloudbuild_v1/classes.rb +162 -11
- data/generated/google/apis/cloudbuild_v1/representations.rb +67 -0
- data/generated/google/apis/cloudbuild_v1/service.rb +21 -15
- data/generated/google/apis/cloudbuild_v1.rb +1 -1
- data/generated/google/apis/cloudbuild_v1alpha1/classes.rb +7 -1
- data/generated/google/apis/cloudbuild_v1alpha1/representations.rb +2 -0
- data/generated/google/apis/cloudbuild_v1alpha1/service.rb +6 -6
- data/generated/google/apis/cloudbuild_v1alpha1.rb +1 -1
- data/generated/google/apis/clouddebugger_v2/service.rb +8 -8
- data/generated/google/apis/clouderrorreporting_v1beta1/classes.rb +19 -16
- data/generated/google/apis/clouderrorreporting_v1beta1/service.rb +12 -11
- data/generated/google/apis/clouderrorreporting_v1beta1.rb +1 -1
- data/generated/google/apis/cloudfunctions_v1/classes.rb +21 -17
- data/generated/google/apis/cloudfunctions_v1/service.rb +22 -16
- data/generated/google/apis/cloudfunctions_v1.rb +1 -1
- data/generated/google/apis/cloudfunctions_v1beta2/classes.rb +20 -16
- data/generated/google/apis/cloudfunctions_v1beta2/service.rb +17 -11
- data/generated/google/apis/cloudfunctions_v1beta2.rb +1 -1
- data/generated/google/apis/cloudidentity_v1/classes.rb +14 -14
- data/generated/google/apis/cloudidentity_v1/service.rb +18 -27
- data/generated/google/apis/cloudidentity_v1.rb +7 -1
- data/generated/google/apis/cloudidentity_v1beta1/classes.rb +11 -11
- data/generated/google/apis/cloudidentity_v1beta1/service.rb +15 -21
- data/generated/google/apis/cloudidentity_v1beta1.rb +7 -1
- data/generated/google/apis/cloudiot_v1/classes.rb +11 -11
- data/generated/google/apis/cloudiot_v1/service.rb +23 -330
- data/generated/google/apis/cloudiot_v1.rb +1 -1
- data/generated/google/apis/cloudkms_v1/classes.rb +7 -3
- data/generated/google/apis/cloudkms_v1/service.rb +30 -30
- data/generated/google/apis/cloudkms_v1.rb +1 -1
- data/generated/google/apis/cloudprivatecatalog_v1beta1/classes.rb +358 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1/representations.rb +123 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1/service.rb +486 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1.rb +35 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/classes.rb +1212 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/representations.rb +399 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/service.rb +1073 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1.rb +35 -0
- data/generated/google/apis/cloudprofiler_v2/service.rb +3 -3
- data/generated/google/apis/cloudresourcemanager_v1/classes.rb +24 -22
- data/generated/google/apis/cloudresourcemanager_v1/service.rb +68 -59
- data/generated/google/apis/cloudresourcemanager_v1.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v1beta1/classes.rb +3 -3
- data/generated/google/apis/cloudresourcemanager_v1beta1/service.rb +53 -42
- data/generated/google/apis/cloudresourcemanager_v1beta1.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v2/classes.rb +15 -16
- data/generated/google/apis/cloudresourcemanager_v2/service.rb +13 -13
- data/generated/google/apis/cloudresourcemanager_v2.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v2beta1/classes.rb +15 -16
- data/generated/google/apis/cloudresourcemanager_v2beta1/service.rb +13 -13
- data/generated/google/apis/cloudresourcemanager_v2beta1.rb +1 -1
- data/generated/google/apis/cloudscheduler_v1/classes.rb +994 -0
- data/generated/google/apis/cloudscheduler_v1/representations.rb +297 -0
- data/generated/google/apis/cloudscheduler_v1/service.rb +448 -0
- data/generated/google/apis/cloudscheduler_v1.rb +34 -0
- data/generated/google/apis/cloudscheduler_v1beta1/classes.rb +160 -44
- data/generated/google/apis/cloudscheduler_v1beta1/representations.rb +33 -0
- data/generated/google/apis/cloudscheduler_v1beta1/service.rb +15 -12
- data/generated/google/apis/cloudscheduler_v1beta1.rb +1 -1
- data/generated/google/apis/cloudsearch_v1/classes.rb +245 -59
- data/generated/google/apis/cloudsearch_v1/representations.rb +91 -0
- data/generated/google/apis/cloudsearch_v1/service.rb +86 -80
- data/generated/google/apis/cloudsearch_v1.rb +1 -1
- data/generated/google/apis/cloudshell_v1/classes.rb +11 -11
- data/generated/google/apis/cloudshell_v1/service.rb +4 -4
- data/generated/google/apis/cloudshell_v1.rb +1 -1
- data/generated/google/apis/cloudshell_v1alpha1/classes.rb +24 -11
- data/generated/google/apis/cloudshell_v1alpha1/representations.rb +2 -0
- data/generated/google/apis/cloudshell_v1alpha1/service.rb +11 -10
- data/generated/google/apis/cloudshell_v1alpha1.rb +1 -1
- data/generated/google/apis/cloudtasks_v2/classes.rb +1436 -0
- data/generated/google/apis/cloudtasks_v2/representations.rb +408 -0
- data/generated/google/apis/cloudtasks_v2/service.rb +856 -0
- data/generated/google/apis/{partners_v2.rb → cloudtasks_v2.rb} +11 -9
- data/generated/google/apis/cloudtasks_v2beta2/classes.rb +141 -102
- data/generated/google/apis/cloudtasks_v2beta2/service.rb +44 -43
- data/generated/google/apis/cloudtasks_v2beta2.rb +1 -1
- data/generated/google/apis/cloudtasks_v2beta3/classes.rb +388 -108
- data/generated/google/apis/cloudtasks_v2beta3/representations.rb +65 -0
- data/generated/google/apis/cloudtasks_v2beta3/service.rb +40 -39
- data/generated/google/apis/cloudtasks_v2beta3.rb +1 -1
- data/generated/google/apis/cloudtrace_v1/service.rb +3 -3
- data/generated/google/apis/cloudtrace_v2/classes.rb +10 -10
- data/generated/google/apis/cloudtrace_v2/service.rb +2 -2
- data/generated/google/apis/cloudtrace_v2.rb +1 -1
- data/generated/google/apis/commentanalyzer_v1alpha1/classes.rb +484 -0
- data/generated/google/apis/commentanalyzer_v1alpha1/representations.rb +210 -0
- data/generated/google/apis/commentanalyzer_v1alpha1/service.rb +124 -0
- data/generated/google/apis/commentanalyzer_v1alpha1.rb +39 -0
- data/generated/google/apis/composer_v1/classes.rb +21 -15
- data/generated/google/apis/composer_v1/service.rb +9 -9
- data/generated/google/apis/composer_v1.rb +1 -1
- data/generated/google/apis/composer_v1beta1/classes.rb +175 -36
- data/generated/google/apis/composer_v1beta1/representations.rb +50 -0
- data/generated/google/apis/composer_v1beta1/service.rb +9 -9
- data/generated/google/apis/composer_v1beta1.rb +1 -1
- data/generated/google/apis/compute_alpha/classes.rb +10112 -7289
- data/generated/google/apis/compute_alpha/representations.rb +1337 -219
- data/generated/google/apis/compute_alpha/service.rb +4259 -2728
- data/generated/google/apis/compute_alpha.rb +1 -1
- data/generated/google/apis/compute_beta/classes.rb +4254 -2781
- data/generated/google/apis/compute_beta/representations.rb +853 -283
- data/generated/google/apis/compute_beta/service.rb +7077 -5955
- data/generated/google/apis/compute_beta.rb +1 -1
- data/generated/google/apis/compute_v1/classes.rb +1259 -93
- data/generated/google/apis/compute_v1/representations.rb +450 -1
- data/generated/google/apis/compute_v1/service.rb +1085 -400
- data/generated/google/apis/compute_v1.rb +1 -1
- data/generated/google/apis/container_v1/classes.rb +201 -22
- data/generated/google/apis/container_v1/representations.rb +69 -0
- data/generated/google/apis/container_v1/service.rb +151 -102
- data/generated/google/apis/container_v1.rb +1 -1
- data/generated/google/apis/container_v1beta1/classes.rb +215 -25
- data/generated/google/apis/container_v1beta1/representations.rb +86 -0
- data/generated/google/apis/container_v1beta1/service.rb +106 -106
- data/generated/google/apis/container_v1beta1.rb +1 -1
- data/generated/google/apis/containeranalysis_v1alpha1/classes.rb +26 -18
- data/generated/google/apis/containeranalysis_v1alpha1/representations.rb +1 -0
- data/generated/google/apis/containeranalysis_v1alpha1/service.rb +33 -33
- data/generated/google/apis/containeranalysis_v1alpha1.rb +1 -1
- data/generated/google/apis/containeranalysis_v1beta1/classes.rb +226 -12
- data/generated/google/apis/containeranalysis_v1beta1/representations.rb +58 -0
- data/generated/google/apis/containeranalysis_v1beta1/service.rb +24 -24
- data/generated/google/apis/containeranalysis_v1beta1.rb +1 -1
- data/generated/google/apis/content_v2/classes.rb +218 -101
- data/generated/google/apis/content_v2/representations.rb +49 -0
- data/generated/google/apis/content_v2/service.rb +189 -152
- data/generated/google/apis/content_v2.rb +1 -1
- data/generated/google/apis/content_v2_1/classes.rb +387 -216
- data/generated/google/apis/content_v2_1/representations.rb +131 -56
- data/generated/google/apis/content_v2_1/service.rb +190 -107
- data/generated/google/apis/content_v2_1.rb +1 -1
- data/generated/google/apis/customsearch_v1/service.rb +2 -2
- data/generated/google/apis/dataflow_v1b3/classes.rb +148 -31
- data/generated/google/apis/dataflow_v1b3/representations.rb +45 -0
- data/generated/google/apis/dataflow_v1b3/service.rb +415 -56
- data/generated/google/apis/dataflow_v1b3.rb +1 -1
- data/generated/google/apis/datafusion_v1beta1/classes.rb +1304 -0
- data/generated/google/apis/datafusion_v1beta1/representations.rb +469 -0
- data/generated/google/apis/datafusion_v1beta1/service.rb +657 -0
- data/generated/google/apis/datafusion_v1beta1.rb +43 -0
- data/generated/google/apis/dataproc_v1/classes.rb +27 -22
- data/generated/google/apis/dataproc_v1/representations.rb +1 -0
- data/generated/google/apis/dataproc_v1/service.rb +261 -45
- data/generated/google/apis/dataproc_v1.rb +1 -1
- data/generated/google/apis/dataproc_v1beta2/classes.rb +534 -50
- data/generated/google/apis/dataproc_v1beta2/representations.rb +185 -7
- data/generated/google/apis/dataproc_v1beta2/service.rb +617 -51
- data/generated/google/apis/dataproc_v1beta2.rb +1 -1
- data/generated/google/apis/datastore_v1/classes.rb +20 -16
- data/generated/google/apis/datastore_v1/service.rb +15 -15
- data/generated/google/apis/datastore_v1.rb +1 -1
- data/generated/google/apis/datastore_v1beta1/classes.rb +10 -10
- data/generated/google/apis/datastore_v1beta1/service.rb +2 -2
- data/generated/google/apis/datastore_v1beta1.rb +1 -1
- data/generated/google/apis/datastore_v1beta3/classes.rb +10 -6
- data/generated/google/apis/datastore_v1beta3/service.rb +7 -7
- data/generated/google/apis/datastore_v1beta3.rb +1 -1
- data/generated/google/apis/deploymentmanager_alpha/service.rb +37 -37
- data/generated/google/apis/deploymentmanager_v2/service.rb +18 -18
- data/generated/google/apis/deploymentmanager_v2beta/service.rb +32 -32
- data/generated/google/apis/dfareporting_v3_1/service.rb +206 -206
- data/generated/google/apis/dfareporting_v3_2/service.rb +206 -206
- data/generated/google/apis/dfareporting_v3_3/classes.rb +3 -3
- data/generated/google/apis/dfareporting_v3_3/service.rb +204 -204
- data/generated/google/apis/dfareporting_v3_3.rb +1 -1
- data/generated/google/apis/dialogflow_v2/classes.rb +367 -82
- data/generated/google/apis/dialogflow_v2/representations.rb +99 -0
- data/generated/google/apis/dialogflow_v2/service.rb +76 -60
- data/generated/google/apis/dialogflow_v2.rb +1 -1
- data/generated/google/apis/dialogflow_v2beta1/classes.rb +199 -88
- data/generated/google/apis/dialogflow_v2beta1/representations.rb +31 -0
- data/generated/google/apis/dialogflow_v2beta1/service.rb +154 -94
- data/generated/google/apis/dialogflow_v2beta1.rb +1 -1
- data/generated/google/apis/digitalassetlinks_v1/service.rb +7 -6
- data/generated/google/apis/digitalassetlinks_v1.rb +1 -1
- data/generated/google/apis/discovery_v1/service.rb +2 -2
- data/generated/google/apis/dlp_v2/classes.rb +116 -45
- data/generated/google/apis/dlp_v2/representations.rb +32 -0
- data/generated/google/apis/dlp_v2/service.rb +85 -45
- data/generated/google/apis/dlp_v2.rb +1 -1
- data/generated/google/apis/dns_v1/classes.rb +83 -1
- data/generated/google/apis/dns_v1/representations.rb +34 -0
- data/generated/google/apis/dns_v1/service.rb +15 -15
- data/generated/google/apis/dns_v1.rb +1 -1
- data/generated/google/apis/dns_v1beta2/classes.rb +81 -1
- data/generated/google/apis/dns_v1beta2/representations.rb +33 -0
- data/generated/google/apis/dns_v1beta2/service.rb +21 -21
- data/generated/google/apis/dns_v1beta2.rb +1 -1
- data/generated/google/apis/dns_v2beta1/classes.rb +83 -1
- data/generated/google/apis/dns_v2beta1/representations.rb +34 -0
- data/generated/google/apis/dns_v2beta1/service.rb +16 -16
- data/generated/google/apis/dns_v2beta1.rb +1 -1
- data/generated/google/apis/docs_v1/classes.rb +265 -47
- data/generated/google/apis/docs_v1/representations.rb +96 -0
- data/generated/google/apis/docs_v1/service.rb +3 -3
- data/generated/google/apis/docs_v1.rb +1 -1
- data/generated/google/apis/doubleclickbidmanager_v1/classes.rb +6 -4
- data/generated/google/apis/doubleclickbidmanager_v1/service.rb +9 -9
- data/generated/google/apis/doubleclickbidmanager_v1.rb +1 -1
- data/generated/google/apis/doubleclicksearch_v2/service.rb +10 -10
- data/generated/google/apis/drive_v2/classes.rb +601 -80
- data/generated/google/apis/drive_v2/representations.rb +152 -0
- data/generated/google/apis/drive_v2/service.rb +574 -164
- data/generated/google/apis/drive_v2.rb +1 -1
- data/generated/google/apis/drive_v3/classes.rb +591 -75
- data/generated/google/apis/drive_v3/representations.rb +151 -0
- data/generated/google/apis/drive_v3/service.rb +483 -116
- data/generated/google/apis/drive_v3.rb +1 -1
- data/generated/google/apis/driveactivity_v2/classes.rb +149 -17
- data/generated/google/apis/driveactivity_v2/representations.rb +69 -0
- data/generated/google/apis/driveactivity_v2/service.rb +1 -1
- data/generated/google/apis/driveactivity_v2.rb +1 -1
- data/generated/google/apis/factchecktools_v1alpha1/classes.rb +459 -0
- data/generated/google/apis/factchecktools_v1alpha1/representations.rb +207 -0
- data/generated/google/apis/factchecktools_v1alpha1/service.rb +300 -0
- data/generated/google/apis/factchecktools_v1alpha1.rb +34 -0
- data/generated/google/apis/fcm_v1/classes.rb +424 -0
- data/generated/google/apis/fcm_v1/representations.rb +167 -0
- data/generated/google/apis/fcm_v1/service.rb +97 -0
- data/generated/google/apis/fcm_v1.rb +35 -0
- data/generated/google/apis/file_v1/classes.rb +646 -11
- data/generated/google/apis/file_v1/representations.rb +207 -0
- data/generated/google/apis/file_v1/service.rb +196 -6
- data/generated/google/apis/file_v1.rb +1 -1
- data/generated/google/apis/file_v1beta1/classes.rb +461 -19
- data/generated/google/apis/file_v1beta1/representations.rb +137 -0
- data/generated/google/apis/file_v1beta1/service.rb +11 -11
- data/generated/google/apis/file_v1beta1.rb +1 -1
- data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +41 -14
- data/generated/google/apis/firebasedynamiclinks_v1/representations.rb +4 -0
- data/generated/google/apis/firebasedynamiclinks_v1/service.rb +5 -5
- data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
- data/generated/google/apis/firebasehosting_v1beta1/classes.rb +13 -13
- data/generated/google/apis/firebasehosting_v1beta1/service.rb +14 -14
- data/generated/google/apis/firebasehosting_v1beta1.rb +1 -1
- data/generated/google/apis/firebaserules_v1/classes.rb +10 -2
- data/generated/google/apis/firebaserules_v1/service.rb +12 -12
- data/generated/google/apis/firebaserules_v1.rb +1 -1
- data/generated/google/apis/firestore_v1/classes.rb +15 -15
- data/generated/google/apis/firestore_v1/service.rb +28 -28
- data/generated/google/apis/firestore_v1.rb +1 -1
- data/generated/google/apis/firestore_v1beta1/classes.rb +15 -15
- data/generated/google/apis/firestore_v1beta1/service.rb +19 -19
- data/generated/google/apis/firestore_v1beta1.rb +1 -1
- data/generated/google/apis/firestore_v1beta2/classes.rb +10 -10
- data/generated/google/apis/firestore_v1beta2/service.rb +9 -9
- data/generated/google/apis/firestore_v1beta2.rb +1 -1
- data/generated/google/apis/fitness_v1/classes.rb +4 -1
- data/generated/google/apis/fitness_v1/service.rb +14 -58
- data/generated/google/apis/fitness_v1.rb +1 -1
- data/generated/google/apis/fusiontables_v1/service.rb +32 -32
- data/generated/google/apis/fusiontables_v2/service.rb +34 -34
- data/generated/google/apis/games_configuration_v1configuration/service.rb +13 -13
- data/generated/google/apis/games_management_v1management/service.rb +27 -27
- data/generated/google/apis/games_management_v1management.rb +2 -2
- data/generated/google/apis/games_v1/service.rb +53 -53
- data/generated/google/apis/games_v1.rb +3 -3
- data/generated/google/apis/genomics_v1/classes.rb +190 -3321
- data/generated/google/apis/genomics_v1/representations.rb +128 -1265
- data/generated/google/apis/genomics_v1/service.rb +75 -1982
- data/generated/google/apis/genomics_v1.rb +1 -10
- data/generated/google/apis/genomics_v1alpha2/classes.rb +13 -53
- data/generated/google/apis/genomics_v1alpha2/representations.rb +0 -26
- data/generated/google/apis/genomics_v1alpha2/service.rb +11 -12
- data/generated/google/apis/genomics_v1alpha2.rb +1 -1
- data/generated/google/apis/genomics_v2alpha1/classes.rb +26 -58
- data/generated/google/apis/genomics_v2alpha1/representations.rb +1 -26
- data/generated/google/apis/genomics_v2alpha1/service.rb +6 -7
- data/generated/google/apis/genomics_v2alpha1.rb +1 -1
- data/generated/google/apis/gmail_v1/classes.rb +29 -0
- data/generated/google/apis/gmail_v1/representations.rb +13 -0
- data/generated/google/apis/gmail_v1/service.rb +142 -66
- data/generated/google/apis/gmail_v1.rb +1 -1
- data/generated/google/apis/groupsmigration_v1/service.rb +1 -1
- data/generated/google/apis/groupssettings_v1/classes.rb +126 -1
- data/generated/google/apis/groupssettings_v1/representations.rb +18 -0
- data/generated/google/apis/groupssettings_v1/service.rb +4 -4
- data/generated/google/apis/groupssettings_v1.rb +2 -2
- data/generated/google/apis/healthcare_v1alpha2/classes.rb +2849 -0
- data/generated/google/apis/healthcare_v1alpha2/representations.rb +1260 -0
- data/generated/google/apis/healthcare_v1alpha2/service.rb +4011 -0
- data/generated/google/apis/healthcare_v1alpha2.rb +34 -0
- data/generated/google/apis/healthcare_v1beta1/classes.rb +2464 -0
- data/generated/google/apis/healthcare_v1beta1/representations.rb +1042 -0
- data/generated/google/apis/healthcare_v1beta1/service.rb +3413 -0
- data/generated/google/apis/healthcare_v1beta1.rb +34 -0
- data/generated/google/apis/iam_v1/classes.rb +171 -1
- data/generated/google/apis/iam_v1/representations.rb +95 -0
- data/generated/google/apis/iam_v1/service.rb +249 -39
- data/generated/google/apis/iam_v1.rb +1 -1
- data/generated/google/apis/iamcredentials_v1/classes.rb +8 -4
- data/generated/google/apis/iamcredentials_v1/service.rb +15 -10
- data/generated/google/apis/iamcredentials_v1.rb +1 -1
- data/generated/google/apis/iap_v1/classes.rb +1 -1
- data/generated/google/apis/iap_v1/service.rb +3 -3
- data/generated/google/apis/iap_v1.rb +1 -1
- data/generated/google/apis/iap_v1beta1/classes.rb +1 -1
- data/generated/google/apis/iap_v1beta1/service.rb +3 -3
- data/generated/google/apis/iap_v1beta1.rb +1 -1
- data/generated/google/apis/identitytoolkit_v3/service.rb +20 -20
- data/generated/google/apis/indexing_v3/service.rb +2 -2
- data/generated/google/apis/jobs_v2/classes.rb +16 -17
- data/generated/google/apis/jobs_v2/service.rb +17 -17
- data/generated/google/apis/jobs_v2.rb +1 -1
- data/generated/google/apis/jobs_v3/classes.rb +14 -8
- data/generated/google/apis/jobs_v3/service.rb +16 -17
- data/generated/google/apis/jobs_v3.rb +1 -1
- data/generated/google/apis/jobs_v3p1beta1/classes.rb +26 -20
- data/generated/google/apis/jobs_v3p1beta1/service.rb +17 -18
- data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
- data/generated/google/apis/kgsearch_v1/service.rb +1 -1
- data/generated/google/apis/language_v1/classes.rb +8 -7
- data/generated/google/apis/language_v1/service.rb +6 -6
- data/generated/google/apis/language_v1.rb +1 -1
- data/generated/google/apis/language_v1beta1/classes.rb +5 -5
- data/generated/google/apis/language_v1beta1/service.rb +4 -4
- data/generated/google/apis/language_v1beta1.rb +1 -1
- data/generated/google/apis/language_v1beta2/classes.rb +8 -7
- data/generated/google/apis/language_v1beta2/service.rb +6 -6
- data/generated/google/apis/language_v1beta2.rb +1 -1
- data/generated/google/apis/libraryagent_v1/service.rb +6 -6
- data/generated/google/apis/licensing_v1/service.rb +7 -7
- data/generated/google/apis/logging_v2/classes.rb +8 -3
- data/generated/google/apis/logging_v2/representations.rb +1 -0
- data/generated/google/apis/logging_v2/service.rb +72 -72
- data/generated/google/apis/logging_v2.rb +1 -1
- data/generated/google/apis/manufacturers_v1/service.rb +4 -4
- data/generated/google/apis/mirror_v1/service.rb +24 -24
- data/generated/google/apis/ml_v1/classes.rb +240 -52
- data/generated/google/apis/ml_v1/representations.rb +25 -2
- data/generated/google/apis/ml_v1/service.rb +36 -36
- data/generated/google/apis/ml_v1.rb +1 -1
- data/generated/google/apis/monitoring_v3/classes.rb +22 -18
- data/generated/google/apis/monitoring_v3/representations.rb +2 -1
- data/generated/google/apis/monitoring_v3/service.rb +42 -37
- data/generated/google/apis/monitoring_v3.rb +1 -1
- data/generated/google/apis/oauth2_v1/classes.rb +0 -124
- data/generated/google/apis/oauth2_v1/representations.rb +0 -62
- data/generated/google/apis/oauth2_v1/service.rb +3 -162
- data/generated/google/apis/oauth2_v1.rb +3 -6
- data/generated/google/apis/oauth2_v2/service.rb +4 -4
- data/generated/google/apis/oauth2_v2.rb +3 -6
- data/generated/google/apis/oslogin_v1/service.rb +8 -7
- data/generated/google/apis/oslogin_v1.rb +3 -2
- data/generated/google/apis/oslogin_v1alpha/service.rb +8 -7
- data/generated/google/apis/oslogin_v1alpha.rb +3 -2
- data/generated/google/apis/oslogin_v1beta/service.rb +8 -7
- data/generated/google/apis/oslogin_v1beta.rb +3 -2
- data/generated/google/apis/pagespeedonline_v1/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v2/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v4/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v5/classes.rb +43 -0
- data/generated/google/apis/pagespeedonline_v5/representations.rb +18 -0
- data/generated/google/apis/pagespeedonline_v5/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v5.rb +1 -1
- data/generated/google/apis/people_v1/classes.rb +38 -29
- data/generated/google/apis/people_v1/representations.rb +1 -0
- data/generated/google/apis/people_v1/service.rb +18 -13
- data/generated/google/apis/people_v1.rb +2 -5
- data/generated/google/apis/playcustomapp_v1/service.rb +1 -1
- data/generated/google/apis/plus_domains_v1/service.rb +18 -392
- data/generated/google/apis/plus_domains_v1.rb +4 -10
- data/generated/google/apis/plus_v1/service.rb +16 -16
- data/generated/google/apis/plus_v1.rb +4 -4
- data/generated/google/apis/poly_v1/classes.rb +8 -6
- data/generated/google/apis/poly_v1/service.rb +15 -12
- data/generated/google/apis/poly_v1.rb +1 -1
- data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +8 -6
- data/generated/google/apis/proximitybeacon_v1beta1/service.rb +17 -17
- data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
- data/generated/google/apis/pubsub_v1/classes.rb +55 -39
- data/generated/google/apis/pubsub_v1/representations.rb +16 -0
- data/generated/google/apis/pubsub_v1/service.rb +46 -69
- data/generated/google/apis/pubsub_v1.rb +1 -1
- data/generated/google/apis/pubsub_v1beta1a/service.rb +15 -15
- data/generated/google/apis/pubsub_v1beta2/classes.rb +45 -1
- data/generated/google/apis/pubsub_v1beta2/representations.rb +16 -0
- data/generated/google/apis/pubsub_v1beta2/service.rb +20 -20
- data/generated/google/apis/pubsub_v1beta2.rb +1 -1
- data/generated/google/apis/redis_v1/classes.rb +30 -10
- data/generated/google/apis/redis_v1/representations.rb +13 -0
- data/generated/google/apis/redis_v1/service.rb +51 -15
- data/generated/google/apis/redis_v1.rb +1 -1
- data/generated/google/apis/redis_v1beta1/classes.rb +18 -21
- data/generated/google/apis/redis_v1beta1/representations.rb +0 -1
- data/generated/google/apis/redis_v1beta1/service.rb +15 -15
- data/generated/google/apis/redis_v1beta1.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v1/classes.rb +50 -35
- data/generated/google/apis/remotebuildexecution_v1/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v1/service.rb +7 -7
- data/generated/google/apis/remotebuildexecution_v1.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v1alpha/classes.rb +48 -33
- data/generated/google/apis/remotebuildexecution_v1alpha/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v1alpha/service.rb +10 -10
- data/generated/google/apis/remotebuildexecution_v1alpha.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v2/classes.rb +58 -43
- data/generated/google/apis/remotebuildexecution_v2/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v2/service.rb +9 -9
- data/generated/google/apis/remotebuildexecution_v2.rb +1 -1
- data/generated/google/apis/replicapool_v1beta1/service.rb +10 -10
- data/generated/google/apis/reseller_v1/classes.rb +32 -39
- data/generated/google/apis/reseller_v1/service.rb +18 -18
- data/generated/google/apis/reseller_v1.rb +1 -1
- data/generated/google/apis/run_v1/classes.rb +73 -0
- data/generated/google/apis/run_v1/representations.rb +43 -0
- data/generated/google/apis/run_v1/service.rb +90 -0
- data/generated/google/apis/run_v1.rb +35 -0
- data/generated/google/apis/run_v1alpha1/classes.rb +3882 -0
- data/generated/google/apis/run_v1alpha1/representations.rb +1425 -0
- data/generated/google/apis/run_v1alpha1/service.rb +2071 -0
- data/generated/google/apis/run_v1alpha1.rb +35 -0
- data/generated/google/apis/runtimeconfig_v1/classes.rb +11 -11
- data/generated/google/apis/runtimeconfig_v1/service.rb +3 -3
- data/generated/google/apis/runtimeconfig_v1.rb +1 -1
- data/generated/google/apis/runtimeconfig_v1beta1/classes.rb +26 -25
- data/generated/google/apis/runtimeconfig_v1beta1/service.rb +22 -22
- data/generated/google/apis/runtimeconfig_v1beta1.rb +1 -1
- data/generated/google/apis/safebrowsing_v4/service.rb +7 -7
- data/generated/google/apis/script_v1/classes.rb +167 -6
- data/generated/google/apis/script_v1/representations.rb +79 -1
- data/generated/google/apis/script_v1/service.rb +16 -16
- data/generated/google/apis/script_v1.rb +1 -1
- data/generated/google/apis/searchconsole_v1/service.rb +1 -1
- data/generated/google/apis/securitycenter_v1/classes.rb +1627 -0
- data/generated/google/apis/securitycenter_v1/representations.rb +569 -0
- data/generated/google/apis/securitycenter_v1/service.rb +1110 -0
- data/generated/google/apis/securitycenter_v1.rb +35 -0
- data/generated/google/apis/securitycenter_v1beta1/classes.rb +1514 -0
- data/generated/google/apis/securitycenter_v1beta1/representations.rb +548 -0
- data/generated/google/apis/securitycenter_v1beta1/service.rb +1035 -0
- data/generated/google/apis/securitycenter_v1beta1.rb +35 -0
- data/generated/google/apis/servicebroker_v1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1/service.rb +3 -3
- data/generated/google/apis/servicebroker_v1.rb +1 -1
- data/generated/google/apis/servicebroker_v1alpha1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1alpha1/service.rb +16 -16
- data/generated/google/apis/servicebroker_v1alpha1.rb +1 -1
- data/generated/google/apis/servicebroker_v1beta1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1beta1/service.rb +21 -21
- data/generated/google/apis/servicebroker_v1beta1.rb +1 -1
- data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +453 -149
- data/generated/google/apis/serviceconsumermanagement_v1/representations.rb +202 -29
- data/generated/google/apis/serviceconsumermanagement_v1/service.rb +148 -62
- data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
- data/generated/google/apis/servicecontrol_v1/classes.rb +122 -25
- data/generated/google/apis/servicecontrol_v1/representations.rb +47 -0
- data/generated/google/apis/servicecontrol_v1/service.rb +3 -3
- data/generated/google/apis/servicecontrol_v1.rb +1 -1
- data/generated/google/apis/servicemanagement_v1/classes.rb +93 -110
- data/generated/google/apis/servicemanagement_v1/representations.rb +13 -26
- data/generated/google/apis/servicemanagement_v1/service.rb +30 -27
- data/generated/google/apis/servicemanagement_v1.rb +1 -1
- data/generated/google/apis/servicenetworking_v1/classes.rb +3626 -0
- data/generated/google/apis/servicenetworking_v1/representations.rb +1055 -0
- data/generated/google/apis/servicenetworking_v1/service.rb +440 -0
- data/generated/google/apis/servicenetworking_v1.rb +38 -0
- data/generated/google/apis/servicenetworking_v1beta/classes.rb +65 -108
- data/generated/google/apis/servicenetworking_v1beta/representations.rb +2 -29
- data/generated/google/apis/servicenetworking_v1beta/service.rb +6 -6
- data/generated/google/apis/servicenetworking_v1beta.rb +1 -1
- data/generated/google/apis/serviceusage_v1/classes.rb +160 -109
- data/generated/google/apis/serviceusage_v1/representations.rb +42 -26
- data/generated/google/apis/serviceusage_v1/service.rb +17 -19
- data/generated/google/apis/serviceusage_v1.rb +1 -1
- data/generated/google/apis/serviceusage_v1beta1/classes.rb +161 -110
- data/generated/google/apis/serviceusage_v1beta1/representations.rb +42 -26
- data/generated/google/apis/serviceusage_v1beta1/service.rb +7 -7
- data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
- data/generated/google/apis/sheets_v4/classes.rb +115 -26
- data/generated/google/apis/sheets_v4/service.rb +17 -17
- data/generated/google/apis/sheets_v4.rb +1 -1
- data/generated/google/apis/site_verification_v1/service.rb +7 -7
- data/generated/google/apis/slides_v1/classes.rb +2 -2
- data/generated/google/apis/slides_v1/service.rb +5 -5
- data/generated/google/apis/slides_v1.rb +1 -1
- data/generated/google/apis/sourcerepo_v1/classes.rb +183 -1
- data/generated/google/apis/sourcerepo_v1/representations.rb +45 -0
- data/generated/google/apis/sourcerepo_v1/service.rb +45 -10
- data/generated/google/apis/sourcerepo_v1.rb +1 -1
- data/generated/google/apis/spanner_v1/classes.rb +231 -17
- data/generated/google/apis/spanner_v1/representations.rb +66 -0
- data/generated/google/apis/spanner_v1/service.rb +92 -42
- data/generated/google/apis/spanner_v1.rb +1 -1
- data/generated/google/apis/speech_v1/classes.rb +110 -13
- data/generated/google/apis/speech_v1/representations.rb +24 -0
- data/generated/google/apis/speech_v1/service.rb +9 -7
- data/generated/google/apis/speech_v1.rb +1 -1
- data/generated/google/apis/speech_v1p1beta1/classes.rb +19 -13
- data/generated/google/apis/speech_v1p1beta1/representations.rb +1 -0
- data/generated/google/apis/speech_v1p1beta1/service.rb +9 -7
- data/generated/google/apis/speech_v1p1beta1.rb +1 -1
- data/generated/google/apis/sqladmin_v1beta4/classes.rb +94 -17
- data/generated/google/apis/sqladmin_v1beta4/representations.rb +36 -0
- data/generated/google/apis/sqladmin_v1beta4/service.rb +44 -44
- data/generated/google/apis/sqladmin_v1beta4.rb +1 -1
- data/generated/google/apis/storage_v1/classes.rb +201 -4
- data/generated/google/apis/storage_v1/representations.rb +76 -1
- data/generated/google/apis/storage_v1/service.rb +488 -93
- data/generated/google/apis/storage_v1.rb +1 -1
- data/generated/google/apis/storage_v1beta1/service.rb +24 -24
- data/generated/google/apis/storage_v1beta2/service.rb +34 -34
- data/generated/google/apis/storagetransfer_v1/classes.rb +44 -44
- data/generated/google/apis/storagetransfer_v1/service.rb +35 -36
- data/generated/google/apis/storagetransfer_v1.rb +2 -2
- data/generated/google/apis/streetviewpublish_v1/classes.rb +27 -27
- data/generated/google/apis/streetviewpublish_v1/service.rb +36 -40
- data/generated/google/apis/streetviewpublish_v1.rb +1 -1
- data/generated/google/apis/surveys_v2/service.rb +8 -8
- data/generated/google/apis/tagmanager_v1/service.rb +49 -95
- data/generated/google/apis/tagmanager_v1.rb +1 -1
- data/generated/google/apis/tagmanager_v2/classes.rb +197 -292
- data/generated/google/apis/tagmanager_v2/representations.rb +62 -103
- data/generated/google/apis/tagmanager_v2/service.rb +287 -249
- data/generated/google/apis/tagmanager_v2.rb +1 -1
- data/generated/google/apis/tasks_v1/service.rb +19 -19
- data/generated/google/apis/tasks_v1.rb +2 -2
- data/generated/google/apis/testing_v1/classes.rb +44 -39
- data/generated/google/apis/testing_v1/representations.rb +3 -1
- data/generated/google/apis/testing_v1/service.rb +5 -5
- data/generated/google/apis/testing_v1.rb +1 -1
- data/generated/google/apis/texttospeech_v1/service.rb +2 -2
- data/generated/google/apis/texttospeech_v1.rb +1 -1
- data/generated/google/apis/texttospeech_v1beta1/service.rb +2 -2
- data/generated/google/apis/texttospeech_v1beta1.rb +1 -1
- data/generated/google/apis/toolresults_v1beta3/classes.rb +340 -17
- data/generated/google/apis/toolresults_v1beta3/representations.rb +90 -0
- data/generated/google/apis/toolresults_v1beta3/service.rb +140 -24
- data/generated/google/apis/toolresults_v1beta3.rb +1 -1
- data/generated/google/apis/tpu_v1/classes.rb +21 -15
- data/generated/google/apis/tpu_v1/representations.rb +1 -0
- data/generated/google/apis/tpu_v1/service.rb +17 -17
- data/generated/google/apis/tpu_v1.rb +1 -1
- data/generated/google/apis/tpu_v1alpha1/classes.rb +21 -15
- data/generated/google/apis/tpu_v1alpha1/representations.rb +1 -0
- data/generated/google/apis/tpu_v1alpha1/service.rb +17 -17
- data/generated/google/apis/tpu_v1alpha1.rb +1 -1
- data/generated/google/apis/translate_v2/service.rb +5 -5
- data/generated/google/apis/urlshortener_v1/service.rb +3 -3
- data/generated/google/apis/vault_v1/classes.rb +44 -18
- data/generated/google/apis/vault_v1/representations.rb +4 -0
- data/generated/google/apis/vault_v1/service.rb +28 -28
- data/generated/google/apis/vault_v1.rb +1 -1
- data/generated/google/apis/videointelligence_v1/classes.rb +2193 -350
- data/generated/google/apis/videointelligence_v1/representations.rb +805 -6
- data/generated/google/apis/videointelligence_v1/service.rb +7 -6
- data/generated/google/apis/videointelligence_v1.rb +3 -2
- data/generated/google/apis/videointelligence_v1beta2/classes.rb +2448 -605
- data/generated/google/apis/videointelligence_v1beta2/representations.rb +806 -7
- data/generated/google/apis/videointelligence_v1beta2/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1beta2.rb +3 -2
- data/generated/google/apis/videointelligence_v1p1beta1/classes.rb +2422 -579
- data/generated/google/apis/videointelligence_v1p1beta1/representations.rb +806 -7
- data/generated/google/apis/videointelligence_v1p1beta1/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1p1beta1.rb +3 -2
- data/generated/google/apis/videointelligence_v1p2beta1/classes.rb +2645 -830
- data/generated/google/apis/videointelligence_v1p2beta1/representations.rb +796 -12
- data/generated/google/apis/videointelligence_v1p2beta1/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1p2beta1.rb +3 -2
- data/generated/google/apis/videointelligence_v1p3beta1/classes.rb +4687 -0
- data/generated/google/apis/videointelligence_v1p3beta1/representations.rb +2005 -0
- data/generated/google/apis/videointelligence_v1p3beta1/service.rb +94 -0
- data/generated/google/apis/videointelligence_v1p3beta1.rb +36 -0
- data/generated/google/apis/vision_v1/classes.rb +4397 -124
- data/generated/google/apis/vision_v1/representations.rb +2366 -541
- data/generated/google/apis/vision_v1/service.rb +160 -33
- data/generated/google/apis/vision_v1.rb +1 -1
- data/generated/google/apis/vision_v1p1beta1/classes.rb +4451 -158
- data/generated/google/apis/vision_v1p1beta1/representations.rb +2415 -576
- data/generated/google/apis/vision_v1p1beta1/service.rb +73 -2
- data/generated/google/apis/vision_v1p1beta1.rb +1 -1
- data/generated/google/apis/vision_v1p2beta1/classes.rb +4451 -158
- data/generated/google/apis/vision_v1p2beta1/representations.rb +2443 -604
- data/generated/google/apis/vision_v1p2beta1/service.rb +73 -2
- data/generated/google/apis/vision_v1p2beta1.rb +1 -1
- data/generated/google/apis/webfonts_v1/service.rb +1 -1
- data/generated/google/apis/webmasters_v3/classes.rb +0 -166
- data/generated/google/apis/webmasters_v3/representations.rb +0 -93
- data/generated/google/apis/webmasters_v3/service.rb +9 -180
- data/generated/google/apis/webmasters_v3.rb +1 -1
- data/generated/google/apis/websecurityscanner_v1alpha/service.rb +13 -13
- data/generated/google/apis/websecurityscanner_v1beta/classes.rb +973 -0
- data/generated/google/apis/websecurityscanner_v1beta/representations.rb +452 -0
- data/generated/google/apis/websecurityscanner_v1beta/service.rb +548 -0
- data/generated/google/apis/websecurityscanner_v1beta.rb +34 -0
- data/generated/google/apis/youtube_analytics_v1/service.rb +8 -8
- data/generated/google/apis/youtube_analytics_v1beta1/service.rb +8 -8
- data/generated/google/apis/youtube_analytics_v2/service.rb +8 -8
- data/generated/google/apis/youtube_partner_v1/classes.rb +15 -34
- data/generated/google/apis/youtube_partner_v1/representations.rb +4 -17
- data/generated/google/apis/youtube_partner_v1/service.rb +74 -74
- data/generated/google/apis/youtube_partner_v1.rb +1 -1
- data/generated/google/apis/youtube_v3/service.rb +71 -71
- data/generated/google/apis/youtube_v3.rb +1 -1
- data/generated/google/apis/youtubereporting_v1/classes.rb +2 -2
- data/generated/google/apis/youtubereporting_v1/service.rb +8 -8
- data/generated/google/apis/youtubereporting_v1.rb +1 -1
- data/google-api-client.gemspec +2 -2
- data/lib/google/apis/core/http_command.rb +1 -0
- data/lib/google/apis/core/json_representation.rb +4 -0
- data/lib/google/apis/core/upload.rb +3 -3
- data/lib/google/apis/generator/model.rb +1 -1
- data/lib/google/apis/generator/templates/_method.tmpl +3 -3
- data/lib/google/apis/version.rb +1 -1
- metadata +86 -17
- data/.kokoro/common.cfg +0 -22
- data/.kokoro/windows.sh +0 -32
- data/generated/google/apis/logging_v2beta1/classes.rb +0 -1765
- data/generated/google/apis/logging_v2beta1/representations.rb +0 -537
- data/generated/google/apis/logging_v2beta1/service.rb +0 -570
- data/generated/google/apis/logging_v2beta1.rb +0 -46
- data/generated/google/apis/partners_v2/classes.rb +0 -2260
- data/generated/google/apis/partners_v2/representations.rb +0 -905
- data/generated/google/apis/partners_v2/service.rb +0 -1077
- data/samples/web/.env +0 -2
@@ -277,6 +277,16 @@ module Google
       class GoogleCloudVideointelligenceV1LabelDetectionConfig
         include Google::Apis::Core::Hashable
 
+        # The confidence threshold we perform filtering on the labels from
+        # frame-level detection. If not set, it is set to 0.4 by default. The valid
+        # range for this threshold is [0.1, 0.9]. Any value set outside of this
+        # range will be clipped.
+        # Note: for best results please follow the default threshold. We will update
+        # the default threshold everytime when we release a new model.
+        # Corresponds to the JSON property `frameConfidenceThreshold`
+        # @return [Float]
+        attr_accessor :frame_confidence_threshold
+
         # What labels should be detected with LABEL_DETECTION, in addition to
         # video-level labels or segment-level labels.
         # If unspecified, defaults to `SHOT_MODE`.
@@ -299,15 +309,27 @@ module Google
         attr_accessor :stationary_camera
         alias_method :stationary_camera?, :stationary_camera
 
+        # The confidence threshold we perform filtering on the labels from
+        # video-level and shot-level detections. If not set, it is set to 0.3 by
+        # default. The valid range for this threshold is [0.1, 0.9]. Any value set
+        # outside of this range will be clipped.
+        # Note: for best results please follow the default threshold. We will update
+        # the default threshold everytime when we release a new model.
+        # Corresponds to the JSON property `videoConfidenceThreshold`
+        # @return [Float]
+        attr_accessor :video_confidence_threshold
+
         def initialize(**args)
           update!(**args)
         end
 
         # Update properties of this object
         def update!(**args)
+          @frame_confidence_threshold = args[:frame_confidence_threshold] if args.key?(:frame_confidence_threshold)
           @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
           @model = args[:model] if args.key?(:model)
           @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
+          @video_confidence_threshold = args[:video_confidence_threshold] if args.key?(:video_confidence_threshold)
         end
       end
 
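Both new thresholds follow the same documented rule: an unset value falls back to the level's default (0.4 for frame-level, 0.3 for video/shot-level), and any value outside [0.1, 0.9] is clipped. A minimal Ruby sketch of that rule; `effective_threshold` is a hypothetical helper, not part of the generated client:

```ruby
# Documented threshold handling for LabelDetectionConfig: nil falls back to
# the level's default, and out-of-range values are clipped to [0.1, 0.9].
def effective_threshold(value, default)
  return default if value.nil?
  value.clamp(0.1, 0.9)
end

puts effective_threshold(nil, 0.4)   # frame-level default → 0.4
puts effective_threshold(0.95, 0.3)  # clipped to the upper bound → 0.9
```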
@@ -362,6 +384,184 @@ module Google
         end
       end
 
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
+
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedVertex>]
+        attr_accessor :vertices
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @vertices = args[:vertices] if args.key?(:vertices)
+        end
+      end
+
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1NormalizedVertex
+        include Google::Apis::Core::Hashable
+
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
+
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
       # Config for SHOT_CHANGE_DETECTION.
       class GoogleCloudVideointelligenceV1ShotChangeDetectionConfig
         include Google::Apis::Core::Hashable
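Since a NormalizedBoundingBox stores its edges as fractions of the original frame ([0, 1] range), drawing it on a frame means scaling each edge by the frame's pixel dimensions. A small Ruby sketch of that conversion; `to_pixels` is a hypothetical helper, not part of the generated client:

```ruby
# Convert a normalized bounding box (edges in [0, 1], relative to the
# original frame) into pixel coordinates for a frame of the given size.
def to_pixels(box, width, height)
  {
    left:   (box[:left]   * width).round,
    right:  (box[:right]  * width).round,
    top:    (box[:top]    * height).round,
    bottom: (box[:bottom] * height).round
  }
end

box = { left: 0.1, right: 0.5, top: 0.25, bottom: 0.75 }
puts to_pixels(box, 1920, 1080).inspect
# On a 1920x1080 frame: left 192, right 960, top 270, bottom 810
```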
@@ -574,31 +774,44 @@ module Google
         end
       end
 
-      #
-
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1TextAnnotation
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextSegment>]
+        attr_accessor :segments
+
+        # The detected text.
+        # Corresponds to the JSON property `text`
         # @return [String]
-        attr_accessor :
+        attr_accessor :text
 
-
-
-
-        # @return [Fixnum]
-        attr_accessor :progress_percent
+        def initialize(**args)
+          update!(**args)
+        end
 
-        #
-
-
-
+        # Update properties of this object
+        def update!(**args)
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
+        end
+      end
 
-
-
-
-
+      # Config for TEXT_DETECTION.
+      class GoogleCloudVideointelligenceV1TextDetectionConfig
+        include Google::Apis::Core::Hashable
+
+        # Language hint can be specified if the language to be detected is known a
+        # priori. It can increase the accuracy of the detection. Language hint must
+        # be language code in BCP-47 format.
+        # Automatic language detection is performed if no hint is provided.
+        # Corresponds to the JSON property `languageHints`
+        # @return [Array<String>]
+        attr_accessor :language_hints
 
         def initialize(**args)
           update!(**args)
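TextDetectionConfig#language_hints expects BCP-47 language codes (for example "en-US"), and detection is automatic when no hint is given. The sketch below filters candidate hints against a deliberately simplified language[-SUBTAG] pattern before they would be set on a config; the filter and its regex are ours, not part of the client, and real BCP-47 is considerably richer:

```ruby
# Keep only hints that look like a simple BCP-47 tag: a 2-3 letter
# language code, optionally followed by one 2-4 letter subtag.
# This is an illustrative approximation, not a full BCP-47 validator.
BCP47_ISH = /\A[a-z]{2,3}(-[A-Za-z]{2,4})?\z/

def usable_hints(hints)
  hints.select { |h| h.match?(BCP47_ISH) }
end

puts usable_hints(["en-US", "zh-Hans", "???", "fr"]).inspect
# → ["en-US", "zh-Hans", "fr"]
```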
@@ -606,54 +819,163 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
-          @start_time = args[:start_time] if args.key?(:start_time)
-          @update_time = args[:update_time] if args.key?(:update_time)
+          @language_hints = args[:language_hints] if args.key?(:language_hints)
         end
       end
 
-      #
-
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1TextFrame
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-
-
-        #
-        #
-        #
-
-
-
-
-
-
-        #
-
-
-
-
-
-
-
-
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        #         0----1
+        #         |    |
+        #         3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        #         2----3
+        #         |    |
+        #         1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
+
+        # Timestamp of this frame.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for text detection.
+      class GoogleCloudVideointelligenceV1TextSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence for the track of detected text. It is calculated as the highest
+        # over all frames where OCR detected text appears.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Information related to the frames where OCR detected text appears.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Annotation progress for a single video.
+      class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+        include Google::Apis::Core::Hashable
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Approximate percentage processed thus far. Guaranteed to be
+        # 100 when fully processed.
+        # Corresponds to the JSON property `progressPercent`
+        # @return [Fixnum]
+        attr_accessor :progress_percent
+
+        # Time when the request was received.
+        # Corresponds to the JSON property `startTime`
+        # @return [String]
+        attr_accessor :start_time
+
+        # Time of the most recent update.
+        # Corresponds to the JSON property `updateTime`
+        # @return [String]
+        attr_accessor :update_time
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+          @start_time = args[:start_time] if args.key?(:start_time)
+          @update_time = args[:update_time] if args.key?(:update_time)
+        end
+      end
+
+      # Annotation results for a single video.
+      class GoogleCloudVideointelligenceV1VideoAnnotationResults
+        include Google::Apis::Core::Hashable
+
+        # The `Status` type defines a logical error model that is suitable for
|
|
943
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
944
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
945
|
+
# - Simple to use and understand for most users
|
|
946
|
+
# - Flexible enough to meet unexpected needs
|
|
947
|
+
# # Overview
|
|
948
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
949
|
+
# message, and error details. The error code should be an enum value of
|
|
950
|
+
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
951
|
+
# error message should be a developer-facing English message that helps
|
|
952
|
+
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
953
|
+
# error message is needed, put the localized message in the error details or
|
|
954
|
+
# localize it in the client. The optional error details may contain arbitrary
|
|
955
|
+
# information about the error. There is a predefined set of error detail types
|
|
956
|
+
# in the package `google.rpc` that can be used for common error conditions.
|
|
957
|
+
# # Language mapping
|
|
958
|
+
# The `Status` message is the logical representation of the error model, but it
|
|
959
|
+
# is not necessarily the actual wire format. When the `Status` message is
|
|
960
|
+
# exposed in different client libraries and different wire protocols, it can be
|
|
961
|
+
# mapped differently. For example, it will likely be mapped to some exceptions
|
|
962
|
+
# in Java, but more likely mapped to some error codes in C.
|
|
963
|
+
# # Other uses
|
|
964
|
+
# The error model and the `Status` message can be used in a variety of
|
|
965
|
+
# environments, either with or without APIs, to provide a
|
|
966
|
+
# consistent developer experience across different environments.
|
|
967
|
+
# Example uses of this error model include:
|
|
968
|
+
# - Partial errors. If a service needs to return partial errors to the client,
|
|
969
|
+
# it may embed the `Status` in the normal response to indicate the partial
|
|
970
|
+
# errors.
|
|
971
|
+
# - Workflow errors. A typical workflow has multiple steps. Each step may
|
|
972
|
+
# have a `Status` message for error reporting.
|
|
973
|
+
# - Batch operations. If a client uses batch request and batch response, the
|
|
974
|
+
# `Status` message should be used directly inside batch response, one for
|
|
975
|
+
# each error sub-response.
|
|
976
|
+
# - Asynchronous operations. If an API call embeds asynchronous operation
|
|
977
|
+
# results in its response, the status of those operations should be
|
|
978
|
+
# represented directly using the `Status` message.
|
|
657
979
|
# - Logging. If some API errors are stored in logs, the message `Status` could
|
|
658
980
|
# be used directly after any stripping needed for security/privacy reasons.
|
|
659
981
|
# Corresponds to the JSON property `error`
|
|
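Every class added in this diff follows the same `Google::Apis::Core::Hashable` pattern: `initialize` forwards its keyword arguments to `update!`, which assigns only the keys that were actually passed and leaves everything else `nil`. A minimal standalone sketch of that pattern in plain Ruby (the class name here is hypothetical, chosen to mirror the generated `TextFrame`; no gem dependency):

```ruby
# Sketch of the generated-class pattern: assign only the attributes
# that were explicitly passed; unset attributes stay nil.
class TextFrameSketch
  attr_accessor :rotated_bounding_box, :time_offset

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
    @time_offset = args[:time_offset] if args.key?(:time_offset)
  end
end

frame = TextFrameSketch.new(time_offset: '1.5s')
frame.time_offset            # => "1.5s"
frame.rotated_bounding_box   # => nil (never passed)
```

The `args.key?` guard is what lets the same `update!` serve both construction and partial updates without clobbering attributes that were not mentioned.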
@@ -679,6 +1001,11 @@ module Google
         # @return [String]
         attr_accessor :input_uri
 
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
@@ -701,6 +1028,13 @@ module Google
         # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscription>]
         attr_accessor :speech_transcriptions
 
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextAnnotation>]
+        attr_accessor :text_annotations
+
         def initialize(**args)
           update!(**args)
         end
@@ -711,10 +1045,12 @@ module Google
           @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
           @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
           @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
           @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
           @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
           @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
           @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
         end
       end
 
@@ -749,6 +1085,11 @@ module Google
         # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscriptionConfig]
         attr_accessor :speech_transcription_config
 
+        # Config for TEXT_DETECTION.
+        # Corresponds to the JSON property `textDetectionConfig`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextDetectionConfig]
+        attr_accessor :text_detection_config
+
         def initialize(**args)
           update!(**args)
         end
@@ -760,6 +1101,7 @@ module Google
           @segments = args[:segments] if args.key?(:segments)
           @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
           @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+          @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
         end
       end
 
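Each generated accessor's "Corresponds to the JSON property" comment documents the mapping between the Ruby snake_case attribute and the lowerCamelCase key on the wire (e.g. `text_detection_config` ↔ `textDetectionConfig`). A plain-Ruby sketch of that naming convention (illustration only, not the gem's actual serialization code):

```ruby
# Convert a snake_case attribute name to the lowerCamelCase JSON
# property name documented in the generated class comments.
def json_property_name(attr_name)
  parts = attr_name.to_s.split('_')
  parts.first + parts[1..-1].map(&:capitalize).join
end

json_property_name(:text_detection_config)  # => "textDetectionConfig"
json_property_name(:time_offset)            # => "timeOffset"
```

In the gem itself this mapping is driven by the generated `representations.rb` files; the helper above only illustrates the rule the comments describe.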
@@ -1062,29 +1404,31 @@ module Google
         end
       end
 
-      #
-
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        # correct. This field is typically provided only for the top hypothesis, and
-        # only for `is_final=true` results. Clients should not rely on the
-        # `confidence` field as it is not guaranteed to be accurate or consistent.
-        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
-        # Corresponds to the JSON property `confidence`
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
         # @return [Float]
-        attr_accessor :
+        attr_accessor :bottom
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
 
         def initialize(**args)
           update!(**args)
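The `NormalizedBoundingBox` above stores its four edges in [0, 1] relative to the original frame, so consumers scale by the frame dimensions to get pixels. A plain-Ruby sketch of that conversion (the frame size and box values are made up for illustration):

```ruby
# Scale a normalized bounding box (left/top/right/bottom in [0, 1])
# to pixel coordinates for a frame of the given width and height.
def to_pixels(left:, top:, right:, bottom:, width:, height:)
  {
    left:   (left * width).round,
    top:    (top * height).round,
    right:  (right * width).round,
    bottom: (bottom * height).round
  }
end

box = to_pixels(left: 0.1, top: 0.2, right: 0.5, bottom: 0.6,
                width: 1920, height: 1080)
# => {left: 192, top: 216, right: 960, bottom: 648}
```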
@@ -1092,31 +1436,35 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @
-          @
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
         end
       end
 
-      #
-
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
-        # Corresponds to the JSON property `alternatives`
-        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
-        attr_accessor :alternatives
-
-        # Output only. The
-        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
-        # language in this result. This language code was detected to have the most
-        # likelihood of being spoken in the audio.
-        # Corresponds to the JSON property `languageCode`
-        # @return [String]
-        attr_accessor :language_code
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedVertex>]
+        attr_accessor :vertices
 
         def initialize(**args)
           update!(**args)
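The `NormalizedBoundingPoly` doc comment warns that after rotation the vertex *order* stays (0, 1, 2, 3) while the coordinates can leave [0, 1]. A quick plain-Ruby check of that example, rotating a small rectangle 180 degrees around its top-left corner (the coordinates are made up; y grows downward as in image space):

```ruby
# Vertices in clockwise order starting at the top-left corner:
#   0----1
#   |    |
#   3----2
rect = [[0.0, 0.0], [0.4, 0.0], [0.4, 0.2], [0.0, 0.2]]

# A 180-degree rotation about vertex 0 maps (x, y) -> (-x, -y).
rotated = rect.map { |x, y| [-x, -y] }

# Index 0 is still emitted first -- the order is unchanged -- but some
# coordinates are now negative, i.e. outside the [0, 1] range.
rotated.flatten.min  # => -0.4
```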
@@ -1124,29 +1472,1103 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @language_code = args[:language_code] if args.key?(:language_code)
+          @vertices = args[:vertices] if args.key?(:vertices)
         end
       end
 
-      #
-
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1beta2NormalizedVertex
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
-        attr_accessor :input_uri
-
-        # Approximate percentage processed thus far. Guaranteed to be
-        # 100 when fully processed.
-        # Corresponds to the JSON property `progressPercent`
-        # @return [Fixnum]
-        attr_accessor :progress_percent
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
 
-        #
-        # Corresponds to the JSON property `
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
|
+
# Alternative hypotheses (a.k.a. n-best list).
|
|
1586
|
+
class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
|
|
1587
|
+
include Google::Apis::Core::Hashable
|
|
1588
|
+
|
|
1589
|
+
# The confidence estimate between 0.0 and 1.0. A higher number
|
|
1590
|
+
# indicates an estimated greater likelihood that the recognized words are
|
|
1591
|
+
# correct. This field is typically provided only for the top hypothesis, and
|
|
1592
|
+
# only for `is_final=true` results. Clients should not rely on the
|
|
1593
|
+
# `confidence` field as it is not guaranteed to be accurate or consistent.
|
|
1594
|
+
# The default of 0.0 is a sentinel value indicating `confidence` was not set.
|
|
1595
|
+
# Corresponds to the JSON property `confidence`
|
|
1596
|
+
# @return [Float]
|
|
1597
|
+
attr_accessor :confidence
|
|
1598
|
+
|
|
1599
|
+
# Transcript text representing the words that the user spoke.
|
|
1600
|
+
# Corresponds to the JSON property `transcript`
|
|
1601
|
+
# @return [String]
|
|
1602
|
+
attr_accessor :transcript
|
|
1603
|
+
|
|
1604
|
+
# A list of word-specific information for each recognized word.
|
|
1605
|
+
# Corresponds to the JSON property `words`
|
|
1606
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2WordInfo>]
|
|
1607
|
+
attr_accessor :words
|
|
1608
|
+
|
|
1609
|
+
def initialize(**args)
|
|
1610
|
+
update!(**args)
|
|
1611
|
+
end
|
|
1612
|
+
|
|
1613
|
+
# Update properties of this object
|
|
1614
|
+
def update!(**args)
|
|
1615
|
+
@confidence = args[:confidence] if args.key?(:confidence)
|
|
1616
|
+
@transcript = args[:transcript] if args.key?(:transcript)
|
|
1617
|
+
@words = args[:words] if args.key?(:words)
|
|
1618
|
+
end
|
|
1619
|
+
end
|
|
1620
|
+
|
|
1621
|
+
# A speech recognition result corresponding to a portion of the audio.
|
|
1622
|
+
class GoogleCloudVideointelligenceV1beta2SpeechTranscription
|
|
1623
|
+
include Google::Apis::Core::Hashable
|
|
1624
|
+
|
|
1625
|
+
# May contain one or more recognition hypotheses (up to the maximum specified
|
|
1626
|
+
# in `max_alternatives`). These alternatives are ordered in terms of
|
|
1627
|
+
# accuracy, with the top (first) alternative being the most probable, as
|
|
1628
|
+
# ranked by the recognizer.
|
|
1629
|
+
# Corresponds to the JSON property `alternatives`
|
|
1630
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
|
|
1631
|
+
attr_accessor :alternatives
|
|
1632
|
+
|
|
1633
|
+
# Output only. The
|
|
1634
|
+
# [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
|
|
1635
|
+
# language in this result. This language code was detected to have the most
|
|
1636
|
+
# likelihood of being spoken in the audio.
|
|
1637
|
+
# Corresponds to the JSON property `languageCode`
|
|
1638
|
+
# @return [String]
|
|
1639
|
+
attr_accessor :language_code
|
|
1640
|
+
|
|
1641
|
+
def initialize(**args)
|
|
1642
|
+
update!(**args)
|
|
1643
|
+
end
|
|
1644
|
+
|
|
1645
|
+
# Update properties of this object
|
|
1646
|
+
def update!(**args)
|
|
1647
|
+
@alternatives = args[:alternatives] if args.key?(:alternatives)
|
|
1648
|
+
@language_code = args[:language_code] if args.key?(:language_code)
|
|
1649
|
+
end
|
|
1650
|
+
end
|
|
1651
|
+
|
|
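Per the doc comments above, `alternatives` is already ordered by accuracy with the top hypothesis first, and `confidence` may be the 0.0 "not set" sentinel on the rest, so clients should take the first element rather than re-sorting by confidence. A plain-Ruby sketch of consuming a transcription under those rules (the values are made up):

```ruby
# Simulated n-best list as returned by the recognizer (already ranked).
alternatives = [
  { transcript: 'hello world', confidence: 0.92 },
  { transcript: 'hello word',  confidence: 0.0 }  # 0.0 sentinel: not set
]

# The recognizer ranks alternatives itself; the best hypothesis is the
# first element. Do not re-sort by the unreliable confidence field.
best = alternatives.first
best[:transcript]  # => "hello world"
```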
1652
|
+
# Annotations related to one detected OCR text snippet. This will contain the
|
|
1653
|
+
# corresponding text, confidence value, and frame level information for each
|
|
1654
|
+
# detection.
|
|
1655
|
+
class GoogleCloudVideointelligenceV1beta2TextAnnotation
|
|
1656
|
+
include Google::Apis::Core::Hashable
|
|
1657
|
+
|
|
1658
|
+
# All video segments where OCR detected text appears.
|
|
1659
|
+
# Corresponds to the JSON property `segments`
|
|
1660
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextSegment>]
|
|
1661
|
+
attr_accessor :segments
|
|
1662
|
+
|
|
1663
|
+
# The detected text.
|
|
1664
|
+
# Corresponds to the JSON property `text`
|
|
1665
|
+
# @return [String]
|
|
1666
|
+
attr_accessor :text
|
|
1667
|
+
|
|
1668
|
+
def initialize(**args)
|
|
1669
|
+
update!(**args)
|
|
1670
|
+
end
|
|
1671
|
+
|
|
1672
|
+
# Update properties of this object
|
|
1673
|
+
def update!(**args)
|
|
1674
|
+
@segments = args[:segments] if args.key?(:segments)
|
|
1675
|
+
@text = args[:text] if args.key?(:text)
|
|
1676
|
+
end
|
|
1677
|
+
end
|
|
1678
|
+
|
|
1679
|
+
# Video frame level annotation results for text annotation (OCR).
|
|
1680
|
+
# Contains information regarding timestamp and bounding box locations for the
|
|
1681
|
+
# frames containing detected OCR text snippets.
|
|
1682
|
+
class GoogleCloudVideointelligenceV1beta2TextFrame
|
|
1683
|
+
include Google::Apis::Core::Hashable
|
|
1684
|
+
|
|
1685
|
+
# Normalized bounding polygon for text (that might not be aligned with axis).
|
|
1686
|
+
# Contains list of the corner points in clockwise order starting from
|
|
1687
|
+
# top-left corner. For example, for a rectangular bounding box:
|
|
1688
|
+
# When the text is horizontal it might look like:
|
|
1689
|
+
# 0----1
|
|
1690
|
+
# | |
|
|
1691
|
+
# 3----2
|
|
1692
|
+
# When it's clockwise rotated 180 degrees around the top-left corner it
|
|
1693
|
+
# becomes:
|
|
1694
|
+
# 2----3
|
|
1695
|
+
# | |
|
|
1696
|
+
# 1----0
|
|
1697
|
+
# and the vertex order will still be (0, 1, 2, 3). Note that values can be less
|
|
1698
|
+
# than 0, or greater than 1 due to trignometric calculations for location of
|
|
1699
|
+
# the box.
|
|
1700
|
+
# Corresponds to the JSON property `rotatedBoundingBox`
|
|
1701
|
+
# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly]
|
|
1702
|
+
attr_accessor :rotated_bounding_box
|
|
1703
|
+
|
|
1704
|
+
# Timestamp of this frame.
|
|
1705
|
+
# Corresponds to the JSON property `timeOffset`
|
|
1706
|
+
# @return [String]
|
|
1707
|
+
attr_accessor :time_offset
|
|
1708
|
+
|
|
1709
|
+
def initialize(**args)
|
|
1710
|
+
update!(**args)
|
|
1711
|
+
end
|
|
1712
|
+
|
|
1713
|
+
# Update properties of this object
|
|
1714
|
+
def update!(**args)
|
|
1715
|
+
@rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
|
|
1716
|
+
@time_offset = args[:time_offset] if args.key?(:time_offset)
|
|
1717
|
+
end
|
|
1718
|
+
end
|
|
1719
|
+
|
|
1720
|
+
# Video segment level annotation results for text detection.
|
|
1721
|
+
class GoogleCloudVideointelligenceV1beta2TextSegment
|
|
1722
|
+
include Google::Apis::Core::Hashable
|
|
1723
|
+
|
|
1724
|
+
# Confidence for the track of detected text. It is calculated as the highest
|
|
1725
|
+
# over all frames where OCR detected text appears.
|
|
1726
|
+
# Corresponds to the JSON property `confidence`
|
|
1727
|
+
# @return [Float]
|
|
1728
|
+
attr_accessor :confidence
|
|
1729
|
+
|
|
1730
|
+
# Information related to the frames where OCR detected text appears.
|
|
1731
|
+
# Corresponds to the JSON property `frames`
|
|
1732
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextFrame>]
|
|
1733
|
+
attr_accessor :frames
|
|
1734
|
+
|
|
1735
|
+
# Video segment.
|
|
1736
|
+
# Corresponds to the JSON property `segment`
|
|
1737
|
+
# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment]
|
|
1738
|
+
attr_accessor :segment
|
|
1739
|
+
|
|
1740
|
+
def initialize(**args)
|
|
1741
|
+
update!(**args)
|
|
1742
|
+
end
|
|
1743
|
+
|
|
1744
|
+
# Update properties of this object
|
|
1745
|
+
def update!(**args)
|
|
1746
|
+
@confidence = args[:confidence] if args.key?(:confidence)
|
|
1747
|
+
@frames = args[:frames] if args.key?(:frames)
|
|
1748
|
+
@segment = args[:segment] if args.key?(:segment)
|
|
1749
|
+
end
|
|
1750
|
+
end
|
|
1751
|
+
|
|
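The `TextSegment` doc comment above defines segment-level `confidence` as the highest over all frames where the OCR text appears. That aggregation (computed server-side; the per-frame scores below are made-up illustrative values, since `TextFrame` itself does not expose them) is one line of plain Ruby:

```ruby
# Illustrative per-frame detection scores for one text track.
frame_confidences = [0.81, 0.94, 0.77]

# Segment-level confidence: the maximum over the per-frame detections.
segment_confidence = frame_confidences.max
# => 0.94
```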
+      # Annotation progress for a single video.
+      class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+        include Google::Apis::Core::Hashable
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Approximate percentage processed thus far. Guaranteed to be
+        # 100 when fully processed.
+        # Corresponds to the JSON property `progressPercent`
+        # @return [Fixnum]
+        attr_accessor :progress_percent
+
+        # Time when the request was received.
+        # Corresponds to the JSON property `startTime`
+        # @return [String]
+        attr_accessor :start_time
+
+        # Time of the most recent update.
+        # Corresponds to the JSON property `updateTime`
+        # @return [String]
+        attr_accessor :update_time
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+          @start_time = args[:start_time] if args.key?(:start_time)
+          @update_time = args[:update_time] if args.key?(:update_time)
+        end
+      end
+
+      # Annotation results for a single video.
+      class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
+        include Google::Apis::Core::Hashable
+
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+        # - Simple to use and understand for most users
+        # - Flexible enough to meet unexpected needs
+        # # Overview
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
+        # google.rpc.Code, but it may accept additional error codes if needed. The
+        # error message should be a developer-facing English message that helps
+        # developers *understand* and *resolve* the error. If a localized user-facing
+        # error message is needed, put the localized message in the error details or
+        # localize it in the client. The optional error details may contain arbitrary
+        # information about the error. There is a predefined set of error detail types
+        # in the package `google.rpc` that can be used for common error conditions.
+        # # Language mapping
+        # The `Status` message is the logical representation of the error model, but it
+        # is not necessarily the actual wire format. When the `Status` message is
+        # exposed in different client libraries and different wire protocols, it can be
+        # mapped differently. For example, it will likely be mapped to some exceptions
+        # in Java, but more likely mapped to some error codes in C.
+        # # Other uses
+        # The error model and the `Status` message can be used in a variety of
+        # environments, either with or without APIs, to provide a
+        # consistent developer experience across different environments.
+        # Example uses of this error model include:
+        # - Partial errors. If a service needs to return partial errors to the client,
+        # it may embed the `Status` in the normal response to indicate the partial
+        # errors.
+        # - Workflow errors. A typical workflow has multiple steps. Each step may
+        # have a `Status` message for error reporting.
+        # - Batch operations. If a client uses batch request and batch response, the
+        # `Status` message should be used directly inside batch response, one for
+        # each error sub-response.
+        # - Asynchronous operations. If an API call embeds asynchronous operation
+        # results in its response, the status of those operations should be
+        # represented directly using the `Status` message.
+        # - Logging. If some API errors are stored in logs, the message `Status` could
+        # be used directly after any stripping needed for security/privacy reasons.
+        # Corresponds to the JSON property `error`
+        # @return [Google::Apis::VideointelligenceV1::GoogleRpcStatus]
+        attr_accessor :error
+
+        # Explicit content annotation (based on per-frame visual signals only).
+        # If no explicit content has been detected in a frame, no annotations are
+        # present for that frame.
+        # Corresponds to the JSON property `explicitAnnotation`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
+        attr_accessor :explicit_annotation
+
+        # Label annotations on frame level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `frameLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        attr_accessor :frame_label_annotations
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
+        # Label annotations on video level or user specified segment level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `segmentLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        attr_accessor :segment_label_annotations
+
+        # Shot annotations. Each shot is represented as a video segment.
+        # Corresponds to the JSON property `shotAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+        attr_accessor :shot_annotations
+
+        # Label annotations on shot level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `shotLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        attr_accessor :shot_label_annotations
+
+        # Speech transcription.
+        # Corresponds to the JSON property `speechTranscriptions`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
|
|
1882
|
+
attr_accessor :speech_transcriptions
|
|
1883
|
+
|
|
1884
|
+
# OCR text detection and tracking.
|
|
1885
|
+
# Annotations for list of detected text snippets. Each will have list of
|
|
1886
|
+
# frame information associated with it.
|
|
1887
|
+
# Corresponds to the JSON property `textAnnotations`
|
|
1888
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1beta2TextAnnotation>]
|
|
1889
|
+
attr_accessor :text_annotations
|
|
1890
|
+
|
|
1891
|
+
def initialize(**args)
|
|
1892
|
+
update!(**args)
|
|
1893
|
+
end
|
|
1894
|
+
|
|
1895
|
+
# Update properties of this object
|
|
1896
|
+
def update!(**args)
|
|
1897
|
+
@error = args[:error] if args.key?(:error)
|
|
1898
|
+
@explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
|
|
1899
|
+
@frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
|
|
1900
|
+
@input_uri = args[:input_uri] if args.key?(:input_uri)
|
|
1901
|
+
@object_annotations = args[:object_annotations] if args.key?(:object_annotations)
|
|
1902
|
+
@segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
|
|
1903
|
+
@shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
|
|
1904
|
+
@shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
|
|
1905
|
+
@speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
|
|
1906
|
+
@text_annotations = args[:text_annotations] if args.key?(:text_annotations)
|
|
1907
|
+
end
|
|
1908
|
+
end
|
|
1909
|
+
|
|
1910
|
+
# Video segment.
|
|
1911
|
+
class GoogleCloudVideointelligenceV1beta2VideoSegment
|
|
1912
|
+
include Google::Apis::Core::Hashable
|
|
1913
|
+
|
|
1914
|
+
# Time-offset, relative to the beginning of the video,
|
|
1915
|
+
# corresponding to the end of the segment (inclusive).
|
|
1916
|
+
# Corresponds to the JSON property `endTimeOffset`
|
|
1917
|
+
# @return [String]
|
|
1918
|
+
attr_accessor :end_time_offset
|
|
1919
|
+
|
|
1920
|
+
# Time-offset, relative to the beginning of the video,
|
|
1921
|
+
# corresponding to the start of the segment (inclusive).
|
|
1922
|
+
# Corresponds to the JSON property `startTimeOffset`
|
|
1923
|
+
# @return [String]
|
|
1924
|
+
attr_accessor :start_time_offset
|
|
1925
|
+
|
|
1926
|
+
def initialize(**args)
|
|
1927
|
+
update!(**args)
|
|
1928
|
+
end
|
|
1929
|
+
|
|
1930
|
+
# Update properties of this object
|
|
1931
|
+
def update!(**args)
|
|
1932
|
+
@end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
|
|
1933
|
+
@start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
|
|
1934
|
+
end
|
|
1935
|
+
end
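Every generated class in this file follows the same keyword-argument pattern: `initialize` delegates to `update!`, which assigns only the keys actually present in `args`, so omitted fields stay `nil` rather than being overwritten. A minimal plain-Ruby sketch of that pattern (no google-api-client dependency; `VideoSegmentSketch` is a hypothetical stand-in for the generated class):

```ruby
# Sketch of the initialize/update! pattern used by the generated classes:
# only keys present in the args hash are assigned.
class VideoSegmentSketch
  attr_accessor :start_time_offset, :end_time_offset

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object; untouched fields keep their value.
  def update!(**args)
    @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
    @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
  end
end

seg = VideoSegmentSketch.new(start_time_offset: '0s')
# end_time_offset was never passed, so it remains nil until update! sets it
seg.update!(end_time_offset: '12.5s')
```

This is why partial updates are safe: calling `update!` with one key never clobbers the others.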

      # Word-specific information for recognized words. Word information is only
      # included in the response when certain request parameters are set, such
      # as `enable_word_time_offsets`.
      class GoogleCloudVideointelligenceV1beta2WordInfo
        include Google::Apis::Core::Hashable

        # Output only. The confidence estimate between 0.0 and 1.0. A higher number
        # indicates an estimated greater likelihood that the recognized words are
        # correct. This field is set only for the top alternative.
        # This field is not guaranteed to be accurate and users should not rely on it
        # to be always provided.
        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Time offset relative to the beginning of the audio, and
        # corresponding to the end of the spoken word. This field is only set if
        # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
        # experimental feature and the accuracy of the time offset can vary.
        # Corresponds to the JSON property `endTime`
        # @return [String]
        attr_accessor :end_time

        # Output only. A distinct integer value is assigned for every speaker within
        # the audio. This field specifies which one of those speakers was detected to
        # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
        # and is only set if speaker diarization is enabled.
        # Corresponds to the JSON property `speakerTag`
        # @return [Fixnum]
        attr_accessor :speaker_tag

        # Time offset relative to the beginning of the audio, and
        # corresponding to the start of the spoken word. This field is only set if
        # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
        # experimental feature and the accuracy of the time offset can vary.
        # Corresponds to the JSON property `startTime`
        # @return [String]
        attr_accessor :start_time

        # The word corresponding to this set of information.
        # Corresponds to the JSON property `word`
        # @return [String]
        attr_accessor :word

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @end_time = args[:end_time] if args.key?(:end_time)
          @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
          @start_time = args[:start_time] if args.key?(:start_time)
          @word = args[:word] if args.key?(:word)
        end
      end
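The `start_time`/`end_time` offsets above are typed as `String` because the JSON wire format encodes durations as a number of seconds with an `s` suffix (e.g. `"7.5s"`). A small helper to turn such a string into a `Float`, assuming the simple `"<seconds>s"` form:

```ruby
# Convert a JSON duration string like "7.5s" into seconds as a Float.
# Assumption: the value uses the plain "<number>s" form.
def duration_to_seconds(duration)
  Float(duration.sub(/s\z/, ''))
end

duration_to_seconds('7.5s') # => 7.5
```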

      # Video annotation progress. Included in the `metadata`
      # field of the `Operation` returned by the `GetOperation`
      # call of the `google::longrunning::Operations` service.
      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
        include Google::Apis::Core::Hashable

        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
        # Corresponds to the JSON property `annotationProgress`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
        attr_accessor :annotation_progress

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
        end
      end

      # Video annotation response. Included in the `response`
      # field of the `Operation` returned by the `GetOperation`
      # call of the `google::longrunning::Operations` service.
      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
        include Google::Apis::Core::Hashable

        # Annotation results for all videos specified in `AnnotateVideoRequest`.
        # Corresponds to the JSON property `annotationResults`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
        attr_accessor :annotation_results

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
        end
      end

      # Detected entity from video analysis.
      class GoogleCloudVideointelligenceV1p1beta1Entity
        include Google::Apis::Core::Hashable

        # Textual description, e.g. `Fixed-gear bicycle`.
        # Corresponds to the JSON property `description`
        # @return [String]
        attr_accessor :description

        # Opaque entity ID. Some IDs may be available in
        # [Google Knowledge Graph Search
        # API](https://developers.google.com/knowledge-graph/).
        # Corresponds to the JSON property `entityId`
        # @return [String]
        attr_accessor :entity_id

        # Language code for `description` in BCP-47 format.
        # Corresponds to the JSON property `languageCode`
        # @return [String]
        attr_accessor :language_code

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @description = args[:description] if args.key?(:description)
          @entity_id = args[:entity_id] if args.key?(:entity_id)
          @language_code = args[:language_code] if args.key?(:language_code)
        end
      end

      # Explicit content annotation (based on per-frame visual signals only).
      # If no explicit content has been detected in a frame, no annotations are
      # present for that frame.
      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
        include Google::Apis::Core::Hashable

        # All video frames where explicit content was detected.
        # Corresponds to the JSON property `frames`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
        attr_accessor :frames

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @frames = args[:frames] if args.key?(:frames)
        end
      end

      # Video frame level annotation results for explicit content.
      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
        include Google::Apis::Core::Hashable

        # Likelihood of the pornography content.
        # Corresponds to the JSON property `pornographyLikelihood`
        # @return [String]
        attr_accessor :pornography_likelihood

        # Time-offset, relative to the beginning of the video, corresponding to the
        # video frame for this location.
        # Corresponds to the JSON property `timeOffset`
        # @return [String]
        attr_accessor :time_offset

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
          @time_offset = args[:time_offset] if args.key?(:time_offset)
        end
      end

      # Label annotation.
      class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
        include Google::Apis::Core::Hashable

        # Common categories for the detected entity.
        # E.g. when the label is `Terrier` the category is likely `dog`. And in some
        # cases there might be more than one category, e.g. `Terrier` could also be
        # a `pet`.
        # Corresponds to the JSON property `categoryEntities`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity>]
        attr_accessor :category_entities

        # Detected entity from video analysis.
        # Corresponds to the JSON property `entity`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
        attr_accessor :entity

        # All video frames where a label was detected.
        # Corresponds to the JSON property `frames`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
        attr_accessor :frames

        # All video segments where a label was detected.
        # Corresponds to the JSON property `segments`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
        attr_accessor :segments

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @category_entities = args[:category_entities] if args.key?(:category_entities)
          @entity = args[:entity] if args.key?(:entity)
          @frames = args[:frames] if args.key?(:frames)
          @segments = args[:segments] if args.key?(:segments)
        end
      end

      # Video frame level annotation results for label detection.
      class GoogleCloudVideointelligenceV1p1beta1LabelFrame
        include Google::Apis::Core::Hashable

        # Confidence that the label is accurate. Range: [0, 1].
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Time-offset, relative to the beginning of the video, corresponding to the
        # video frame for this location.
        # Corresponds to the JSON property `timeOffset`
        # @return [String]
        attr_accessor :time_offset

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @time_offset = args[:time_offset] if args.key?(:time_offset)
        end
      end

      # Video segment level annotation results for label detection.
      class GoogleCloudVideointelligenceV1p1beta1LabelSegment
        include Google::Apis::Core::Hashable

        # Confidence that the label is accurate. Range: [0, 1].
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Video segment.
        # Corresponds to the JSON property `segment`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
        attr_accessor :segment

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @segment = args[:segment] if args.key?(:segment)
        end
      end

      # Normalized bounding box.
      # The normalized vertex coordinates are relative to the original image.
      # Range: [0, 1].
      class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox
        include Google::Apis::Core::Hashable

        # Bottom Y coordinate.
        # Corresponds to the JSON property `bottom`
        # @return [Float]
        attr_accessor :bottom

        # Left X coordinate.
        # Corresponds to the JSON property `left`
        # @return [Float]
        attr_accessor :left

        # Right X coordinate.
        # Corresponds to the JSON property `right`
        # @return [Float]
        attr_accessor :right

        # Top Y coordinate.
        # Corresponds to the JSON property `top`
        # @return [Float]
        attr_accessor :top

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @bottom = args[:bottom] if args.key?(:bottom)
          @left = args[:left] if args.key?(:left)
          @right = args[:right] if args.key?(:right)
          @top = args[:top] if args.key?(:top)
        end
      end
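Since the bounding-box coordinates are normalized to [0, 1], mapping a box onto a concrete frame means multiplying by the frame's pixel dimensions. A hedged sketch over a plain hash (the hash keys mirror the JSON property names; the frame size is an illustrative assumption):

```ruby
# Scale a normalized bounding box ([0, 1] coordinates) to pixel values
# for a frame of the given width and height.
def box_to_pixels(box, frame_width, frame_height)
  {
    x: (box[:left] * frame_width).round,
    y: (box[:top] * frame_height).round,
    width: ((box[:right] - box[:left]) * frame_width).round,
    height: ((box[:bottom] - box[:top]) * frame_height).round
  }
end

box_to_pixels({ left: 0.25, top: 0.1, right: 0.75, bottom: 0.6 }, 1920, 1080)
# => { x: 480, y: 108, width: 960, height: 540 }
```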

      # Normalized bounding polygon for text (that might not be aligned with axis).
      # Contains list of the corner points in clockwise order starting from
      # top-left corner. For example, for a rectangular bounding box:
      # When the text is horizontal it might look like:
      #         0----1
      #         |    |
      #         3----2
      # When it's clockwise rotated 180 degrees around the top-left corner it
      # becomes:
      #         2----3
      #         |    |
      #         1----0
      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
      # than 0, or greater than 1 due to trigonometric calculations for location of
      # the box.
      class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly
        include Google::Apis::Core::Hashable

        # Normalized vertices of the bounding polygon.
        # Corresponds to the JSON property `vertices`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedVertex>]
        attr_accessor :vertices

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @vertices = args[:vertices] if args.key?(:vertices)
        end
      end

      # A vertex represents a 2D point in the image.
      # NOTE: the normalized vertex coordinates are relative to the original image
      # and range from 0 to 1.
      class GoogleCloudVideointelligenceV1p1beta1NormalizedVertex
        include Google::Apis::Core::Hashable

        # X coordinate.
        # Corresponds to the JSON property `x`
        # @return [Float]
        attr_accessor :x

        # Y coordinate.
        # Corresponds to the JSON property `y`
        # @return [Float]
        attr_accessor :y

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @x = args[:x] if args.key?(:x)
          @y = args[:y] if args.key?(:y)
        end
      end

      # Annotations corresponding to one tracked object.
      class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation
        include Google::Apis::Core::Hashable

        # Object category's labeling confidence of this track.
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Detected entity from video analysis.
        # Corresponds to the JSON property `entity`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
        attr_accessor :entity

        # Information corresponding to all frames where this object track appears.
        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
        # messages in frames.
        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
        # Corresponds to the JSON property `frames`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame>]
        attr_accessor :frames

        # Video segment.
        # Corresponds to the JSON property `segment`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
        attr_accessor :segment

        # Streaming mode ONLY.
        # In streaming mode, we do not know the end time of a tracked object
        # before it is completed. Hence, there is no VideoSegment info returned.
        # Instead, we provide a unique identifiable integer track_id so that
        # the customers can correlate the results of the ongoing
        # ObjectTrackAnnotation of the same track_id over time.
        # Corresponds to the JSON property `trackId`
        # @return [Fixnum]
        attr_accessor :track_id

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @entity = args[:entity] if args.key?(:entity)
          @frames = args[:frames] if args.key?(:frames)
          @segment = args[:segment] if args.key?(:segment)
          @track_id = args[:track_id] if args.key?(:track_id)
        end
      end
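In streaming mode there is no `VideoSegment`; a track is identified only by `track_id`, so stitching an object's ongoing results back together across responses is a group-by on that id. A sketch over plain hashes (keys mirror the JSON properties):

```ruby
# Collect streaming object-tracking annotations into per-track lists,
# keyed by track_id, so one object's results can be read in order.
def group_tracks(annotations)
  annotations.group_by { |a| a[:track_id] }
end

anns = [
  { track_id: 1, time_offset: '0s' },
  { track_id: 2, time_offset: '0s' },
  { track_id: 1, time_offset: '1s' }
]
group_tracks(anns) # two tracks; track 1 has two entries
```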

      # Video frame level annotations for object detection and tracking. This field
      # stores per frame location, time offset, and confidence.
      class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame
        include Google::Apis::Core::Hashable

        # Normalized bounding box.
        # The normalized vertex coordinates are relative to the original image.
        # Range: [0, 1].
        # Corresponds to the JSON property `normalizedBoundingBox`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox]
        attr_accessor :normalized_bounding_box

        # The timestamp of the frame in microseconds.
        # Corresponds to the JSON property `timeOffset`
        # @return [String]
        attr_accessor :time_offset

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
          @time_offset = args[:time_offset] if args.key?(:time_offset)
        end
      end

      # Alternative hypotheses (a.k.a. n-best list).
      class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
        include Google::Apis::Core::Hashable

        # The confidence estimate between 0.0 and 1.0. A higher number
        # indicates an estimated greater likelihood that the recognized words are
        # correct. This field is typically provided only for the top hypothesis, and
        # only for `is_final=true` results. Clients should not rely on the
        # `confidence` field as it is not guaranteed to be accurate or consistent.
        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Transcript text representing the words that the user spoke.
        # Corresponds to the JSON property `transcript`
        # @return [String]
        attr_accessor :transcript

        # A list of word-specific information for each recognized word.
        # Corresponds to the JSON property `words`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
        attr_accessor :words

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @transcript = args[:transcript] if args.key?(:transcript)
          @words = args[:words] if args.key?(:words)
        end
      end
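Because the recognizer orders alternatives best-first, the usual way to get a single transcript out of a transcription is to take its first alternative. A sketch over plain hashes (keys mirror the JSON properties):

```ruby
# Return the most probable transcript from an n-best list of
# alternatives, or nil when the list is empty.
def best_transcript(alternatives)
  top = alternatives.first
  top && top[:transcript]
end

alts = [
  { transcript: 'hello world', confidence: 0.9 },
  { transcript: 'yellow world', confidence: 0.4 }
]
best_transcript(alts) # => "hello world"
```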

      # A speech recognition result corresponding to a portion of the audio.
      class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
        include Google::Apis::Core::Hashable

        # May contain one or more recognition hypotheses (up to the maximum specified
        # in `max_alternatives`). These alternatives are ordered in terms of
        # accuracy, with the top (first) alternative being the most probable, as
        # ranked by the recognizer.
        # Corresponds to the JSON property `alternatives`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
        attr_accessor :alternatives

        # Output only. The
        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
        # language in this result. This language code was detected to have the most
        # likelihood of being spoken in the audio.
        # Corresponds to the JSON property `languageCode`
        # @return [String]
        attr_accessor :language_code

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @alternatives = args[:alternatives] if args.key?(:alternatives)
          @language_code = args[:language_code] if args.key?(:language_code)
        end
      end

      # Annotations related to one detected OCR text snippet. This will contain the
      # corresponding text, confidence value, and frame level information for each
      # detection.
      class GoogleCloudVideointelligenceV1p1beta1TextAnnotation
        include Google::Apis::Core::Hashable

        # All video segments where OCR detected text appears.
        # Corresponds to the JSON property `segments`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextSegment>]
        attr_accessor :segments

        # The detected text.
        # Corresponds to the JSON property `text`
        # @return [String]
        attr_accessor :text

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @segments = args[:segments] if args.key?(:segments)
          @text = args[:text] if args.key?(:text)
        end
      end

      # Video frame level annotation results for text annotation (OCR).
      # Contains information regarding timestamp and bounding box locations for the
      # frames containing detected OCR text snippets.
      class GoogleCloudVideointelligenceV1p1beta1TextFrame
        include Google::Apis::Core::Hashable

        # Normalized bounding polygon for text (that might not be aligned with axis).
        # Contains list of the corner points in clockwise order starting from
        # top-left corner. For example, for a rectangular bounding box:
        # When the text is horizontal it might look like:
        #         0----1
        #         |    |
        #         3----2
        # When it's clockwise rotated 180 degrees around the top-left corner it
        # becomes:
        #         2----3
        #         |    |
        #         1----0
        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
        # than 0, or greater than 1 due to trigonometric calculations for location of
        # the box.
        # Corresponds to the JSON property `rotatedBoundingBox`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly]
        attr_accessor :rotated_bounding_box

        # Timestamp of this frame.
        # Corresponds to the JSON property `timeOffset`
        # @return [String]
        attr_accessor :time_offset

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
          @time_offset = args[:time_offset] if args.key?(:time_offset)
        end
      end

      # Video segment level annotation results for text detection.
      class GoogleCloudVideointelligenceV1p1beta1TextSegment
        include Google::Apis::Core::Hashable

        # Confidence for the track of detected text. It is calculated as the highest
        # over all frames where OCR detected text appears.
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence

        # Information related to the frames where OCR detected text appears.
        # Corresponds to the JSON property `frames`
        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextFrame>]
        attr_accessor :frames

        # Video segment.
        # Corresponds to the JSON property `segment`
        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
        attr_accessor :segment

        def initialize(**args)
          update!(**args)
        end

        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
          @frames = args[:frames] if args.key?(:frames)
          @segment = args[:segment] if args.key?(:segment)
        end
      end
|
|
2553
|
+
|
|
2554
|
+
# Annotation progress for a single video.
|
|
2555
|
+
class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
|
|
2556
|
+
include Google::Apis::Core::Hashable
|
|
2557
|
+
|
|
2558
|
+
# Video file location in
|
|
2559
|
+
# [Google Cloud Storage](https://cloud.google.com/storage/).
|
|
2560
|
+
# Corresponds to the JSON property `inputUri`
|
|
2561
|
+
# @return [String]
|
|
2562
|
+
attr_accessor :input_uri
|
|
2563
|
+
|
|
2564
|
+
# Approximate percentage processed thus far. Guaranteed to be
|
|
2565
|
+
# 100 when fully processed.
|
|
2566
|
+
# Corresponds to the JSON property `progressPercent`
|
|
2567
|
+
# @return [Fixnum]
|
|
2568
|
+
attr_accessor :progress_percent
|
|
2569
|
+
|
|
2570
|
+
# Time when the request was received.
|
|
2571
|
+
# Corresponds to the JSON property `startTime`
|
|
1150
2572
|
# @return [String]
|
|
1151
2573
|
attr_accessor :start_time
|
|
1152
2574
|
|
|
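Every class added in this diff follows the same `Google::Apis::Core::Hashable` construction pattern: the constructor forwards keyword arguments to `update!`, which copies only the keys that were actually supplied. The sketch below imitates that pattern in plain Ruby (the class name `TextSegmentSketch` is a stand-in, not part of the gem) to show why omitted properties stay `nil` rather than being overwritten:

```ruby
# Minimal standalone sketch of the generated classes' construction pattern:
# update! copies only the keys that were supplied, so omitted properties
# keep their previous value (nil on first construction).
class TextSegmentSketch
  attr_accessor :confidence, :frames, :segment

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @confidence = args[:confidence] if args.key?(:confidence)
    @frames = args[:frames] if args.key?(:frames)
    @segment = args[:segment] if args.key?(:segment)
  end
end

seg = TextSegmentSketch.new(confidence: 0.92)
puts seg.confidence      # 0.92
puts seg.frames.inspect  # nil -- :frames was never supplied
```

The `args.key?` guard is what lets `update!` double as a partial merge: calling it again with only one keyword touches just that attribute.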
@@ -1169,17 +2591,17 @@ module Google
         end
 
       # Annotation results for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
         include Google::Apis::Core::Hashable
 
-        # The `Status` type defines a logical error model that is suitable for
-        # programming environments, including REST APIs and RPC APIs. It is
-        # [gRPC](https://github.com/grpc). The error model is designed to be:
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
         # - Simple to use and understand for most users
         # - Flexible enough to meet unexpected needs
         # # Overview
-        # The `Status` message contains three pieces of data: error code, error
-        # and error details. The error code should be an enum value of
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
         # google.rpc.Code, but it may accept additional error codes if needed. The
         # error message should be a developer-facing English message that helps
         # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1219,13 +2641,13 @@ module Google
         # If no explicit content has been detected in a frame, no annotations are
         # present for that frame.
         # Corresponds to the JSON property `explicitAnnotation`
-        # @return [Google::Apis::VideointelligenceV1::
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
         attr_accessor :explicit_annotation
 
         # Label annotations on frame level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `frameLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :frame_label_annotations
 
         # Video file location in
@@ -1234,28 +2656,40 @@ module Google
         # @return [String]
         attr_accessor :input_uri
 
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :segment_label_annotations
 
         # Shot annotations. Each shot is represented as a video segment.
         # Corresponds to the JSON property `shotAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
         attr_accessor :shot_annotations
 
         # Label annotations on shot level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `shotLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :shot_label_annotations
 
         # Speech transcription.
         # Corresponds to the JSON property `speechTranscriptions`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
         attr_accessor :speech_transcriptions
 
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>]
+        attr_accessor :text_annotations
+
         def initialize(**args)
           update!(**args)
         end
@@ -1266,15 +2700,17 @@ module Google
           @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
           @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
           @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
           @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
           @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
           @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
           @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
         end
       end
 
       # Video segment.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1VideoSegment
         include Google::Apis::Core::Hashable
 
         # Time-offset, relative to the beginning of the video,
@@ -1303,7 +2739,7 @@ module Google
       # Word-specific information for recognized words. Word information is only
       # included in the response when certain request parameters are set, such
       # as `enable_word_time_offsets`.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1WordInfo
         include Google::Apis::Core::Hashable
 
         # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1362,13 +2798,241 @@ module Google
       # Video annotation progress. Included in the `metadata`
       # field of the `Operation` returned by the `GetOperation`
       # call of the `google::longrunning::Operations` service.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
+        include Google::Apis::Core::Hashable
+
+        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationProgress`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
+        attr_accessor :annotation_progress
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+        end
+      end
+
+      # Video annotation response. Included in the `response`
+      # field of the `Operation` returned by the `GetOperation`
+      # call of the `google::longrunning::Operations` service.
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
+        include Google::Apis::Core::Hashable
+
+        # Annotation results for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationResults`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
+        attr_accessor :annotation_results
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+        end
+      end
+
+      # Detected entity from video analysis.
+      class GoogleCloudVideointelligenceV1p2beta1Entity
+        include Google::Apis::Core::Hashable
+
+        # Textual description, e.g. `Fixed-gear bicycle`.
+        # Corresponds to the JSON property `description`
+        # @return [String]
+        attr_accessor :description
+
+        # Opaque entity ID. Some IDs may be available in
+        # [Google Knowledge Graph Search
+        # API](https://developers.google.com/knowledge-graph/).
+        # Corresponds to the JSON property `entityId`
+        # @return [String]
+        attr_accessor :entity_id
+
+        # Language code for `description` in BCP-47 format.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @description = args[:description] if args.key?(:description)
+          @entity_id = args[:entity_id] if args.key?(:entity_id)
+          @language_code = args[:language_code] if args.key?(:language_code)
+        end
+      end
+
+      # Explicit content annotation (based on per-frame visual signals only).
+      # If no explicit content has been detected in a frame, no annotations are
+      # present for that frame.
+      class GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
+        include Google::Apis::Core::Hashable
+
+        # All video frames where explicit content was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame>]
+        attr_accessor :frames
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @frames = args[:frames] if args.key?(:frames)
+        end
+      end
+
+      # Video frame level annotation results for explicit content.
+      class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
+        include Google::Apis::Core::Hashable
+
+        # Likelihood of the pornography content..
+        # Corresponds to the JSON property `pornographyLikelihood`
+        # @return [String]
+        attr_accessor :pornography_likelihood
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Label annotation.
+      class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Common categories for the detected entity.
+        # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+        # cases there might be more than one categories e.g. `Terrier` could also be
+        # a `pet`.
+        # Corresponds to the JSON property `categoryEntities`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity>]
+        attr_accessor :category_entities
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+        attr_accessor :entity
+
+        # All video frames where a label was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
+        attr_accessor :frames
+
+        # All video segments where a label was detected.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
+        attr_accessor :segments
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @category_entities = args[:category_entities] if args.key?(:category_entities)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segments = args[:segments] if args.key?(:segments)
+        end
+      end
+
+      # Video frame level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelFrame
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
         include Google::Apis::Core::Hashable
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
 
         def initialize(**args)
           update!(**args)
@@ -1376,20 +3040,35 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
         end
       end
 
-      #
-      #
-      #
-
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
         include Google::Apis::Core::Hashable
 
-        #
-        # Corresponds to the JSON property `
-        # @return [Array<Google::Apis::VideointelligenceV1::
-        attr_accessor :
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+        attr_accessor :vertices
 
         def initialize(**args)
          update!(**args)
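The bounding boxes above carry normalized coordinates in [0, 1], relative to the frame, so rendering them requires scaling by the frame's pixel dimensions. A small sketch of that conversion (the 640x480 frame size and the `to_pixels` helper are assumed examples, not part of the API):

```ruby
# Convert a normalized bounding box (left/top/right/bottom in [0, 1])
# to pixel-space x/y/width/height for a frame of the given size.
def to_pixels(left:, top:, right:, bottom:, width:, height:)
  {
    x: (left * width).round,
    y: (top * height).round,
    w: ((right - left) * width).round,
    h: ((bottom - top) * height).round
  }
end

box = to_pixels(left: 0.25, top: 0.5, right: 0.75, bottom: 1.0,
                width: 640, height: 480)
puts box.inspect  # {:x=>160, :y=>240, :w=>320, :h=>240}
```

Note the doc comments above warn that polygon values can fall slightly outside [0, 1] after rotation, so production code may want to clamp before scaling.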
@@ -1397,30 +3076,25 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
+          @vertices = args[:vertices] if args.key?(:vertices)
         end
       end
 
-      #
-
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
         include Google::Apis::Core::Hashable
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
-
-        # Opaque entity ID. Some IDs may be available in
-        # [Google Knowledge Graph Search
-        # API](https://developers.google.com/knowledge-graph/).
-        # Corresponds to the JSON property `entityId`
-        # @return [String]
-        attr_accessor :entity_id
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
 
         def initialize(**args)
           update!(**args)
@@ -1428,44 +3102,75 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @
-          @language_code = args[:language_code] if args.key?(:language_code)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
         end
       end
 
-      #
-
-      # present for that frame.
-      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
         include Google::Apis::Core::Hashable
 
-        #
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
         # Corresponds to the JSON property `frames`
-        # @return [Array<Google::Apis::VideointelligenceV1::
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
         attr_accessor :frames
 
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
         def initialize(**args)
           update!(**args)
         end
 
         # Update properties of this object
         def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
           @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
         end
       end
 
-      # Video frame level
-
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
 
-        #
-        # video frame for this location.
+        # The timestamp of the frame in microseconds.
         # Corresponds to the JSON property `timeOffset`
         # @return [String]
         attr_accessor :time_offset
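As the `trackId` doc comment above explains, streaming responses do not carry a `VideoSegment`; the same object instead reappears across responses under one `track_id`. A sketch of correlating those responses (the hash literals stand in for the generated annotation objects and are not actual API output):

```ruby
# In streaming mode each ObjectTrackingAnnotation carries one frame plus a
# track_id, so reconstructing a track over time means grouping by that id.
annotations = [
  { track_id: 1, time_offset: '0.1s' },
  { track_id: 2, time_offset: '0.1s' },
  { track_id: 1, time_offset: '0.2s' }
]

tracks = annotations.group_by { |a| a[:track_id] }
puts tracks[1].map { |a| a[:time_offset] }.inspect  # ["0.1s", "0.2s"]
```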
@@ -1476,37 +3181,34 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
           @time_offset = args[:time_offset] if args.key?(:time_offset)
         end
       end
 
-      #
-      class
+      # Alternative hypotheses (a.k.a. n-best list).
+      class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        #
-        #
-        #
-
-
-
-        # Corresponds to the JSON property `entity`
-        # @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1Entity]
-        attr_accessor :entity
+        # The confidence estimate between 0.0 and 1.0. A higher number
+        # indicates an estimated greater likelihood that the recognized words are
+        # correct. This field is typically provided only for the top hypothesis, and
+        # only for `is_final=true` results. Clients should not rely on the
+        # `confidence` field as it is not guaranteed to be accurate or consistent.
+        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Transcript text representing the words that the user spoke.
+        # Corresponds to the JSON property `transcript`
+        # @return [String]
+        attr_accessor :transcript
 
-        #
-        # Corresponds to the JSON property `
-        # @return [Array<Google::Apis::VideointelligenceV1::
-        attr_accessor :
+        # A list of word-specific information for each recognized word.
+        # Corresponds to the JSON property `words`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
+        attr_accessor :words
 
         def initialize(**args)
           update!(**args)
@@ -1514,27 +3216,31 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @
-          @
-          @segments = args[:segments] if args.key?(:segments)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @transcript = args[:transcript] if args.key?(:transcript)
+          @words = args[:words] if args.key?(:words)
         end
       end
 
-      #
-      class
+      # A speech recognition result corresponding to a portion of the audio.
+      class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
+        # May contain one or more recognition hypotheses (up to the maximum specified
+        # in `max_alternatives`). These alternatives are ordered in terms of
+        # accuracy, with the top (first) alternative being the most probable, as
+        # ranked by the recognizer.
+        # Corresponds to the JSON property `alternatives`
+        # @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
+        attr_accessor :alternatives
 
-        #
-        #
-        #
+        # Output only. The
+        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+        # language in this result. This language code was detected to have the most
+        # likelihood of being spoken in the audio.
+        # Corresponds to the JSON property `languageCode`
         # @return [String]
-        attr_accessor :
+        attr_accessor :language_code
 
         def initialize(**args)
           update!(**args)
@@ -1542,24 +3248,26 @@ module Google
 
 # Update properties of this object
 def update!(**args)
-@
-@
+@alternatives = args[:alternatives] if args.key?(:alternatives)
+@language_code = args[:language_code] if args.key?(:language_code)
 end
 end
 
-#
-
+# Annotations related to one detected OCR text snippet. This will contain the
+# corresponding text, confidence value, and frame level information for each
+# detection.
+class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
 include Google::Apis::Core::Hashable
 
-#
-# Corresponds to the JSON property `
-# @return [
-attr_accessor :
+# All video segments where OCR detected text appears.
+# Corresponds to the JSON property `segments`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+attr_accessor :segments
 
-#
-# Corresponds to the JSON property `
-# @return [
-attr_accessor :
+# The detected text.
+# Corresponds to the JSON property `text`
+# @return [String]
+attr_accessor :text
 
 def initialize(**args)
 update!(**args)
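The `SpeechTranscription` class added above carries an n-best list in `alternatives`, documented as ordered most-probable first. A minimal sketch of consuming that ordering, using a plain Ruby `Struct` as a stand-in for the generated class (this is not the real client API, just an illustration of the documented contract):

```ruby
# Stand-in for GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative:
# the generated class exposes transcript and confidence the same way.
Alternative = Struct.new(:transcript, :confidence)

# Because the recognizer returns alternatives ranked by accuracy, the top
# hypothesis is simply the first element of the array.
def best_transcript(alternatives)
  return nil if alternatives.empty?
  alternatives.first.transcript
end

alts = [
  Alternative.new("hello world", 0.92),
  Alternative.new("hollow world", 0.41),
]
best_transcript(alts) # => "hello world"
```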
@@ -1567,34 +3275,40 @@ module Google
 
 # Update properties of this object
 def update!(**args)
-@
-@
+@segments = args[:segments] if args.key?(:segments)
+@text = args[:text] if args.key?(:text)
 end
 end
 
-#
-
+# Video frame level annotation results for text annotation (OCR).
+# Contains information regarding timestamp and bounding box locations for the
+# frames containing detected OCR text snippets.
+class GoogleCloudVideointelligenceV1p2beta1TextFrame
 include Google::Apis::Core::Hashable
 
-#
-#
-#
-#
-#
-#
-#
-#
-
+# Normalized bounding polygon for text (that might not be aligned with axis).
+# Contains list of the corner points in clockwise order starting from
+# top-left corner. For example, for a rectangular bounding box:
+# When the text is horizontal it might look like:
+# 0----1
+# |    |
+# 3----2
+# When it's clockwise rotated 180 degrees around the top-left corner it
+# becomes:
+# 2----3
+# |    |
+# 1----0
+# and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+# than 0, or greater than 1 due to trignometric calculations for location of
+# the box.
+# Corresponds to the JSON property `rotatedBoundingBox`
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+attr_accessor :rotated_bounding_box
 
-#
-# Corresponds to the JSON property `
+# Timestamp of this frame.
+# Corresponds to the JSON property `timeOffset`
 # @return [String]
-attr_accessor :
-
-# A list of word-specific information for each recognized word.
-# Corresponds to the JSON property `words`
-# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
-attr_accessor :words
+attr_accessor :time_offset
 
 def initialize(**args)
 update!(**args)
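The `rotatedBoundingBox` comment above makes two claims worth seeing concretely: after a 180-degree rotation about the original top-left corner the vertex *array order* stays (0, 1, 2, 3), and the rotated coordinates can leave the normalized [0, 1] range. A small self-contained sketch (plain Ruby, hypothetical `Vertex` struct, not the generated `NormalizedVertex` class):

```ruby
# A point p rotated 180 degrees about center c maps to 2c - p.
Vertex = Struct.new(:x, :y)

def rotate180(vertices, center)
  # The map preserves array order: entry i of the result is still vertex i.
  vertices.map { |v| Vertex.new(2 * center.x - v.x, 2 * center.y - v.y) }
end

# Clockwise from top-left, as in the 0----1 / 3----2 diagram above.
box = [Vertex.new(0.1, 0.1), Vertex.new(0.4, 0.1),
       Vertex.new(0.4, 0.2), Vertex.new(0.1, 0.2)]
rotated = rotate180(box, box[0])
# Vertex 0 (the rotation center) stays fixed at (0.1, 0.1), while vertex 2
# lands near (-0.2, 0.0): outside [0, 1], exactly as the comment warns.
```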
@@ -1602,31 +3316,30 @@ module Google
 
 # Update properties of this object
 def update!(**args)
-@
-@
-@words = args[:words] if args.key?(:words)
+@rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+@time_offset = args[:time_offset] if args.key?(:time_offset)
 end
 end
 
-#
-class
+# Video segment level annotation results for text detection.
+class GoogleCloudVideointelligenceV1p2beta1TextSegment
 include Google::Apis::Core::Hashable
 
-#
-#
-#
-#
-
-# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
-attr_accessor :alternatives
+# Confidence for the track of detected text. It is calculated as the highest
+# over all frames where OCR detected text appears.
+# Corresponds to the JSON property `confidence`
+# @return [Float]
+attr_accessor :confidence
 
-#
-#
-#
-
-
-#
-
+# Information related to the frames where OCR detected text appears.
+# Corresponds to the JSON property `frames`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+attr_accessor :frames
+
+# Video segment.
+# Corresponds to the JSON property `segment`
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+attr_accessor :segment
 
 def initialize(**args)
 update!(**args)
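The `TextSegment#confidence` doc above defines the segment-level value as the highest confidence over all frames where the text appears. A one-liner sketch of that aggregation, with plain hashes standing in for `TextFrame` objects (illustrative only, not the generated API):

```ruby
# Segment confidence as documented: the maximum per-frame confidence.
# Each frame here is a bare hash with a :confidence key.
def segment_confidence(frames)
  frames.map { |f| f[:confidence] }.max
end

frames = [{ confidence: 0.71 }, { confidence: 0.93 }, { confidence: 0.88 }]
segment_confidence(frames) # => 0.93
```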
@@ -1634,13 +3347,14 @@ module Google
 
 # Update properties of this object
 def update!(**args)
-@
-@
+@confidence = args[:confidence] if args.key?(:confidence)
+@frames = args[:frames] if args.key?(:frames)
+@segment = args[:segment] if args.key?(:segment)
 end
 end
 
 # Annotation progress for a single video.
-class
+class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
 include Google::Apis::Core::Hashable
 
 # Video file location in
@@ -1679,17 +3393,17 @@ module Google
 end
 
 # Annotation results for a single video.
-class
+class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
 include Google::Apis::Core::Hashable
 
-# The `Status` type defines a logical error model that is suitable for
-# programming environments, including REST APIs and RPC APIs. It is
-# [gRPC](https://github.com/grpc). The error model is designed to be:
+# The `Status` type defines a logical error model that is suitable for
+# different programming environments, including REST APIs and RPC APIs. It is
+# used by [gRPC](https://github.com/grpc). The error model is designed to be:
 # - Simple to use and understand for most users
 # - Flexible enough to meet unexpected needs
 # # Overview
-# The `Status` message contains three pieces of data: error code, error
-# and error details. The error code should be an enum value of
+# The `Status` message contains three pieces of data: error code, error
+# message, and error details. The error code should be an enum value of
 # google.rpc.Code, but it may accept additional error codes if needed. The
 # error message should be a developer-facing English message that helps
 # developers *understand* and *resolve* the error. If a localized user-facing
@@ -1729,13 +3443,13 @@ module Google
 # If no explicit content has been detected in a frame, no annotations are
 # present for that frame.
 # Corresponds to the JSON property `explicitAnnotation`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
 attr_accessor :explicit_annotation
 
 # Label annotations on frame level.
 # There is exactly one element for each unique label.
 # Corresponds to the JSON property `frameLabelAnnotations`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
 attr_accessor :frame_label_annotations
 
 # Video file location in
@@ -1744,28 +3458,40 @@ module Google
 # @return [String]
 attr_accessor :input_uri
 
+# Annotations for list of objects detected and tracked in video.
+# Corresponds to the JSON property `objectAnnotations`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
+attr_accessor :object_annotations
+
 # Label annotations on video level or user specified segment level.
 # There is exactly one element for each unique label.
 # Corresponds to the JSON property `segmentLabelAnnotations`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
 attr_accessor :segment_label_annotations
 
 # Shot annotations. Each shot is represented as a video segment.
 # Corresponds to the JSON property `shotAnnotations`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
 attr_accessor :shot_annotations
 
 # Label annotations on shot level.
 # There is exactly one element for each unique label.
 # Corresponds to the JSON property `shotLabelAnnotations`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
 attr_accessor :shot_label_annotations
 
 # Speech transcription.
 # Corresponds to the JSON property `speechTranscriptions`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
 attr_accessor :speech_transcriptions
 
+# OCR text detection and tracking.
+# Annotations for list of detected text snippets. Each will have list of
+# frame information associated with it.
+# Corresponds to the JSON property `textAnnotations`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
+attr_accessor :text_annotations
+
 def initialize(**args)
 update!(**args)
 end
@@ -1776,15 +3502,17 @@ module Google
 @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
 @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
 @input_uri = args[:input_uri] if args.key?(:input_uri)
+@object_annotations = args[:object_annotations] if args.key?(:object_annotations)
 @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
 @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
 @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
 @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+@text_annotations = args[:text_annotations] if args.key?(:text_annotations)
 end
 end
 
 # Video segment.
-class
+class GoogleCloudVideointelligenceV1p2beta1VideoSegment
 include Google::Apis::Core::Hashable
 
 # Time-offset, relative to the beginning of the video,
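Every generated class in this diff follows the same shape: `initialize(**args)` delegates to `update!(**args)`, and each assignment is guarded by `args.key?`, so an omitted keyword leaves the attribute untouched while an explicit `nil` overwrites it. A minimal stand-alone imitation of that pattern (hypothetical `VideoSegmentLike` class, not the real `Google::Apis::Core::Hashable` mixin):

```ruby
# Imitation of the generated-class pattern: keyword args applied only when
# present, distinguishing "absent" from "explicitly nil".
class VideoSegmentLike
  attr_accessor :start_time_offset, :end_time_offset

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
    @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
  end
end

seg = VideoSegmentLike.new(start_time_offset: "0s", end_time_offset: "120s")
seg.update!(end_time_offset: "90s") # start_time_offset is left as-is
```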
@@ -1813,7 +3541,7 @@ module Google
 # Word-specific information for recognized words. Word information is only
 # included in the response when certain request parameters are set, such
 # as `enable_word_time_offsets`.
-class
+class GoogleCloudVideointelligenceV1p2beta1WordInfo
 include Google::Apis::Core::Hashable
 
 # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1872,12 +3600,12 @@ module Google
 # Video annotation progress. Included in the `metadata`
 # field of the `Operation` returned by the `GetOperation`
 # call of the `google::longrunning::Operations` service.
-class
+class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoProgress
 include Google::Apis::Core::Hashable
 
 # Progress metadata for all videos specified in `AnnotateVideoRequest`.
 # Corresponds to the JSON property `annotationProgress`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress>]
 attr_accessor :annotation_progress
 
 def initialize(**args)
@@ -1893,12 +3621,12 @@ module Google
 # Video annotation response. Included in the `response`
 # field of the `Operation` returned by the `GetOperation`
 # call of the `google::longrunning::Operations` service.
-class
+class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoResponse
 include Google::Apis::Core::Hashable
 
 # Annotation results for all videos specified in `AnnotateVideoRequest`.
 # Corresponds to the JSON property `annotationResults`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults>]
 attr_accessor :annotation_results
 
 def initialize(**args)
@@ -1912,7 +3640,7 @@ module Google
 end
 
 # Detected entity from video analysis.
-class
+class GoogleCloudVideointelligenceV1p3beta1Entity
 include Google::Apis::Core::Hashable
 
 # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1947,12 +3675,12 @@ module Google
 # Explicit content annotation (based on per-frame visual signals only).
 # If no explicit content has been detected in a frame, no annotations are
 # present for that frame.
-class
+class GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
 include Google::Apis::Core::Hashable
 
 # All video frames where explicit content was detected.
 # Corresponds to the JSON property `frames`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame>]
 attr_accessor :frames
 
 def initialize(**args)
@@ -1966,7 +3694,7 @@ module Google
 end
 
 # Video frame level annotation results for explicit content.
-class
+class GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame
 include Google::Apis::Core::Hashable
 
 # Likelihood of the pornography content..
@@ -1992,7 +3720,7 @@ module Google
 end
 
 # Label annotation.
-class
+class GoogleCloudVideointelligenceV1p3beta1LabelAnnotation
 include Google::Apis::Core::Hashable
 
 # Common categories for the detected entity.
@@ -2000,22 +3728,22 @@ module Google
 # cases there might be more than one categories e.g. `Terrier` could also be
 # a `pet`.
 # Corresponds to the JSON property `categoryEntities`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity>]
 attr_accessor :category_entities
 
 # Detected entity from video analysis.
 # Corresponds to the JSON property `entity`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity]
 attr_accessor :entity
 
 # All video frames where a label was detected.
 # Corresponds to the JSON property `frames`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelFrame>]
 attr_accessor :frames
 
 # All video segments where a label was detected.
 # Corresponds to the JSON property `segments`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelSegment>]
 attr_accessor :segments
 
 def initialize(**args)
@@ -2032,7 +3760,7 @@ module Google
 end
 
 # Video frame level annotation results for label detection.
-class
+class GoogleCloudVideointelligenceV1p3beta1LabelFrame
 include Google::Apis::Core::Hashable
 
 # Confidence that the label is accurate. Range: [0, 1].
@@ -2058,7 +3786,7 @@ module Google
 end
 
 # Video segment level annotation results for label detection.
-class
+class GoogleCloudVideointelligenceV1p3beta1LabelSegment
 include Google::Apis::Core::Hashable
 
 # Confidence that the label is accurate. Range: [0, 1].
@@ -2068,7 +3796,7 @@ module Google
 
 # Video segment.
 # Corresponds to the JSON property `segment`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
 attr_accessor :segment
 
 def initialize(**args)
@@ -2085,7 +3813,7 @@ module Google
 # Normalized bounding box.
 # The normalized vertex coordinates are relative to the original image.
 # Range: [0, 1].
-class
+class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox
 include Google::Apis::Core::Hashable
 
 # Bottom Y coordinate.
@@ -2136,12 +3864,12 @@ module Google
 # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
 # than 0, or greater than 1 due to trignometric calculations for location of
 # the box.
-class
+class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly
 include Google::Apis::Core::Hashable
 
 # Normalized vertices of the bounding polygon.
 # Corresponds to the JSON property `vertices`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedVertex>]
 attr_accessor :vertices
 
 def initialize(**args)
@@ -2157,7 +3885,7 @@ module Google
 # A vertex represents a 2D point in the image.
 # NOTE: the normalized vertex coordinates are relative to the original image
 # and range from 0 to 1.
-class
+class GoogleCloudVideointelligenceV1p3beta1NormalizedVertex
 include Google::Apis::Core::Hashable
 
 # X coordinate.
@@ -2182,7 +3910,7 @@ module Google
 end
 
 # Annotations corresponding to one tracked object.
-class
+class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation
 include Google::Apis::Core::Hashable
 
 # Object category's labeling confidence of this track.
@@ -2192,7 +3920,7 @@ module Google
 
 # Detected entity from video analysis.
 # Corresponds to the JSON property `entity`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1Entity]
 attr_accessor :entity
 
 # Information corresponding to all frames where this object track appears.
@@ -2200,12 +3928,12 @@ module Google
 # messages in frames.
 # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
 # Corresponds to the JSON property `frames`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame>]
 attr_accessor :frames
 
 # Video segment.
 # Corresponds to the JSON property `segment`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
 attr_accessor :segment
 
 # Streaming mode ONLY.
@@ -2234,14 +3962,14 @@ module Google
 
 # Video frame level annotations for object detection and tracking. This field
 # stores per frame location, time offset, and confidence.
-class
+class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame
 include Google::Apis::Core::Hashable
 
 # Normalized bounding box.
 # The normalized vertex coordinates are relative to the original image.
 # Range: [0, 1].
 # Corresponds to the JSON property `normalizedBoundingBox`
-# @return [Google::Apis::VideointelligenceV1::
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox]
 attr_accessor :normalized_bounding_box
 
 # The timestamp of the frame in microseconds.
@@ -2261,7 +3989,7 @@ module Google
 end
 
 # Alternative hypotheses (a.k.a. n-best list).
-class
+class GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative
 include Google::Apis::Core::Hashable
 
 # The confidence estimate between 0.0 and 1.0. A higher number
@@ -2281,7 +4009,7 @@ module Google
 
 # A list of word-specific information for each recognized word.
 # Corresponds to the JSON property `words`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1WordInfo>]
 attr_accessor :words
 
 def initialize(**args)
@@ -2297,7 +4025,7 @@ module Google
 end
 
 # A speech recognition result corresponding to a portion of the audio.
-class
+class GoogleCloudVideointelligenceV1p3beta1SpeechTranscription
 include Google::Apis::Core::Hashable
 
 # May contain one or more recognition hypotheses (up to the maximum specified
@@ -2305,7 +4033,7 @@ module Google
 # accuracy, with the top (first) alternative being the most probable, as
 # ranked by the recognizer.
 # Corresponds to the JSON property `alternatives`
-# @return [Array<Google::Apis::VideointelligenceV1::
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative>]
 attr_accessor :alternatives
 
 # Output only. The
@@ -2327,15 +4055,130 @@ module Google
 end
 end
 
+# `StreamingAnnotateVideoResponse` is the only message returned to the client
+# by `StreamingAnnotateVideo`. A series of zero or more
+# `StreamingAnnotateVideoResponse` messages are streamed back to the client.
+class GoogleCloudVideointelligenceV1p3beta1StreamingAnnotateVideoResponse
+include Google::Apis::Core::Hashable
+
+# Streaming annotation results corresponding to a portion of the video
+# that is currently being processed.
+# Corresponds to the JSON property `annotationResults`
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults]
+attr_accessor :annotation_results
+
+# GCS URI that stores annotation results of one streaming session.
+# It is a directory that can hold multiple files in JSON format.
+# Example uri format:
+# gs://bucket_id/object_id/cloud_project_name-session_id
+# Corresponds to the JSON property `annotationResultsUri`
+# @return [String]
+attr_accessor :annotation_results_uri
+
+# The `Status` type defines a logical error model that is suitable for
+# different programming environments, including REST APIs and RPC APIs. It is
+# used by [gRPC](https://github.com/grpc). The error model is designed to be:
+# - Simple to use and understand for most users
+# - Flexible enough to meet unexpected needs
+# # Overview
+# The `Status` message contains three pieces of data: error code, error
+# message, and error details. The error code should be an enum value of
+# google.rpc.Code, but it may accept additional error codes if needed. The
+# error message should be a developer-facing English message that helps
+# developers *understand* and *resolve* the error. If a localized user-facing
+# error message is needed, put the localized message in the error details or
+# localize it in the client. The optional error details may contain arbitrary
+# information about the error. There is a predefined set of error detail types
+# in the package `google.rpc` that can be used for common error conditions.
+# # Language mapping
+# The `Status` message is the logical representation of the error model, but it
+# is not necessarily the actual wire format. When the `Status` message is
+# exposed in different client libraries and different wire protocols, it can be
+# mapped differently. For example, it will likely be mapped to some exceptions
+# in Java, but more likely mapped to some error codes in C.
+# # Other uses
+# The error model and the `Status` message can be used in a variety of
+# environments, either with or without APIs, to provide a
+# consistent developer experience across different environments.
+# Example uses of this error model include:
+# - Partial errors. If a service needs to return partial errors to the client,
+# it may embed the `Status` in the normal response to indicate the partial
+# errors.
+# - Workflow errors. A typical workflow has multiple steps. Each step may
+# have a `Status` message for error reporting.
+# - Batch operations. If a client uses batch request and batch response, the
+# `Status` message should be used directly inside batch response, one for
+# each error sub-response.
+# - Asynchronous operations. If an API call embeds asynchronous operation
+# results in its response, the status of those operations should be
+# represented directly using the `Status` message.
+# - Logging. If some API errors are stored in logs, the message `Status` could
+# be used directly after any stripping needed for security/privacy reasons.
+# Corresponds to the JSON property `error`
+# @return [Google::Apis::VideointelligenceV1::GoogleRpcStatus]
+attr_accessor :error
+
+def initialize(**args)
+update!(**args)
+end
+
+# Update properties of this object
+def update!(**args)
+@annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+@annotation_results_uri = args[:annotation_results_uri] if args.key?(:annotation_results_uri)
+@error = args[:error] if args.key?(:error)
+end
+end
+
+# Streaming annotation results corresponding to a portion of the video
+# that is currently being processed.
+class GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults
+include Google::Apis::Core::Hashable
+
+# Explicit content annotation (based on per-frame visual signals only).
+# If no explicit content has been detected in a frame, no annotations are
+# present for that frame.
+# Corresponds to the JSON property `explicitAnnotation`
+# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
+attr_accessor :explicit_annotation
+
+# Label annotation results.
+# Corresponds to the JSON property `labelAnnotations`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
+attr_accessor :label_annotations
+
+# Object tracking results.
+# Corresponds to the JSON property `objectAnnotations`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
+attr_accessor :object_annotations
+
+# Shot annotation results. Each shot is represented as a video segment.
+# Corresponds to the JSON property `shotAnnotations`
+# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
+attr_accessor :shot_annotations
+
+def initialize(**args)
+update!(**args)
+end
+
+# Update properties of this object
+def update!(**args)
+@explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+@label_annotations = args[:label_annotations] if args.key?(:label_annotations)
+@object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+@shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+end
+end
+
2330
4173
|
# Annotations related to one detected OCR text snippet. This will contain the
|
|
2331
4174
|
# corresponding text, confidence value, and frame level information for each
|
|
2332
4175
|
# detection.
|
|
2333
|
-
class
|
|
4176
|
+
class GoogleCloudVideointelligenceV1p3beta1TextAnnotation
|
|
2334
4177
|
include Google::Apis::Core::Hashable
|
|
2335
4178
|
|
|
2336
4179
|
# All video segments where OCR detected text appears.
|
|
2337
4180
|
# Corresponds to the JSON property `segments`
|
|
2338
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4181
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextSegment>]
|
|
2339
4182
|
attr_accessor :segments
|
|
2340
4183
|
|
|
2341
4184
|
# The detected text.
|
|
@@ -2357,7 +4200,7 @@ module Google
|
|
|
2357
4200
|
# Video frame level annotation results for text annotation (OCR).
|
|
2358
4201
|
# Contains information regarding timestamp and bounding box locations for the
|
|
2359
4202
|
# frames containing detected OCR text snippets.
|
|
2360
|
-
class
|
|
4203
|
+
class GoogleCloudVideointelligenceV1p3beta1TextFrame
|
|
2361
4204
|
include Google::Apis::Core::Hashable
|
|
2362
4205
|
|
|
2363
4206
|
# Normalized bounding polygon for text (that might not be aligned with axis).
|
|
@@ -2376,7 +4219,7 @@ module Google
|
|
|
2376
4219
|
# than 0, or greater than 1 due to trignometric calculations for location of
|
|
2377
4220
|
# the box.
|
|
2378
4221
|
# Corresponds to the JSON property `rotatedBoundingBox`
|
|
2379
|
-
# @return [Google::Apis::VideointelligenceV1::
|
|
4222
|
+
# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly]
|
|
2380
4223
|
attr_accessor :rotated_bounding_box
|
|
2381
4224
|
|
|
2382
4225
|
# Timestamp of this frame.
|
|
@@ -2396,7 +4239,7 @@ module Google
|
|
|
2396
4239
|
end
|
|
2397
4240
|
|
|
2398
4241
|
# Video segment level annotation results for text detection.
|
|
2399
|
-
class
|
|
4242
|
+
class GoogleCloudVideointelligenceV1p3beta1TextSegment
|
|
2400
4243
|
include Google::Apis::Core::Hashable
|
|
2401
4244
|
|
|
2402
4245
|
# Confidence for the track of detected text. It is calculated as the highest
|
|
@@ -2407,12 +4250,12 @@ module Google
|
|
|
2407
4250
|
|
|
2408
4251
|
# Information related to the frames where OCR detected text appears.
|
|
2409
4252
|
# Corresponds to the JSON property `frames`
|
|
2410
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4253
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextFrame>]
|
|
2411
4254
|
attr_accessor :frames
|
|
2412
4255
|
|
|
2413
4256
|
# Video segment.
|
|
2414
4257
|
# Corresponds to the JSON property `segment`
|
|
2415
|
-
# @return [Google::Apis::VideointelligenceV1::
|
|
4258
|
+
# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
|
|
2416
4259
|
attr_accessor :segment
|
|
2417
4260
|
|
|
2418
4261
|
def initialize(**args)
|
|
@@ -2428,7 +4271,7 @@ module Google
|
|
|
2428
4271
|
end
|
|
2429
4272
|
|
|
2430
4273
|
# Annotation progress for a single video.
|
|
2431
|
-
class
|
|
4274
|
+
class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress
|
|
2432
4275
|
include Google::Apis::Core::Hashable
|
|
2433
4276
|
|
|
2434
4277
|
# Video file location in
|
|
@@ -2467,17 +4310,17 @@ module Google
|
|
|
2467
4310
|
end
|
|
2468
4311
|
|
|
2469
4312
|
# Annotation results for a single video.
|
|
2470
|
-
class
|
|
4313
|
+
class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
|
|
2471
4314
|
include Google::Apis::Core::Hashable
|
|
2472
4315
|
|
|
2473
|
-
# The `Status` type defines a logical error model that is suitable for
|
|
2474
|
-
# programming environments, including REST APIs and RPC APIs. It is
|
|
2475
|
-
# [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
4316
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
4317
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
4318
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
2476
4319
|
# - Simple to use and understand for most users
|
|
2477
4320
|
# - Flexible enough to meet unexpected needs
|
|
2478
4321
|
# # Overview
|
|
2479
|
-
# The `Status` message contains three pieces of data: error code, error
|
|
2480
|
-
# and error details. The error code should be an enum value of
|
|
4322
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
4323
|
+
# message, and error details. The error code should be an enum value of
|
|
2481
4324
|
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
2482
4325
|
# error message should be a developer-facing English message that helps
|
|
2483
4326
|
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
@@ -2517,13 +4360,13 @@ module Google
|
|
|
2517
4360
|
# If no explicit content has been detected in a frame, no annotations are
|
|
2518
4361
|
# present for that frame.
|
|
2519
4362
|
# Corresponds to the JSON property `explicitAnnotation`
|
|
2520
|
-
# @return [Google::Apis::VideointelligenceV1::
|
|
4363
|
+
# @return [Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
|
|
2521
4364
|
attr_accessor :explicit_annotation
|
|
2522
4365
|
|
|
2523
4366
|
# Label annotations on frame level.
|
|
2524
4367
|
# There is exactly one element for each unique label.
|
|
2525
4368
|
# Corresponds to the JSON property `frameLabelAnnotations`
|
|
2526
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4369
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
|
|
2527
4370
|
attr_accessor :frame_label_annotations
|
|
2528
4371
|
|
|
2529
4372
|
# Video file location in
|
|
@@ -2534,36 +4377,36 @@ module Google
|
|
|
2534
4377
|
|
|
2535
4378
|
# Annotations for list of objects detected and tracked in video.
|
|
2536
4379
|
# Corresponds to the JSON property `objectAnnotations`
|
|
2537
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4380
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
|
|
2538
4381
|
attr_accessor :object_annotations
|
|
2539
4382
|
|
|
2540
4383
|
# Label annotations on video level or user specified segment level.
|
|
2541
4384
|
# There is exactly one element for each unique label.
|
|
2542
4385
|
# Corresponds to the JSON property `segmentLabelAnnotations`
|
|
2543
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4386
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
|
|
2544
4387
|
attr_accessor :segment_label_annotations
|
|
2545
4388
|
|
|
2546
4389
|
# Shot annotations. Each shot is represented as a video segment.
|
|
2547
4390
|
# Corresponds to the JSON property `shotAnnotations`
|
|
2548
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4391
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
|
|
2549
4392
|
attr_accessor :shot_annotations
|
|
2550
4393
|
|
|
2551
4394
|
# Label annotations on shot level.
|
|
2552
4395
|
# There is exactly one element for each unique label.
|
|
2553
4396
|
# Corresponds to the JSON property `shotLabelAnnotations`
|
|
2554
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4397
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
|
|
2555
4398
|
attr_accessor :shot_label_annotations
|
|
2556
4399
|
|
|
2557
4400
|
# Speech transcription.
|
|
2558
4401
|
# Corresponds to the JSON property `speechTranscriptions`
|
|
2559
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4402
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]
|
|
2560
4403
|
attr_accessor :speech_transcriptions
|
|
2561
4404
|
|
|
2562
4405
|
# OCR text detection and tracking.
|
|
2563
4406
|
# Annotations for list of detected text snippets. Each will have list of
|
|
2564
4407
|
# frame information associated with it.
|
|
2565
4408
|
# Corresponds to the JSON property `textAnnotations`
|
|
2566
|
-
# @return [Array<Google::Apis::VideointelligenceV1::
|
|
4409
|
+
# @return [Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
|
|
2567
4410
|
attr_accessor :text_annotations
|
|
2568
4411
|
|
|
2569
4412
|
def initialize(**args)
|
|
@@ -2586,7 +4429,7 @@ module Google
|
|
|
2586
4429
|
end
|
|
2587
4430
|
|
|
2588
4431
|
# Video segment.
|
|
2589
|
-
class
|
|
4432
|
+
class GoogleCloudVideointelligenceV1p3beta1VideoSegment
|
|
2590
4433
|
include Google::Apis::Core::Hashable
|
|
2591
4434
|
|
|
2592
4435
|
# Time-offset, relative to the beginning of the video,
|
|
@@ -2615,7 +4458,7 @@ module Google
|
|
|
2615
4458
|
# Word-specific information for recognized words. Word information is only
|
|
2616
4459
|
# included in the response when certain request parameters are set, such
|
|
2617
4460
|
# as `enable_word_time_offsets`.
|
|
2618
|
-
class
|
|
4461
|
+
class GoogleCloudVideointelligenceV1p3beta1WordInfo
|
|
2619
4462
|
include Google::Apis::Core::Hashable
|
|
2620
4463
|
|
|
2621
4464
|
# Output only. The confidence estimate between 0.0 and 1.0. A higher number
|
|
@@ -2722,14 +4565,14 @@ module Google
|
|
|
2722
4565
|
attr_accessor :done
|
|
2723
4566
|
alias_method :done?, :done
|
|
2724
4567
|
|
|
2725
|
-
# The `Status` type defines a logical error model that is suitable for
|
|
2726
|
-
# programming environments, including REST APIs and RPC APIs. It is
|
|
2727
|
-
# [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
4568
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
4569
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
4570
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
2728
4571
|
# - Simple to use and understand for most users
|
|
2729
4572
|
# - Flexible enough to meet unexpected needs
|
|
2730
4573
|
# # Overview
|
|
2731
|
-
# The `Status` message contains three pieces of data: error code, error
|
|
2732
|
-
# and error details. The error code should be an enum value of
|
|
4574
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
4575
|
+
# message, and error details. The error code should be an enum value of
|
|
2733
4576
|
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
2734
4577
|
# error message should be a developer-facing English message that helps
|
|
2735
4578
|
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
@@ -2825,14 +4668,14 @@ module Google
|
|
|
2825
4668
|
end
|
|
2826
4669
|
end
|
|
2827
4670
|
|
|
2828
|
-
# The `Status` type defines a logical error model that is suitable for
|
|
2829
|
-
# programming environments, including REST APIs and RPC APIs. It is
|
|
2830
|
-
# [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
4671
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
4672
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
4673
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
2831
4674
|
# - Simple to use and understand for most users
|
|
2832
4675
|
# - Flexible enough to meet unexpected needs
|
|
2833
4676
|
# # Overview
|
|
2834
|
-
# The `Status` message contains three pieces of data: error code, error
|
|
2835
|
-
# and error details. The error code should be an enum value of
|
|
4677
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
4678
|
+
# message, and error details. The error code should be an enum value of
|
|
2836
4679
|
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
2837
4680
|
# error message should be a developer-facing English message that helps
|
|
2838
4681
|
# developers *understand* and *resolve* the error. If a localized user-facing
|