google-api-client 0.28.4 → 0.29.2
This diff shows the changes between two publicly released versions of the package, as they appear in their public registries. It is provided for informational purposes only.
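The two versions being compared follow RubyGems version semantics. As a minimal sketch (the version strings come from the header above; `Gem::Version` is part of the Ruby standard library), you can confirm the ordering and see that this is a minor-version bump under the gem's pre-1.0 scheme:

```ruby
require "rubygems" # provides Gem::Version

old_v = Gem::Version.new("0.28.4")
new_v = Gem::Version.new("0.29.2")

# Gem::Version compares segment-by-segment, not lexicographically.
puts new_v > old_v          # true
puts old_v.segments.inspect # [0, 28, 4]

# Minor segment went 28 -> 29, so a "~> 0.28" constraint would not pick this up,
# while "~> 0.28", ">= 0.28.4" style pessimistic constraints at the minor level would.
puts new_v.segments[1] - old_v.segments[1] # 1
```

Segment-wise comparison matters here: a naive string comparison would order "0.29.2" before "0.3.0", which `Gem::Version` handles correctly.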
- checksums.yaml +4 -4
- data/.kokoro/build.bat +9 -6
- data/.kokoro/build.sh +2 -34
- data/.kokoro/continuous/common.cfg +6 -1
- data/.kokoro/continuous/linux.cfg +1 -1
- data/.kokoro/continuous/windows.cfg +17 -1
- data/.kokoro/osx.sh +2 -33
- data/.kokoro/presubmit/common.cfg +6 -1
- data/.kokoro/presubmit/linux.cfg +1 -1
- data/.kokoro/presubmit/windows.cfg +17 -1
- data/.kokoro/trampoline.bat +10 -0
- data/.kokoro/trampoline.sh +3 -23
- data/CHANGELOG.md +460 -0
- data/README.md +1 -1
- data/Rakefile +31 -0
- data/bin/generate-api +4 -2
- data/generated/google/apis/abusiveexperiencereport_v1/service.rb +2 -2
- data/generated/google/apis/acceleratedmobilepageurl_v1/service.rb +1 -1
- data/generated/google/apis/accessapproval_v1beta1/classes.rb +333 -0
- data/generated/google/apis/accessapproval_v1beta1/representations.rb +174 -0
- data/generated/google/apis/accessapproval_v1beta1/service.rb +728 -0
- data/generated/google/apis/accessapproval_v1beta1.rb +34 -0
- data/generated/google/apis/accesscontextmanager_v1/classes.rb +755 -0
- data/generated/google/apis/accesscontextmanager_v1/representations.rb +282 -0
- data/generated/google/apis/accesscontextmanager_v1/service.rb +788 -0
- data/generated/google/apis/accesscontextmanager_v1.rb +34 -0
- data/generated/google/apis/accesscontextmanager_v1beta/classes.rb +47 -31
- data/generated/google/apis/accesscontextmanager_v1beta/representations.rb +4 -0
- data/generated/google/apis/accesscontextmanager_v1beta/service.rb +16 -16
- data/generated/google/apis/accesscontextmanager_v1beta.rb +1 -1
- data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +95 -200
- data/generated/google/apis/adexchangebuyer2_v2beta1/representations.rb +0 -32
- data/generated/google/apis/adexchangebuyer2_v2beta1/service.rb +64 -104
- data/generated/google/apis/adexchangebuyer2_v2beta1.rb +1 -1
- data/generated/google/apis/adexchangebuyer_v1_2/service.rb +7 -7
- data/generated/google/apis/adexchangebuyer_v1_3/service.rb +21 -21
- data/generated/google/apis/adexchangebuyer_v1_4/service.rb +38 -38
- data/generated/google/apis/adexperiencereport_v1/service.rb +2 -2
- data/generated/google/apis/admin_datatransfer_v1/service.rb +5 -5
- data/generated/google/apis/admin_directory_v1/classes.rb +5 -50
- data/generated/google/apis/admin_directory_v1/representations.rb +0 -2
- data/generated/google/apis/admin_directory_v1/service.rb +113 -113
- data/generated/google/apis/admin_directory_v1.rb +1 -1
- data/generated/google/apis/admin_reports_v1/service.rb +6 -6
- data/generated/google/apis/admin_reports_v1.rb +1 -1
- data/generated/google/apis/adsense_v1_4/service.rb +39 -39
- data/generated/google/apis/adsensehost_v4_1/service.rb +26 -26
- data/generated/google/apis/alertcenter_v1beta1/classes.rb +101 -2
- data/generated/google/apis/alertcenter_v1beta1/representations.rb +25 -0
- data/generated/google/apis/alertcenter_v1beta1/service.rb +17 -16
- data/generated/google/apis/alertcenter_v1beta1.rb +1 -1
- data/generated/google/apis/analytics_v2_4/service.rb +6 -6
- data/generated/google/apis/analytics_v3/service.rb +88 -88
- data/generated/google/apis/analyticsreporting_v4/classes.rb +638 -0
- data/generated/google/apis/analyticsreporting_v4/representations.rb +248 -0
- data/generated/google/apis/analyticsreporting_v4/service.rb +31 -1
- data/generated/google/apis/analyticsreporting_v4.rb +1 -1
- data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +51 -11
- data/generated/google/apis/androiddeviceprovisioning_v1/representations.rb +6 -0
- data/generated/google/apis/androiddeviceprovisioning_v1/service.rb +26 -26
- data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
- data/generated/google/apis/androidenterprise_v1/classes.rb +26 -30
- data/generated/google/apis/androidenterprise_v1/representations.rb +2 -14
- data/generated/google/apis/androidenterprise_v1/service.rb +85 -121
- data/generated/google/apis/androidenterprise_v1.rb +1 -1
- data/generated/google/apis/androidmanagement_v1/classes.rb +358 -4
- data/generated/google/apis/androidmanagement_v1/representations.rb +163 -0
- data/generated/google/apis/androidmanagement_v1/service.rb +191 -21
- data/generated/google/apis/androidmanagement_v1.rb +1 -1
- data/generated/google/apis/androidpublisher_v1/service.rb +2 -2
- data/generated/google/apis/androidpublisher_v1_1/service.rb +3 -3
- data/generated/google/apis/androidpublisher_v2/service.rb +64 -70
- data/generated/google/apis/androidpublisher_v2.rb +1 -1
- data/generated/google/apis/androidpublisher_v3/classes.rb +113 -0
- data/generated/google/apis/androidpublisher_v3/representations.rb +58 -0
- data/generated/google/apis/androidpublisher_v3/service.rb +234 -64
- data/generated/google/apis/androidpublisher_v3.rb +1 -1
- data/generated/google/apis/appengine_v1/classes.rb +45 -100
- data/generated/google/apis/appengine_v1/representations.rb +17 -35
- data/generated/google/apis/appengine_v1/service.rb +45 -39
- data/generated/google/apis/appengine_v1.rb +1 -1
- data/generated/google/apis/appengine_v1alpha/classes.rb +2 -99
- data/generated/google/apis/appengine_v1alpha/representations.rb +0 -35
- data/generated/google/apis/appengine_v1alpha/service.rb +15 -15
- data/generated/google/apis/appengine_v1alpha.rb +1 -1
- data/generated/google/apis/appengine_v1beta/classes.rb +7 -102
- data/generated/google/apis/appengine_v1beta/representations.rb +0 -35
- data/generated/google/apis/appengine_v1beta/service.rb +45 -39
- data/generated/google/apis/appengine_v1beta.rb +1 -1
- data/generated/google/apis/appengine_v1beta4/service.rb +20 -20
- data/generated/google/apis/appengine_v1beta5/service.rb +20 -20
- data/generated/google/apis/appsactivity_v1/service.rb +5 -4
- data/generated/google/apis/appsactivity_v1.rb +1 -1
- data/generated/google/apis/appsmarket_v2/service.rb +3 -3
- data/generated/google/apis/appstate_v1/service.rb +5 -5
- data/generated/google/apis/bigquery_v2/classes.rb +1121 -114
- data/generated/google/apis/bigquery_v2/representations.rb +414 -26
- data/generated/google/apis/bigquery_v2/service.rb +184 -22
- data/generated/google/apis/bigquery_v2.rb +1 -1
- data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +88 -10
- data/generated/google/apis/bigquerydatatransfer_v1/representations.rb +43 -0
- data/generated/google/apis/bigquerydatatransfer_v1/service.rb +142 -34
- data/generated/google/apis/bigquerydatatransfer_v1.rb +3 -3
- data/generated/google/apis/bigtableadmin_v1/service.rb +3 -3
- data/generated/google/apis/bigtableadmin_v1.rb +2 -2
- data/generated/google/apis/bigtableadmin_v2/classes.rb +14 -14
- data/generated/google/apis/bigtableadmin_v2/service.rb +142 -33
- data/generated/google/apis/bigtableadmin_v2.rb +2 -2
- data/generated/google/apis/binaryauthorization_v1beta1/classes.rb +66 -6
- data/generated/google/apis/binaryauthorization_v1beta1/representations.rb +17 -0
- data/generated/google/apis/binaryauthorization_v1beta1/service.rb +17 -13
- data/generated/google/apis/binaryauthorization_v1beta1.rb +1 -1
- data/generated/google/apis/blogger_v2/service.rb +9 -9
- data/generated/google/apis/blogger_v3/service.rb +33 -33
- data/generated/google/apis/books_v1/service.rb +51 -51
- data/generated/google/apis/calendar_v3/classes.rb +1 -1
- data/generated/google/apis/calendar_v3/service.rb +47 -47
- data/generated/google/apis/calendar_v3.rb +1 -1
- data/generated/google/apis/chat_v1/service.rb +8 -8
- data/generated/google/apis/civicinfo_v2/service.rb +5 -5
- data/generated/google/apis/classroom_v1/classes.rb +77 -0
- data/generated/google/apis/classroom_v1/representations.rb +32 -0
- data/generated/google/apis/classroom_v1/service.rb +276 -51
- data/generated/google/apis/classroom_v1.rb +7 -1
- data/generated/google/apis/cloudasset_v1/classes.rb +818 -0
- data/generated/google/apis/cloudasset_v1/representations.rb +264 -0
- data/generated/google/apis/cloudasset_v1/service.rb +191 -0
- data/generated/google/apis/cloudasset_v1.rb +34 -0
- data/generated/google/apis/cloudasset_v1beta1/classes.rb +33 -18
- data/generated/google/apis/cloudasset_v1beta1/representations.rb +1 -0
- data/generated/google/apis/cloudasset_v1beta1/service.rb +13 -13
- data/generated/google/apis/cloudasset_v1beta1.rb +2 -2
- data/generated/google/apis/cloudbilling_v1/classes.rb +1 -1
- data/generated/google/apis/cloudbilling_v1/service.rb +14 -14
- data/generated/google/apis/cloudbilling_v1.rb +1 -1
- data/generated/google/apis/cloudbuild_v1/classes.rb +162 -11
- data/generated/google/apis/cloudbuild_v1/representations.rb +67 -0
- data/generated/google/apis/cloudbuild_v1/service.rb +21 -15
- data/generated/google/apis/cloudbuild_v1.rb +1 -1
- data/generated/google/apis/cloudbuild_v1alpha1/classes.rb +7 -1
- data/generated/google/apis/cloudbuild_v1alpha1/representations.rb +2 -0
- data/generated/google/apis/cloudbuild_v1alpha1/service.rb +6 -6
- data/generated/google/apis/cloudbuild_v1alpha1.rb +1 -1
- data/generated/google/apis/clouddebugger_v2/service.rb +8 -8
- data/generated/google/apis/clouderrorreporting_v1beta1/classes.rb +19 -16
- data/generated/google/apis/clouderrorreporting_v1beta1/service.rb +12 -11
- data/generated/google/apis/clouderrorreporting_v1beta1.rb +1 -1
- data/generated/google/apis/cloudfunctions_v1/classes.rb +21 -17
- data/generated/google/apis/cloudfunctions_v1/service.rb +22 -16
- data/generated/google/apis/cloudfunctions_v1.rb +1 -1
- data/generated/google/apis/cloudfunctions_v1beta2/classes.rb +20 -16
- data/generated/google/apis/cloudfunctions_v1beta2/service.rb +17 -11
- data/generated/google/apis/cloudfunctions_v1beta2.rb +1 -1
- data/generated/google/apis/cloudidentity_v1/classes.rb +14 -14
- data/generated/google/apis/cloudidentity_v1/service.rb +18 -27
- data/generated/google/apis/cloudidentity_v1.rb +7 -1
- data/generated/google/apis/cloudidentity_v1beta1/classes.rb +11 -11
- data/generated/google/apis/cloudidentity_v1beta1/service.rb +15 -21
- data/generated/google/apis/cloudidentity_v1beta1.rb +7 -1
- data/generated/google/apis/cloudiot_v1/classes.rb +11 -11
- data/generated/google/apis/cloudiot_v1/service.rb +23 -330
- data/generated/google/apis/cloudiot_v1.rb +1 -1
- data/generated/google/apis/cloudkms_v1/classes.rb +7 -3
- data/generated/google/apis/cloudkms_v1/service.rb +30 -30
- data/generated/google/apis/cloudkms_v1.rb +1 -1
- data/generated/google/apis/cloudprivatecatalog_v1beta1/classes.rb +358 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1/representations.rb +123 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1/service.rb +486 -0
- data/generated/google/apis/cloudprivatecatalog_v1beta1.rb +35 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/classes.rb +1212 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/representations.rb +399 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1/service.rb +1073 -0
- data/generated/google/apis/cloudprivatecatalogproducer_v1beta1.rb +35 -0
- data/generated/google/apis/cloudprofiler_v2/service.rb +3 -3
- data/generated/google/apis/cloudresourcemanager_v1/classes.rb +24 -22
- data/generated/google/apis/cloudresourcemanager_v1/service.rb +68 -59
- data/generated/google/apis/cloudresourcemanager_v1.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v1beta1/classes.rb +3 -3
- data/generated/google/apis/cloudresourcemanager_v1beta1/service.rb +53 -42
- data/generated/google/apis/cloudresourcemanager_v1beta1.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v2/classes.rb +15 -16
- data/generated/google/apis/cloudresourcemanager_v2/service.rb +13 -13
- data/generated/google/apis/cloudresourcemanager_v2.rb +1 -1
- data/generated/google/apis/cloudresourcemanager_v2beta1/classes.rb +15 -16
- data/generated/google/apis/cloudresourcemanager_v2beta1/service.rb +13 -13
- data/generated/google/apis/cloudresourcemanager_v2beta1.rb +1 -1
- data/generated/google/apis/cloudscheduler_v1/classes.rb +994 -0
- data/generated/google/apis/cloudscheduler_v1/representations.rb +297 -0
- data/generated/google/apis/cloudscheduler_v1/service.rb +448 -0
- data/generated/google/apis/cloudscheduler_v1.rb +34 -0
- data/generated/google/apis/cloudscheduler_v1beta1/classes.rb +160 -44
- data/generated/google/apis/cloudscheduler_v1beta1/representations.rb +33 -0
- data/generated/google/apis/cloudscheduler_v1beta1/service.rb +15 -12
- data/generated/google/apis/cloudscheduler_v1beta1.rb +1 -1
- data/generated/google/apis/cloudsearch_v1/classes.rb +245 -59
- data/generated/google/apis/cloudsearch_v1/representations.rb +91 -0
- data/generated/google/apis/cloudsearch_v1/service.rb +86 -80
- data/generated/google/apis/cloudsearch_v1.rb +1 -1
- data/generated/google/apis/cloudshell_v1/classes.rb +11 -11
- data/generated/google/apis/cloudshell_v1/service.rb +4 -4
- data/generated/google/apis/cloudshell_v1.rb +1 -1
- data/generated/google/apis/cloudshell_v1alpha1/classes.rb +24 -11
- data/generated/google/apis/cloudshell_v1alpha1/representations.rb +2 -0
- data/generated/google/apis/cloudshell_v1alpha1/service.rb +11 -10
- data/generated/google/apis/cloudshell_v1alpha1.rb +1 -1
- data/generated/google/apis/cloudtasks_v2/classes.rb +1436 -0
- data/generated/google/apis/cloudtasks_v2/representations.rb +408 -0
- data/generated/google/apis/cloudtasks_v2/service.rb +856 -0
- data/generated/google/apis/{partners_v2.rb → cloudtasks_v2.rb} +11 -9
- data/generated/google/apis/cloudtasks_v2beta2/classes.rb +141 -102
- data/generated/google/apis/cloudtasks_v2beta2/service.rb +44 -43
- data/generated/google/apis/cloudtasks_v2beta2.rb +1 -1
- data/generated/google/apis/cloudtasks_v2beta3/classes.rb +388 -108
- data/generated/google/apis/cloudtasks_v2beta3/representations.rb +65 -0
- data/generated/google/apis/cloudtasks_v2beta3/service.rb +40 -39
- data/generated/google/apis/cloudtasks_v2beta3.rb +1 -1
- data/generated/google/apis/cloudtrace_v1/service.rb +3 -3
- data/generated/google/apis/cloudtrace_v2/classes.rb +10 -10
- data/generated/google/apis/cloudtrace_v2/service.rb +2 -2
- data/generated/google/apis/cloudtrace_v2.rb +1 -1
- data/generated/google/apis/commentanalyzer_v1alpha1/classes.rb +484 -0
- data/generated/google/apis/commentanalyzer_v1alpha1/representations.rb +210 -0
- data/generated/google/apis/commentanalyzer_v1alpha1/service.rb +124 -0
- data/generated/google/apis/commentanalyzer_v1alpha1.rb +39 -0
- data/generated/google/apis/composer_v1/classes.rb +21 -15
- data/generated/google/apis/composer_v1/service.rb +9 -9
- data/generated/google/apis/composer_v1.rb +1 -1
- data/generated/google/apis/composer_v1beta1/classes.rb +175 -36
- data/generated/google/apis/composer_v1beta1/representations.rb +50 -0
- data/generated/google/apis/composer_v1beta1/service.rb +9 -9
- data/generated/google/apis/composer_v1beta1.rb +1 -1
- data/generated/google/apis/compute_alpha/classes.rb +10112 -7289
- data/generated/google/apis/compute_alpha/representations.rb +1337 -219
- data/generated/google/apis/compute_alpha/service.rb +4259 -2728
- data/generated/google/apis/compute_alpha.rb +1 -1
- data/generated/google/apis/compute_beta/classes.rb +4254 -2781
- data/generated/google/apis/compute_beta/representations.rb +853 -283
- data/generated/google/apis/compute_beta/service.rb +7077 -5955
- data/generated/google/apis/compute_beta.rb +1 -1
- data/generated/google/apis/compute_v1/classes.rb +1259 -93
- data/generated/google/apis/compute_v1/representations.rb +450 -1
- data/generated/google/apis/compute_v1/service.rb +1085 -400
- data/generated/google/apis/compute_v1.rb +1 -1
- data/generated/google/apis/container_v1/classes.rb +201 -22
- data/generated/google/apis/container_v1/representations.rb +69 -0
- data/generated/google/apis/container_v1/service.rb +151 -102
- data/generated/google/apis/container_v1.rb +1 -1
- data/generated/google/apis/container_v1beta1/classes.rb +215 -25
- data/generated/google/apis/container_v1beta1/representations.rb +86 -0
- data/generated/google/apis/container_v1beta1/service.rb +106 -106
- data/generated/google/apis/container_v1beta1.rb +1 -1
- data/generated/google/apis/containeranalysis_v1alpha1/classes.rb +26 -18
- data/generated/google/apis/containeranalysis_v1alpha1/representations.rb +1 -0
- data/generated/google/apis/containeranalysis_v1alpha1/service.rb +33 -33
- data/generated/google/apis/containeranalysis_v1alpha1.rb +1 -1
- data/generated/google/apis/containeranalysis_v1beta1/classes.rb +226 -12
- data/generated/google/apis/containeranalysis_v1beta1/representations.rb +58 -0
- data/generated/google/apis/containeranalysis_v1beta1/service.rb +24 -24
- data/generated/google/apis/containeranalysis_v1beta1.rb +1 -1
- data/generated/google/apis/content_v2/classes.rb +218 -101
- data/generated/google/apis/content_v2/representations.rb +49 -0
- data/generated/google/apis/content_v2/service.rb +189 -152
- data/generated/google/apis/content_v2.rb +1 -1
- data/generated/google/apis/content_v2_1/classes.rb +387 -216
- data/generated/google/apis/content_v2_1/representations.rb +131 -56
- data/generated/google/apis/content_v2_1/service.rb +190 -107
- data/generated/google/apis/content_v2_1.rb +1 -1
- data/generated/google/apis/customsearch_v1/service.rb +2 -2
- data/generated/google/apis/dataflow_v1b3/classes.rb +148 -31
- data/generated/google/apis/dataflow_v1b3/representations.rb +45 -0
- data/generated/google/apis/dataflow_v1b3/service.rb +415 -56
- data/generated/google/apis/dataflow_v1b3.rb +1 -1
- data/generated/google/apis/datafusion_v1beta1/classes.rb +1304 -0
- data/generated/google/apis/datafusion_v1beta1/representations.rb +469 -0
- data/generated/google/apis/datafusion_v1beta1/service.rb +657 -0
- data/generated/google/apis/datafusion_v1beta1.rb +43 -0
- data/generated/google/apis/dataproc_v1/classes.rb +27 -22
- data/generated/google/apis/dataproc_v1/representations.rb +1 -0
- data/generated/google/apis/dataproc_v1/service.rb +261 -45
- data/generated/google/apis/dataproc_v1.rb +1 -1
- data/generated/google/apis/dataproc_v1beta2/classes.rb +534 -50
- data/generated/google/apis/dataproc_v1beta2/representations.rb +185 -7
- data/generated/google/apis/dataproc_v1beta2/service.rb +617 -51
- data/generated/google/apis/dataproc_v1beta2.rb +1 -1
- data/generated/google/apis/datastore_v1/classes.rb +20 -16
- data/generated/google/apis/datastore_v1/service.rb +15 -15
- data/generated/google/apis/datastore_v1.rb +1 -1
- data/generated/google/apis/datastore_v1beta1/classes.rb +10 -10
- data/generated/google/apis/datastore_v1beta1/service.rb +2 -2
- data/generated/google/apis/datastore_v1beta1.rb +1 -1
- data/generated/google/apis/datastore_v1beta3/classes.rb +10 -6
- data/generated/google/apis/datastore_v1beta3/service.rb +7 -7
- data/generated/google/apis/datastore_v1beta3.rb +1 -1
- data/generated/google/apis/deploymentmanager_alpha/service.rb +37 -37
- data/generated/google/apis/deploymentmanager_v2/service.rb +18 -18
- data/generated/google/apis/deploymentmanager_v2beta/service.rb +32 -32
- data/generated/google/apis/dfareporting_v3_1/service.rb +206 -206
- data/generated/google/apis/dfareporting_v3_2/service.rb +206 -206
- data/generated/google/apis/dfareporting_v3_3/classes.rb +3 -3
- data/generated/google/apis/dfareporting_v3_3/service.rb +204 -204
- data/generated/google/apis/dfareporting_v3_3.rb +1 -1
- data/generated/google/apis/dialogflow_v2/classes.rb +367 -82
- data/generated/google/apis/dialogflow_v2/representations.rb +99 -0
- data/generated/google/apis/dialogflow_v2/service.rb +76 -60
- data/generated/google/apis/dialogflow_v2.rb +1 -1
- data/generated/google/apis/dialogflow_v2beta1/classes.rb +199 -88
- data/generated/google/apis/dialogflow_v2beta1/representations.rb +31 -0
- data/generated/google/apis/dialogflow_v2beta1/service.rb +154 -94
- data/generated/google/apis/dialogflow_v2beta1.rb +1 -1
- data/generated/google/apis/digitalassetlinks_v1/service.rb +7 -6
- data/generated/google/apis/digitalassetlinks_v1.rb +1 -1
- data/generated/google/apis/discovery_v1/service.rb +2 -2
- data/generated/google/apis/dlp_v2/classes.rb +116 -45
- data/generated/google/apis/dlp_v2/representations.rb +32 -0
- data/generated/google/apis/dlp_v2/service.rb +85 -45
- data/generated/google/apis/dlp_v2.rb +1 -1
- data/generated/google/apis/dns_v1/classes.rb +83 -1
- data/generated/google/apis/dns_v1/representations.rb +34 -0
- data/generated/google/apis/dns_v1/service.rb +15 -15
- data/generated/google/apis/dns_v1.rb +1 -1
- data/generated/google/apis/dns_v1beta2/classes.rb +81 -1
- data/generated/google/apis/dns_v1beta2/representations.rb +33 -0
- data/generated/google/apis/dns_v1beta2/service.rb +21 -21
- data/generated/google/apis/dns_v1beta2.rb +1 -1
- data/generated/google/apis/dns_v2beta1/classes.rb +83 -1
- data/generated/google/apis/dns_v2beta1/representations.rb +34 -0
- data/generated/google/apis/dns_v2beta1/service.rb +16 -16
- data/generated/google/apis/dns_v2beta1.rb +1 -1
- data/generated/google/apis/docs_v1/classes.rb +265 -47
- data/generated/google/apis/docs_v1/representations.rb +96 -0
- data/generated/google/apis/docs_v1/service.rb +3 -3
- data/generated/google/apis/docs_v1.rb +1 -1
- data/generated/google/apis/doubleclickbidmanager_v1/classes.rb +6 -4
- data/generated/google/apis/doubleclickbidmanager_v1/service.rb +9 -9
- data/generated/google/apis/doubleclickbidmanager_v1.rb +1 -1
- data/generated/google/apis/doubleclicksearch_v2/service.rb +10 -10
- data/generated/google/apis/drive_v2/classes.rb +601 -80
- data/generated/google/apis/drive_v2/representations.rb +152 -0
- data/generated/google/apis/drive_v2/service.rb +574 -164
- data/generated/google/apis/drive_v2.rb +1 -1
- data/generated/google/apis/drive_v3/classes.rb +591 -75
- data/generated/google/apis/drive_v3/representations.rb +151 -0
- data/generated/google/apis/drive_v3/service.rb +483 -116
- data/generated/google/apis/drive_v3.rb +1 -1
- data/generated/google/apis/driveactivity_v2/classes.rb +149 -17
- data/generated/google/apis/driveactivity_v2/representations.rb +69 -0
- data/generated/google/apis/driveactivity_v2/service.rb +1 -1
- data/generated/google/apis/driveactivity_v2.rb +1 -1
- data/generated/google/apis/factchecktools_v1alpha1/classes.rb +459 -0
- data/generated/google/apis/factchecktools_v1alpha1/representations.rb +207 -0
- data/generated/google/apis/factchecktools_v1alpha1/service.rb +300 -0
- data/generated/google/apis/factchecktools_v1alpha1.rb +34 -0
- data/generated/google/apis/fcm_v1/classes.rb +424 -0
- data/generated/google/apis/fcm_v1/representations.rb +167 -0
- data/generated/google/apis/fcm_v1/service.rb +97 -0
- data/generated/google/apis/fcm_v1.rb +35 -0
- data/generated/google/apis/file_v1/classes.rb +646 -11
- data/generated/google/apis/file_v1/representations.rb +207 -0
- data/generated/google/apis/file_v1/service.rb +196 -6
- data/generated/google/apis/file_v1.rb +1 -1
- data/generated/google/apis/file_v1beta1/classes.rb +461 -19
- data/generated/google/apis/file_v1beta1/representations.rb +137 -0
- data/generated/google/apis/file_v1beta1/service.rb +11 -11
- data/generated/google/apis/file_v1beta1.rb +1 -1
- data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +41 -14
- data/generated/google/apis/firebasedynamiclinks_v1/representations.rb +4 -0
- data/generated/google/apis/firebasedynamiclinks_v1/service.rb +5 -5
- data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
- data/generated/google/apis/firebasehosting_v1beta1/classes.rb +13 -13
- data/generated/google/apis/firebasehosting_v1beta1/service.rb +14 -14
- data/generated/google/apis/firebasehosting_v1beta1.rb +1 -1
- data/generated/google/apis/firebaserules_v1/classes.rb +10 -2
- data/generated/google/apis/firebaserules_v1/service.rb +12 -12
- data/generated/google/apis/firebaserules_v1.rb +1 -1
- data/generated/google/apis/firestore_v1/classes.rb +15 -15
- data/generated/google/apis/firestore_v1/service.rb +28 -28
- data/generated/google/apis/firestore_v1.rb +1 -1
- data/generated/google/apis/firestore_v1beta1/classes.rb +15 -15
- data/generated/google/apis/firestore_v1beta1/service.rb +19 -19
- data/generated/google/apis/firestore_v1beta1.rb +1 -1
- data/generated/google/apis/firestore_v1beta2/classes.rb +10 -10
- data/generated/google/apis/firestore_v1beta2/service.rb +9 -9
- data/generated/google/apis/firestore_v1beta2.rb +1 -1
- data/generated/google/apis/fitness_v1/classes.rb +4 -1
- data/generated/google/apis/fitness_v1/service.rb +14 -58
- data/generated/google/apis/fitness_v1.rb +1 -1
- data/generated/google/apis/fusiontables_v1/service.rb +32 -32
- data/generated/google/apis/fusiontables_v2/service.rb +34 -34
- data/generated/google/apis/games_configuration_v1configuration/service.rb +13 -13
- data/generated/google/apis/games_management_v1management/service.rb +27 -27
- data/generated/google/apis/games_management_v1management.rb +2 -2
- data/generated/google/apis/games_v1/service.rb +53 -53
- data/generated/google/apis/games_v1.rb +3 -3
- data/generated/google/apis/genomics_v1/classes.rb +190 -3321
- data/generated/google/apis/genomics_v1/representations.rb +128 -1265
- data/generated/google/apis/genomics_v1/service.rb +75 -1982
- data/generated/google/apis/genomics_v1.rb +1 -10
- data/generated/google/apis/genomics_v1alpha2/classes.rb +13 -53
- data/generated/google/apis/genomics_v1alpha2/representations.rb +0 -26
- data/generated/google/apis/genomics_v1alpha2/service.rb +11 -12
- data/generated/google/apis/genomics_v1alpha2.rb +1 -1
- data/generated/google/apis/genomics_v2alpha1/classes.rb +26 -58
- data/generated/google/apis/genomics_v2alpha1/representations.rb +1 -26
- data/generated/google/apis/genomics_v2alpha1/service.rb +6 -7
- data/generated/google/apis/genomics_v2alpha1.rb +1 -1
- data/generated/google/apis/gmail_v1/classes.rb +29 -0
- data/generated/google/apis/gmail_v1/representations.rb +13 -0
- data/generated/google/apis/gmail_v1/service.rb +142 -66
- data/generated/google/apis/gmail_v1.rb +1 -1
- data/generated/google/apis/groupsmigration_v1/service.rb +1 -1
- data/generated/google/apis/groupssettings_v1/classes.rb +126 -1
- data/generated/google/apis/groupssettings_v1/representations.rb +18 -0
- data/generated/google/apis/groupssettings_v1/service.rb +4 -4
- data/generated/google/apis/groupssettings_v1.rb +2 -2
- data/generated/google/apis/healthcare_v1alpha2/classes.rb +2849 -0
- data/generated/google/apis/healthcare_v1alpha2/representations.rb +1260 -0
- data/generated/google/apis/healthcare_v1alpha2/service.rb +4011 -0
- data/generated/google/apis/healthcare_v1alpha2.rb +34 -0
- data/generated/google/apis/healthcare_v1beta1/classes.rb +2464 -0
- data/generated/google/apis/healthcare_v1beta1/representations.rb +1042 -0
- data/generated/google/apis/healthcare_v1beta1/service.rb +3413 -0
- data/generated/google/apis/healthcare_v1beta1.rb +34 -0
- data/generated/google/apis/iam_v1/classes.rb +171 -1
- data/generated/google/apis/iam_v1/representations.rb +95 -0
- data/generated/google/apis/iam_v1/service.rb +249 -39
- data/generated/google/apis/iam_v1.rb +1 -1
- data/generated/google/apis/iamcredentials_v1/classes.rb +8 -4
- data/generated/google/apis/iamcredentials_v1/service.rb +15 -10
- data/generated/google/apis/iamcredentials_v1.rb +1 -1
- data/generated/google/apis/iap_v1/classes.rb +1 -1
- data/generated/google/apis/iap_v1/service.rb +3 -3
- data/generated/google/apis/iap_v1.rb +1 -1
- data/generated/google/apis/iap_v1beta1/classes.rb +1 -1
- data/generated/google/apis/iap_v1beta1/service.rb +3 -3
- data/generated/google/apis/iap_v1beta1.rb +1 -1
- data/generated/google/apis/identitytoolkit_v3/service.rb +20 -20
- data/generated/google/apis/indexing_v3/service.rb +2 -2
- data/generated/google/apis/jobs_v2/classes.rb +16 -17
- data/generated/google/apis/jobs_v2/service.rb +17 -17
- data/generated/google/apis/jobs_v2.rb +1 -1
- data/generated/google/apis/jobs_v3/classes.rb +14 -8
- data/generated/google/apis/jobs_v3/service.rb +16 -17
- data/generated/google/apis/jobs_v3.rb +1 -1
- data/generated/google/apis/jobs_v3p1beta1/classes.rb +26 -20
- data/generated/google/apis/jobs_v3p1beta1/service.rb +17 -18
- data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
- data/generated/google/apis/kgsearch_v1/service.rb +1 -1
- data/generated/google/apis/language_v1/classes.rb +8 -7
- data/generated/google/apis/language_v1/service.rb +6 -6
- data/generated/google/apis/language_v1.rb +1 -1
- data/generated/google/apis/language_v1beta1/classes.rb +5 -5
- data/generated/google/apis/language_v1beta1/service.rb +4 -4
- data/generated/google/apis/language_v1beta1.rb +1 -1
- data/generated/google/apis/language_v1beta2/classes.rb +8 -7
- data/generated/google/apis/language_v1beta2/service.rb +6 -6
- data/generated/google/apis/language_v1beta2.rb +1 -1
- data/generated/google/apis/libraryagent_v1/service.rb +6 -6
- data/generated/google/apis/licensing_v1/service.rb +7 -7
- data/generated/google/apis/logging_v2/classes.rb +8 -3
- data/generated/google/apis/logging_v2/representations.rb +1 -0
- data/generated/google/apis/logging_v2/service.rb +72 -72
- data/generated/google/apis/logging_v2.rb +1 -1
- data/generated/google/apis/manufacturers_v1/service.rb +4 -4
- data/generated/google/apis/mirror_v1/service.rb +24 -24
- data/generated/google/apis/ml_v1/classes.rb +240 -52
- data/generated/google/apis/ml_v1/representations.rb +25 -2
- data/generated/google/apis/ml_v1/service.rb +36 -36
- data/generated/google/apis/ml_v1.rb +1 -1
- data/generated/google/apis/monitoring_v3/classes.rb +22 -18
- data/generated/google/apis/monitoring_v3/representations.rb +2 -1
- data/generated/google/apis/monitoring_v3/service.rb +42 -37
- data/generated/google/apis/monitoring_v3.rb +1 -1
- data/generated/google/apis/oauth2_v1/classes.rb +0 -124
- data/generated/google/apis/oauth2_v1/representations.rb +0 -62
- data/generated/google/apis/oauth2_v1/service.rb +3 -162
- data/generated/google/apis/oauth2_v1.rb +3 -6
- data/generated/google/apis/oauth2_v2/service.rb +4 -4
- data/generated/google/apis/oauth2_v2.rb +3 -6
- data/generated/google/apis/oslogin_v1/service.rb +8 -7
- data/generated/google/apis/oslogin_v1.rb +3 -2
- data/generated/google/apis/oslogin_v1alpha/service.rb +8 -7
- data/generated/google/apis/oslogin_v1alpha.rb +3 -2
- data/generated/google/apis/oslogin_v1beta/service.rb +8 -7
- data/generated/google/apis/oslogin_v1beta.rb +3 -2
- data/generated/google/apis/pagespeedonline_v1/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v2/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v4/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v5/classes.rb +43 -0
- data/generated/google/apis/pagespeedonline_v5/representations.rb +18 -0
- data/generated/google/apis/pagespeedonline_v5/service.rb +1 -1
- data/generated/google/apis/pagespeedonline_v5.rb +1 -1
- data/generated/google/apis/people_v1/classes.rb +38 -29
- data/generated/google/apis/people_v1/representations.rb +1 -0
- data/generated/google/apis/people_v1/service.rb +18 -13
- data/generated/google/apis/people_v1.rb +2 -5
- data/generated/google/apis/playcustomapp_v1/service.rb +1 -1
- data/generated/google/apis/plus_domains_v1/service.rb +18 -392
- data/generated/google/apis/plus_domains_v1.rb +4 -10
- data/generated/google/apis/plus_v1/service.rb +16 -16
- data/generated/google/apis/plus_v1.rb +4 -4
- data/generated/google/apis/poly_v1/classes.rb +8 -6
- data/generated/google/apis/poly_v1/service.rb +15 -12
- data/generated/google/apis/poly_v1.rb +1 -1
- data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +8 -6
- data/generated/google/apis/proximitybeacon_v1beta1/service.rb +17 -17
- data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
- data/generated/google/apis/pubsub_v1/classes.rb +55 -39
- data/generated/google/apis/pubsub_v1/representations.rb +16 -0
- data/generated/google/apis/pubsub_v1/service.rb +46 -69
- data/generated/google/apis/pubsub_v1.rb +1 -1
- data/generated/google/apis/pubsub_v1beta1a/service.rb +15 -15
- data/generated/google/apis/pubsub_v1beta2/classes.rb +45 -1
- data/generated/google/apis/pubsub_v1beta2/representations.rb +16 -0
- data/generated/google/apis/pubsub_v1beta2/service.rb +20 -20
- data/generated/google/apis/pubsub_v1beta2.rb +1 -1
- data/generated/google/apis/redis_v1/classes.rb +30 -10
- data/generated/google/apis/redis_v1/representations.rb +13 -0
- data/generated/google/apis/redis_v1/service.rb +51 -15
- data/generated/google/apis/redis_v1.rb +1 -1
- data/generated/google/apis/redis_v1beta1/classes.rb +18 -21
- data/generated/google/apis/redis_v1beta1/representations.rb +0 -1
- data/generated/google/apis/redis_v1beta1/service.rb +15 -15
- data/generated/google/apis/redis_v1beta1.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v1/classes.rb +50 -35
- data/generated/google/apis/remotebuildexecution_v1/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v1/service.rb +7 -7
- data/generated/google/apis/remotebuildexecution_v1.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v1alpha/classes.rb +48 -33
- data/generated/google/apis/remotebuildexecution_v1alpha/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v1alpha/service.rb +10 -10
- data/generated/google/apis/remotebuildexecution_v1alpha.rb +1 -1
- data/generated/google/apis/remotebuildexecution_v2/classes.rb +58 -43
- data/generated/google/apis/remotebuildexecution_v2/representations.rb +2 -0
- data/generated/google/apis/remotebuildexecution_v2/service.rb +9 -9
- data/generated/google/apis/remotebuildexecution_v2.rb +1 -1
- data/generated/google/apis/replicapool_v1beta1/service.rb +10 -10
- data/generated/google/apis/reseller_v1/classes.rb +32 -39
- data/generated/google/apis/reseller_v1/service.rb +18 -18
- data/generated/google/apis/reseller_v1.rb +1 -1
- data/generated/google/apis/run_v1/classes.rb +73 -0
- data/generated/google/apis/run_v1/representations.rb +43 -0
- data/generated/google/apis/run_v1/service.rb +90 -0
- data/generated/google/apis/run_v1.rb +35 -0
- data/generated/google/apis/run_v1alpha1/classes.rb +3882 -0
- data/generated/google/apis/run_v1alpha1/representations.rb +1425 -0
- data/generated/google/apis/run_v1alpha1/service.rb +2071 -0
- data/generated/google/apis/run_v1alpha1.rb +35 -0
- data/generated/google/apis/runtimeconfig_v1/classes.rb +11 -11
- data/generated/google/apis/runtimeconfig_v1/service.rb +3 -3
- data/generated/google/apis/runtimeconfig_v1.rb +1 -1
- data/generated/google/apis/runtimeconfig_v1beta1/classes.rb +26 -25
- data/generated/google/apis/runtimeconfig_v1beta1/service.rb +22 -22
- data/generated/google/apis/runtimeconfig_v1beta1.rb +1 -1
- data/generated/google/apis/safebrowsing_v4/service.rb +7 -7
- data/generated/google/apis/script_v1/classes.rb +167 -6
- data/generated/google/apis/script_v1/representations.rb +79 -1
- data/generated/google/apis/script_v1/service.rb +16 -16
- data/generated/google/apis/script_v1.rb +1 -1
- data/generated/google/apis/searchconsole_v1/service.rb +1 -1
- data/generated/google/apis/securitycenter_v1/classes.rb +1627 -0
- data/generated/google/apis/securitycenter_v1/representations.rb +569 -0
- data/generated/google/apis/securitycenter_v1/service.rb +1110 -0
- data/generated/google/apis/securitycenter_v1.rb +35 -0
- data/generated/google/apis/securitycenter_v1beta1/classes.rb +1514 -0
- data/generated/google/apis/securitycenter_v1beta1/representations.rb +548 -0
- data/generated/google/apis/securitycenter_v1beta1/service.rb +1035 -0
- data/generated/google/apis/securitycenter_v1beta1.rb +35 -0
- data/generated/google/apis/servicebroker_v1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1/service.rb +3 -3
- data/generated/google/apis/servicebroker_v1.rb +1 -1
- data/generated/google/apis/servicebroker_v1alpha1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1alpha1/service.rb +16 -16
- data/generated/google/apis/servicebroker_v1alpha1.rb +1 -1
- data/generated/google/apis/servicebroker_v1beta1/classes.rb +1 -1
- data/generated/google/apis/servicebroker_v1beta1/service.rb +21 -21
- data/generated/google/apis/servicebroker_v1beta1.rb +1 -1
- data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +453 -149
- data/generated/google/apis/serviceconsumermanagement_v1/representations.rb +202 -29
- data/generated/google/apis/serviceconsumermanagement_v1/service.rb +148 -62
- data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
- data/generated/google/apis/servicecontrol_v1/classes.rb +122 -25
- data/generated/google/apis/servicecontrol_v1/representations.rb +47 -0
- data/generated/google/apis/servicecontrol_v1/service.rb +3 -3
- data/generated/google/apis/servicecontrol_v1.rb +1 -1
- data/generated/google/apis/servicemanagement_v1/classes.rb +93 -110
- data/generated/google/apis/servicemanagement_v1/representations.rb +13 -26
- data/generated/google/apis/servicemanagement_v1/service.rb +30 -27
- data/generated/google/apis/servicemanagement_v1.rb +1 -1
- data/generated/google/apis/servicenetworking_v1/classes.rb +3626 -0
- data/generated/google/apis/servicenetworking_v1/representations.rb +1055 -0
- data/generated/google/apis/servicenetworking_v1/service.rb +440 -0
- data/generated/google/apis/servicenetworking_v1.rb +38 -0
- data/generated/google/apis/servicenetworking_v1beta/classes.rb +65 -108
- data/generated/google/apis/servicenetworking_v1beta/representations.rb +2 -29
- data/generated/google/apis/servicenetworking_v1beta/service.rb +6 -6
- data/generated/google/apis/servicenetworking_v1beta.rb +1 -1
- data/generated/google/apis/serviceusage_v1/classes.rb +160 -109
- data/generated/google/apis/serviceusage_v1/representations.rb +42 -26
- data/generated/google/apis/serviceusage_v1/service.rb +17 -19
- data/generated/google/apis/serviceusage_v1.rb +1 -1
- data/generated/google/apis/serviceusage_v1beta1/classes.rb +161 -110
- data/generated/google/apis/serviceusage_v1beta1/representations.rb +42 -26
- data/generated/google/apis/serviceusage_v1beta1/service.rb +7 -7
- data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
- data/generated/google/apis/sheets_v4/classes.rb +115 -26
- data/generated/google/apis/sheets_v4/service.rb +17 -17
- data/generated/google/apis/sheets_v4.rb +1 -1
- data/generated/google/apis/site_verification_v1/service.rb +7 -7
- data/generated/google/apis/slides_v1/classes.rb +2 -2
- data/generated/google/apis/slides_v1/service.rb +5 -5
- data/generated/google/apis/slides_v1.rb +1 -1
- data/generated/google/apis/sourcerepo_v1/classes.rb +183 -1
- data/generated/google/apis/sourcerepo_v1/representations.rb +45 -0
- data/generated/google/apis/sourcerepo_v1/service.rb +45 -10
- data/generated/google/apis/sourcerepo_v1.rb +1 -1
- data/generated/google/apis/spanner_v1/classes.rb +231 -17
- data/generated/google/apis/spanner_v1/representations.rb +66 -0
- data/generated/google/apis/spanner_v1/service.rb +92 -42
- data/generated/google/apis/spanner_v1.rb +1 -1
- data/generated/google/apis/speech_v1/classes.rb +110 -13
- data/generated/google/apis/speech_v1/representations.rb +24 -0
- data/generated/google/apis/speech_v1/service.rb +9 -7
- data/generated/google/apis/speech_v1.rb +1 -1
- data/generated/google/apis/speech_v1p1beta1/classes.rb +19 -13
- data/generated/google/apis/speech_v1p1beta1/representations.rb +1 -0
- data/generated/google/apis/speech_v1p1beta1/service.rb +9 -7
- data/generated/google/apis/speech_v1p1beta1.rb +1 -1
- data/generated/google/apis/sqladmin_v1beta4/classes.rb +94 -17
- data/generated/google/apis/sqladmin_v1beta4/representations.rb +36 -0
- data/generated/google/apis/sqladmin_v1beta4/service.rb +44 -44
- data/generated/google/apis/sqladmin_v1beta4.rb +1 -1
- data/generated/google/apis/storage_v1/classes.rb +201 -4
- data/generated/google/apis/storage_v1/representations.rb +76 -1
- data/generated/google/apis/storage_v1/service.rb +488 -93
- data/generated/google/apis/storage_v1.rb +1 -1
- data/generated/google/apis/storage_v1beta1/service.rb +24 -24
- data/generated/google/apis/storage_v1beta2/service.rb +34 -34
- data/generated/google/apis/storagetransfer_v1/classes.rb +44 -44
- data/generated/google/apis/storagetransfer_v1/service.rb +35 -36
- data/generated/google/apis/storagetransfer_v1.rb +2 -2
- data/generated/google/apis/streetviewpublish_v1/classes.rb +27 -27
- data/generated/google/apis/streetviewpublish_v1/service.rb +36 -40
- data/generated/google/apis/streetviewpublish_v1.rb +1 -1
- data/generated/google/apis/surveys_v2/service.rb +8 -8
- data/generated/google/apis/tagmanager_v1/service.rb +49 -95
- data/generated/google/apis/tagmanager_v1.rb +1 -1
- data/generated/google/apis/tagmanager_v2/classes.rb +197 -292
- data/generated/google/apis/tagmanager_v2/representations.rb +62 -103
- data/generated/google/apis/tagmanager_v2/service.rb +287 -249
- data/generated/google/apis/tagmanager_v2.rb +1 -1
- data/generated/google/apis/tasks_v1/service.rb +19 -19
- data/generated/google/apis/tasks_v1.rb +2 -2
- data/generated/google/apis/testing_v1/classes.rb +44 -39
- data/generated/google/apis/testing_v1/representations.rb +3 -1
- data/generated/google/apis/testing_v1/service.rb +5 -5
- data/generated/google/apis/testing_v1.rb +1 -1
- data/generated/google/apis/texttospeech_v1/service.rb +2 -2
- data/generated/google/apis/texttospeech_v1.rb +1 -1
- data/generated/google/apis/texttospeech_v1beta1/service.rb +2 -2
- data/generated/google/apis/texttospeech_v1beta1.rb +1 -1
- data/generated/google/apis/toolresults_v1beta3/classes.rb +340 -17
- data/generated/google/apis/toolresults_v1beta3/representations.rb +90 -0
- data/generated/google/apis/toolresults_v1beta3/service.rb +140 -24
- data/generated/google/apis/toolresults_v1beta3.rb +1 -1
- data/generated/google/apis/tpu_v1/classes.rb +21 -15
- data/generated/google/apis/tpu_v1/representations.rb +1 -0
- data/generated/google/apis/tpu_v1/service.rb +17 -17
- data/generated/google/apis/tpu_v1.rb +1 -1
- data/generated/google/apis/tpu_v1alpha1/classes.rb +21 -15
- data/generated/google/apis/tpu_v1alpha1/representations.rb +1 -0
- data/generated/google/apis/tpu_v1alpha1/service.rb +17 -17
- data/generated/google/apis/tpu_v1alpha1.rb +1 -1
- data/generated/google/apis/translate_v2/service.rb +5 -5
- data/generated/google/apis/urlshortener_v1/service.rb +3 -3
- data/generated/google/apis/vault_v1/classes.rb +44 -18
- data/generated/google/apis/vault_v1/representations.rb +4 -0
- data/generated/google/apis/vault_v1/service.rb +28 -28
- data/generated/google/apis/vault_v1.rb +1 -1
- data/generated/google/apis/videointelligence_v1/classes.rb +2193 -350
- data/generated/google/apis/videointelligence_v1/representations.rb +805 -6
- data/generated/google/apis/videointelligence_v1/service.rb +7 -6
- data/generated/google/apis/videointelligence_v1.rb +3 -2
- data/generated/google/apis/videointelligence_v1beta2/classes.rb +2448 -605
- data/generated/google/apis/videointelligence_v1beta2/representations.rb +806 -7
- data/generated/google/apis/videointelligence_v1beta2/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1beta2.rb +3 -2
- data/generated/google/apis/videointelligence_v1p1beta1/classes.rb +2422 -579
- data/generated/google/apis/videointelligence_v1p1beta1/representations.rb +806 -7
- data/generated/google/apis/videointelligence_v1p1beta1/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1p1beta1.rb +3 -2
- data/generated/google/apis/videointelligence_v1p2beta1/classes.rb +2645 -830
- data/generated/google/apis/videointelligence_v1p2beta1/representations.rb +796 -12
- data/generated/google/apis/videointelligence_v1p2beta1/service.rb +3 -2
- data/generated/google/apis/videointelligence_v1p2beta1.rb +3 -2
- data/generated/google/apis/videointelligence_v1p3beta1/classes.rb +4687 -0
- data/generated/google/apis/videointelligence_v1p3beta1/representations.rb +2005 -0
- data/generated/google/apis/videointelligence_v1p3beta1/service.rb +94 -0
- data/generated/google/apis/videointelligence_v1p3beta1.rb +36 -0
- data/generated/google/apis/vision_v1/classes.rb +4397 -124
- data/generated/google/apis/vision_v1/representations.rb +2366 -541
- data/generated/google/apis/vision_v1/service.rb +160 -33
- data/generated/google/apis/vision_v1.rb +1 -1
- data/generated/google/apis/vision_v1p1beta1/classes.rb +4451 -158
- data/generated/google/apis/vision_v1p1beta1/representations.rb +2415 -576
- data/generated/google/apis/vision_v1p1beta1/service.rb +73 -2
- data/generated/google/apis/vision_v1p1beta1.rb +1 -1
- data/generated/google/apis/vision_v1p2beta1/classes.rb +4451 -158
- data/generated/google/apis/vision_v1p2beta1/representations.rb +2443 -604
- data/generated/google/apis/vision_v1p2beta1/service.rb +73 -2
- data/generated/google/apis/vision_v1p2beta1.rb +1 -1
- data/generated/google/apis/webfonts_v1/service.rb +1 -1
- data/generated/google/apis/webmasters_v3/classes.rb +0 -166
- data/generated/google/apis/webmasters_v3/representations.rb +0 -93
- data/generated/google/apis/webmasters_v3/service.rb +9 -180
- data/generated/google/apis/webmasters_v3.rb +1 -1
- data/generated/google/apis/websecurityscanner_v1alpha/service.rb +13 -13
- data/generated/google/apis/websecurityscanner_v1beta/classes.rb +973 -0
- data/generated/google/apis/websecurityscanner_v1beta/representations.rb +452 -0
- data/generated/google/apis/websecurityscanner_v1beta/service.rb +548 -0
- data/generated/google/apis/websecurityscanner_v1beta.rb +34 -0
- data/generated/google/apis/youtube_analytics_v1/service.rb +8 -8
- data/generated/google/apis/youtube_analytics_v1beta1/service.rb +8 -8
- data/generated/google/apis/youtube_analytics_v2/service.rb +8 -8
- data/generated/google/apis/youtube_partner_v1/classes.rb +15 -34
- data/generated/google/apis/youtube_partner_v1/representations.rb +4 -17
- data/generated/google/apis/youtube_partner_v1/service.rb +74 -74
- data/generated/google/apis/youtube_partner_v1.rb +1 -1
- data/generated/google/apis/youtube_v3/service.rb +71 -71
- data/generated/google/apis/youtube_v3.rb +1 -1
- data/generated/google/apis/youtubereporting_v1/classes.rb +2 -2
- data/generated/google/apis/youtubereporting_v1/service.rb +8 -8
- data/generated/google/apis/youtubereporting_v1.rb +1 -1
- data/google-api-client.gemspec +2 -2
- data/lib/google/apis/core/http_command.rb +1 -0
- data/lib/google/apis/core/json_representation.rb +4 -0
- data/lib/google/apis/core/upload.rb +3 -3
- data/lib/google/apis/generator/model.rb +1 -1
- data/lib/google/apis/generator/templates/_method.tmpl +3 -3
- data/lib/google/apis/version.rb +1 -1
- metadata +86 -17
- data/.kokoro/common.cfg +0 -22
- data/.kokoro/windows.sh +0 -32
- data/generated/google/apis/logging_v2beta1/classes.rb +0 -1765
- data/generated/google/apis/logging_v2beta1/representations.rb +0 -537
- data/generated/google/apis/logging_v2beta1/service.rb +0 -570
- data/generated/google/apis/logging_v2beta1.rb +0 -46
- data/generated/google/apis/partners_v2/classes.rb +0 -2260
- data/generated/google/apis/partners_v2/representations.rb +0 -905
- data/generated/google/apis/partners_v2/service.rb +0 -1077
- data/samples/web/.env +0 -2
@@ -235,6 +235,184 @@ module Google
        end
      end

+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
+
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1NormalizedVertex>]
+        attr_accessor :vertices
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @vertices = args[:vertices] if args.key?(:vertices)
+        end
+      end
+
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1NormalizedVertex
+        include Google::Apis::Core::Hashable
+
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
+
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
       # Alternative hypotheses (a.k.a. n-best list).
       class GoogleCloudVideointelligenceV1SpeechRecognitionAlternative
         include Google::Apis::Core::Hashable
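The classes added in the hunk above all follow the same generated pattern: a keyword-argument `initialize` that delegates to `update!`, which assigns only the keys actually supplied. A minimal stand-alone sketch of that pattern (no gem dependency; the real classes also mix in `Google::Apis::Core::Hashable` for JSON serialization, and the `to_pixels` helper is a hypothetical addition for illustration):

```ruby
# Stand-in for the generated NormalizedBoundingBox pattern:
# initialize(**args) delegates to update!, which only assigns
# keys that were actually passed.
class NormalizedBoundingBox
  attr_accessor :bottom, :left, :right, :top

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object, touching only supplied keys.
  def update!(**args)
    @bottom = args[:bottom] if args.key?(:bottom)
    @left   = args[:left]   if args.key?(:left)
    @right  = args[:right]  if args.key?(:right)
    @top    = args[:top]    if args.key?(:top)
  end

  # Hypothetical helper: convert the normalized [0, 1] coordinates
  # to pixel coordinates for an image of the given size.
  def to_pixels(width, height)
    { left:   (left * width).round,
      top:    (top * height).round,
      right:  (right * width).round,
      bottom: (bottom * height).round }
  end
end

box = NormalizedBoundingBox.new(left: 0.1, top: 0.2, right: 0.5, bottom: 0.8)
box.to_pixels(1920, 1080)
# => { left: 192, top: 216, right: 960, bottom: 864 }
```

Because `update!` checks `args.key?`, a later `box.update!(right: 0.25)` changes only `right` and leaves the other coordinates untouched, which is why the generated code uses this shape for partial updates.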
@@ -302,31 +480,62 @@ module Google
         end
       end

-      #
-
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1TextAnnotation
         include Google::Apis::Core::Hashable

-        #
-        #
-        #
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1TextSegment>]
+        attr_accessor :segments
+
+        # The detected text.
+        # Corresponds to the JSON property `text`
         # @return [String]
-        attr_accessor :
+        attr_accessor :text

-
-
-
-        # @return [Fixnum]
-        attr_accessor :progress_percent
+        def initialize(**args)
+          update!(**args)
+        end

-        #
-
-
-
+        # Update properties of this object
+        def update!(**args)
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
+        end
+      end

-
-
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1TextFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        #         0----1
+        #         |    |
+        #         3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        #         2----3
+        #         |    |
+        #         1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
+
+        # Timestamp of this frame.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
-        attr_accessor :
+        attr_accessor :time_offset

         def initialize(**args)
           update!(**args)
@@ -334,36 +543,105 @@ module Google

         # Update properties of this object
         def update!(**args)
-          @
-          @
-          @start_time = args[:start_time] if args.key?(:start_time)
-          @update_time = args[:update_time] if args.key?(:update_time)
+          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
         end
       end

-      #
-      class
+      # Video segment level annotation results for text detection.
+      class GoogleCloudVideointelligenceV1TextSegment
         include Google::Apis::Core::Hashable

-        #
-        #
-        #
-        #
-
-
-        #
-        #
-        #
-
-
-        #
-        #
-        #
-
-
-
-
-
+        # Confidence for the track of detected text. It is calculated as the highest
+        # over all frames where OCR detected text appears.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Information related to the frames where OCR detected text appears.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1TextFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Annotation progress for a single video.
+      class GoogleCloudVideointelligenceV1VideoAnnotationProgress
+        include Google::Apis::Core::Hashable
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Approximate percentage processed thus far. Guaranteed to be
+        # 100 when fully processed.
+        # Corresponds to the JSON property `progressPercent`
+        # @return [Fixnum]
+        attr_accessor :progress_percent
+
+        # Time when the request was received.
+        # Corresponds to the JSON property `startTime`
+        # @return [String]
+        attr_accessor :start_time
+
+        # Time of the most recent update.
+        # Corresponds to the JSON property `updateTime`
+        # @return [String]
+        attr_accessor :update_time
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+          @start_time = args[:start_time] if args.key?(:start_time)
+          @update_time = args[:update_time] if args.key?(:update_time)
+        end
+      end
+
+      # Annotation results for a single video.
+      class GoogleCloudVideointelligenceV1VideoAnnotationResults
+        include Google::Apis::Core::Hashable
+
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
+        # - Simple to use and understand for most users
+        # - Flexible enough to meet unexpected needs
+        # # Overview
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
+        # google.rpc.Code, but it may accept additional error codes if needed. The
+        # error message should be a developer-facing English message that helps
+        # developers *understand* and *resolve* the error. If a localized user-facing
+        # error message is needed, put the localized message in the error details or
+        # localize it in the client. The optional error details may contain arbitrary
+        # information about the error. There is a predefined set of error detail types
+        # in the package `google.rpc` that can be used for common error conditions.
+        # # Language mapping
+        # The `Status` message is the logical representation of the error model, but it
+        # is not necessarily the actual wire format. When the `Status` message is
+        # exposed in different client libraries and different wire protocols, it can be
         # mapped differently. For example, it will likely be mapped to some exceptions
         # in Java, but more likely mapped to some error codes in C.
         # # Other uses
@@ -407,6 +685,11 @@ module Google
         # @return [String]
         attr_accessor :input_uri

+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
@@ -429,6 +712,13 @@ module Google
         # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1SpeechTranscription>]
         attr_accessor :speech_transcriptions
 
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1TextAnnotation>]
+        attr_accessor :text_annotations
+
         def initialize(**args)
           update!(**args)
         end
@@ -439,10 +729,12 @@ module Google
           @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
           @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
           @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
           @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
           @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
           @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
           @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
         end
       end
 
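The hunks above add `objectAnnotations` and `textAnnotations` to `GoogleCloudVideointelligenceV1VideoAnnotationResults` and wire them into `update!`. As a rough illustration of how such results could be traversed, here is a hedged sketch that walks the same field shapes using plain hashes in place of the generated classes; the field names come from the diff, while the sample values are invented:

```ruby
# Plain-hash stand-in for a deserialized VideoAnnotationResults payload.
# Property names mirror the JSON properties added in this diff; the data is made up.
results = {
  'objectAnnotations' => [
    { 'entity'     => { 'description' => 'dog' },
      'confidence' => 0.92,
      'frames'     => [{ 'timeOffset' => '0.5s' }, { 'timeOffset' => '1.0s' }] }
  ],
  'textAnnotations' => [
    { 'text' => 'STOP', 'segments' => [] }
  ]
}

# Summarize each tracked object: entity description, track confidence, frame count.
object_summaries = results.fetch('objectAnnotations', []).map do |ann|
  "#{ann.dig('entity', 'description')}: #{ann['confidence']} over #{ann['frames'].length} frames"
end

# Collect the detected OCR snippets.
detected_text = results.fetch('textAnnotations', []).map { |ann| ann['text'] }
```

With the generated classes, the same walk would use the snake_case accessors (`object_annotations`, `text_annotations`) instead of hash keys.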
@@ -613,10 +905,1460 @@ module Google
       class GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation
         include Google::Apis::Core::Hashable
 
-        # All video frames where explicit content was detected.
-        # Corresponds to the JSON property `frames`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentFrame>]
-        attr_accessor :frames
+        # All video frames where explicit content was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentFrame>]
+        attr_accessor :frames
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @frames = args[:frames] if args.key?(:frames)
+        end
+      end
+
+      # Video frame level annotation results for explicit content.
+      class GoogleCloudVideointelligenceV1beta2ExplicitContentFrame
+        include Google::Apis::Core::Hashable
+
+        # Likelihood of the pornography content..
+        # Corresponds to the JSON property `pornographyLikelihood`
+        # @return [String]
+        attr_accessor :pornography_likelihood
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Label annotation.
+      class GoogleCloudVideointelligenceV1beta2LabelAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Common categories for the detected entity.
+        # E.g. when the label is `Terrier` the category is likely `dog`. And in some
+        # cases there might be more than one categories e.g. `Terrier` could also be
+        # a `pet`.
+        # Corresponds to the JSON property `categoryEntities`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2Entity>]
+        attr_accessor :category_entities
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2Entity]
+        attr_accessor :entity
+
+        # All video frames where a label was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelFrame>]
+        attr_accessor :frames
+
+        # All video segments where a label was detected.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelSegment>]
+        attr_accessor :segments
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @category_entities = args[:category_entities] if args.key?(:category_entities)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segments = args[:segments] if args.key?(:segments)
+        end
+      end
+
+      # Video frame level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1beta2LabelFrame
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1beta2LabelSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
+
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2NormalizedVertex>]
+        attr_accessor :vertices
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @vertices = args[:vertices] if args.key?(:vertices)
+        end
+      end
+
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1beta2NormalizedVertex
+        include Google::Apis::Core::Hashable
+
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
+
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1beta2ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Alternative hypotheses (a.k.a. n-best list).
+      class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
+        include Google::Apis::Core::Hashable
+
+        # The confidence estimate between 0.0 and 1.0. A higher number
+        # indicates an estimated greater likelihood that the recognized words are
+        # correct. This field is typically provided only for the top hypothesis, and
+        # only for `is_final=true` results. Clients should not rely on the
+        # `confidence` field as it is not guaranteed to be accurate or consistent.
+        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Transcript text representing the words that the user spoke.
+        # Corresponds to the JSON property `transcript`
+        # @return [String]
+        attr_accessor :transcript
+
+        # A list of word-specific information for each recognized word.
+        # Corresponds to the JSON property `words`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2WordInfo>]
+        attr_accessor :words
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @transcript = args[:transcript] if args.key?(:transcript)
+          @words = args[:words] if args.key?(:words)
+        end
+      end
+
+      # A speech recognition result corresponding to a portion of the audio.
+      class GoogleCloudVideointelligenceV1beta2SpeechTranscription
+        include Google::Apis::Core::Hashable
+
+        # May contain one or more recognition hypotheses (up to the maximum specified
+        # in `max_alternatives`). These alternatives are ordered in terms of
+        # accuracy, with the top (first) alternative being the most probable, as
+        # ranked by the recognizer.
+        # Corresponds to the JSON property `alternatives`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
+        attr_accessor :alternatives
+
+        # Output only. The
+        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+        # language in this result. This language code was detected to have the most
+        # likelihood of being spoken in the audio.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @alternatives = args[:alternatives] if args.key?(:alternatives)
+          @language_code = args[:language_code] if args.key?(:language_code)
+        end
+      end
+
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1beta2TextAnnotation
+        include Google::Apis::Core::Hashable
+
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2TextSegment>]
+        attr_accessor :segments
+
+        # The detected text.
+        # Corresponds to the JSON property `text`
+        # @return [String]
+        attr_accessor :text
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
+        end
+      end
+
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1beta2TextFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        #         0----1
+        #         |    |
+        #         3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        #         2----3
+        #         |    |
+        #         1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
+
+        # Timestamp of this frame.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for text detection.
+      class GoogleCloudVideointelligenceV1beta2TextSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence for the track of detected text. It is calculated as the highest
+        # over all frames where OCR detected text appears.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Information related to the frames where OCR detected text appears.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2TextFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Annotation progress for a single video.
+      class GoogleCloudVideointelligenceV1beta2VideoAnnotationProgress
+        include Google::Apis::Core::Hashable
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Approximate percentage processed thus far. Guaranteed to be
+        # 100 when fully processed.
+        # Corresponds to the JSON property `progressPercent`
+        # @return [Fixnum]
+        attr_accessor :progress_percent
+
+        # Time when the request was received.
+        # Corresponds to the JSON property `startTime`
+        # @return [String]
+        attr_accessor :start_time
+
+        # Time of the most recent update.
+        # Corresponds to the JSON property `updateTime`
+        # @return [String]
+        attr_accessor :update_time
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+          @start_time = args[:start_time] if args.key?(:start_time)
+          @update_time = args[:update_time] if args.key?(:update_time)
+        end
+      end
+
# Annotation results for a single video.
|
|
1425
|
+
class GoogleCloudVideointelligenceV1beta2VideoAnnotationResults
|
|
1426
|
+
include Google::Apis::Core::Hashable
|
|
1427
|
+
|
|
1428
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
1429
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
1430
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
1431
|
+
# - Simple to use and understand for most users
|
|
1432
|
+
# - Flexible enough to meet unexpected needs
|
|
1433
|
+
# # Overview
|
|
1434
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
1435
|
+
# message, and error details. The error code should be an enum value of
|
|
1436
|
+
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
1437
|
+
# error message should be a developer-facing English message that helps
|
|
1438
|
+
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
1439
|
+
# error message is needed, put the localized message in the error details or
|
|
1440
|
+
# localize it in the client. The optional error details may contain arbitrary
|
|
1441
|
+
# information about the error. There is a predefined set of error detail types
|
|
1442
|
+
# in the package `google.rpc` that can be used for common error conditions.
|
|
1443
|
+
# # Language mapping
|
|
1444
|
+
# The `Status` message is the logical representation of the error model, but it
|
|
1445
|
+
# is not necessarily the actual wire format. When the `Status` message is
|
|
1446
|
+
# exposed in different client libraries and different wire protocols, it can be
|
|
1447
|
+
# mapped differently. For example, it will likely be mapped to some exceptions
|
|
1448
|
+
# in Java, but more likely mapped to some error codes in C.
|
|
1449
|
+
# # Other uses
|
|
1450
|
+
# The error model and the `Status` message can be used in a variety of
|
|
1451
|
+
# environments, either with or without APIs, to provide a
|
|
1452
|
+
# consistent developer experience across different environments.
|
|
1453
|
+
# Example uses of this error model include:
|
|
1454
|
+
# - Partial errors. If a service needs to return partial errors to the client,
|
|
1455
|
+
# it may embed the `Status` in the normal response to indicate the partial
|
|
1456
|
+
# errors.
|
|
1457
|
+
# - Workflow errors. A typical workflow has multiple steps. Each step may
|
|
1458
|
+
# have a `Status` message for error reporting.
|
|
1459
|
+
# - Batch operations. If a client uses batch request and batch response, the
|
|
1460
|
+
# `Status` message should be used directly inside batch response, one for
|
|
1461
|
+
# each error sub-response.
|
|
1462
|
+
# - Asynchronous operations. If an API call embeds asynchronous operation
|
|
1463
|
+
# results in its response, the status of those operations should be
|
|
1464
|
+
# represented directly using the `Status` message.
|
|
1465
|
+
# - Logging. If some API errors are stored in logs, the message `Status` could
|
|
1466
|
+
# be used directly after any stripping needed for security/privacy reasons.
|
|
1467
|
+
# Corresponds to the JSON property `error`
|
|
1468
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus]
|
|
1469
|
+
attr_accessor :error
|
|
1470
|
+
|
|
1471
|
+
# Explicit content annotation (based on per-frame visual signals only).
|
|
1472
|
+
# If no explicit content has been detected in a frame, no annotations are
|
|
1473
|
+
# present for that frame.
|
|
1474
|
+
# Corresponds to the JSON property `explicitAnnotation`
|
|
1475
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation]
|
|
1476
|
+
attr_accessor :explicit_annotation
|
|
1477
|
+
|
|
1478
|
+
# Label annotations on frame level.
|
|
1479
|
+
# There is exactly one element for each unique label.
|
|
1480
|
+
# Corresponds to the JSON property `frameLabelAnnotations`
|
|
1481
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
|
|
1482
|
+
attr_accessor :frame_label_annotations
|
|
+
+        # Video file location in
+        # [Google Cloud Storage](https://cloud.google.com/storage/).
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
+        # Label annotations on video level or user specified segment level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `segmentLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        attr_accessor :segment_label_annotations
+
+        # Shot annotations. Each shot is represented as a video segment.
+        # Corresponds to the JSON property `shotAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2VideoSegment>]
+        attr_accessor :shot_annotations
+
+        # Label annotations on shot level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `shotLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation>]
+        attr_accessor :shot_label_annotations
+
+        # Speech transcription.
+        # Corresponds to the JSON property `speechTranscriptions`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2SpeechTranscription>]
+        attr_accessor :speech_transcriptions
+
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2TextAnnotation>]
+        attr_accessor :text_annotations
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @error = args[:error] if args.key?(:error)
+          @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+          @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+          @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+          @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+          @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+          @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+        end
+      end
+
+      # Video segment.
+      class GoogleCloudVideointelligenceV1beta2VideoSegment
+        include Google::Apis::Core::Hashable
+
+        # Time-offset, relative to the beginning of the video,
+        # corresponding to the end of the segment (inclusive).
+        # Corresponds to the JSON property `endTimeOffset`
+        # @return [String]
+        attr_accessor :end_time_offset
+
+        # Time-offset, relative to the beginning of the video,
+        # corresponding to the start of the segment (inclusive).
+        # Corresponds to the JSON property `startTimeOffset`
+        # @return [String]
+        attr_accessor :start_time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @end_time_offset = args[:end_time_offset] if args.key?(:end_time_offset)
+          @start_time_offset = args[:start_time_offset] if args.key?(:start_time_offset)
+        end
+      end
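The segment offsets above are plain `String`s. Assuming they use the protobuf `Duration` JSON encoding (decimal seconds with an `s` suffix, e.g. `"12.500s"`), a segment's length can be computed with a small helper. The method names here are illustrative, not part of the gem:

```ruby
# Parse a Duration-encoded offset ("<seconds>s") into Float seconds.
def parse_offset(offset)
  raise ArgumentError, "bad duration: #{offset.inspect}" unless offset =~ /\A-?\d+(\.\d+)?s\z/
  offset.chomp("s").to_f
end

# Length of a segment in seconds, from its two string offsets.
def segment_length(start_time_offset, end_time_offset)
  parse_offset(end_time_offset) - parse_offset(start_time_offset)
end
```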
+
+      # Word-specific information for recognized words. Word information is only
+      # included in the response when certain request parameters are set, such
+      # as `enable_word_time_offsets`.
+      class GoogleCloudVideointelligenceV1beta2WordInfo
+        include Google::Apis::Core::Hashable
+
+        # Output only. The confidence estimate between 0.0 and 1.0. A higher number
+        # indicates an estimated greater likelihood that the recognized words are
+        # correct. This field is set only for the top alternative.
+        # This field is not guaranteed to be accurate and users should not rely on it
+        # to be always provided.
+        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Time offset relative to the beginning of the audio, and
+        # corresponding to the end of the spoken word. This field is only set if
+        # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+        # experimental feature and the accuracy of the time offset can vary.
+        # Corresponds to the JSON property `endTime`
+        # @return [String]
+        attr_accessor :end_time
+
+        # Output only. A distinct integer value is assigned for every speaker within
+        # the audio. This field specifies which one of those speakers was detected to
+        # have spoken this word. Value ranges from 1 up to diarization_speaker_count,
+        # and is only set if speaker diarization is enabled.
+        # Corresponds to the JSON property `speakerTag`
+        # @return [Fixnum]
+        attr_accessor :speaker_tag
+
+        # Time offset relative to the beginning of the audio, and
+        # corresponding to the start of the spoken word. This field is only set if
+        # `enable_word_time_offsets=true` and only in the top hypothesis. This is an
+        # experimental feature and the accuracy of the time offset can vary.
+        # Corresponds to the JSON property `startTime`
+        # @return [String]
+        attr_accessor :start_time
+
+        # The word corresponding to this set of information.
+        # Corresponds to the JSON property `word`
+        # @return [String]
+        attr_accessor :word
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @end_time = args[:end_time] if args.key?(:end_time)
+          @speaker_tag = args[:speaker_tag] if args.key?(:speaker_tag)
+          @start_time = args[:start_time] if args.key?(:start_time)
+          @word = args[:word] if args.key?(:word)
+        end
+      end
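Because `speaker_tag` identifies which diarized speaker spoke each word, a per-speaker transcript can be reassembled by grouping word records. The hash-based records below are a stand-in for the real `WordInfo` objects; `0` stands in for "tag not set" (diarization disabled):

```ruby
# Join words into one transcript string per speaker_tag, in spoken order.
def by_speaker(words)
  words.group_by { |w| w[:speaker_tag] || 0 }
       .transform_values { |ws| ws.map { |w| w[:word] }.join(" ") }
end
```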
+
+      # Video annotation progress. Included in the `metadata`
+      # field of the `Operation` returned by the `GetOperation`
+      # call of the `google::longrunning::Operations` service.
+      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoProgress
+        include Google::Apis::Core::Hashable
+
+        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationProgress`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress>]
+        attr_accessor :annotation_progress
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_progress = args[:annotation_progress] if args.key?(:annotation_progress)
+        end
+      end
+
+      # Video annotation request.
+      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoRequest
+        include Google::Apis::Core::Hashable
+
+        # Requested video annotation features.
+        # Corresponds to the JSON property `features`
+        # @return [Array<String>]
+        attr_accessor :features
+
+        # The video data bytes.
+        # If unset, the input video(s) should be specified via `input_uri`.
+        # If set, `input_uri` should be unset.
+        # Corresponds to the JSON property `inputContent`
+        # NOTE: Values are automatically base64 encoded/decoded in the client library.
+        # @return [String]
+        attr_accessor :input_content
+
+        # Input video location. Currently, only
+        # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
+        # supported, which must be specified in the following format:
+        # `gs://bucket-id/object-id` (other URI formats return
+        # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+        # [Request URIs](/storage/docs/reference-uris).
+        # A video URI may include wildcards in `object-id`, and thus identify
+        # multiple videos. Supported wildcards: '*' to match 0 or more characters;
+        # '?' to match 1 character. If unset, the input video should be embedded
+        # in the request as `input_content`. If set, `input_content` should be unset.
+        # Corresponds to the JSON property `inputUri`
+        # @return [String]
+        attr_accessor :input_uri
+
+        # Optional cloud region where annotation should take place. Supported cloud
+        # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
+        # is specified, a region will be determined based on video file location.
+        # Corresponds to the JSON property `locationId`
+        # @return [String]
+        attr_accessor :location_id
+
+        # Optional location where the output (in JSON format) should be stored.
+        # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
+        # URIs are supported, which must be specified in the following format:
+        # `gs://bucket-id/object-id` (other URI formats return
+        # google.rpc.Code.INVALID_ARGUMENT). For more information, see
+        # [Request URIs](/storage/docs/reference-uris).
+        # Corresponds to the JSON property `outputUri`
+        # @return [String]
+        attr_accessor :output_uri
+
+        # Video context and/or feature-specific parameters.
+        # Corresponds to the JSON property `videoContext`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoContext]
+        attr_accessor :video_context
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @features = args[:features] if args.key?(:features)
+          @input_content = args[:input_content] if args.key?(:input_content)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @location_id = args[:location_id] if args.key?(:location_id)
+          @output_uri = args[:output_uri] if args.key?(:output_uri)
+          @video_context = args[:video_context] if args.key?(:video_context)
+        end
+      end
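The `input_uri` / `input_content` rules described in `AnnotateVideoRequest` (mutually exclusive, and `input_uri` must be a `gs://bucket-id/object-id` URI with wildcards allowed only in the object part) can be sketched as a client-side check. The regex and helper are illustrative assumptions, not gem API:

```ruby
# gs://<bucket>/<object>; bucket names cannot contain '/', '*' or '?'.
GCS_URI = %r{\Ags://[^/*?]+/.+\z}

# True when exactly one input is given and any URI is well-formed.
def valid_input?(input_uri: nil, input_content: nil)
  return false if input_uri && input_content   # mutually exclusive
  return !input_content.nil? if input_uri.nil? # bytes-only request
  !!(input_uri =~ GCS_URI)                     # URI request
end
```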
+
+      # Video annotation response. Included in the `response`
+      # field of the `Operation` returned by the `GetOperation`
+      # call of the `google::longrunning::Operations` service.
+      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoResponse
+        include Google::Apis::Core::Hashable
+
+        # Annotation results for all videos specified in `AnnotateVideoRequest`.
+        # Corresponds to the JSON property `annotationResults`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults>]
+        attr_accessor :annotation_results
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @annotation_results = args[:annotation_results] if args.key?(:annotation_results)
+        end
+      end
+
+      # Detected entity from video analysis.
+      class GoogleCloudVideointelligenceV1p1beta1Entity
+        include Google::Apis::Core::Hashable
+
+        # Textual description, e.g. `Fixed-gear bicycle`.
+        # Corresponds to the JSON property `description`
+        # @return [String]
+        attr_accessor :description
+
+        # Opaque entity ID. Some IDs may be available in
+        # [Google Knowledge Graph Search
+        # API](https://developers.google.com/knowledge-graph/).
+        # Corresponds to the JSON property `entityId`
+        # @return [String]
+        attr_accessor :entity_id
+
+        # Language code for `description` in BCP-47 format.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @description = args[:description] if args.key?(:description)
+          @entity_id = args[:entity_id] if args.key?(:entity_id)
+          @language_code = args[:language_code] if args.key?(:language_code)
+        end
+      end
+
+      # Explicit content annotation (based on per-frame visual signals only).
+      # If no explicit content has been detected in a frame, no annotations are
+      # present for that frame.
+      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
+        include Google::Apis::Core::Hashable
+
+        # All video frames where explicit content was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame>]
+        attr_accessor :frames
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @frames = args[:frames] if args.key?(:frames)
+        end
+      end
+
+      # Config for EXPLICIT_CONTENT_DETECTION.
+      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentDetectionConfig
+        include Google::Apis::Core::Hashable
+
+        # Model to use for explicit content detection.
+        # Supported values: "builtin/stable" (the default if unset) and
+        # "builtin/latest".
+        # Corresponds to the JSON property `model`
+        # @return [String]
+        attr_accessor :model
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @model = args[:model] if args.key?(:model)
+        end
+      end
+
+      # Video frame level annotation results for explicit content.
+      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentFrame
+        include Google::Apis::Core::Hashable
+
+        # Likelihood of pornography content.
+        # Corresponds to the JSON property `pornographyLikelihood`
+        # @return [String]
+        attr_accessor :pornography_likelihood
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @pornography_likelihood = args[:pornography_likelihood] if args.key?(:pornography_likelihood)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
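Filtering `ExplicitContentFrame`s by `pornography_likelihood` amounts to an ordinal comparison on the enum string. The value names below follow the usual Cloud likelihood scale and are an assumption here, since the enum is not listed in this file:

```ruby
# Assumed ordering of the likelihood enum, weakest to strongest.
LIKELIHOOD = %w[
  LIKELIHOOD_UNSPECIFIED VERY_UNLIKELY UNLIKELY POSSIBLE LIKELY VERY_LIKELY
].freeze

# True when `likelihood` is at least as strong as `threshold`;
# unknown strings never pass the filter.
def at_least?(likelihood, threshold)
  i = LIKELIHOOD.index(likelihood)
  t = LIKELIHOOD.index(threshold)
  !i.nil? && !t.nil? && i >= t
end
```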
+
+      # Label annotation.
+      class GoogleCloudVideointelligenceV1p1beta1LabelAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Common categories for the detected entity.
+        # E.g. when the label is `Terrier`, the category is likely `dog`. In some
+        # cases there might be more than one category, e.g. `Terrier` could also be
+        # a `pet`.
+        # Corresponds to the JSON property `categoryEntities`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1Entity>]
+        attr_accessor :category_entities
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1Entity]
+        attr_accessor :entity
+
+        # All video frames where a label was detected.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelFrame>]
+        attr_accessor :frames
+
+        # All video segments where a label was detected.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelSegment>]
+        attr_accessor :segments
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @category_entities = args[:category_entities] if args.key?(:category_entities)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segments = args[:segments] if args.key?(:segments)
+        end
+      end
+
+      # Config for LABEL_DETECTION.
+      class GoogleCloudVideointelligenceV1p1beta1LabelDetectionConfig
+        include Google::Apis::Core::Hashable
+
+        # The confidence threshold used to filter labels from
+        # frame-level detection. If not set, it defaults to 0.4. The valid
+        # range for this threshold is [0.1, 0.9]. Any value set outside of this
+        # range will be clipped.
+        # Note: for best results please use the default threshold. We will update
+        # the default threshold every time we release a new model.
+        # Corresponds to the JSON property `frameConfidenceThreshold`
+        # @return [Float]
+        attr_accessor :frame_confidence_threshold
+
+        # What labels should be detected with LABEL_DETECTION, in addition to
+        # video-level labels or segment-level labels.
+        # If unspecified, defaults to `SHOT_MODE`.
+        # Corresponds to the JSON property `labelDetectionMode`
+        # @return [String]
+        attr_accessor :label_detection_mode
+
+        # Model to use for label detection.
+        # Supported values: "builtin/stable" (the default if unset) and
+        # "builtin/latest".
+        # Corresponds to the JSON property `model`
+        # @return [String]
+        attr_accessor :model
+
+        # Whether the video has been shot from a stationary (i.e. non-moving) camera.
+        # When set to true, might improve detection accuracy for moving objects.
+        # Should be used with `SHOT_AND_FRAME_MODE` enabled.
+        # Corresponds to the JSON property `stationaryCamera`
+        # @return [Boolean]
+        attr_accessor :stationary_camera
+        alias_method :stationary_camera?, :stationary_camera
+
+        # The confidence threshold used to filter labels from
+        # video-level and shot-level detections. If not set, it defaults to
+        # 0.3. The valid range for this threshold is [0.1, 0.9]. Any value set
+        # outside of this range will be clipped.
+        # Note: for best results please use the default threshold. We will update
+        # the default threshold every time we release a new model.
+        # Corresponds to the JSON property `videoConfidenceThreshold`
+        # @return [Float]
+        attr_accessor :video_confidence_threshold
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @frame_confidence_threshold = args[:frame_confidence_threshold] if args.key?(:frame_confidence_threshold)
+          @label_detection_mode = args[:label_detection_mode] if args.key?(:label_detection_mode)
+          @model = args[:model] if args.key?(:model)
+          @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
+          @video_confidence_threshold = args[:video_confidence_threshold] if args.key?(:video_confidence_threshold)
+        end
+      end
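The clipping rule documented for both thresholds in `LabelDetectionConfig` (defaults of 0.4 frame-level and 0.3 video/shot-level, values outside [0.1, 0.9] clipped) reduces to a one-liner; this helper is an illustrative sketch, not server behavior verbatim:

```ruby
# Resolve a threshold the way the docs describe: fall back to the
# documented default when unset, then clip into [0.1, 0.9].
def effective_threshold(value, default)
  (value || default).clamp(0.1, 0.9)
end
```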
+
+      # Video frame level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p1beta1LabelFrame
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Time-offset, relative to the beginning of the video, corresponding to the
+        # video frame for this location.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p1beta1LabelSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
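Because the box coordinates are normalized to [0, 1] relative to the original frame, drawing or cropping requires scaling by the frame size. A minimal sketch, using a hash as a stand-in for the real object:

```ruby
# Map a normalized box to integer pixel coordinates for a WxH frame.
def to_pixels(box, width:, height:)
  {
    left:   (box[:left]   * width).round,
    top:    (box[:top]    * height).round,
    right:  (box[:right]  * width).round,
    bottom: (box[:bottom] * height).round
  }
end
```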
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      #         0----1
+      #         |    |
+      #         3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      #         2----3
+      #         |    |
+      #         1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trigonometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
+
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedVertex>]
+        attr_accessor :vertices
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @vertices = args[:vertices] if args.key?(:vertices)
+        end
+      end
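The rotation example in the comment can be checked directly: rotating a vertex 180 degrees about a pivot `(cx, cy)` maps `(x, y)` to `(2*cx - x, 2*cy - y)`, and pivoting on the top-left vertex is exactly what can push coordinates below 0. A sketch with hash vertices standing in for the real objects:

```ruby
# Rotate every vertex 180 degrees around the pivot (cx, cy).
def rotate180(vertices, cx:, cy:)
  vertices.map { |v| { x: 2 * cx - v[:x], y: 2 * cy - v[:y] } }
end
```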
+
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1p1beta1NormalizedVertex
+        include Google::Apis::Core::Hashable
+
+        # X coordinate.
+        # Corresponds to the JSON property `x`
+        # @return [Float]
+        attr_accessor :x
+
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
+        end
+      end
+
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation
+        include Google::Apis::Core::Hashable
+
+        # Object category's labeling confidence of this track.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: there may be one or more ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: there can be only one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame>]
+        attr_accessor :frames
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
+        attr_accessor :segment
+
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique, identifiable integer track_id so that
+        # customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
+        end
+      end
+
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1p1beta1ObjectTrackingFrame
+        include Google::Apis::Core::Hashable
+
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
+
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Config for SHOT_CHANGE_DETECTION.
+      class GoogleCloudVideointelligenceV1p1beta1ShotChangeDetectionConfig
+        include Google::Apis::Core::Hashable
+
+        # Model to use for shot change detection.
+        # Supported values: "builtin/stable" (the default if unset) and
+        # "builtin/latest".
+        # Corresponds to the JSON property `model`
+        # @return [String]
+        attr_accessor :model
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @model = args[:model] if args.key?(:model)
+        end
+      end
2189
|
+
|
|
2190
|
+
# Provides "hints" to the speech recognizer to favor specific words and phrases
|
|
2191
|
+
# in the results.
|
|
2192
|
+
class GoogleCloudVideointelligenceV1p1beta1SpeechContext
|
|
2193
|
+
include Google::Apis::Core::Hashable
|
|
2194
|
+
|
|
2195
|
+
# *Optional* A list of strings containing words and phrases "hints" so that
|
|
2196
|
+
# the speech recognition is more likely to recognize them. This can be used
|
|
2197
|
+
# to improve the accuracy for specific words and phrases, for example, if
|
|
2198
|
+
# specific commands are typically spoken by the user. This can also be used
|
|
2199
|
+
# to add additional words to the vocabulary of the recognizer. See
|
|
2200
|
+
# [usage limits](https://cloud.google.com/speech/limits#content).
|
|
2201
|
+
# Corresponds to the JSON property `phrases`
|
|
2202
|
+
# @return [Array<String>]
|
|
2203
|
+
attr_accessor :phrases
|
|
2204
|
+
|
|
2205
|
+
def initialize(**args)
|
|
2206
|
+
update!(**args)
|
|
2207
|
+
end
|
|
2208
|
+
|
|
2209
|
+
# Update properties of this object
|
|
2210
|
+
def update!(**args)
|
|
2211
|
+
@phrases = args[:phrases] if args.key?(:phrases)
|
|
2212
|
+
end
|
|
2213
|
+
end
|
|
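The added `SpeechContext` class above carries a single `phrases` field, serialized to the JSON property `phrases`. A minimal sketch of that wire shape using plain Ruby hashes rather than the generated class (the phrase values are invented for illustration):

```ruby
require 'json'

# Plain-hash sketch of the SpeechContext JSON shape; the generated class
# serializes its `phrases` accessor to the `phrases` property shown above.
speech_context = { 'phrases' => ['shot change', 'bounding box'] }

json = JSON.generate(speech_context)
puts json
```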
+
+      # Alternative hypotheses (a.k.a. n-best list).
+      class GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative
+        include Google::Apis::Core::Hashable
+
+        # The confidence estimate between 0.0 and 1.0. A higher number
+        # indicates an estimated greater likelihood that the recognized words are
+        # correct. This field is typically provided only for the top hypothesis, and
+        # only for `is_final=true` results. Clients should not rely on the
+        # `confidence` field as it is not guaranteed to be accurate or consistent.
+        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Transcript text representing the words that the user spoke.
+        # Corresponds to the JSON property `transcript`
+        # @return [String]
+        attr_accessor :transcript
+
+        # A list of word-specific information for each recognized word.
+        # Corresponds to the JSON property `words`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1WordInfo>]
+        attr_accessor :words
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @transcript = args[:transcript] if args.key?(:transcript)
+          @words = args[:words] if args.key?(:words)
+        end
+      end
+
+      # A speech recognition result corresponding to a portion of the audio.
+      class GoogleCloudVideointelligenceV1p1beta1SpeechTranscription
+        include Google::Apis::Core::Hashable
+
+        # May contain one or more recognition hypotheses (up to the maximum specified
+        # in `max_alternatives`). These alternatives are ordered in terms of
+        # accuracy, with the top (first) alternative being the most probable, as
+        # ranked by the recognizer.
+        # Corresponds to the JSON property `alternatives`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechRecognitionAlternative>]
+        attr_accessor :alternatives
+
+        # Output only. The
+        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
+        # language in this result. This language code was detected to have the most
+        # likelihood of being spoken in the audio.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @alternatives = args[:alternatives] if args.key?(:alternatives)
+          @language_code = args[:language_code] if args.key?(:language_code)
+        end
+      end
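Per the comments in the class above, `alternatives` is ordered by accuracy with the first entry being the most probable. A sketch of consuming a transcription-shaped response, using plain hashes with invented sample values rather than the generated classes:

```ruby
# Plain-hash stand-in for a SpeechTranscription response; values are invented.
transcription = {
  'languageCode' => 'en-US',
  'alternatives' => [
    { 'transcript' => 'hello world', 'confidence' => 0.92 },
    { 'transcript' => 'hollow world', 'confidence' => 0.41 }
  ]
}

# Alternatives are ordered in terms of accuracy, so the first entry is the
# most probable hypothesis as ranked by the recognizer.
top = transcription['alternatives'].first
puts top['transcript']  # → hello world
```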
+
+      # Config for SPEECH_TRANSCRIPTION.
+      class GoogleCloudVideointelligenceV1p1beta1SpeechTranscriptionConfig
+        include Google::Apis::Core::Hashable
+
+        # *Optional* For file formats, such as MXF or MKV, supporting multiple audio
+        # tracks, specify up to two tracks. Default: track 0.
+        # Corresponds to the JSON property `audioTracks`
+        # @return [Array<Fixnum>]
+        attr_accessor :audio_tracks
+
+        # *Optional*
+        # If set, specifies the estimated number of speakers in the conversation.
+        # If not set, defaults to '2'.
+        # Ignored unless enable_speaker_diarization is set to true.
+        # Corresponds to the JSON property `diarizationSpeakerCount`
+        # @return [Fixnum]
+        attr_accessor :diarization_speaker_count
+
+        # *Optional* If 'true', adds punctuation to recognition result hypotheses.
+        # This feature is only available in select languages. Setting this for
+        # requests in other languages has no effect at all. The default 'false' value
+        # does not add punctuation to result hypotheses. NOTE: "This is currently
+        # offered as an experimental service, complimentary to all users. In the
+        # future this may be exclusively available as a premium feature."
+        # Corresponds to the JSON property `enableAutomaticPunctuation`
+        # @return [Boolean]
+        attr_accessor :enable_automatic_punctuation
+        alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+
+        # *Optional* If 'true', enables speaker detection for each recognized word in
+        # the top alternative of the recognition result using a speaker_tag provided
+        # in the WordInfo.
+        # Note: When this is true, we send all the words from the beginning of the
+        # audio for the top alternative in every consecutive responses.
+        # This is done in order to improve our speaker tags as our models learn to
+        # identify the speakers in the conversation over time.
+        # Corresponds to the JSON property `enableSpeakerDiarization`
+        # @return [Boolean]
+        attr_accessor :enable_speaker_diarization
+        alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+
+        # *Optional* If `true`, the top result includes a list of words and the
+        # confidence for those words. If `false`, no word-level confidence
+        # information is returned. The default is `false`.
+        # Corresponds to the JSON property `enableWordConfidence`
+        # @return [Boolean]
+        attr_accessor :enable_word_confidence
+        alias_method :enable_word_confidence?, :enable_word_confidence
+
+        # *Optional* If set to `true`, the server will attempt to filter out
+        # profanities, replacing all but the initial character in each filtered word
+        # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
+        # won't be filtered out.
+        # Corresponds to the JSON property `filterProfanity`
+        # @return [Boolean]
+        attr_accessor :filter_profanity
+        alias_method :filter_profanity?, :filter_profanity
+
+        # *Required* The language of the supplied audio as a
+        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
+        # Example: "en-US".
+        # See [Language Support](https://cloud.google.com/speech/docs/languages)
+        # for a list of the currently supported language codes.
+        # Corresponds to the JSON property `languageCode`
+        # @return [String]
+        attr_accessor :language_code
+
+        # *Optional* Maximum number of recognition hypotheses to be returned.
+        # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
+        # within each `SpeechTranscription`. The server may return fewer than
+        # `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will
+        # return a maximum of one. If omitted, will return a maximum of one.
+        # Corresponds to the JSON property `maxAlternatives`
+        # @return [Fixnum]
+        attr_accessor :max_alternatives
+
+        # *Optional* A means to provide context to assist the speech recognition.
+        # Corresponds to the JSON property `speechContexts`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechContext>]
+        attr_accessor :speech_contexts
 
         def initialize(**args)
           update!(**args)
@@ -624,24 +2366,33 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
+          @audio_tracks = args[:audio_tracks] if args.key?(:audio_tracks)
+          @diarization_speaker_count = args[:diarization_speaker_count] if args.key?(:diarization_speaker_count)
+          @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
+          @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
+          @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
+          @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
+          @language_code = args[:language_code] if args.key?(:language_code)
+          @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
+          @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
         end
       end
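The `SpeechTranscriptionConfig` added above serializes to a JSON object keyed by the `Corresponds to the JSON property` names. A plain-hash sketch of a request-side config (the field values are examples; the only documented requirement is `languageCode`):

```ruby
# Plain-hash sketch of SpeechTranscriptionConfig; keys are the JSON property
# names from the comments above. languageCode is the only *Required* field.
speech_config = {
  'languageCode'             => 'en-US', # *Required*, a BCP-47 tag
  'maxAlternatives'          => 1,       # 0 or 1 both yield at most one hypothesis
  'enableSpeakerDiarization' => true,
  'diarizationSpeakerCount'  => 2,       # the documented default when unset
  'audioTracks'              => [0]      # default track; up to two may be given
}

raise ArgumentError, 'languageCode is required' unless speech_config['languageCode']
```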
 
-      #
-
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1p1beta1TextAnnotation
         include Google::Apis::Core::Hashable
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextSegment>]
+        attr_accessor :segments
 
-        #
-        #
-        # Corresponds to the JSON property `timeOffset`
+        # The detected text.
+        # Corresponds to the JSON property `text`
         # @return [String]
-        attr_accessor :
+        attr_accessor :text
 
         def initialize(**args)
           update!(**args)
@@ -649,37 +2400,22 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
         end
       end
 
-      #
-      class
+      # Config for TEXT_DETECTION.
+      class GoogleCloudVideointelligenceV1p1beta1TextDetectionConfig
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        #
-        # Corresponds to the JSON property `
-        # @return [Array<
-        attr_accessor :
-
-        # Detected entity from video analysis.
-        # Corresponds to the JSON property `entity`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2Entity]
-        attr_accessor :entity
-
-        # All video frames where a label was detected.
-        # Corresponds to the JSON property `frames`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelFrame>]
-        attr_accessor :frames
-
-        # All video segments where a label was detected.
-        # Corresponds to the JSON property `segments`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2LabelSegment>]
-        attr_accessor :segments
+        # Language hint can be specified if the language to be detected is known a
+        # priori. It can increase the accuracy of the detection. Language hint must
+        # be language code in BCP-47 format.
+        # Automatic language detection is performed if no hint is provided.
+        # Corresponds to the JSON property `languageHints`
+        # @return [Array<String>]
+        attr_accessor :language_hints
 
         def initialize(**args)
           update!(**args)
@@ -687,24 +2423,36 @@ module Google
 
         # Update properties of this object
        def update!(**args)
-          @
-          @entity = args[:entity] if args.key?(:entity)
-          @frames = args[:frames] if args.key?(:frames)
-          @segments = args[:segments] if args.key?(:segments)
+          @language_hints = args[:language_hints] if args.key?(:language_hints)
         end
       end
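`TextDetectionConfig` carries only `languageHints`; omitting it requests automatic language detection, per the comment above. A plain-hash sketch (the hint value is illustrative):

```ruby
# BCP-47 language hints for OCR text detection; leave the key out entirely to
# let the service auto-detect the language, as documented above.
text_config = { 'languageHints' => ['en-US'] }
hints = text_config.fetch('languageHints', [])
```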
 
-      # Video frame level annotation results for
-
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1p1beta1TextFrame
         include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        # 0----1
+        # |    |
+        # 3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        # 2----3
+        # |    |
+        # 1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
 
-        #
-        # video frame for this location.
+        # Timestamp of this frame.
         # Corresponds to the JSON property `timeOffset`
         # @return [String]
         attr_accessor :time_offset
@@ -715,23 +2463,29 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
+          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
           @time_offset = args[:time_offset] if args.key?(:time_offset)
         end
       end
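The vertex-order note in `TextFrame` above (the order stays (0, 1, 2, 3) after rotation, and coordinates may leave [0, 1]) can be checked with a small sketch: a 180-degree rotation about the top-left vertex maps each point p to 2c - p. The box coordinates below are invented for illustration:

```ruby
# Clockwise vertices starting at the top-left corner, in normalized coordinates.
box = [[0.2, 0.2], [0.6, 0.2], [0.6, 0.4], [0.2, 0.4]]

# 180-degree rotation about the first (top-left) vertex c: p -> 2c - p.
cx, cy = box.first
rotated = box.map { |x, y| [2 * cx - x, 2 * cy - y] }

# The vertex order is preserved; only the positions move, and some
# coordinates now fall outside the [0, 1] range, as the comment warns.
```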
 
-      # Video segment level annotation results for
-      class
+      # Video segment level annotation results for text detection.
+      class GoogleCloudVideointelligenceV1p1beta1TextSegment
         include Google::Apis::Core::Hashable
 
-        # Confidence
+        # Confidence for the track of detected text. It is calculated as the highest
+        # over all frames where OCR detected text appears.
         # Corresponds to the JSON property `confidence`
         # @return [Float]
         attr_accessor :confidence
 
+        # Information related to the frames where OCR detected text appears.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextFrame>]
+        attr_accessor :frames
+
         # Video segment.
         # Corresponds to the JSON property `segment`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment]
         attr_accessor :segment
 
         def initialize(**args)
@@ -741,79 +2495,13 @@ module Google
         # Update properties of this object
         def update!(**args)
           @confidence = args[:confidence] if args.key?(:confidence)
+          @frames = args[:frames] if args.key?(:frames)
           @segment = args[:segment] if args.key?(:segment)
         end
       end
 
-      # Alternative hypotheses (a.k.a. n-best list).
-      class GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative
-        include Google::Apis::Core::Hashable
-
-        # The confidence estimate between 0.0 and 1.0. A higher number
-        # indicates an estimated greater likelihood that the recognized words are
-        # correct. This field is typically provided only for the top hypothesis, and
-        # only for `is_final=true` results. Clients should not rely on the
-        # `confidence` field as it is not guaranteed to be accurate or consistent.
-        # The default of 0.0 is a sentinel value indicating `confidence` was not set.
-        # Corresponds to the JSON property `confidence`
-        # @return [Float]
-        attr_accessor :confidence
-
-        # Transcript text representing the words that the user spoke.
-        # Corresponds to the JSON property `transcript`
-        # @return [String]
-        attr_accessor :transcript
-
-        # A list of word-specific information for each recognized word.
-        # Corresponds to the JSON property `words`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2WordInfo>]
-        attr_accessor :words
-
-        def initialize(**args)
-          update!(**args)
-        end
-
-        # Update properties of this object
-        def update!(**args)
-          @confidence = args[:confidence] if args.key?(:confidence)
-          @transcript = args[:transcript] if args.key?(:transcript)
-          @words = args[:words] if args.key?(:words)
-        end
-      end
-
-      # A speech recognition result corresponding to a portion of the audio.
-      class GoogleCloudVideointelligenceV1beta2SpeechTranscription
-        include Google::Apis::Core::Hashable
-
-        # May contain one or more recognition hypotheses (up to the maximum specified
-        # in `max_alternatives`). These alternatives are ordered in terms of
-        # accuracy, with the top (first) alternative being the most probable, as
-        # ranked by the recognizer.
-        # Corresponds to the JSON property `alternatives`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1beta2SpeechRecognitionAlternative>]
-        attr_accessor :alternatives
-
-        # Output only. The
-        # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the
-        # language in this result. This language code was detected to have the most
-        # likelihood of being spoken in the audio.
-        # Corresponds to the JSON property `languageCode`
-        # @return [String]
-        attr_accessor :language_code
-
-        def initialize(**args)
-          update!(**args)
-        end
-
-        # Update properties of this object
-        def update!(**args)
-          @alternatives = args[:alternatives] if args.key?(:alternatives)
-          @language_code = args[:language_code] if args.key?(:language_code)
-        end
-      end
-
       # Annotation progress for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationProgress
         include Google::Apis::Core::Hashable
 
         # Video file location in
@@ -852,17 +2540,17 @@ module Google
       end
 
       # Annotation results for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
         include Google::Apis::Core::Hashable
 
-        # The `Status` type defines a logical error model that is suitable for
-        # programming environments, including REST APIs and RPC APIs. It is
-        # [gRPC](https://github.com/grpc). The error model is designed to be:
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
         # - Simple to use and understand for most users
         # - Flexible enough to meet unexpected needs
         # # Overview
-        # The `Status` message contains three pieces of data: error code, error
-        # and error details. The error code should be an enum value of
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
         # google.rpc.Code, but it may accept additional error codes if needed. The
         # error message should be a developer-facing English message that helps
         # developers *understand* and *resolve* the error. If a localized user-facing
@@ -902,13 +2590,13 @@ module Google
         # If no explicit content has been detected in a frame, no annotations are
         # present for that frame.
         # Corresponds to the JSON property `explicitAnnotation`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation]
         attr_accessor :explicit_annotation
 
         # Label annotations on frame level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `frameLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :frame_label_annotations
 
         # Video file location in
@@ -917,27 +2605,94 @@ module Google
         # @return [String]
         attr_accessor :input_uri
 
+        # Annotations for list of objects detected and tracked in video.
+        # Corresponds to the JSON property `objectAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>]
+        attr_accessor :object_annotations
+
         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
         attr_accessor :segment_label_annotations
 
         # Shot annotations. Each shot is represented as a video segment.
         # Corresponds to the JSON property `shotAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
         attr_accessor :shot_annotations
 
-        # Label annotations on shot level.
-        # There is exactly one element for each unique label.
-        # Corresponds to the JSON property `shotLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
-        attr_accessor :shot_label_annotations
+        # Label annotations on shot level.
+        # There is exactly one element for each unique label.
+        # Corresponds to the JSON property `shotLabelAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>]
+        attr_accessor :shot_label_annotations
+
+        # Speech transcription.
+        # Corresponds to the JSON property `speechTranscriptions`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>]
+        attr_accessor :speech_transcriptions
+
+        # OCR text detection and tracking.
+        # Annotations for list of detected text snippets. Each will have list of
+        # frame information associated with it.
+        # Corresponds to the JSON property `textAnnotations`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>]
+        attr_accessor :text_annotations
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @error = args[:error] if args.key?(:error)
+          @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
+          @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
+          @input_uri = args[:input_uri] if args.key?(:input_uri)
+          @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
+          @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
+          @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
+          @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
+          @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
+        end
+      end
+
+      # Video context and/or feature-specific parameters.
+      class GoogleCloudVideointelligenceV1p1beta1VideoContext
+        include Google::Apis::Core::Hashable
+
+        # Config for EXPLICIT_CONTENT_DETECTION.
+        # Corresponds to the JSON property `explicitContentDetectionConfig`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentDetectionConfig]
+        attr_accessor :explicit_content_detection_config
+
+        # Config for LABEL_DETECTION.
+        # Corresponds to the JSON property `labelDetectionConfig`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelDetectionConfig]
+        attr_accessor :label_detection_config
+
+        # Video segments to annotate. The segments may overlap and are not required
+        # to be contiguous or span the whole video. If unspecified, each video is
+        # treated as a single segment.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
+        attr_accessor :segments
+
+        # Config for SHOT_CHANGE_DETECTION.
+        # Corresponds to the JSON property `shotChangeDetectionConfig`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ShotChangeDetectionConfig]
+        attr_accessor :shot_change_detection_config
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Config for SPEECH_TRANSCRIPTION.
+        # Corresponds to the JSON property `speechTranscriptionConfig`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscriptionConfig]
+        attr_accessor :speech_transcription_config
+
+        # Config for TEXT_DETECTION.
+        # Corresponds to the JSON property `textDetectionConfig`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextDetectionConfig]
+        attr_accessor :text_detection_config
 
         def initialize(**args)
           update!(**args)
@@ -945,19 +2700,17 @@ module Google
 
         # Update properties of this object
         def update!(**args)
-          @
-          @
-          @
-          @
-          @
-          @
-          @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
-          @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
+          @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
+          @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
+          @segments = args[:segments] if args.key?(:segments)
+          @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
+          @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+          @text_detection_config = args[:text_detection_config] if args.key?(:text_detection_config)
         end
       end
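`VideoContext` is the envelope that carries the per-feature configs defined above. A plain-hash sketch of a request-side `videoContext` combining them (keys are the documented JSON property names; values are illustrative, and `segments` is omitted so the whole video is treated as a single segment):

```ruby
require 'json'

# Plain-hash sketch of the VideoContext JSON shape; each key matches a
# `Corresponds to the JSON property` name documented above.
video_context = {
  'speechTranscriptionConfig' => { 'languageCode' => 'en-US' },
  'textDetectionConfig'       => { 'languageHints' => ['en-US'] },
  'shotChangeDetectionConfig' => { 'model' => 'builtin/stable' }
}

payload = JSON.generate(video_context)
```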
958
2711
|
|
|
959
2712
|
# Video segment.
|
|
960
|
-
class
|
|
2713
|
+
class GoogleCloudVideointelligenceV1p1beta1VideoSegment
|
|
961
2714
|
include Google::Apis::Core::Hashable
|
|
962
2715
|
|
|
963
2716
|
# Time-offset, relative to the beginning of the video,
|
|
@@ -986,7 +2739,7 @@ module Google
      # Word-specific information for recognized words. Word information is only
      # included in the response when certain request parameters are set, such
      # as `enable_word_time_offsets`.
-      class
+      class GoogleCloudVideointelligenceV1p1beta1WordInfo
        include Google::Apis::Core::Hashable
 
        # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1045,12 +2798,12 @@ module Google
      # Video annotation progress. Included in the `metadata`
      # field of the `Operation` returned by the `GetOperation`
      # call of the `google::longrunning::Operations` service.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoProgress
        include Google::Apis::Core::Hashable
 
        # Progress metadata for all videos specified in `AnnotateVideoRequest`.
        # Corresponds to the JSON property `annotationProgress`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress>]
        attr_accessor :annotation_progress
 
        def initialize(**args)
@@ -1063,83 +2816,15 @@ module Google
        end
      end
 
-      # Video annotation request.
-      class GoogleCloudVideointelligenceV1p1beta1AnnotateVideoRequest
-        include Google::Apis::Core::Hashable
-
-        # Requested video annotation features.
-        # Corresponds to the JSON property `features`
-        # @return [Array<String>]
-        attr_accessor :features
-
-        # The video data bytes.
-        # If unset, the input video(s) should be specified via `input_uri`.
-        # If set, `input_uri` should be unset.
-        # Corresponds to the JSON property `inputContent`
-        # NOTE: Values are automatically base64 encoded/decoded in the client library.
-        # @return [String]
-        attr_accessor :input_content
-
-        # Input video location. Currently, only
-        # [Google Cloud Storage](https://cloud.google.com/storage/) URIs are
-        # supported, which must be specified in the following format:
-        # `gs://bucket-id/object-id` (other URI formats return
-        # google.rpc.Code.INVALID_ARGUMENT). For more information, see
-        # [Request URIs](/storage/docs/reference-uris).
-        # A video URI may include wildcards in `object-id`, and thus identify
-        # multiple videos. Supported wildcards: '*' to match 0 or more characters;
-        # '?' to match 1 character. If unset, the input video should be embedded
-        # in the request as `input_content`. If set, `input_content` should be unset.
-        # Corresponds to the JSON property `inputUri`
-        # @return [String]
-        attr_accessor :input_uri
-
-        # Optional cloud region where annotation should take place. Supported cloud
-        # regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region
-        # is specified, a region will be determined based on video file location.
-        # Corresponds to the JSON property `locationId`
-        # @return [String]
-        attr_accessor :location_id
-
-        # Optional location where the output (in JSON format) should be stored.
-        # Currently, only [Google Cloud Storage](https://cloud.google.com/storage/)
-        # URIs are supported, which must be specified in the following format:
-        # `gs://bucket-id/object-id` (other URI formats return
-        # google.rpc.Code.INVALID_ARGUMENT). For more information, see
-        # [Request URIs](/storage/docs/reference-uris).
-        # Corresponds to the JSON property `outputUri`
-        # @return [String]
-        attr_accessor :output_uri
-
-        # Video context and/or feature-specific parameters.
-        # Corresponds to the JSON property `videoContext`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoContext]
-        attr_accessor :video_context
-
-        def initialize(**args)
-          update!(**args)
-        end
-
-        # Update properties of this object
-        def update!(**args)
-          @features = args[:features] if args.key?(:features)
-          @input_content = args[:input_content] if args.key?(:input_content)
-          @input_uri = args[:input_uri] if args.key?(:input_uri)
-          @location_id = args[:location_id] if args.key?(:location_id)
-          @output_uri = args[:output_uri] if args.key?(:output_uri)
-          @video_context = args[:video_context] if args.key?(:video_context)
-        end
-      end
-
      # Video annotation response. Included in the `response`
      # field of the `Operation` returned by the `GetOperation`
      # call of the `google::longrunning::Operations` service.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1AnnotateVideoResponse
        include Google::Apis::Core::Hashable
 
        # Annotation results for all videos specified in `AnnotateVideoRequest`.
        # Corresponds to the JSON property `annotationResults`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults>]
        attr_accessor :annotation_results
 
        def initialize(**args)
@@ -1153,7 +2838,7 @@ module Google
      end
 
      # Detected entity from video analysis.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1Entity
        include Google::Apis::Core::Hashable
 
        # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1206,29 +2891,8 @@ module Google
        end
      end
 
-      # Config for EXPLICIT_CONTENT_DETECTION.
-      class GoogleCloudVideointelligenceV1p1beta1ExplicitContentDetectionConfig
-        include Google::Apis::Core::Hashable
-
-        # Model to use for explicit content detection.
-        # Supported values: "builtin/stable" (the default if unset) and
-        # "builtin/latest".
-        # Corresponds to the JSON property `model`
-        # @return [String]
-        attr_accessor :model
-
-        def initialize(**args)
-          update!(**args)
-        end
-
-        # Update properties of this object
-        def update!(**args)
-          @model = args[:model] if args.key?(:model)
-        end
-      end
-
      # Video frame level annotation results for explicit content.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1ExplicitContentFrame
        include Google::Apis::Core::Hashable
 
        # Likelihood of the pornography content..
@@ -1254,7 +2918,7 @@ module Google
      end
 
      # Label annotation.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1LabelAnnotation
        include Google::Apis::Core::Hashable
 
        # Common categories for the detected entity.
@@ -1262,22 +2926,22 @@ module Google
        # cases there might be more than one categories e.g. `Terrier` could also be
        # a `pet`.
        # Corresponds to the JSON property `categoryEntities`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1Entity>]
        attr_accessor :category_entities
 
        # Detected entity from video analysis.
        # Corresponds to the JSON property `entity`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
        attr_accessor :entity
 
        # All video frames where a label was detected.
        # Corresponds to the JSON property `frames`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelFrame>]
        attr_accessor :frames
 
        # All video segments where a label was detected.
        # Corresponds to the JSON property `segments`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelSegment>]
        attr_accessor :segments
 
        def initialize(**args)
@@ -1293,31 +2957,118 @@ module Google
        end
      end
 
-      #
-      class
+      # Video frame level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelFrame
        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
-        # @return [String]
-        attr_accessor :label_detection_mode
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
 
-        #
-        #
-        #
-        # Corresponds to the JSON property `model`
        # @return [String]
-        attr_accessor :
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for label detection.
+      class GoogleCloudVideointelligenceV1p2beta1LabelSegment
+        include Google::Apis::Core::Hashable
+
+        # Confidence that the label is accurate. Range: [0, 1].
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
+
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+        attr_accessor :segment
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @segment = args[:segment] if args.key?(:segment)
+        end
+      end
+
+      # Normalized bounding box.
+      # The normalized vertex coordinates are relative to the original image.
+      # Range: [0, 1].
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox
+        include Google::Apis::Core::Hashable
+
+        # Bottom Y coordinate.
+        # Corresponds to the JSON property `bottom`
+        # @return [Float]
+        attr_accessor :bottom
+
+        # Left X coordinate.
+        # Corresponds to the JSON property `left`
+        # @return [Float]
+        attr_accessor :left
+
+        # Right X coordinate.
+        # Corresponds to the JSON property `right`
+        # @return [Float]
+        attr_accessor :right
+
+        # Top Y coordinate.
+        # Corresponds to the JSON property `top`
+        # @return [Float]
+        attr_accessor :top
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @bottom = args[:bottom] if args.key?(:bottom)
+          @left = args[:left] if args.key?(:left)
+          @right = args[:right] if args.key?(:right)
+          @top = args[:top] if args.key?(:top)
+        end
+      end
+
+      # Normalized bounding polygon for text (that might not be aligned with axis).
+      # Contains list of the corner points in clockwise order starting from
+      # top-left corner. For example, for a rectangular bounding box:
+      # When the text is horizontal it might look like:
+      # 0----1
+      # |    |
+      # 3----2
+      # When it's clockwise rotated 180 degrees around the top-left corner it
+      # becomes:
+      # 2----3
+      # |    |
+      # 1----0
+      # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+      # than 0, or greater than 1 due to trignometric calculations for location of
+      # the box.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly
+        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
-        # @return [Boolean]
-        attr_accessor :stationary_camera
-        alias_method :stationary_camera?, :stationary_camera
+        # Normalized vertices of the bounding polygon.
+        # Corresponds to the JSON property `vertices`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedVertex>]
+        attr_accessor :vertices
 
        def initialize(**args)
          update!(**args)
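The classes added in this hunk all follow the library's generated `Hashable` pattern: `initialize(**args)` delegates to `update!`, which copies only the keys actually present in `args`. A minimal standalone sketch of that pattern for the new `NormalizedBoundingBox` (class and property names mirror the diff; this stub does not require the gem or `Google::Apis::Core::Hashable`):

```ruby
# Standalone sketch of the generated pattern; the real class lives in
# google-api-client and also mixes in Google::Apis::Core::Hashable.
class NormalizedBoundingBox
  attr_accessor :bottom, :left, :right, :top

  def initialize(**args)
    update!(**args)
  end

  # Copy only the properties that were explicitly passed.
  def update!(**args)
    @bottom = args[:bottom] if args.key?(:bottom)
    @left = args[:left] if args.key?(:left)
    @right = args[:right] if args.key?(:right)
    @top = args[:top] if args.key?(:top)
  end
end

box = NormalizedBoundingBox.new(left: 0.1, top: 0.2, right: 0.9, bottom: 0.8)
box.update!(right: 0.95) # update! leaves the other coordinates untouched
```

Because `update!` guards each assignment with `args.key?`, a partial update never resets the properties that were omitted.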
@@ -1325,26 +3076,25 @@ module Google
 
        # Update properties of this object
        def update!(**args)
-          @
-          @model = args[:model] if args.key?(:model)
-          @stationary_camera = args[:stationary_camera] if args.key?(:stationary_camera)
+          @vertices = args[:vertices] if args.key?(:vertices)
        end
      end
 
-      #
-
+      # A vertex represents a 2D point in the image.
+      # NOTE: the normalized vertex coordinates are relative to the original image
+      # and range from 0 to 1.
+      class GoogleCloudVideointelligenceV1p2beta1NormalizedVertex
        include Google::Apis::Core::Hashable
 
-        #
-        # Corresponds to the JSON property `
+        # X coordinate.
+        # Corresponds to the JSON property `x`
        # @return [Float]
-        attr_accessor :
+        attr_accessor :x
 
-        #
-        #
-        #
-
-        attr_accessor :time_offset
+        # Y coordinate.
+        # Corresponds to the JSON property `y`
+        # @return [Float]
+        attr_accessor :y
 
        def initialize(**args)
          update!(**args)
@@ -1352,25 +3102,48 @@ module Google
 
        # Update properties of this object
        def update!(**args)
-          @
-          @
+          @x = args[:x] if args.key?(:x)
+          @y = args[:y] if args.key?(:y)
        end
      end
 
-      #
-      class
+      # Annotations corresponding to one tracked object.
+      class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation
        include Google::Apis::Core::Hashable
 
-        #
+        # Object category's labeling confidence of this track.
        # Corresponds to the JSON property `confidence`
        # @return [Float]
        attr_accessor :confidence
 
+        # Detected entity from video analysis.
+        # Corresponds to the JSON property `entity`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1Entity]
+        attr_accessor :entity
+
+        # Information corresponding to all frames where this object track appears.
+        # Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame
+        # messages in frames.
+        # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame>]
+        attr_accessor :frames
+
        # Video segment.
        # Corresponds to the JSON property `segment`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
        attr_accessor :segment
 
+        # Streaming mode ONLY.
+        # In streaming mode, we do not know the end time of a tracked object
+        # before it is completed. Hence, there is no VideoSegment info returned.
+        # Instead, we provide a unique identifiable integer track_id so that
+        # the customers can correlate the results of the ongoing
+        # ObjectTrackAnnotation of the same track_id over time.
+        # Corresponds to the JSON property `trackId`
+        # @return [Fixnum]
+        attr_accessor :track_id
+
        def initialize(**args)
          update!(**args)
        end
@@ -1378,45 +3151,29 @@ module Google
        # Update properties of this object
        def update!(**args)
          @confidence = args[:confidence] if args.key?(:confidence)
+          @entity = args[:entity] if args.key?(:entity)
+          @frames = args[:frames] if args.key?(:frames)
          @segment = args[:segment] if args.key?(:segment)
+          @track_id = args[:track_id] if args.key?(:track_id)
        end
      end
 
-      #
-
+      # Video frame level annotations for object detection and tracking. This field
+      # stores per frame location, time offset, and confidence.
+      class GoogleCloudVideointelligenceV1p2beta1ObjectTrackingFrame
        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
-
-        def initialize(**args)
-          update!(**args)
-        end
-
-        # Update properties of this object
-        def update!(**args)
-          @model = args[:model] if args.key?(:model)
-        end
-      end
-
-      # Provides "hints" to the speech recognizer to favor specific words and phrases
-      # in the results.
-      class GoogleCloudVideointelligenceV1p1beta1SpeechContext
-        include Google::Apis::Core::Hashable
+        # Normalized bounding box.
+        # The normalized vertex coordinates are relative to the original image.
+        # Range: [0, 1].
+        # Corresponds to the JSON property `normalizedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingBox]
+        attr_accessor :normalized_bounding_box
 
-        #
-        #
-        #
-
-        # to add additional words to the vocabulary of the recognizer. See
-        # [usage limits](https://cloud.google.com/speech/limits#content).
-        # Corresponds to the JSON property `phrases`
-        # @return [Array<String>]
-        attr_accessor :phrases
+        # The timestamp of the frame in microseconds.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
 
        def initialize(**args)
          update!(**args)
@@ -1424,12 +3181,13 @@ module Google
 
        # Update properties of this object
        def update!(**args)
-          @
+          @normalized_bounding_box = args[:normalized_bounding_box] if args.key?(:normalized_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
        end
      end
 
      # Alternative hypotheses (a.k.a. n-best list).
-      class
+      class GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative
        include Google::Apis::Core::Hashable
 
        # The confidence estimate between 0.0 and 1.0. A higher number
@@ -1449,7 +3207,7 @@ module Google
 
        # A list of word-specific information for each recognized word.
        # Corresponds to the JSON property `words`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1WordInfo>]
        attr_accessor :words
 
        def initialize(**args)
@@ -1465,7 +3223,7 @@ module Google
      end
 
      # A speech recognition result corresponding to a portion of the audio.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1SpeechTranscription
        include Google::Apis::Core::Hashable
 
        # May contain one or more recognition hypotheses (up to the maximum specified
@@ -1473,7 +3231,7 @@ module Google
        # accuracy, with the top (first) alternative being the most probable, as
        # ranked by the recognizer.
        # Corresponds to the JSON property `alternatives`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1SpeechRecognitionAlternative>]
        attr_accessor :alternatives
 
        # Output only. The
@@ -1495,86 +3253,93 @@ module Google
        end
      end
 
-      #
-
+      # Annotations related to one detected OCR text snippet. This will contain the
+      # corresponding text, confidence value, and frame level information for each
+      # detection.
+      class GoogleCloudVideointelligenceV1p2beta1TextAnnotation
        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-
-        attr_accessor :audio_tracks
+        # All video segments where OCR detected text appears.
+        # Corresponds to the JSON property `segments`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1TextSegment>]
+        attr_accessor :segments
 
-        #
-        #
-        #
-
-        # Corresponds to the JSON property `diarizationSpeakerCount`
-        # @return [Fixnum]
-        attr_accessor :diarization_speaker_count
+        # The detected text.
+        # Corresponds to the JSON property `text`
+        # @return [String]
+        attr_accessor :text
 
-
-
-
-        # does not add punctuation to result hypotheses. NOTE: "This is currently
-        # offered as an experimental service, complimentary to all users. In the
-        # future this may be exclusively available as a premium feature."
-        # Corresponds to the JSON property `enableAutomaticPunctuation`
-        # @return [Boolean]
-        attr_accessor :enable_automatic_punctuation
-        alias_method :enable_automatic_punctuation?, :enable_automatic_punctuation
+        def initialize(**args)
+          update!(**args)
+        end
 
-        #
-
-
-
-
-
-        # identify the speakers in the conversation over time.
-        # Corresponds to the JSON property `enableSpeakerDiarization`
-        # @return [Boolean]
-        attr_accessor :enable_speaker_diarization
-        alias_method :enable_speaker_diarization?, :enable_speaker_diarization
+        # Update properties of this object
+        def update!(**args)
+          @segments = args[:segments] if args.key?(:segments)
+          @text = args[:text] if args.key?(:text)
+        end
+      end
 
-
-
-
-
-
-        attr_accessor :enable_word_confidence
-        alias_method :enable_word_confidence?, :enable_word_confidence
+      # Video frame level annotation results for text annotation (OCR).
+      # Contains information regarding timestamp and bounding box locations for the
+      # frames containing detected OCR text snippets.
+      class GoogleCloudVideointelligenceV1p2beta1TextFrame
+        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        #
-        #
-        #
-
-
+        # Normalized bounding polygon for text (that might not be aligned with axis).
+        # Contains list of the corner points in clockwise order starting from
+        # top-left corner. For example, for a rectangular bounding box:
+        # When the text is horizontal it might look like:
+        # 0----1
+        # |    |
+        # 3----2
+        # When it's clockwise rotated 180 degrees around the top-left corner it
+        # becomes:
+        # 2----3
+        # |    |
+        # 1----0
+        # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
+        # than 0, or greater than 1 due to trignometric calculations for location of
+        # the box.
+        # Corresponds to the JSON property `rotatedBoundingBox`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1NormalizedBoundingPoly]
+        attr_accessor :rotated_bounding_box
+
+        # Timestamp of this frame.
+        # Corresponds to the JSON property `timeOffset`
+        # @return [String]
+        attr_accessor :time_offset
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @rotated_bounding_box = args[:rotated_bounding_box] if args.key?(:rotated_bounding_box)
+          @time_offset = args[:time_offset] if args.key?(:time_offset)
+        end
+      end
+
+      # Video segment level annotation results for text detection.
+      class GoogleCloudVideointelligenceV1p2beta1TextSegment
+        include Google::Apis::Core::Hashable
 
-        #
-        #
-        #
-        #
-
-        # Corresponds to the JSON property `languageCode`
-        # @return [String]
-        attr_accessor :language_code
+        # Confidence for the track of detected text. It is calculated as the highest
+        # over all frames where OCR detected text appears.
+        # Corresponds to the JSON property `confidence`
+        # @return [Float]
+        attr_accessor :confidence
 
-        #
-        #
-        #
-
-        # return a maximum of one. If omitted, will return a maximum of one.
-        # Corresponds to the JSON property `maxAlternatives`
-        # @return [Fixnum]
-        attr_accessor :max_alternatives
+        # Information related to the frames where OCR detected text appears.
+        # Corresponds to the JSON property `frames`
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1TextFrame>]
+        attr_accessor :frames
 
-        #
-        # Corresponds to the JSON property `
-        # @return [
-        attr_accessor :
+        # Video segment.
+        # Corresponds to the JSON property `segment`
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment]
+        attr_accessor :segment
 
        def initialize(**args)
          update!(**args)
@@ -1582,20 +3347,14 @@ module Google
 
        # Update properties of this object
        def update!(**args)
-          @
-          @
-          @
-          @enable_speaker_diarization = args[:enable_speaker_diarization] if args.key?(:enable_speaker_diarization)
-          @enable_word_confidence = args[:enable_word_confidence] if args.key?(:enable_word_confidence)
-          @filter_profanity = args[:filter_profanity] if args.key?(:filter_profanity)
-          @language_code = args[:language_code] if args.key?(:language_code)
-          @max_alternatives = args[:max_alternatives] if args.key?(:max_alternatives)
-          @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
+          @confidence = args[:confidence] if args.key?(:confidence)
+          @frames = args[:frames] if args.key?(:frames)
+          @segment = args[:segment] if args.key?(:segment)
        end
      end
 
      # Annotation progress for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationProgress
        include Google::Apis::Core::Hashable
 
        # Video file location in
@@ -1634,17 +3393,17 @@ module Google
|
|
|
1634
3393
|
end
|
|
1635
3394
|
|
|
1636
3395
|
# Annotation results for a single video.
|
|
1637
|
-
class
|
|
3396
|
+
class GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
|
|
1638
3397
|
include Google::Apis::Core::Hashable
|
|
1639
3398
|
|
|
1640
|
-
# The `Status` type defines a logical error model that is suitable for
|
|
1641
|
-
# programming environments, including REST APIs and RPC APIs. It is
|
|
1642
|
-
# [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
3399
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
3400
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
3401
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
1643
3402
|
# - Simple to use and understand for most users
|
|
1644
3403
|
# - Flexible enough to meet unexpected needs
|
|
1645
3404
|
# # Overview
|
|
1646
|
-
# The `Status` message contains three pieces of data: error code, error
|
|
1647
|
-
# and error details. The error code should be an enum value of
|
|
3405
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
3406
|
+
# message, and error details. The error code should be an enum value of
|
|
1648
3407
|
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
1649
3408
|
# error message should be a developer-facing English message that helps
|
|
1650
3409
|
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
@@ -1684,13 +3443,13 @@ module Google
|
|
|
1684
3443
|
# If no explicit content has been detected in a frame, no annotations are
|
|
1685
3444
|
# present for that frame.
|
|
1686
3445
|
# Corresponds to the JSON property `explicitAnnotation`
|
|
1687
|
-
# @return [Google::Apis::VideointelligenceV1p1beta1::
|
|
3446
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation]
|
|
1688
3447
|
attr_accessor :explicit_annotation
|
|
1689
3448
|
|
|
1690
3449
|
# Label annotations on frame level.
|
|
1691
3450
|
# There is exactly one element for each unique label.
|
|
1692
3451
|
# Corresponds to the JSON property `frameLabelAnnotations`
|
|
1693
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
3452
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
|
|
1694
3453
|
attr_accessor :frame_label_annotations
|
|
1695
3454
|
|
|
1696
3455
|
# Video file location in
|
|
@@ -1699,28 +3458,40 @@ module Google
|
|
|
1699
3458
|
# @return [String]
|
|
1700
3459
|
attr_accessor :input_uri
|
|
1701
3460
|
|
|
3461
|
+
# Annotations for list of objects detected and tracked in video.
|
|
3462
|
+
# Corresponds to the JSON property `objectAnnotations`
|
|
3463
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>]
|
|
3464
|
+
attr_accessor :object_annotations
|
|
3465
|
+
|
|
1702
3466
|
# Label annotations on video level or user specified segment level.
|
|
1703
3467
|
# There is exactly one element for each unique label.
|
|
1704
3468
|
# Corresponds to the JSON property `segmentLabelAnnotations`
|
|
1705
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
3469
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
|
|
1706
3470
|
attr_accessor :segment_label_annotations
|
|
1707
3471
|
|
|
1708
3472
|
# Shot annotations. Each shot is represented as a video segment.
|
|
1709
3473
|
# Corresponds to the JSON property `shotAnnotations`
|
|
1710
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
3474
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>]
|
|
1711
3475
|
attr_accessor :shot_annotations
|
|
1712
3476
|
|
|
1713
3477
|
# Label annotations on shot level.
|
|
1714
3478
|
# There is exactly one element for each unique label.
|
|
1715
3479
|
# Corresponds to the JSON property `shotLabelAnnotations`
|
|
1716
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
3480
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>]
|
|
1717
3481
|
attr_accessor :shot_label_annotations
|
|
1718
3482
|
|
|
1719
3483
|
# Speech transcription.
|
|
1720
3484
|
# Corresponds to the JSON property `speechTranscriptions`
|
|
1721
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
3485
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>]
|
|
1722
3486
|
attr_accessor :speech_transcriptions
|
|
1723
3487
|
|
|
3488
|
+
# OCR text detection and tracking.
|
|
3489
|
+
# Annotations for list of detected text snippets. Each will have list of
|
|
3490
|
+
# frame information associated with it.
|
|
3491
|
+
# Corresponds to the JSON property `textAnnotations`
|
|
3492
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>]
|
|
3493
|
+
attr_accessor :text_annotations
|
|
3494
|
+
|
|
1724
3495
|
def initialize(**args)
|
|
1725
3496
|
update!(**args)
|
|
1726
3497
|
end
|
|
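Most of the churn above is mechanical: every generated model class pairs `attr_accessor` declarations with a keyword-args `update!` that assigns only the keys the caller actually passed. A minimal, self-contained sketch of that idiom in plain Ruby (no gem required; `TextSegment` here is a hypothetical stand-in, not the generated class):

```ruby
# Sketch of the generated-model idiom: attr_accessor plus an update!
# that assigns only keys present in args, so absent fields stay nil.
class TextSegment
  attr_accessor :confidence, :frames, :segment

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object
  def update!(**args)
    @confidence = args[:confidence] if args.key?(:confidence)
    @frames = args[:frames] if args.key?(:frames)
    @segment = args[:segment] if args.key?(:segment)
  end
end

seg = TextSegment.new(confidence: 0.91)
seg.update!(frames: [])        # confidence is left untouched
puts seg.confidence            # 0.91
puts seg.frames.inspect        # []
puts seg.segment.inspect       # nil
```

Because `update!` checks `args.key?`, a later partial update never clobbers fields that were not mentioned, which is why the generated `update!` bodies above are lists of guarded assignments rather than plain writes.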
@@ -1731,60 +3502,17 @@ module Google
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
+ @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
-
- end
-
- # Video context and/or feature-specific parameters.
- class GoogleCloudVideointelligenceV1p1beta1VideoContext
- include Google::Apis::Core::Hashable
-
- # Config for EXPLICIT_CONTENT_DETECTION.
- # Corresponds to the JSON property `explicitContentDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentDetectionConfig]
- attr_accessor :explicit_content_detection_config
-
- # Config for LABEL_DETECTION.
- # Corresponds to the JSON property `labelDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelDetectionConfig]
- attr_accessor :label_detection_config
-
- # Video segments to annotate. The segments may overlap and are not required
- # to be contiguous or span the whole video. If unspecified, each video is
- # treated as a single segment.
- # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>]
- attr_accessor :segments
-
- # Config for SHOT_CHANGE_DETECTION.
- # Corresponds to the JSON property `shotChangeDetectionConfig`
- # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ShotChangeDetectionConfig]
- attr_accessor :shot_change_detection_config
-
- # Config for SPEECH_TRANSCRIPTION.
- # Corresponds to the JSON property `speechTranscriptionConfig`
- # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscriptionConfig]
- attr_accessor :speech_transcription_config
-
- def initialize(**args)
- update!(**args)
- end
-
- # Update properties of this object
- def update!(**args)
- @explicit_content_detection_config = args[:explicit_content_detection_config] if args.key?(:explicit_content_detection_config)
- @label_detection_config = args[:label_detection_config] if args.key?(:label_detection_config)
- @segments = args[:segments] if args.key?(:segments)
- @shot_change_detection_config = args[:shot_change_detection_config] if args.key?(:shot_change_detection_config)
- @speech_transcription_config = args[:speech_transcription_config] if args.key?(:speech_transcription_config)
+ @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
  end
  end

  # Video segment.
- class
+ class GoogleCloudVideointelligenceV1p2beta1VideoSegment
  include Google::Apis::Core::Hashable

  # Time-offset, relative to the beginning of the video,
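The `text_annotations` field wired in above nests annotation → segments → frames: each detected snippet carries its text, per-segment confidence, and per-frame timing. A hedged sketch of walking that shape using plain `Struct` stand-ins (the real objects are the generated classes; these names and sample values are assumptions for illustration only):

```ruby
# Hypothetical Struct stand-ins mirroring the nesting of the generated
# OCR classes: TextAnnotation -> TextSegment -> TextFrame.
OcrFrame      = Struct.new(:time_offset)
OcrSegment    = Struct.new(:confidence, :frames)
OcrAnnotation = Struct.new(:text, :segments)

# A results.text_annotations-shaped array with one detection.
text_annotations = [
  OcrAnnotation.new("STOP", [OcrSegment.new(0.92, [OcrFrame.new(1.2)])])
]

text_annotations.each do |ann|
  ann.segments.each do |seg|
    puts "#{ann.text.inspect}: confidence #{seg.confidence}, #{seg.frames.size} frame(s)"
  end
end
```

The same two-level loop applies to the real `VideoAnnotationResults#text_annotations` once the gem is installed, since the generated accessors follow the field names shown in the diff.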
@@ -1813,7 +3541,7 @@ module Google
  # Word-specific information for recognized words. Word information is only
  # included in the response when certain request parameters are set, such
  # as `enable_word_time_offsets`.
- class
+ class GoogleCloudVideointelligenceV1p2beta1WordInfo
  include Google::Apis::Core::Hashable

  # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -1872,12 +3600,12 @@ module Google
  # Video annotation progress. Included in the `metadata`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoProgress
  include Google::Apis::Core::Hashable

  # Progress metadata for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationProgress`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress>]
  attr_accessor :annotation_progress

  def initialize(**args)
@@ -1893,12 +3621,12 @@ module Google
  # Video annotation response. Included in the `response`
  # field of the `Operation` returned by the `GetOperation`
  # call of the `google::longrunning::Operations` service.
- class
+ class GoogleCloudVideointelligenceV1p3beta1AnnotateVideoResponse
  include Google::Apis::Core::Hashable

  # Annotation results for all videos specified in `AnnotateVideoRequest`.
  # Corresponds to the JSON property `annotationResults`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults>]
  attr_accessor :annotation_results

  def initialize(**args)
@@ -1912,7 +3640,7 @@ module Google
  end

  # Detected entity from video analysis.
- class
+ class GoogleCloudVideointelligenceV1p3beta1Entity
  include Google::Apis::Core::Hashable

  # Textual description, e.g. `Fixed-gear bicycle`.
@@ -1947,12 +3675,12 @@ module Google
  # Explicit content annotation (based on per-frame visual signals only).
  # If no explicit content has been detected in a frame, no annotations are
  # present for that frame.
- class
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
  include Google::Apis::Core::Hashable

  # All video frames where explicit content was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame>]
  attr_accessor :frames

  def initialize(**args)
@@ -1966,7 +3694,7 @@ module Google
  end

  # Video frame level annotation results for explicit content.
- class
+ class GoogleCloudVideointelligenceV1p3beta1ExplicitContentFrame
  include Google::Apis::Core::Hashable

  # Likelihood of the pornography content..
@@ -1992,7 +3720,7 @@ module Google
  end

  # Label annotation.
- class
+ class GoogleCloudVideointelligenceV1p3beta1LabelAnnotation
  include Google::Apis::Core::Hashable

  # Common categories for the detected entity.
@@ -2000,22 +3728,22 @@ module Google
  # cases there might be more than one categories e.g. `Terrier` could also be
  # a `pet`.
  # Corresponds to the JSON property `categoryEntities`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1Entity>]
  attr_accessor :category_entities

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity

  # All video frames where a label was detected.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelFrame>]
  attr_accessor :frames

  # All video segments where a label was detected.
  # Corresponds to the JSON property `segments`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelSegment>]
  attr_accessor :segments

  def initialize(**args)
@@ -2032,7 +3760,7 @@ module Google
  end

  # Video frame level annotation results for label detection.
- class
+ class GoogleCloudVideointelligenceV1p3beta1LabelFrame
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -2058,7 +3786,7 @@ module Google
  end

  # Video segment level annotation results for label detection.
- class
+ class GoogleCloudVideointelligenceV1p3beta1LabelSegment
  include Google::Apis::Core::Hashable

  # Confidence that the label is accurate. Range: [0, 1].
@@ -2068,7 +3796,7 @@ module Google

  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  def initialize(**args)
@@ -2085,7 +3813,7 @@ module Google
  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
- class
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox
  include Google::Apis::Core::Hashable

  # Bottom Y coordinate.
@@ -2136,12 +3864,12 @@ module Google
  # and the vertex order will still be (0, 1, 2, 3). Note that values can be less
  # than 0, or greater than 1 due to trignometric calculations for location of
  # the box.
- class
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly
  include Google::Apis::Core::Hashable

  # Normalized vertices of the bounding polygon.
  # Corresponds to the JSON property `vertices`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedVertex>]
  attr_accessor :vertices

  def initialize(**args)
@@ -2157,7 +3885,7 @@ module Google
  # A vertex represents a 2D point in the image.
  # NOTE: the normalized vertex coordinates are relative to the original image
  # and range from 0 to 1.
- class
+ class GoogleCloudVideointelligenceV1p3beta1NormalizedVertex
  include Google::Apis::Core::Hashable

  # X coordinate.
@@ -2182,7 +3910,7 @@ module Google
  end

  # Annotations corresponding to one tracked object.
- class
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation
  include Google::Apis::Core::Hashable

  # Object category's labeling confidence of this track.
@@ -2192,7 +3920,7 @@ module Google

  # Detected entity from video analysis.
  # Corresponds to the JSON property `entity`
- # @return [Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1Entity]
  attr_accessor :entity

  # Information corresponding to all frames where this object track appears.
@@ -2200,12 +3928,12 @@ module Google
  # messages in frames.
  # Streaming mode: it can only be one ObjectTrackingFrame message in frames.
  # Corresponds to the JSON property `frames`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame>]
  attr_accessor :frames

  # Video segment.
  # Corresponds to the JSON property `segment`
- # @return [Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
  attr_accessor :segment

  # Streaming mode ONLY.
@@ -2234,14 +3962,14 @@ module Google

  # Video frame level annotations for object detection and tracking. This field
  # stores per frame location, time offset, and confidence.
- class
+ class GoogleCloudVideointelligenceV1p3beta1ObjectTrackingFrame
  include Google::Apis::Core::Hashable

  # Normalized bounding box.
  # The normalized vertex coordinates are relative to the original image.
  # Range: [0, 1].
  # Corresponds to the JSON property `normalizedBoundingBox`
- # @return [Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingBox]
  attr_accessor :normalized_bounding_box

  # The timestamp of the frame in microseconds.
@@ -2261,7 +3989,7 @@ module Google
  end

  # Alternative hypotheses (a.k.a. n-best list).
- class
+ class GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative
  include Google::Apis::Core::Hashable

  # The confidence estimate between 0.0 and 1.0. A higher number
@@ -2281,7 +4009,7 @@ module Google

  # A list of word-specific information for each recognized word.
  # Corresponds to the JSON property `words`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1WordInfo>]
  attr_accessor :words

  def initialize(**args)
@@ -2297,7 +4025,7 @@ module Google
  end

  # A speech recognition result corresponding to a portion of the audio.
- class
+ class GoogleCloudVideointelligenceV1p3beta1SpeechTranscription
  include Google::Apis::Core::Hashable

  # May contain one or more recognition hypotheses (up to the maximum specified
@@ -2305,7 +4033,7 @@ module Google
  # accuracy, with the top (first) alternative being the most probable, as
  # ranked by the recognizer.
  # Corresponds to the JSON property `alternatives`
- # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+ # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1SpeechRecognitionAlternative>]
  attr_accessor :alternatives

  # Output only. The
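As the generated comments note, `alternatives` is an n-best list ranked by the recognizer, with the first entry the most probable. A small sketch of picking the top hypothesis, using `Struct` stand-ins whose field names mirror the generated accessors (the objects and sample values are assumptions, not the real classes):

```ruby
# Struct stand-ins for SpeechTranscription / SpeechRecognitionAlternative.
Alt           = Struct.new(:transcript, :confidence)
Transcription = Struct.new(:alternatives)

t = Transcription.new([
  Alt.new("hello world", 0.94),  # top (first) alternative: most probable
  Alt.new("hello word",  0.61)
])

best = t.alternatives.first
puts "#{best.transcript} (#{best.confidence})" if best  # hello world (0.94)
```

Guarding with `if best` matters on the real objects too, since a transcription may carry an empty alternatives list.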
@@ -2327,15 +4055,130 @@ module Google
|
|
|
2327
4055
|
end
|
|
2328
4056
|
end
|
|
2329
4057
|
|
|
4058
|
+
# `StreamingAnnotateVideoResponse` is the only message returned to the client
|
|
4059
|
+
# by `StreamingAnnotateVideo`. A series of zero or more
|
|
4060
|
+
# `StreamingAnnotateVideoResponse` messages are streamed back to the client.
|
|
4061
|
+
class GoogleCloudVideointelligenceV1p3beta1StreamingAnnotateVideoResponse
|
|
4062
|
+
include Google::Apis::Core::Hashable
|
|
4063
|
+
|
|
4064
|
+
# Streaming annotation results corresponding to a portion of the video
|
|
4065
|
+
# that is currently being processed.
|
|
4066
|
+
# Corresponds to the JSON property `annotationResults`
|
|
4067
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults]
|
|
4068
|
+
attr_accessor :annotation_results
|
|
4069
|
+
|
|
4070
|
+
# GCS URI that stores annotation results of one streaming session.
|
|
4071
|
+
# It is a directory that can hold multiple files in JSON format.
|
|
4072
|
+
# Example uri format:
|
|
4073
|
+
# gs://bucket_id/object_id/cloud_project_name-session_id
|
|
4074
|
+
# Corresponds to the JSON property `annotationResultsUri`
|
|
4075
|
+
# @return [String]
|
|
4076
|
+
attr_accessor :annotation_results_uri
|
|
4077
|
+
|
|
4078
|
+
# The `Status` type defines a logical error model that is suitable for
|
|
4079
|
+
# different programming environments, including REST APIs and RPC APIs. It is
|
|
4080
|
+
# used by [gRPC](https://github.com/grpc). The error model is designed to be:
|
|
4081
|
+
# - Simple to use and understand for most users
|
|
4082
|
+
# - Flexible enough to meet unexpected needs
|
|
4083
|
+
# # Overview
|
|
4084
|
+
# The `Status` message contains three pieces of data: error code, error
|
|
4085
|
+
# message, and error details. The error code should be an enum value of
|
|
4086
|
+
# google.rpc.Code, but it may accept additional error codes if needed. The
|
|
4087
|
+
# error message should be a developer-facing English message that helps
|
|
4088
|
+
# developers *understand* and *resolve* the error. If a localized user-facing
|
|
4089
|
+
# error message is needed, put the localized message in the error details or
|
|
4090
|
+
# localize it in the client. The optional error details may contain arbitrary
|
|
4091
|
+
# information about the error. There is a predefined set of error detail types
|
|
4092
|
+
# in the package `google.rpc` that can be used for common error conditions.
|
|
4093
|
+
# # Language mapping
|
|
4094
|
+
# The `Status` message is the logical representation of the error model, but it
|
|
4095
|
+
# is not necessarily the actual wire format. When the `Status` message is
|
|
4096
|
+
# exposed in different client libraries and different wire protocols, it can be
|
|
4097
|
+
# mapped differently. For example, it will likely be mapped to some exceptions
|
|
4098
|
+
# in Java, but more likely mapped to some error codes in C.
|
|
4099
|
+
# # Other uses
|
|
4100
|
+
# The error model and the `Status` message can be used in a variety of
|
|
4101
|
+
# environments, either with or without APIs, to provide a
|
|
4102
|
+
# consistent developer experience across different environments.
|
|
4103
|
+
# Example uses of this error model include:
|
|
4104
|
+
# - Partial errors. If a service needs to return partial errors to the client,
|
|
4105
|
+
# it may embed the `Status` in the normal response to indicate the partial
|
|
4106
|
+
# errors.
|
|
4107
|
+
# - Workflow errors. A typical workflow has multiple steps. Each step may
|
|
4108
|
+
# have a `Status` message for error reporting.
|
|
4109
|
+
# - Batch operations. If a client uses batch request and batch response, the
|
|
4110
|
+
# `Status` message should be used directly inside batch response, one for
|
|
4111
|
+
# each error sub-response.
|
|
4112
|
+
# - Asynchronous operations. If an API call embeds asynchronous operation
|
|
4113
|
+
# results in its response, the status of those operations should be
|
|
4114
|
+
# represented directly using the `Status` message.
|
|
4115
|
+
# - Logging. If some API errors are stored in logs, the message `Status` could
|
|
4116
|
+
# be used directly after any stripping needed for security/privacy reasons.
|
|
4117
|
+
# Corresponds to the JSON property `error`
|
|
4118
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus]
|
|
4119
|
+
attr_accessor :error
|
|
4120
|
+
|
|
4121
|
+
def initialize(**args)
|
|
4122
|
+
update!(**args)
|
|
4123
|
+
end
|
|
4124
|
+
|
|
4125
|
+
# Update properties of this object
|
|
4126
|
+
def update!(**args)
|
|
4127
|
+
@annotation_results = args[:annotation_results] if args.key?(:annotation_results)
|
|
4128
|
+
@annotation_results_uri = args[:annotation_results_uri] if args.key?(:annotation_results_uri)
|
|
4129
|
+
@error = args[:error] if args.key?(:error)
|
|
4130
|
+
end
|
|
4131
|
+
end
|
|
4132
|
+
|
|
4133
|
+
# Streaming annotation results corresponding to a portion of the video
|
|
4134
|
+
# that is currently being processed.
|
|
4135
|
+
class GoogleCloudVideointelligenceV1p3beta1StreamingVideoAnnotationResults
|
|
4136
|
+
include Google::Apis::Core::Hashable
|
|
4137
|
+
|
|
4138
|
+
# Explicit content annotation (based on per-frame visual signals only).
|
|
4139
|
+
# If no explicit content has been detected in a frame, no annotations are
|
|
4140
|
+
# present for that frame.
|
|
4141
|
+
# Corresponds to the JSON property `explicitAnnotation`
|
|
4142
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
|
|
4143
|
+
attr_accessor :explicit_annotation
|
|
4144
|
+
|
|
4145
|
+
# Label annotation results.
|
|
4146
|
+
# Corresponds to the JSON property `labelAnnotations`
|
|
4147
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
|
|
4148
|
+
attr_accessor :label_annotations
|
|
4149
|
+
|
|
4150
|
+
# Object tracking results.
|
|
4151
|
+
# Corresponds to the JSON property `objectAnnotations`
|
|
4152
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
|
|
4153
|
+
attr_accessor :object_annotations
|
|
4154
|
+
|
|
4155
|
+
# Shot annotation results. Each shot is represented as a video segment.
|
|
4156
|
+
# Corresponds to the JSON property `shotAnnotations`
|
|
4157
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
|
|
4158
|
+
attr_accessor :shot_annotations
|
|
4159
|
+
|
|
4160
|
+
def initialize(**args)
|
|
4161
|
+
update!(**args)
|
|
4162
|
+
end
|
|
4163
|
+
|
|
4164
|
+
# Update properties of this object
|
|
4165
|
+
def update!(**args)
|
|
4166
|
+
@explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
|
|
4167
|
+
@label_annotations = args[:label_annotations] if args.key?(:label_annotations)
|
|
4168
|
+
@object_annotations = args[:object_annotations] if args.key?(:object_annotations)
|
|
4169
|
+
@shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
|
|
4170
|
+
end
|
|
4171
|
+
end
|
|
4172
|
+
|
|
2330
4173
|
# Annotations related to one detected OCR text snippet. This will contain the
|
|
2331
4174
|
# corresponding text, confidence value, and frame level information for each
|
|
2332
4175
|
# detection.
|
|
2333
|
-
class
|
|
4176
|
+
class GoogleCloudVideointelligenceV1p3beta1TextAnnotation
|
|
2334
4177
|
include Google::Apis::Core::Hashable
|
|
2335
4178
|
|
|
2336
4179
|
# All video segments where OCR detected text appears.
|
|
2337
4180
|
# Corresponds to the JSON property `segments`
|
|
2338
|
-
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::
|
|
4181
|
+
# @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1TextSegment>]
|
|
2339
4182
|
attr_accessor :segments
|
|
2340
4183
|
|
|
2341
4184
|
# The detected text.
|
|
@@ -2357,7 +4200,7 @@ module Google
|
|
|
2357
4200
|
# Video frame level annotation results for text annotation (OCR).
|
|
2358
4201
|
# Contains information regarding timestamp and bounding box locations for the
|
|
2359
4202
|
# frames containing detected OCR text snippets.
|
|
2360
|
-
class
|
|
4203
|
+
class GoogleCloudVideointelligenceV1p3beta1TextFrame
|
|
2361
4204
|
include Google::Apis::Core::Hashable
|
|
2362
4205
|
|
|
2363
4206
|
# Normalized bounding polygon for text (that might not be aligned with axis).
|
|
@@ -2376,7 +4219,7 @@ module Google
|
|
|
2376
4219
|
# than 0, or greater than 1 due to trignometric calculations for location of
|
|
2377
4220
|
# the box.
|
|
2378
4221
|
# Corresponds to the JSON property `rotatedBoundingBox`
|
|
2379
|
-
# @return [Google::Apis::VideointelligenceV1p1beta1::
|
|
4222
|
+
# @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1NormalizedBoundingPoly]
|
|
2380
4223
|
attr_accessor :rotated_bounding_box
|
|
2381
4224
|
|
|
2382
4225
|
# Timestamp of this frame.
|
|
@@ -2396,7 +4239,7 @@ module Google
       end

       # Video segment level annotation results for text detection.
-      class
+      class GoogleCloudVideointelligenceV1p3beta1TextSegment
         include Google::Apis::Core::Hashable

         # Confidence for the track of detected text. It is calculated as the highest
@@ -2407,12 +4250,12 @@ module Google

         # Information related to the frames where OCR detected text appears.
         # Corresponds to the JSON property `frames`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1TextFrame>]
         attr_accessor :frames

         # Video segment.
         # Corresponds to the JSON property `segment`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]
         attr_accessor :segment

         def initialize(**args)
@@ -2428,7 +4271,7 @@ module Google
       end

       # Annotation progress for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationProgress
         include Google::Apis::Core::Hashable

         # Video file location in
@@ -2467,17 +4310,17 @@ module Google
       end

       # Annotation results for a single video.
-      class
+      class GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
         include Google::Apis::Core::Hashable

-        # The `Status` type defines a logical error model that is suitable for
-        # programming environments, including REST APIs and RPC APIs. It is
-        # [gRPC](https://github.com/grpc). The error model is designed to be:
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
         # - Simple to use and understand for most users
         # - Flexible enough to meet unexpected needs
         # # Overview
-        # The `Status` message contains three pieces of data: error code, error
-        # and error details. The error code should be an enum value of
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
         # google.rpc.Code, but it may accept additional error codes if needed. The
         # error message should be a developer-facing English message that helps
         # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2517,13 +4360,13 @@ module Google
         # If no explicit content has been detected in a frame, no annotations are
         # present for that frame.
         # Corresponds to the JSON property `explicitAnnotation`
-        # @return [Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]
         attr_accessor :explicit_annotation

         # Label annotations on frame level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `frameLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
         attr_accessor :frame_label_annotations

         # Video file location in
@@ -2534,36 +4377,36 @@ module Google

         # Annotations for list of objects detected and tracked in video.
         # Corresponds to the JSON property `objectAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]
         attr_accessor :object_annotations

         # Label annotations on video level or user specified segment level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `segmentLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
         attr_accessor :segment_label_annotations

         # Shot annotations. Each shot is represented as a video segment.
         # Corresponds to the JSON property `shotAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]
         attr_accessor :shot_annotations

         # Label annotations on shot level.
         # There is exactly one element for each unique label.
         # Corresponds to the JSON property `shotLabelAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]
         attr_accessor :shot_label_annotations

         # Speech transcription.
         # Corresponds to the JSON property `speechTranscriptions`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]
         attr_accessor :speech_transcriptions

         # OCR text detection and tracking.
         # Annotations for list of detected text snippets. Each will have list of
         # frame information associated with it.
         # Corresponds to the JSON property `textAnnotations`
-        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::
+        # @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
         attr_accessor :text_annotations

         def initialize(**args)
@@ -2586,7 +4429,7 @@ module Google
       end

       # Video segment.
-      class
+      class GoogleCloudVideointelligenceV1p3beta1VideoSegment
         include Google::Apis::Core::Hashable

         # Time-offset, relative to the beginning of the video,
@@ -2615,7 +4458,7 @@ module Google
       # Word-specific information for recognized words. Word information is only
       # included in the response when certain request parameters are set, such
       # as `enable_word_time_offsets`.
-      class
+      class GoogleCloudVideointelligenceV1p3beta1WordInfo
         include Google::Apis::Core::Hashable

         # Output only. The confidence estimate between 0.0 and 1.0. A higher number
@@ -2684,14 +4527,14 @@ module Google
         attr_accessor :done
         alias_method :done?, :done

-        # The `Status` type defines a logical error model that is suitable for
-        # programming environments, including REST APIs and RPC APIs. It is
-        # [gRPC](https://github.com/grpc). The error model is designed to be:
+        # The `Status` type defines a logical error model that is suitable for
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
         # - Simple to use and understand for most users
         # - Flexible enough to meet unexpected needs
         # # Overview
-        # The `Status` message contains three pieces of data: error code, error
-        # and error details. The error code should be an enum value of
+        # The `Status` message contains three pieces of data: error code, error
+        # message, and error details. The error code should be an enum value of
         # google.rpc.Code, but it may accept additional error codes if needed. The
         # error message should be a developer-facing English message that helps
         # developers *understand* and *resolve* the error. If a localized user-facing
@@ -2768,14 +4611,14 @@ module Google
         end
       end

-      # The `Status` type defines a logical error model that is suitable for
-      # programming environments, including REST APIs and RPC APIs. It is
-      # [gRPC](https://github.com/grpc). The error model is designed to be:
+      # The `Status` type defines a logical error model that is suitable for
+      # different programming environments, including REST APIs and RPC APIs. It is
+      # used by [gRPC](https://github.com/grpc). The error model is designed to be:
       # - Simple to use and understand for most users
       # - Flexible enough to meet unexpected needs
       # # Overview
-      # The `Status` message contains three pieces of data: error code, error
-      # and error details. The error code should be an enum value of
+      # The `Status` message contains three pieces of data: error code, error
+      # message, and error details. The error code should be an enum value of
       # google.rpc.Code, but it may accept additional error codes if needed. The
       # error message should be a developer-facing English message that helps
       # developers *understand* and *resolve* the error. If a localized user-facing
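The `Status` doc comments repeated throughout this diff describe the google.rpc error model: an error code (an enum value of google.rpc.Code), a developer-facing English message, and a list of error details. A minimal plain-Ruby sketch of that three-field shape, using stdlib only — this `Struct` is an illustrative stand-in, not the gem's generated `Status` class:

```ruby
# Stand-in for the google.rpc Status error model described in the
# comments above: code, message, and details.
Status = Struct.new(:code, :message, :details)

# Code 3 is INVALID_ARGUMENT in google.rpc.Code; the message is a
# developer-facing English string, details carries extra payloads.
err = Status.new(3, "Invalid video URI", [{ "reason" => "INVALID_ARGUMENT" }])

puts "#{err.code}: #{err.message}"
```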